Person:
ÖZTOP, Erhan

Publication Search Results

Now showing 1 - 10 of 62
  • Article Publication
    Robotic grasping and manipulation through human visuomotor learning
    (Elsevier, 2012-03) Moore, B.; Öztop, Erhan
    A major goal of robotics research is to develop techniques that allow non-experts to teach robots dexterous skills. In this paper, we report our progress on the development of a framework which exploits human sensorimotor learning capability to address this aim. The idea is to place the human operator in the robot control loop where he/she can intuitively control the robot, and by practice, learn to perform the target task with the robot. Subsequently, by analyzing the robot control obtained by the human, it is possible to design a controller that allows the robot to autonomously perform the task. First, we introduce this framework with the ball-swapping task, where a robot hand has to swap the positions of the balls without dropping them, and present new analyses investigating the intrinsic dimension of the ball-swapping skill obtained through this framework. Then, we present new experiments toward obtaining an autonomous grasp controller on an anthropomorphic robot. In the experiments, the operator directly controls the (simulated) robot using visual feedback to achieve robust grasping with the robot. The collected data is then analyzed to infer the grasping strategy discovered by the human operator. Finally, a method to generalize grasping actions using the collected data is presented, which allows the robot to autonomously generate grasping actions for different orientations of the target object.
  • Article Publication
    Human adaptation to human–robot shared control
    (IEEE, 2019-04) Amirshirzad, Negin; Kumru, Asiye; Öztop, Erhan
    Human-in-the-loop robot control systems naturally provide the means for synergistic human-robot collaboration through control sharing. The expectation in such a system is that the strengths of each partner are combined to achieve a task performance higher than either partner can achieve alone. However, there is no general established rule to ensure a synergistic partnership. In particular, it is not well studied how humans adapt to a nonstationary robot partner whose behavior may change in response to human actions. If the human is not given the choice to turn the control sharing on or off, the robot-human system can even be unstable depending on how the shared control is implemented. In this paper, we instantiate a human-robot shared control system with the "ball balancing task," where a ball must be brought to a desired position on a tray held by the robot partner. The experimental setup is used to assess the effectiveness of the system and to find out the differences in human sensorimotor learning when the robot is a control sharing partner, as opposed to being a passive teleoperated robot. The results of the four-day, 20-subject experiments show that 1) after a short human learning phase, task execution performance is significantly improved when both human and robot are in charge. Moreover, 2) even though the subjects are not instructed about the role of the robot, they do learn faster despite the nonstationary behavior of the robot caused by the goal estimation mechanism built in.
  • Article Publication
    Action and language mechanisms in the brain: data, models and neuroinformatics
    (Springer Science+Business Media, 2014-01) Arbib, M. A.; Bonaiuto, J. J.; Bornkessel-Schlesewsky, I.; Kemmerer, D.; MacWhinney, B.; Årup Nielsen, F.; Öztop, Erhan
    We assess the challenges of studying action and language mechanisms in the brain, both singly and in relation to each other, to provide a novel perspective on neuroinformatics: integrating the development of databases for encoding – separately or together – neurocomputational models and empirical data that serve systems and cognitive neuroscience.
  • Article Publication (Open Access)
    Symbol emergence in cognitive developmental systems: A survey
    (IEEE, 2019-12) Taniguchi, T.; Uğur, E.; Hoffmann, M.; Jamone, L.; Nagai, T.; Rosman, B.; Matsuka, T.; Iwahashi, N.; Öztop, Erhan; Piater, J.; Worgotter, F.
    Humans use signs, e.g., sentences in a spoken language, for communication and thought. Hence, symbol systems like language are crucial for our communication with other agents and adaptation to our real-world environment. The symbol systems we use in our human society adaptively and dynamically change over time. In the context of artificial intelligence (AI) and cognitive systems, the symbol grounding problem has been regarded as one of the central problems related to symbols. However, the symbol grounding problem was originally posed to connect symbolic AI and sensorimotor information, and did not consider the many interdisciplinary phenomena in human communication and dynamic symbol systems in our society that semiotics has long considered. In this paper, we focus on the symbol emergence problem rather than the symbol grounding problem, addressing not only cognitive dynamics but also the dynamics of symbol systems in society. We first introduce the notion of a symbol in semiotics from the humanities, to move beyond the very narrow idea of symbols held in symbolic AI. Over the years, it has become increasingly clear that symbol emergence must be regarded as a multifaceted problem. Therefore, second, we review the history of the symbol emergence problem in different fields, including both biological and artificial systems, showing their mutual relations. We summarize the discussion and provide an integrative viewpoint and comprehensive overview of symbol emergence in cognitive systems. Additionally, we describe the challenges facing the creation of cognitive systems that can be part of symbol emergence systems.
  • Article Publication
    Teaching robots to cooperate with humans in dynamic manipulation tasks based on multi-modal human-in-the-loop approach
    (Springer Science+Business Media, 2014-01) Peternel, L.; Petric, T.; Öztop, Erhan; Babic, J.
    We propose an approach to efficiently teach robots how to perform dynamic manipulation tasks in cooperation with a human partner. The approach utilises human sensorimotor learning ability, where the human tutor controls the robot through a multi-modal interface to make it perform the desired task. During the tutoring, the robot simultaneously learns the action policy of the tutor and over time gains full autonomy. We demonstrate our approach with an experiment in which we taught a robot to perform a wood sawing task with a human partner using a two-person crosscut saw. The challenge of this experiment is that it requires precise coordination of the robot's motion and compliance according to the partner's actions. To transfer the sawing skill from the tutor to the robot, we used Locally Weighted Regression for trajectory generalisation, and adaptive oscillators for adaptation of the robot to the partner's motion.
  • Article Publication
    Exploration with intrinsic motivation using object–action–outcome latent space
    (IEEE, 2023-06) Sener, M. İ.; Nagai, Y.; Öztop, Erhan; Uğur, E.
    One effective approach for equipping artificial agents with sensorimotor skills is self-exploration. Doing this efficiently is critical, as time and data collection are costly. In this study, we propose an exploration mechanism that blends action, object, and action outcome representations into a latent space, where local regions are formed to host forward model learning. The agent uses intrinsic motivation to select the forward model with the highest learning progress to adopt at a given exploration step. This parallels how infants learn, as high learning progress indicates that the learning problem is neither too easy nor too difficult in the selected region. The proposed approach is validated with a simulated robot in a table-top environment. The simulation scene comprises a robot and various objects, where the robot interacts with one of them at a time using a set of parameterized actions and learns the outcomes of these interactions. With the proposed approach, the robot organizes its curriculum of learning as in existing intrinsic motivation approaches and outperforms them in learning speed. Moreover, the learning regime demonstrates features that partially match infant development; in particular, the proposed system learns to predict the outcomes of different skills in a staged manner.
  • Article Publication (Open Access)
    Trust in robot–robot scaffolding
    (IEEE, 2023-12-01) Kırtay, M.; Hafner, V. V.; Asada, Minoru; Öztop, Erhan
    The study of robot trust in humans and other agents has not been widely explored despite its importance for near-future human-robot symbiotic societies. Here, we propose that robots should trust partners that tend to reduce their computational load, which is analogous to human cognitive load. We test this idea by adopting an interactive visual recalling task. In the first set of experiments, the robot can get help from online instructors with different guiding strategies, and must decide which one to trust based on the computational load it experiences during the experiments. The second set of experiments involves robot-robot interactions. Akin to the robot-online instructor case, the Pepper robot is asked to scaffold the learning of a less capable 'infant' robot (Nao), with or without being equipped with the cognitive abilities of theory of mind and task experience memory, to assess the contribution of these cognitive abilities to scaffolding performance. Overall, the results show that robot trust based on computational/cognitive load within a sequential decision-making framework leads to effective partner selection and robot-robot scaffolding. Thus, the computational load incurred by the cognitive processing of a robot may serve as an internal signal for assessing the trustworthiness of interaction partners.
  • Article Publication
    Minimal sign representation of boolean functions: algorithms and exact results for low dimensions
    (MIT Press, 2015-08) Sezener, Can Eren; Öztop, Erhan
    Boolean functions (BFs) are central in many fields of engineering and mathematics, such as cryptography, circuit design, and combinatorics. Moreover, they provide a simple framework for studying neural computation mechanisms of the brain. Many representation schemes for BFs exist to satisfy the needs of the domain they are used in. In neural computation, it is of interest to know how many input lines a neuron would need to represent a given BF. A common BF representation used to study this is the so-called polynomial sign representation, where −1 and 1 are associated with true and false, respectively. The polynomial is treated as a real-valued function evaluated at its inputs, and the sign of the polynomial is then taken as the function value. The number of input lines for the modeled neuron is exactly the number of terms in the polynomial. This letter investigates the minimum number of terms, that is, the minimum threshold density, that is sufficient to represent a given BF, and more generally aims to find the maximum over this quantity for all BFs in a given dimension. With this work, exact results for four- and five-variable BFs are obtained for the first time, and strong bounds for six-variable BFs are derived. In addition, some connections are derived between the sign representation framework and bent functions, which are generally studied for their desirable cryptographic properties.
  • Article Publication
    Effect regulated projection of robot’s action space for production and prediction of manipulation primitives through learning progress and predictability based exploration
    (IEEE, 2021-06) Bugur, S.; Öztop, Erhan; Nagai, Y.; Ugur, E.
    In this study, we propose an effective action parameter exploration mechanism that enables efficient discovery of robot actions through interacting with objects in a simulated table-top environment. For this, the robot organizes its action parameter space based on the effects generated in the environment and learns forward models for predicting the consequences of its actions. Following the Intrinsic Motivation approach, the robot samples action parameters from the regions that are expected to yield high learning progress (LP). In addition to the LP-based action sampling, our method uses a novel parameter space organization scheme to form regions that naturally correspond to qualitatively different action classes, which might also be called action primitives. The proposed method enabled the robot to discover a number of lateralized movement primitives and to acquire the capability of predicting the consequences of these primitives. Furthermore, our results suggest reasons behind the earlier development of grasping compared to pushing in infants. Finally, our findings show some parallels with data from infant development, where a correspondence between action production and prediction is observed.
  • Article Publication
    Mirror neurons: Functions, mechanisms and models
    (Elsevier, 2013-04-12) Öztop, Erhan; Kawato, M.; Arbib, M. A.
    Mirror neurons for manipulation fire both when the animal manipulates an object in a specific way and when it sees another animal (or the experimenter) perform an action that is more or less similar. Such neurons were originally found in macaque monkeys, in the ventral premotor cortex, area F5, and later also in the inferior parietal lobule. Recent neuroimaging data indicate that the adult human brain is endowed with a "mirror neuron system," putatively containing mirror neurons and other neurons, for matching the observation and execution of actions. Mirror neurons may serve action recognition in monkeys as well as humans, whereas their putative role in imitation and language may be realized in humans but not in monkeys. This article shows the important role of computational models in providing sufficient and causal explanations for the observed phenomena involving mirror systems and the learning processes which form them, and underlines the need for additional circuitry beyond the monkey mirror neuron circuit to sustain the cognitive functions attributed to the human mirror neuron system.
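Two of the entries above (Sener et al. 2023 and Bugur et al. 2021) describe learning-progress (LP) based intrinsic motivation: the robot samples from the region whose forward model is currently improving fastest. A minimal illustrative sketch of that selection rule follows; the function names, window size, and toy error curves are assumptions for illustration, not the authors' implementation.

```python
# Learning-progress (LP) based region selection, sketched from the abstracts
# above. LP is measured as the recent drop in a region's prediction error.

def learning_progress(errors, window=2):
    """LP = mean error of the older window minus that of the newer window."""
    if len(errors) < 2 * window:
        return 0.0
    older = sum(errors[-2 * window:-window]) / window
    newer = sum(errors[-window:]) / window
    return older - newer

def select_region(error_histories, window=2):
    """Pick the region whose forward model shows the highest LP."""
    lps = {r: learning_progress(h, window) for r, h in error_histories.items()}
    return max(lps, key=lps.get)

# Toy error histories: "push" is improving fast, "grasp" is already mastered,
# and "noise" never improves (too hard or unlearnable). LP favors "push",
# which is neither too easy nor too difficult -- the Goldilocks region.
histories = {
    "push":  [1.0, 0.9, 0.6, 0.3],
    "grasp": [0.2, 0.2, 0.2, 0.2],
    "noise": [1.0, 1.0, 1.0, 1.0],
}
print(select_region(histories))  # -> push
```

Note how both plateaued and unlearnable regions get LP near zero, which is what lets this criterion organize a learning curriculum in a staged manner, as the abstracts describe.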
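The Sezener & Öztop entry studies polynomial sign representations of Boolean functions, where ±1 encodes the truth values and the sign of a real polynomial gives the function value. A hedged sketch of the core check follows; the helper names and brute-force style are illustrative assumptions, not the paper's algorithm, which computes exact minimum threshold densities.

```python
from itertools import product

def prod_vars(x, idxs):
    """Evaluate the monomial over the variable indices in idxs at input x."""
    p = 1
    for i in idxs:
        p *= x[i]
    return p

def sign_represents(terms, f, n):
    """terms maps a monomial (tuple of variable indices) to a real
    coefficient. Returns True iff sign(polynomial) encodes f on all of
    {-1, +1}^n, with -1 standing for true and +1 for false."""
    for x in product((-1, 1), repeat=n):
        val = sum(c * prod_vars(x, idxs) for idxs, c in terms.items())
        if val == 0 or (val < 0) != f(x):  # negative sign must mean "true"
            return False
    return True

# XOR is true exactly when the two +/-1-encoded inputs differ, i.e. when
# x1*x2 == -1, so the single monomial x1*x2 sign-represents it: density 1.
xor = lambda x: x[0] != x[1]
print(sign_represents({(0, 1): 1.0}, xor, 2))  # -> True
```

The number of monomials with nonzero coefficients is the "threshold density" of the abstract; the paper's contribution is the exact maximum of its minimum over all BFs in four and five variables.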
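The Peternel et al. entry names Locally Weighted Regression (LWR) as the tool for generalizing the tutor's demonstrated trajectories. A self-contained sketch of plain one-dimensional LWR follows; the Gaussian kernel, bandwidth, and toy data are assumptions for illustration and do not reproduce the paper's multi-modal setup.

```python
import math

def lwr_predict(xq, xs, ys, bandwidth=0.2):
    """Fit a weighted least-squares line around query point xq, with
    Gaussian weights on the training samples, and evaluate it at xq."""
    w = [math.exp(-((x - xq) ** 2) / (2 * bandwidth ** 2)) for x in xs]
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, xs)) / sw   # weighted mean of x
    my = sum(wi * yi for wi, yi in zip(w, ys)) / sw   # weighted mean of y
    cov = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, xs, ys))
    var = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, xs))
    slope = cov / var if var > 1e-12 else 0.0
    return my + slope * (xq - mx)

# Toy "demonstration": samples of y = x^2 standing in for a recorded
# trajectory; LWR smoothly interpolates a local linear fit at any query.
xs = [i / 10 for i in range(11)]
ys = [x * x for x in xs]
print(lwr_predict(0.5, xs, ys))
```

The appeal for skill transfer is that each prediction uses only nearby demonstration samples, so the generalized trajectory stays faithful to the local shape of what the tutor actually did.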