Person: ÖZTOP, Erhan
First Name: Erhan
Last Name: ÖZTOP

Publication Search Results
Now showing 1 - 10 of 62 results
Article (Open Access)
Human-in-the-loop control and task learning for pneumatically actuated muscle based robots (Frontiers Media, 2018-11-06)
Teramae, T.; Ishihara, K.; Babič, J.; Morimoto, J.; Öztop, Erhan

Pneumatically actuated muscles (PAMs) provide a low-cost, lightweight, and high power-to-weight ratio solution for many robotic applications. In addition, their antagonist-pair configuration in robotic arms makes them open to biologically inspired control approaches. In spite of these advantages, they have not been widely adopted in human-in-the-loop control and learning applications. In this study, we propose a biologically inspired multimodal human-in-the-loop control system for driving a one degree-of-freedom robot, and realize the task of hammering a nail into a wood block under human control. We analyze human sensorimotor learning in this system through a set of experiments, and show that an effective autonomous hammering skill can be readily obtained through the developed human-robot interface. The results indicate that a human-in-the-loop learning setup with an anthropomorphically valid multimodal human-robot interface leads to fast learning, and can thus be used to effectively derive autonomous robot skills for ballistic motor tasks that require modulation of impedance.

Article (Metadata only)
Exploration with intrinsic motivation using object–action–outcome latent space (IEEE, 2023-06)
Sener, M. İ.; Nagai, Y.; Öztop, Erhan; Uğur, E.

One effective approach for equipping artificial agents with sensorimotor skills is self-exploration. Doing this efficiently is critical, as time and data collection are costly. In this study, we propose an exploration mechanism that blends action, object, and action-outcome representations into a latent space, where local regions are formed to host forward model learning. At each exploration step, the agent uses intrinsic motivation to select the forward model with the highest learning progress. This parallels how infants learn, as high learning progress indicates that the learning problem in the selected region is neither too easy nor too difficult. The proposed approach is validated with a simulated robot in a tabletop environment. The simulation scene comprises a robot and various objects; the robot interacts with one of them at each step using a set of parameterized actions and learns the outcomes of these interactions. With the proposed approach, the robot organizes its learning curriculum as in existing intrinsic motivation approaches and outperforms them in learning speed. Moreover, the learning regime exhibits features that partially match infant development; in particular, the proposed system learns to predict the outcomes of different skills in a staged manner.
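The learning-progress criterion described in the entry above can be made concrete with a short sketch. The following Python snippet is an illustrative approximation rather than the paper's implementation: the fixed region count, the error-history window, and the progress measure (the drop in mean prediction error between two successive windows) are all assumptions.

```python
import numpy as np

class ProgressBasedExplorer:
    """Pick the local region whose forward model improves fastest (sketch)."""

    def __init__(self, n_regions, window=10):
        self.window = window
        # rolling prediction-error history per local region
        self.errors = [[] for _ in range(n_regions)]

    def record_error(self, region, prediction_error):
        self.errors[region].append(prediction_error)

    def learning_progress(self, region):
        e = self.errors[region]
        if len(e) < 2 * self.window:
            return np.inf            # force initial sampling of every region
        older = np.mean(e[-2 * self.window:-self.window])
        recent = np.mean(e[-self.window:])
        return older - recent        # drop in error = learning progress

    def select_region(self):
        progress = [self.learning_progress(r) for r in range(len(self.errors))]
        return int(np.argmax(progress))
```

Regions whose error no longer decreases yield low progress and stop attracting exploration, which is what produces the staged, curriculum-like behavior described in the abstract.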
Conference Object (Metadata only)
A shared control method for online human-in-the-loop robot learning based on Locally Weighted Regression (IEEE, 2016)
Peternel, L.; Öztop, Erhan; Babič, J.

We propose a novel method that arbitrates control between the human and the robot in a teaching-by-demonstration setting, forming a synergy between the two and facilitating effective skill synthesis on the robot. We employ the human-in-the-loop teaching paradigm to teleoperate the robot and demonstrate a complex task execution in real time. As the human guides the robot through the task, the robot acquires the skill online during the demonstration. To encode the robotic skill we employ Locally Weighted Regression, which fits local models to specific state regions of the task based on the human demonstration. If the robot is in a state region where no local models exist, control over the robotic mechanism is given to the human, who performs the teaching. As local models are gradually obtained in that region, control is handed to the robot, so that the human can examine its performance already during the demonstration stage and take action accordingly. This enables co-adaptation between the agents and contributes to faster and more efficient teaching. As a proof of concept, we realized the proposed teaching system on a haptic robot with the task of generating a desired vertical force on a horizontal plane with unknown stiffness properties.
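A minimal sketch of this arbitration idea is given below. It uses kernel-weighted averaging (Nadaraya-Watson) as a simplified stand-in for full locally weighted linear models; the confidence threshold, kernel bandwidth, and class interface are assumptions, not the authors' implementation.

```python
import numpy as np

class LWRSharedControl:
    """Sketch: arbitrate human/robot control by local-model confidence."""

    def __init__(self, bandwidth=0.1, conf_threshold=0.5):
        self.X, self.Y = [], []          # demonstrated (state, command) pairs
        self.bandwidth = bandwidth       # kernel width over the state space
        self.conf_threshold = conf_threshold

    def add_demonstration(self, state, command):
        self.X.append(np.asarray(state))
        self.Y.append(np.asarray(command))

    def weights(self, state):
        X = np.asarray(self.X)
        d2 = np.sum((X - state) ** 2, axis=1)
        return np.exp(-d2 / (2 * self.bandwidth ** 2))

    def step(self, state, human_command):
        if not self.X:
            self.add_demonstration(state, human_command)
            return human_command, "human"
        w = self.weights(np.asarray(state))
        confidence = w.sum()             # high when near demonstrated states
        if confidence < self.conf_threshold:
            # unexplored region: human teaches while the robot records
            self.add_demonstration(state, human_command)
            return human_command, "human"
        # well-modeled region: robot executes the locally weighted prediction
        robot_command = w @ np.asarray(self.Y) / confidence
        return robot_command, "robot"
```

The key design point mirrors the abstract: authority switches to the human exactly where the learned model has no support, so demonstration effort is spent only where it is needed.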
Article (Metadata only)
Robotic grasping and manipulation through human visuomotor learning (Elsevier, 2012-03)
Moore, B.; Öztop, Erhan

A major goal of robotics research is to develop techniques that allow non-experts to teach robots dexterous skills. In this paper, we report our progress on the development of a framework that exploits the human sensorimotor learning capability to address this aim. The idea is to place the human operator in the robot control loop, where he/she can intuitively control the robot and, by practice, learn to perform the target task with the robot. Subsequently, by analyzing the robot control obtained by the human, it is possible to design a controller that allows the robot to perform the task autonomously. First, we introduce this framework with the ball-swapping task, where a robot hand has to swap the positions of the balls without dropping them, and present new analyses investigating the intrinsic dimension of the ball-swapping skill obtained through this framework. Then, we present new experiments toward obtaining an autonomous grasp controller on an anthropomorphic robot. In the experiments, the operator directly controls the (simulated) robot using visual feedback to achieve robust grasping. The collected data are then analyzed to infer the grasping strategy discovered by the human operator. Finally, a method to generalize grasping actions using the collected data is presented, which allows the robot to autonomously generate grasping actions for different orientations of the target object.

Article (Metadata only)
Human adaptation to human–robot shared control (IEEE, 2019-04)
Amirshirzad, Negin; Kumru, Asiye; Öztop, Erhan

Human-in-the-loop robot control systems naturally provide the means for synergistic human-robot collaboration through control sharing. The expectation in such a system is that the strengths of each partner are combined to achieve a task performance higher than either partner can achieve alone. However, there is no generally established rule to ensure a synergistic partnership. In particular, it is not well studied how humans adapt to a nonstationary robot partner whose behavior may change in response to human actions. If the human is not given the choice to turn control sharing on or off, the robot-human system can even become unstable, depending on how the shared control is implemented. In this paper, we instantiate a human-robot shared control system with the ball balancing task, where a ball must be brought to a desired position on a tray held by the robot partner. The experimental setup is used to assess the effectiveness of the system and to find out how human sensorimotor learning differs when the robot is a control-sharing partner, as opposed to a passive teleoperated robot. The results of a four-day experiment with 20 subjects show that 1) after a short human learning phase, task execution performance is significantly improved when both human and robot are in charge, and 2) even though the subjects are not instructed about the role of the robot, they learn faster despite the nonstationary behavior of the robot caused by its built-in goal estimation mechanism.

Conference Object (Metadata only)
Context based echo state networks for robot movement primitives (IEEE, 2023)
Amirshirzad, Negin; Asada, M.; Öztop, Erhan

Reservoir computing, and in particular echo state networks (ESNs), offer a lightweight solution for time series representation and prediction. An ESN is based on a discrete-time random dynamical system that is made to output a desired time series through a learned linear readout weight vector. The simplicity of this learning suggests that an ESN can serve as a lightweight alternative for movement primitive representation in robotics. In this study, we explore this possibility by developing Context-based Echo State Networks (CESNs) and demonstrate their applicability to robot movement generation. CESNs are designed to generate joint or Cartesian trajectories based on a user-definable context input. The context modulates the dynamics represented by the ESN, and the linear readout weights can then pick up the context-dependent dynamics to generate different movement patterns for different contexts. To achieve robust movement execution and generalization over unseen contexts, we introduce a novel data augmentation mechanism for ESN training. We show the effectiveness of our approach in a learning-from-demonstration setting: we teach the robot reaching and obstacle avoidance tasks in simulation and in the real world. The results show that CESNs provide a lightweight movement primitive representation that facilitates robust task execution and generalizes to unseen contexts, including extrapolated ones.
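To make the CESN idea concrete, here is a minimal sketch in Python. The reservoir size, leak rate, spectral radius, ridge regularizer, and the choice to feed the context as a constant input are all assumptions for illustration; the paper's data augmentation mechanism is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

class ContextESN:
    """Sketch of a context-driven echo state network (dimensions assumed)."""

    def __init__(self, n_reservoir=200, n_context=2, n_out=7,
                 spectral_radius=0.9, leak=0.3):
        W = rng.standard_normal((n_reservoir, n_reservoir))
        # scale recurrent weights to the target spectral radius (echo state property)
        W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
        self.W = W
        self.W_ctx = rng.standard_normal((n_reservoir, n_context))
        self.leak = leak
        self.W_out = np.zeros((n_out, n_reservoir))

    def run(self, context, n_steps):
        x = np.zeros(self.W.shape[0])
        states = []
        for _ in range(n_steps):
            pre = self.W @ x + self.W_ctx @ np.asarray(context)
            x = (1 - self.leak) * x + self.leak * np.tanh(pre)  # leaky update
            states.append(x.copy())
        return np.array(states)

    def fit(self, context, target, ridge=1e-6):
        # ridge-regression readout: the only trained parameters
        S = self.run(context, len(target))
        A = S.T @ S + ridge * np.eye(S.shape[1])
        self.W_out = np.linalg.solve(A, S.T @ np.asarray(target)).T

    def generate(self, context, n_steps):
        return self.run(context, n_steps) @ self.W_out.T
```

Only the readout is trained, which is what makes the representation lightweight: changing the context vector changes the reservoir dynamics, and the same readout maps those dynamics to a different trajectory.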
Article (Open Access)
Trust in robot–robot scaffolding (IEEE, 2023-12-01)
Kırtay, M.; Hafner, V. V.; Asada, Minoru; Öztop, Erhan

The study of robot trust in humans and other agents has not been widely explored, despite its importance for the human-robot symbiotic societies of the near future. Here, we propose that robots should trust partners that tend to reduce their computational load, which is analogous to human cognitive load. We test this idea with an interactive visual recalling task. In the first set of experiments, the robot can get help from online instructors with different guiding strategies, and must decide which one to trust based on the computational load it experiences during the experiments. The second set of experiments involves robot-robot interaction. Akin to the robot-online instructor case, the Pepper robot is asked to scaffold the learning of a less capable 'infant' robot (Nao), with or without being equipped with the cognitive abilities of theory of mind and task experience memory, to assess the contribution of these abilities to scaffolding performance. Overall, the results show that robot trust based on computational/cognitive load within a sequential decision-making framework leads to effective partner selection and robot-robot scaffolding. Thus, the computational load incurred by the cognitive processing of a robot may serve as an internal signal for assessing the trustworthiness of interaction partners.

Article (Open Access)
Reinforcement learning to adjust parametrized motor primitives to new situations (Springer Science+Business Media, 2012-11)
Kober, J.; Wilhelm, A.; Öztop, Erhan; Peters, J.

Humans manage to adapt learned movements very quickly to new situations by generalizing learned behaviors from similar situations. In contrast, robots currently often need to re-learn the complete movement. In this paper, we propose a method that learns to generalize parametrized motor plans by adapting a small set of global parameters, called meta-parameters. We employ reinforcement learning to learn the meta-parameters required to deal with the current situation, described by states. We introduce an appropriate reinforcement learning algorithm based on a kernelized version of reward-weighted regression. To show its feasibility, we evaluate this algorithm on a toy example and compare it to several previous approaches. Subsequently, we apply the approach to three robot tasks, i.e., the generalization of throwing movements in darts, of hitting movements in table tennis, and of ball throwing, where the tasks are learned on several different real physical robots, i.e., a Barrett WAM, a BioRob, the JST-ICORP/SARCOS CBi, and a Kuka KR 6.
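The core update behind reward-weighted regression can be sketched in a few lines. The snippet below shows a plain, non-kernelized variant for adapting a meta-parameter vector, a simplification of the kernelized algorithm in the paper; the inverse temperature beta, the sample count, and the toy reward are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rwr_update(samples, rewards, beta=5.0):
    """One reward-weighted regression step: the new meta-parameter estimate
    is the exponentiated-reward weighted mean of the sampled parameters."""
    w = np.exp(beta * (rewards - rewards.max()))   # shift for numerical stability
    return (w[:, None] * samples).sum(axis=0) / w.sum()

# Toy usage: adapt a 2-D meta-parameter vector toward a hypothetical optimum.
theta, sigma = np.zeros(2), 0.5
optimum = np.array([1.0, -0.5])                    # stand-in for the task goal
for _ in range(50):
    samples = theta + sigma * rng.standard_normal((20, 2))  # explore around theta
    rewards = -np.linalg.norm(samples - optimum, axis=1)    # toy reward signal
    theta = rwr_update(samples, rewards)
```

Because high-reward samples dominate the weighted mean, the estimate drifts toward better meta-parameters without any gradient computation; the kernelized version additionally makes the estimate state-dependent, so it generalizes across situations.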
Conference Object (Metadata only)
Real-time decoding of arm kinematics during grasping based on F5 neural spike data (Springer International Publishing, 2017)
Ashena, Narges; Papadourakis, V.; Raos, V.; Öztop, Erhan

Several studies have shown that information related to grip type, object identity, and the kinematics of monkey grasping actions is available in the macaque cortical areas F5, MI, and AIP. In particular, these studies show that the neural discharge patterns of neuron populations from these areas can be used for accurate decoding of action parameters. In this study, we focus on the single-neuron decoding capacity of neurons in one region, F5, considering their functional classification, i.e., whether or not they show the mirror property. To this end, we recorded neural spike data and arm kinematics from a monkey performing grasping actions. The spike data were then used as regressors to predict the kinematic parameters. The results show that single-neuron real-time decoding of the kinematics is not perfect, but reasonable performance can be achieved with selected neurons from both populations. Among the neurons we studied (N = 32), non-mirror neurons seem to act as better single-neuron decoders. Although it is clear that population-level activity is needed for robust decoding, single-neuron decoding capacity may be used as a quantitative means to classify neurons in a given region. (A minimal decoding sketch follows the final entry below.)

Article (Open Access)
Symbol emergence in cognitive developmental systems: A survey (IEEE, 2019-12)
Taniguchi, T.; Uğur, E.; Hoffmann, M.; Jamone, L.; Nagai, T.; Rosman, B.; Matsuka, T.; Iwahashi, N.; Öztop, Erhan; Piater, J.; Wörgötter, F.

Humans use signs, e.g., sentences in a spoken language, for communication and thought. Hence, symbol systems like language are crucial for our communication with other agents and for our adaptation to the real-world environment. The symbol systems we use in human society change adaptively and dynamically over time. In the context of artificial intelligence (AI) and cognitive systems, the symbol grounding problem has been regarded as one of the central problems related to symbols. However, the symbol grounding problem was originally posed to connect symbolic AI with sensorimotor information, and did not consider the many interdisciplinary phenomena in human communication and dynamic symbol systems in our society that semiotics has considered. In this paper, we focus on the symbol emergence problem, which addresses not only cognitive dynamics but also the dynamics of symbol systems in society, rather than on the symbol grounding problem. We first introduce the notion of a symbol in semiotics from the humanities, to move beyond the very narrow idea of symbols in symbolic AI. Over the years, it has become increasingly clear that symbol emergence must be regarded as a multifaceted problem. Second, therefore, we review the history of the symbol emergence problem in different fields, covering both biological and artificial systems, and show their mutual relations. We summarize the discussion and provide an integrative viewpoint and comprehensive overview of symbol emergence in cognitive systems. Finally, we describe the challenges facing the creation of cognitive systems that can be part of symbol emergence systems.
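As referenced in the F5 decoding entry above, a single-neuron kinematics decoder can be sketched as a lagged linear regression. This is an illustrative, offline simplification, not the authors' real-time pipeline; the lag window, ridge regularizer, and function names are assumptions.

```python
import numpy as np

def fit_single_neuron_decoder(spike_counts, kinematics, lag_bins=2, ridge=1e-3):
    """Fit a ridge-regularized linear map from a lagged window of one neuron's
    binned spike counts to a kinematic variable (offline simplification)."""
    # design matrix: current count plus lag_bins past counts, plus a bias column
    X = np.column_stack([np.roll(spike_counts, k) for k in range(lag_bins + 1)])
    X, y = X[lag_bins:], np.asarray(kinematics)[lag_bins:]  # drop wrapped rows
    X = np.column_stack([X, np.ones(len(X))])
    A = X.T @ X + ridge * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ y)          # decoder weights

def decode(weights, spike_counts, lag_bins=2):
    """Apply the decoder; the first lag_bins outputs use wrapped-around
    samples and should be discarded."""
    X = np.column_stack([np.roll(spike_counts, k) for k in range(lag_bins + 1)])
    X = np.column_stack([X, np.ones(len(X))])
    return X @ weights
```

Comparing the prediction error of such per-neuron fits is one simple way to rank mirror versus non-mirror neurons by decoding capacity, in the spirit of the comparison reported in that entry.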