Person: ÖZTOP, Erhan

First Name: Erhan
Last Name: ÖZTOP

Publication Search Results

Now showing 1 - 10 of 62
  • Article · Publication · Open Access
    DeepSym: Deep symbol generation and rule learning for planning from unsupervised robot interaction
    (AI Access Foundation, 2022) Ahmetoglu, A.; Seker, M. Y.; Piater, J.; Öztop, Erhan; Ugur, E.; Computer Science; ÖZTOP, Erhan
    Symbolic planning and reasoning are powerful tools for robots tackling complex tasks. However, the need to manually design the symbols restricts their applicability, especially for robots that are expected to act in open-ended environments. Therefore, symbol formation and rule extraction should be considered part of robot learning, which, when done properly, will offer scalability, flexibility, and robustness. Towards this goal, we propose a novel general method that finds action-grounded, discrete object and effect categories and builds probabilistic rules over them for non-trivial action planning. Our robot interacts with objects using an initial action repertoire that is assumed to be acquired earlier and observes the effects it can create in the environment. To form action-grounded object, effect, and relational categories, we employ a binary bottleneck layer in a predictive, deep encoder-decoder network that takes the image of the scene and the action applied as input, and generates the resulting effects in the scene in pixel coordinates. After learning, the binary latent vector represents action-driven object categories based on the interaction experience of the robot. To distill the knowledge represented by the neural network into rules useful for symbolic reasoning, a decision tree is trained to reproduce its decoder function. Probabilistic rules are extracted from the decision paths of the tree and are represented in the Probabilistic Planning Domain Definition Language (PPDDL), allowing off-the-shelf planners to operate on the knowledge extracted from the sensorimotor experience of the robot. The deployment of the proposed approach for a simulated robotic manipulator enabled the discovery of discrete representations of object properties such as 'rollable' and 'insertable'. In turn, the use of these representations as symbols allowed the generation of effective plans for achieving goals, such as building towers of the desired height, demonstrating the effectiveness of the approach for multi-step object manipulation. Finally, we demonstrate that the system is not restricted to the robotics domain by assessing its applicability to the MNIST 8-puzzle domain, in which learned symbols allow for the generation of plans that move the empty tile into any given position.
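    A minimal illustrative sketch of the binary-bottleneck idea described above (layer sizes, names, and the straight-through binarization are assumptions for illustration, not the authors' implementation):

    ```python
    # Illustrative sketch (not the authors' code): an encoder-decoder with a
    # binary bottleneck, trained to predict the effect of an action on a scene.
    import torch
    import torch.nn as nn

    class BinaryBottleneck(nn.Module):
        """Binarize activations, with a straight-through gradient estimator."""
        def forward(self, x):
            hard = (x > 0).float()                       # discrete 0/1 codes
            soft = torch.sigmoid(x)
            return hard + soft - soft.detach()           # value = hard, grad = soft

    class EffectPredictor(nn.Module):
        def __init__(self, obs_dim=128, act_dim=6, code_dim=8, effect_dim=128):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                         nn.Linear(64, code_dim))
            self.binarize = BinaryBottleneck()
            # the decoder sees the discrete object code together with the action
            self.decoder = nn.Sequential(nn.Linear(code_dim + act_dim, 64), nn.ReLU(),
                                         nn.Linear(64, effect_dim))

        def forward(self, obs, act):
            code = self.binarize(self.encoder(obs))      # action-grounded object category
            effect = self.decoder(torch.cat([code, act], dim=-1))
            return effect, code
    ```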
  • Article · Publication · Open Access
    Human-in-the-loop control and task learning for pneumatically actuated muscle based robots
    (Frontiers Media, 2018-11-06) Teramae, T.; Ishihara, K.; Babič, J.; Morimoto, J.; Öztop, Erhan; Computer Science; Conradt, J.; ÖZTOP, Erhan
    Pneumatically actuated muscles (PAMs) provide a low-cost, lightweight, and high power-to-weight ratio solution for many robotic applications. In addition, the antagonist pair configuration for robotic arms makes it open to biologically inspired control approaches. In spite of these advantages, they have not been widely adopted in human-in-the-loop control and learning applications. In this study, we propose a biologically inspired multimodal human-in-the-loop control system for driving a one degree-of-freedom robot, and realize the task of hammering a nail into a wood block under human control. We analyze the human sensorimotor learning in this system through a set of experiments, and show that effective autonomous hammering skill can be readily obtained through the developed human-robot interface. The results indicate that a human-in-the-loop learning setup with an anthropomorphically valid multi-modal human-robot interface leads to fast learning, and thus can be used to effectively derive autonomous robot skills for ballistic motor tasks that require modulation of impedance.
  • Article · Publication
    Exploration with intrinsic motivation using object–action–outcome latent space
    (IEEE, 2023-06) Sener, M. İ.; Nagai, Y.; Öztop, Erhan; Uğur, E.; Computer Science; ÖZTOP, Erhan
    One effective approach for equipping artificial agents with sensorimotor skills is to use self-exploration. To do this efficiently is critical, as time and data collection are costly. In this study, we propose an exploration mechanism that blends action, object, and action outcome representations into a latent space, where local regions are formed to host forward model learning. The agent uses intrinsic motivation to select the forward model with the highest learning progress to adopt at a given exploration step. This parallels how infants learn, as high learning progress indicates that the learning problem is neither too easy nor too difficult in the selected region. The proposed approach is validated with a simulated robot in a table-top environment. The simulation scene comprises a robot and various objects, where the robot interacts with one of them each time using a set of parameterized actions and learns the outcomes of these interactions. With the proposed approach, the robot organizes its curriculum of learning as in existing intrinsic motivation approaches and outperforms them in learning speed. Moreover, the learning regime demonstrates features that partially match infant development; in particular, the proposed system learns to predict the outcomes of different skills in a staged manner.
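    A minimal sketch of learning-progress-based region selection as described above (the window size, error bookkeeping, and names are illustrative assumptions, not the paper's implementation):

    ```python
    # Illustrative sketch: pick the exploration region whose forward model shows
    # the highest learning progress, i.e. the largest recent drop in prediction error.
    import numpy as np

    class Region:
        def __init__(self):
            self.errors = []              # history of forward-model prediction errors

        def learning_progress(self, window=10):
            e = self.errors
            if len(e) < 2 * window:
                return float("inf")       # under-sampled regions stay attractive
            older = np.mean(e[-2 * window:-window])
            recent = np.mean(e[-window:])
            return older - recent         # positive when the model is still improving

    def select_region(regions):
        """Intrinsically motivated choice of where to explore next."""
        return max(regions, key=lambda r: r.learning_progress())
    ```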
  • Conference paper · Publication
    A shared control method for online human-in-the-loop robot learning based on Locally Weighted Regression
    (IEEE, 2016) Peternel, L.; Öztop, Erhan; Babič, J.; Computer Science; ÖZTOP, Erhan
    We propose a novel method that arbitrates the control between the human and the robot actors in a teaching-by-demonstration setting to form synergy between the two and facilitate effective skill synthesis on the robot. We employed the human-in-the-loop teaching paradigm to teleoperate and demonstrate a complex task execution to the robot in real-time. As the human guides the robot to perform the task, the robot obtains the skill online during the demonstration. To encode the robotic skill we employed Locally Weighted Regression, which fits local models to specific state regions of the task based on the human demonstration. If the robot is in a state region where no local models exist, the control over the robotic mechanism is given to the human to perform the teaching. When local models are gradually obtained in that region, the control is given to the robot so that the human can examine its performance already during the demonstration stage, and take actions accordingly. This enables a co-adaptation between the agents and contributes to faster and more efficient teaching. As a proof-of-concept, we realised the proposed robot teaching system on a haptic robot with the task of generating a desired vertical force on a horizontal plane with unknown stiffness properties.
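    A minimal sketch of the arbitration idea described above, using a kernel-weighted local average as a stand-in for the Locally Weighted Regression models (bandwidth, threshold, and blending rule are assumptions, not the authors' controller):

    ```python
    # Illustrative sketch: control is handed to the robot only in state regions
    # where the demonstration data give the local models enough support.
    import numpy as np

    def lwr_predict(x, X, Y, bandwidth=0.1):
        """Predict an output at state x from demonstrated (X, Y) pairs."""
        w = np.exp(-np.sum((X - x) ** 2, axis=1) / (2 * bandwidth ** 2))
        support = w.sum()
        if support < 1e-6:                        # no local model near this state
            return None, 0.0
        return (w[:, None] * Y).sum(axis=0) / support, support

    def shared_command(x, X, Y, human_cmd, support_threshold=1.0):
        robot_cmd, support = lwr_predict(x, X, Y)
        if robot_cmd is None:
            return human_cmd                      # human teaches in uncovered regions
        alpha = min(1.0, support / support_threshold)   # confidence in the local models
        return alpha * robot_cmd + (1 - alpha) * human_cmd
    ```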
  • Article · Publication
    Robotic grasping and manipulation through human visuomotor learning
    (Elsevier, 2012-03) Moore, B.; Öztop, Erhan; Computer Science; ÖZTOP, Erhan
    A major goal of robotics research is to develop techniques that allow non-experts to teach robots dexterous skills. In this paper, we report our progress on the development of a framework which exploits human sensorimotor learning capability to address this aim. The idea is to place the human operator in the robot control loop where he/she can intuitively control the robot, and by practice, learn to perform the target task with the robot. Subsequently, by analyzing the robot control obtained by the human, it is possible to design a controller that allows the robot to autonomously perform the task. First, we introduce this framework with the ball-swapping task, where a robot hand has to swap the positions of the balls without dropping them, and present new analyses investigating the intrinsic dimension of the ball-swapping skill obtained through this framework. Then, we present new experiments toward obtaining an autonomous grasp controller on an anthropomorphic robot. In the experiments, the operator directly controls the (simulated) robot using visual feedback to achieve robust grasping with the robot. The data collected is then analyzed for inferring the grasping strategy discovered by the human operator. Finally, a method to generalize grasping actions using the collected data is presented, which allows the robot to autonomously generate grasping actions for different orientations of the target object.
  • Article · Publication
    Human adaptation to human–robot shared control
    (IEEE, 2019-04) Amirshirzad, Negin; Kumru, Asiye; Öztop, Erhan; Computer Science; Psychology; KUMRU, Asiye; ÖZTOP, Erhan; Amirshirzad, Negin
    Human-in-the-loop robot control systems naturally provide the means for synergistic human-robot collaboration through control sharing. The expectation in such a system is that the strengths of each partner are combined to achieve a task performance higher than can be achieved by the individual partners alone. However, there is no general established rule to ensure a synergistic partnership. In particular, it is not well studied how humans adapt to a nonstationary robot partner whose behavior may change in response to human actions. If the human is not given the choice to turn on or off the control sharing, the robot-human system can even be unstable depending on how the shared control is implemented. In this paper, we instantiate a human-robot shared control system with the "ball balancing task," where a ball must be brought to a desired position on a tray held by the robot partner. The experimental setup is used to assess the effectiveness of the system and to find out the differences in human sensorimotor learning when the robot is a control sharing partner, as opposed to being a passive teleoperated robot. The results of the four-day, 20-subject experiments conducted show that 1) after a short human learning phase, task execution performance is significantly improved when both human and robot are in charge. Moreover, 2) even though the subjects are not instructed about the role of the robot, they do learn faster despite the nonstationary behavior of the robot caused by its built-in goal estimation mechanism.
  • Conference paper · Publication
    Context based echo state networks for robot movement primitives
    (IEEE, 2023) Amirshirzad, Negin; Asada, M.; Öztop, Erhan; Computer Science; ÖZTOP, Erhan; Amirshirzad, Negin
    Reservoir computing, in particular Echo State Networks (ESNs), offers a lightweight solution for time series representation and prediction. An ESN is based on a discrete-time random dynamical system that is used to output a desired time series with the application of a learned linear readout weight vector. The simplicity of the learning suggests that an ESN can be used as a lightweight alternative for movement primitive representation in robotics. In this study, we explore this possibility and develop Context-based Echo State Networks (CESNs), and demonstrate their applicability to robot movement generation. The CESNs are designed for generating joint or Cartesian trajectories based on a user-definable context input. The context modulates the dynamics represented by the ESN involved. The linear readout weights can then pick up the context-dependent dynamics for generating different movement patterns for different contexts. To achieve robust movement execution and generalization over unseen contexts, we introduce a novel data augmentation mechanism for ESN training. We show the effectiveness of our approach in a learning-from-demonstration setting. To be concrete, we teach the robot reaching and obstacle avoidance tasks in simulation and in the real world, which shows that the developed system, CESN, provides a lightweight movement primitive representation that facilitates robust task execution with generalization to unseen contexts, including extrapolated ones.
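    A minimal sketch of an echo state network driven by a context input with a ridge-regression readout, as outlined above (reservoir size, leak rate, and training details are assumptions, not the paper's implementation):

    ```python
    # Illustrative sketch: a context vector drives the reservoir, and a learned
    # linear readout maps reservoir states to a joint-space trajectory.
    import numpy as np

    rng = np.random.default_rng(0)
    N, ctx_dim, out_dim = 300, 2, 7                # reservoir size, context, joint dims

    W = rng.normal(0, 1, (N, N))
    W *= 0.9 / max(abs(np.linalg.eigvals(W)))      # keep spectral radius below 1
    W_ctx = rng.normal(0, 1, (N, ctx_dim))

    def run_reservoir(context, steps, leak=0.3):
        """Roll out the reservoir for a fixed context; returns (steps, N) states."""
        x = np.zeros(N)
        states = []
        for _ in range(steps):
            x = (1 - leak) * x + leak * np.tanh(W @ x + W_ctx @ context)
            states.append(x.copy())
        return np.array(states)

    def train_readout(states, targets, ridge=1e-6):
        """Ridge regression from reservoir states to demonstrated trajectories."""
        return np.linalg.solve(states.T @ states + ridge * np.eye(N),
                               states.T @ targets)          # (N, out_dim) readout
    ```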
  • Article · Publication · Open Access
    Trust in robot–robot scaffolding
    (IEEE, 2023-12-01) Kırtay, M.; Hafner, V. V.; Asada, Minoru; Öztop, Erhan; Computer Science; ÖZTOP, Erhan
    Robot trust in humans and other agents has not been widely explored despite its importance for near-future human-robot symbiotic societies. Here, we propose that robots should trust partners that tend to reduce their computational load, which is analogous to human cognitive load. We test this idea by adopting an interactive visual recalling task. In the first set of experiments, the robot can get help from online instructors with different guiding strategies to decide which one it should trust based on the computational load it experiences during the experiments. The second set of experiments involves robot-robot interactions. Akin to the robot-online instructor case, the Pepper robot is asked to scaffold the learning of a less capable 'infant' robot (Nao), with or without being equipped with the cognitive abilities of theory of mind and task experience memory, to assess the contribution of these cognitive abilities to scaffolding performance. Overall, the results show that robot trust based on computational/cognitive load within a sequential decision-making framework leads to effective partner selection and robot-robot scaffolding. Thus, using the computational load incurred by the cognitive processing of a robot may serve as an internal signal for assessing the trustworthiness of interaction partners.
  • Article · Publication · Open Access
    Reinforcement learning to adjust parametrized motor primitives to new situations
    (Springer Science+Business Media, 2012-11) Kober, J.; Wilhelm, A.; Öztop, Erhan; Peters, J.; Computer Science; ÖZTOP, Erhan
    Humans manage to adapt learned movements very quickly to new situations by generalizing learned behaviors from similar situations. In contrast, robots currently often need to re-learn the complete movement. In this paper, we propose a method that learns to generalize parametrized motor plans by adapting a small set of global parameters, called meta-parameters. We employ reinforcement learning to learn the required meta-parameters to deal with the current situation, described by states. We introduce an appropriate reinforcement learning algorithm based on a kernelized version of the reward-weighted regression. To show its feasibility, we evaluate this algorithm on a toy example and compare it to several previous approaches. Subsequently, we apply the approach to three robot tasks, i.e., the generalization of throwing movements in darts, of hitting movements in table tennis, and of throwing balls where the tasks are learned on several different real physical robots, i.e., a Barrett WAM, a BioRob, the JST-ICORP/SARCOS CBi and a Kuka KR 6.
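    A minimal sketch of reward-weighted regression for mapping a situation to meta-parameters, in the spirit of the approach above (this plain linear variant omits the kernelization used in the paper; names and the exponential weighting are illustrative assumptions):

    ```python
    # Illustrative sketch: fit a linear policy from states to motor-primitive
    # meta-parameters, weighting each sampled rollout by its exponentiated reward.
    import numpy as np

    def reward_weighted_regression(S, M, R, beta=5.0, ridge=1e-6):
        """S: states (n x ds), M: sampled meta-parameters (n x dm), R: rewards (n,).
        Returns theta such that meta ≈ [s, 1] @ theta."""
        w = np.exp(beta * (R - R.max()))             # exponentiated rewards as weights
        X = np.hstack([S, np.ones((len(S), 1))])     # add a bias term
        WX = w[:, None] * X
        theta = np.linalg.solve(X.T @ WX + ridge * np.eye(X.shape[1]), WX.T @ M)
        return theta
    ```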
  • Conference paper · Publication
    Real-time decoding of arm kinematics during grasping based on F5 neural spike data
    (Springer International Publishing, 2017) Ashena, Narges; Papadourakis, V.; Raos, V.; Öztop, Erhan; Computer Science; ÖZTOP, Erhan; Ashena, Narges
    Several studies have shown that the information related to grip type, object identity, and kinematics of monkey grasping actions is available in the macaque cortical areas F5, MI, and AIP. In particular, these studies show that the neural discharge patterns of the neuron populations from the aforementioned areas can be used for accurate decoding of action parameters. In this study, we focus on the single-neuron decoding capacity of neurons in a given region, F5, considering their functional classification, i.e., whether or not they show the mirror property. To this end, we recorded neural spike data and arm kinematics from a monkey that performed grasping actions. The spikes were then used as regressors to predict the kinematic parameters. Results show that single-neuron real-time decoding of the kinematics is not perfect, but reasonable performance can be achieved with selected neurons from both populations. Considering the neurons that we have studied (N = 32), non-mirror neurons seem to act as better single-neuron decoders. Although it is clear that population-level activity is needed for robust decoding, single-neuron decoding capacity may be used as a quantitative means to classify neurons in a given region.
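    A minimal sketch of single-neuron decoding by regression from binned spike counts to a kinematic variable (the binning, lag structure, and ridge penalty are assumptions, not the study's pipeline):

    ```python
    # Illustrative sketch: ridge regression from a single neuron's recent spike
    # counts to an aligned kinematic variable (e.g. a wrist speed trace).
    import numpy as np

    def decode_kinematics(spike_counts, kinematics, lags=5, ridge=1e-3):
        """spike_counts: (T,) binned counts; kinematics: (T,) aligned variable."""
        # build a lagged design matrix: X[t, k] = spike_counts[t - k]
        X = np.stack([np.roll(spike_counts, k) for k in range(lags)], axis=1)[lags:]
        y = kinematics[lags:]
        X = np.hstack([X, np.ones((len(X), 1))])     # add a bias column
        w = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ y)
        return w, X @ w                               # weights and reconstructed trace
    ```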