Browsing by Author "Nagai, Y."
Now showing 1 - 8 of 8
Article, Open Access
Affordance-based altruistic robotic architecture for human–robot collaboration (Sage, 2019-08)
Imre, M.; Öztop, Erhan; Nagai, Y.; Ugur, E.; Computer Science; ÖZTOP, Erhan
This article proposes a computational model for altruistic behavior, shows its implementation on a physical robot, and presents the results of human–robot interaction experiments conducted with the implemented system. Inspired by the sensorimotor mechanisms of the primate brain, object affordances are utilized for both intention estimation and action execution, in particular to generate altruistic behavior. At the core of the model is the notion that sensorimotor systems developed for movement generation can be used to process the visual stimuli generated by the actions of others, infer the goals behind them, and take the actions necessary to help achieve those goals, potentially leading to the emergence of altruistic behavior. We therefore argue that altruistic behavior is not necessarily a consequence of deliberate cognitive processing but may emerge through basic sensorimotor processes such as error minimization, that is, minimizing the difference between observed and expected outcomes. In the model, affordances also play a key role by constraining the possible set of actions that an observed actor might be engaged in, enabling fast and accurate intention inference. The model components are implemented on an upper-body humanoid robot. A set of experiments is conducted to validate the workings of the model components, such as affordance extraction and task execution. Significantly, to assess how human partners interact with the robot running our altruistic model, extensive experiments with naive subjects are conducted.
Our results indicate that the proposed computational model can explain emergent altruistic behavior in reference to its biological counterpart and, moreover, engage human partners to exploit this behavior when implemented on an anthropomorphic robot.

Article, Metadata only
Effect regulated projection of robot’s action space for production and prediction of manipulation primitives through learning progress and predictability based exploration (IEEE, 2021-06)
Bugur, S.; Öztop, Erhan; Nagai, Y.; Ugur, E.; Computer Science; ÖZTOP, Erhan
In this study, we propose an effective action parameter exploration mechanism that enables efficient discovery of robot actions through interaction with objects in a simulated table-top environment. For this, the robot organizes its action parameter space based on the effects generated in the environment and learns forward models for predicting the consequences of its actions. Following the intrinsic motivation approach, the robot samples action parameters from the regions that are expected to yield high learning progress (LP). In addition to LP-based action sampling, our method uses a novel parameter-space organization scheme to form regions that naturally correspond to qualitatively different action classes, which might also be called action primitives. The proposed method enabled the robot to discover a number of lateralized movement primitives and to acquire the capability of predicting the consequences of these primitives. Furthermore, our results suggest reasons behind the earlier development of the grasp action compared to the push action in infants. Finally, our findings show some parallels with data from infant development, where a correspondence between action production and prediction is observed.

Article, Metadata only
Exploration with intrinsic motivation using object–action–outcome latent space (IEEE, 2023-06)
Sener, M. İ.; Nagai, Y.; Öztop, Erhan; Uğur, E.; Computer Science; ÖZTOP, Erhan
One effective approach for equipping artificial agents with sensorimotor skills is to use self-exploration. Doing this efficiently is critical, as time and data collection are costly. In this study, we propose an exploration mechanism that blends action, object, and action-outcome representations into a latent space, where local regions are formed to host forward-model learning. The agent uses intrinsic motivation to select the forward model with the highest learning progress at a given exploration step. This parallels how infants learn: high learning progress indicates that the learning problem in the selected region is neither too easy nor too difficult. The proposed approach is validated with a simulated robot in a table-top environment. The simulation scene comprises a robot and various objects; the robot interacts with one of them at a time using a set of parameterized actions and learns the outcomes of these interactions. With the proposed approach, the robot organizes its learning curriculum as in existing intrinsic motivation approaches and outperforms them in learning speed. Moreover, the learning regime exhibits features that partially match infant development; in particular, the proposed system learns to predict the outcomes of different skills in a staged manner.

Article, Open Access
Imitation and mirror systems in robots through Deep Modality Blending Networks (Elsevier, 2022-02)
Seker, M. Y.; Ahmetoglu, A.; Nagai, Y.; Asada, M.; Öztop, Erhan; Ugur, E.; Computer Science; ÖZTOP, Erhan
Learning to interact with the environment not only empowers the agent with manipulation capability but also generates information that facilitates building action understanding and imitation capabilities.
This seems to be a strategy adopted by biological systems, in particular primates, as evidenced by the existence of mirror neurons, which appear to be involved in multi-modal action understanding. How to benefit from the interaction experience of robots to enable understanding of the actions and goals of other agents is still a challenging question. In this study, we propose a novel method, Deep Modality Blending Networks (DMBN), that creates a common latent space from the multi-modal experience of a robot by blending multi-modal signals with a stochastic weighting mechanism. We show for the first time that deep learning, when combined with a novel modality blending scheme, can facilitate action recognition and produce structures that sustain anatomical and effect-based imitation capabilities. Our proposed system, which is based on conditional neural processes, can be conditioned on any desired sensory/motor value at any time step and can generate a complete multi-modal trajectory consistent with the desired conditioning in one shot by querying the network for all sampled time points in parallel, avoiding the accumulation of prediction errors. Based on simulation experiments with an arm-gripper robot and an RGB camera, we showed that DMBN could make accurate predictions about any missing modality (camera or joint angles) given the available ones, outperforming recent multimodal variational autoencoder models in long-horizon, high-dimensional trajectory prediction. We further showed that, given desired images from different perspectives, i.e., images generated by observing other robots placed on different sides of the table, our system could generate image and joint-angle sequences that correspond to either anatomical or effect-based imitation behavior. To achieve this mirror-like behavior, our system does not perform pixel-based template matching but instead relies on the common latent space constructed using both joint and image modalities, as shown by additional experiments. Moreover, we showed that mirror learning in our system does not depend on visual experience alone and cannot be achieved without proprioceptive experience. Our experiments showed that, out of ten training scenarios with different initial configurations, the proposed DMBN model achieved mirror learning in all cases, whereas the model using only visual information failed in half of them. Overall, the proposed DMBN architecture not only serves as a computational model for sustaining mirror-neuron-like capabilities but also stands as a powerful machine learning architecture for high-dimensional multi-modal temporal data, with robust retrieval capabilities operating on partial information in one or multiple modalities.

Conference paper, Metadata only
Learning to grasp with parental scaffolding (IEEE, 2011)
Ugur, E.; Celikkanat, H.; Şahin, E.; Nagai, Y.; Öztop, Erhan; Computer Science; ÖZTOP, Erhan
Parental scaffolding is an important mechanism utilized by infants during their development. Infants, for example, pay stronger attention to the features of objects highlighted by parents and learn how to manipulate an object while being supported by parents. In this paper, a robot with the basic ability to reach for an object, close its fingers, and lift its hand lacks knowledge of which parts of the object afford grasping and in which hand orientation the object should be grasped. During reach-and-grasp attempts, the movement of the robot hand is modified by the human caregiver's physical interaction to enable successful grasping. The object regions that the robot fingers contact first are detected and stored as potential graspable object regions, along with the trajectory of the hand.
In the experiments, we showed that although the human caregiver did not directly indicate the graspable regions, the robot was able to find regions such as the handles of mugs after its action execution was partially guided by the human. This experience was later used to find graspable regions of previously unseen objects. In the end, the robot was able to grasp objects based on the position of the graspable part and the stored action execution trajectories.

Conference paper, Metadata only
Modeling the development of infant imitation using inverse reinforcement learning (IEEE, 2018-09)
Tekden, A. E.; Ugur, E.; Nagai, Y.; Öztop, Erhan; Computer Science; ÖZTOP, Erhan
Little is known about the computational mechanisms of how imitation skills develop along with infant sensorimotor learning. In robotics, there are several well-developed frameworks for imitation learning, also called learning by demonstration. Two paradigms dominate: Direct Learning (DL) and Inverse Reinforcement Learning (IRL). The former is a simple mechanism in which observed state-action pairs are associated to construct a copy of the demonstrator's action policy. In the latter, an optimality principle or reward structure is sought that would explain the observed behavior as the optimal solution governed by that principle or reward function. In this study, we explore the plausibility of some form of IRL mechanism in infants facilitating imitation learning and the understanding of others' behaviours. We propose that infants project the events taking place in the environment into their internal representations through a set of features that evolve during development. We implement this idea in a grid-world environment, which can be considered a simple model for reaching with obstacle avoidance. The observing infant has to imitate the demonstrator's reaching behavior through IRL by using various sets of features that correspond to different stages of development. Our simulation results indicate that the U-shaped performance change during imitation development observed in infants can be reproduced with the proposed model.

Article, Metadata only
Parental scaffolding as a bootstrapping mechanism for learning grasp affordances and imitation skills (Cambridge University Press, 2015-06)
Ugur, E.; Nagai, Y.; Celikkanat, H.; Öztop, Erhan; Computer Science; ÖZTOP, Erhan
Parental scaffolding is an important mechanism that speeds up infant sensorimotor development. Infants pay stronger attention to the features of objects highlighted by parents, and their manipulation skills develop earlier than they would in isolation, thanks to caregivers' support. Parents are known to make modifications in infant-directed actions, often called "motionese". The features that might be associated with motionese are amplification, repetition, and simplification in caregivers' movements, often accompanied by increased social signalling. In this paper, we extend our previously developed affordance learning framework to enable our hand-arm robot, equipped with a range camera, to benefit from parental scaffolding and motionese. We first present our results on how parental scaffolding can be used to guide robot learning and to modify the robot's crude action execution, speeding up the learning of complex skills. For this purpose, an interactive caregiver-infant scenario was realized with our robotic setup. This setup allowed the caregiver to modify the robot's ongoing reach-and-grasp movement via physical interaction, making the robot grasp the target object, which in turn could be used by the robot to learn the grasping skill. In addition, we also show how parental scaffolding can be used to speed up imitation learning.
We present the details of our work that takes the robot beyond simple goal-level imitation, making it a better imitator with the help of motionese.

Article, Open Access
Staged development of robot skills: behavior formation, affordance learning and imitation (IEEE, 2015-06)
Ugur, E.; Nagai, Y.; Sahin, E.; Öztop, Erhan; Computer Science; ÖZTOP, Erhan
Inspired by infant development, we propose a three-stage developmental framework for an anthropomorphic robot manipulator. In the first stage, the robot is initialized with a basic reach-and-enclose-on-contact movement capability and discovers a set of behavior primitives by exploring its movement parameter space. In the next stage, the robot exercises the discovered behaviors on different objects and learns the effects they cause, effectively building a library of affordances and associated predictors. Finally, in the third stage, the learned structures and predictors are used to bootstrap complex imitation and action learning with the help of a cooperative tutor. The main contribution of this paper is the realization of an integrated developmental system in which the structures emerging from the sensorimotor experience of an interacting real robot are used as the sole building blocks of the subsequent stages that generate increasingly complex cognitive capabilities. The proposed framework shares a number of features with infant sensorimotor development. Furthermore, the findings obtained from the self-exploration and motionese-guided human-robot interaction experiments allow us to reason about the underlying mechanisms of simple-to-complex sensorimotor skill progression in human infants.
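Several of the abstracts above (the 2021 effect-regulated exploration and 2023 latent-space exploration papers in particular) rest on the same core mechanism: learning-progress (LP) based intrinsic motivation, where the agent prefers regions of its action space whose forward-model error is dropping fastest. The following is a minimal illustrative sketch of that idea only, not the authors' implementation; the class and function names (`Region`, `select_region`), the window size, and the eps-greedy choice are all assumptions made for the example.

```python
import random
from collections import deque

class Region:
    """A region of action-parameter space with its own prediction-error history."""
    def __init__(self, low, high, window=10):
        self.low, self.high = low, high
        self.window = window
        self.errors = deque(maxlen=2 * window)  # keep the last 2*window errors

    def learning_progress(self):
        """LP = drop in mean forward-model error between the older and newer
        halves of the error window (0 until enough samples accumulate)."""
        if len(self.errors) < 2 * self.window:
            return 0.0
        errs = list(self.errors)
        older = sum(errs[:self.window]) / self.window
        newer = sum(errs[self.window:]) / self.window
        return older - newer

    def sample(self, rng):
        """Draw an action parameter uniformly from this region."""
        return rng.uniform(self.low, self.high)

def select_region(regions, rng, eps=0.2):
    """Pick the region with the highest LP, with eps-greedy random exploration."""
    if rng.random() < eps:
        return rng.choice(regions)
    return max(regions, key=lambda r: r.learning_progress())

# Toy demonstration: region A is learnable (its error decays with practice),
# region B stays at chance level, so LP-based selection should favor A.
rng = random.Random(0)
A = Region(0.0, 0.5)
B = Region(0.5, 1.0)
for t in range(40):
    A.errors.append(1.0 / (1 + t))  # shrinking error -> positive LP
    B.errors.append(0.5)            # flat error -> zero LP
assert A.learning_progress() > B.learning_progress()
assert select_region([A, B], rng, eps=0.0) is A
```

The key property, shared with the infant-development interpretation in the abstracts, is that a flat error curve yields zero LP whether the task is trivial or impossible, so the agent is drawn to regions that are "neither too easy nor too difficult".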