Browsing by Author "Hafner, V. V."
Now showing 1 - 5 of 5
Forming robot trust in heterogeneous agents during a multimodal interactive game
Conference Object | Metadata only | IEEE, 2022
Kırtay, M.; Öztop, Erhan; Kuhlen, A. K.; Asada, M.; Hafner, V. V.; Computer Science; ÖZTOP, Erhan

This study presents a robot trust model based on cognitive load that uses multimodal cues in a learning setting to assess the trustworthiness of heterogeneous interaction partners. As a test-bed, we designed an interactive task in which a small humanoid robot, Nao, performs a sequential audio-visual pattern recall task while minimizing its cognitive load by receiving help from its interaction partner, either a robot, Pepper, or a human. The partner displayed one of three guiding strategies: reliable, unreliable, or random. The robot is equipped with two cognitive modules: a multimodal auto-associative memory and an internal reward module. The former represents the multimodal cognitive processing of the robot and allows a 'cognitive load', or 'cost', to be assigned to the processing that takes place, while the latter converts this cost into an internal reward signal that drives cost-based behavior learning. Here, the robot asks its interaction partner for help when its action leads to a high cognitive load; it then receives an action suggestion from the partner and follows it. After performing interactive experiments with each partner, the robot uses the cognitive load incurred during the interaction to assess the trustworthiness of the partners, i.e., it associates high trustworthiness with low cognitive load. We then give the robot a free choice to select the trustworthy interaction partner for the next task. Our results show that, overall, the robot selects partners with reliable guiding strategies. Moreover, the robot's ability to identify a trustworthy partner was unaffected by whether the partner was a human or a robot.

Interplay between neural computational energy and multimodal processing in robot-robot interaction
Conference Object | Metadata only | IEEE, 2023
Kırtay, M.; Hafner, V. V.; Asada, M.; Öztop, Erhan; Computer Science; ÖZTOP, Erhan

Multimodal learning is an active research area that is gaining importance in human-robot interaction. Despite the obvious benefit of leveraging multiple sensors for perceiving the world, its neural computational cost has not been addressed in robotics, especially in Robot-Robot Interaction (RRI). This study addresses the role of computational cost in multimodal processing by considering robot-robot interaction in a sequential multimodal memory recall task. In this setting, the learner robot (Nao) receives auditory-only, visual-only, or audio-visual information from an instructor robot (Pepper) and the environment regarding previously learned memory items. The goal of the learner robot is to perform the interactive task with as low a neural computational cost as possible. The learner robot has two cognitive modules: a multimodal auto-associative network that stands for the perceptual-cognitive processing of the robot, and an internal reward mechanism that monitors the change in neural energy incurred by processing the attended stimuli over two consecutive steps. The computed reward is used to build an action policy for minimizing neural energy consumption over the sequential memory recall task. The experimental results show that having access to both auditory and visual information is beneficial not only for better memory recall but also for minimizing the cost of neural computation.

Modeling robot trust based on emergent emotion in an interactive task
Conference Object | Metadata only | IEEE, 2021
Kırtay, M.; Öztop, Erhan; Asada, M.; Hafner, V. V.; Computer Science; ÖZTOP, Erhan

Trust is an essential component in human-human and human-robot interactions. The factors that play potent roles in these interactions have been an active topic in robotics. However, studies that aim at developing a computational model of robot trust in interaction partners remain relatively limited. In this study, we extend our emergent emotion model to propose that the robot's trust in the interaction partner (i.e., trustee) can be established by the effect of the interactions on the computational energy budget of the robot (i.e., trustor). Concretely, we show how high-level emotions (e.g., wellbeing) of an agent can be modeled by the computational cost of perceptual processing (e.g., visual stimulus processing for visual recall) in a decision-making framework. To realize this approach, we endow the Pepper humanoid robot with two modules: an auto-associative memory that extracts the computational energy required to perform a visual recall, and an internal reward mechanism guiding model-free reinforcement learning to yield computational-energy-cost-aware behaviors. With this setup, the robot interacts with online instructors with different guiding strategies, namely reliable, less reliable, and random. Through interaction with the instructors, the robot accumulates reward values based on the cost of perceptual processing to evaluate the instructors and determine which one should be trusted. Overall, the results indicate that the robot can differentiate the guiding strategies of the instructors. Additionally, given a free choice, the robot trusts the reliable instructor, who increases the total reward - and therefore reduces the required computational energy (cognitive load) - needed to perform the next task.

Multimodal reinforcement learning for partner specific adaptation in robot-multi-robot interaction
Conference Object | Metadata only | IEEE, 2022
Kırtay, M.; Hafner, V. V.; Asada, M.; Kuhlen, A. K.; Öztop, Erhan; Computer Science; ÖZTOP, Erhan

Successful and efficient teamwork requires knowledge of the individual team members' expertise. Such knowledge is typically acquired in social interaction and forms the basis for socially intelligent, partner-adapted behavior. This study aims to implement this ability in teams of multiple humanoid robots. To this end, a humanoid robot, Nao, interacted with three Pepper robots to perform a sequential audio-visual pattern recall task that required integrating multimodal information. Nao outsourced its decisions (i.e., action selections) to its robot partners to perform the task efficiently in terms of neural computational cost by applying reinforcement learning. During the interaction, Nao learned its partners' specific expertise, which allowed it to turn for guidance to the partner whose expertise corresponded to the current task state. Nao's cognitive processing included a multimodal auto-associative memory that allowed the cost of perceptual processing (i.e., cognitive load) to be determined when processing audio-visual stimuli. In turn, this processing cost is converted into a reward signal by an internal reward generation module. In this setting, the learner robot Nao aims to minimize cognitive load by turning to the partner whose expertise corresponds to a given task state. Overall, the results indicate that the learner robot discovers the expertise of its partners and exploits this information to execute its task with low neural computational cost, or cognitive load.

Trustworthiness assessment in multimodal human-robot interaction based on cognitive load
Conference Object | Metadata only | IEEE, 2022
Kırtay, M.; Öztop, Erhan; Kuhlen, A. K.; Asada, M.; Hafner, V. V.; Computer Science; ÖZTOP, Erhan

In this study, we extend our robot trust model to a multimodal setting in which the Nao robot leverages audio-visual data to perform a sequential multimodal pattern recall task while interacting with a human partner who follows one of three guiding strategies: reliable, unreliable, or random. Here, the humanoid robot is equipped with a multimodal auto-associative memory module that processes audio-visual patterns to extract cognitive load (i.e., computational cost), and an internal reward module to perform cost-guided reinforcement learning. After interactive experiments, the robot associates the low cognitive load (i.e., high cumulative reward) incurred during the interaction with high trustworthiness of the partner's guiding strategy. At the end of the experiment, we give the robot a free choice to select a trustworthy instructor, and show that the robot forms trust in the reliable partner. In a second setting of the same experiment, we endow the robot with an additional simple theory-of-mind module to assess the efficacy of the instructor in helping the robot perform the task. Our results show that the robot's performance improves when it factors the instructor assessment into its action decisions.