Browsing by Author "Kuhlen, A. K."
Forming robot trust in heterogeneous agents during a multimodal interactive game
Conference object (IEEE, 2022). Kırtay, M.; Öztop, E.; Kuhlen, A. K.; Asada, M.; Hafner, V. V.

This study presents a robot trust model based on cognitive load that uses multimodal cues in a learning setting to assess the trustworthiness of heterogeneous interaction partners. As a test bed, we designed an interactive task in which a small humanoid robot, Nao, performs a sequential audio-visual pattern recall task while minimizing its cognitive load by receiving help from its interaction partner: either a robot, Pepper, or a human. The partner displays one of three guiding strategies: reliable, unreliable, or random. The robot is equipped with two cognitive modules: a multimodal auto-associative memory and an internal reward module. The former represents the robot's multimodal cognitive processing and allows a 'cognitive load' or 'cost' to be assigned to the processing that takes place; the latter converts this processing cost into an internal reward signal that drives cost-based behavior learning. The robot asks its interaction partner for help when its own action would lead to a high cognitive load, then receives an action suggestion from the partner and follows it. After interactive experiments with each partner, the robot uses the cognitive load incurred during the interaction to assess each partner's trustworthiness, i.e., it associates high trustworthiness with low cognitive load. We then give the robot a free choice of interaction partner for the next task. Our results show that, overall, the robot selects partners with reliable guiding strategies; moreover, its ability to identify a trustworthy partner is unaffected by whether that partner is a human or a robot.

Multimodal reinforcement learning for partner specific adaptation in robot-multi-robot interaction
Conference object (IEEE, 2022). Kırtay, M.; Hafner, V. V.; Asada, M.; Kuhlen, A. K.; Öztop, E.

Successful and efficient teamwork requires knowledge of the individual team members' expertise. Such knowledge is typically acquired in social interaction and forms the basis for socially intelligent, partner-adapted behavior. This study aims to implement this ability in teams of multiple humanoid robots. To this end, a humanoid robot, Nao, interacted with three Pepper robots to perform a sequential audio-visual pattern recall task that required integrating multimodal information. Nao outsourced its decisions (i.e., action selections) to its robot partners, applying reinforcement learning to perform the task efficiently in terms of neural computational cost. During the interaction, Nao learned its partners' specific expertise, which allowed it to turn for guidance to the partner whose expertise corresponded to the current task state. Nao's cognitive processing included a multimodal auto-associative memory that determined the cost of perceptual processing (i.e., cognitive load) for the incoming audio-visual stimuli; this cost was in turn converted into a reward signal by an internal reward generation module. In this setting, the learner robot Nao aims to minimize cognitive load by turning to the partner whose expertise matches the given task state. Overall, the results indicate that the learner robot discovers its partners' expertise and exploits this information to execute its task with a low neural computational cost, i.e., cognitive load.
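The two abstracts above describe the same core loop: a multimodal auto-associative memory assigns a processing cost (cognitive load) to each step, an internal reward module converts that cost into a reward, and reinforcement learning determines which partner to ask for guidance. The papers do not include their implementation; the Python sketch below is a minimal illustration of such a cost-driven loop, assuming epsilon-greedy Q-learning over (task state, partner) pairs. All names, constants, and the load-to-reward mapping are assumptions for illustration, not the authors' code.

    import random

    N_STATES = 5                       # states of the sequential recall task (assumed)
    PARTNERS = ["pepper_1", "pepper_2", "pepper_3"]
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

    # Q[state][partner]: expected cumulative internal reward for asking that
    # partner for guidance in that task state.
    Q = {s: {p: 0.0 for p in PARTNERS} for s in range(N_STATES)}

    def internal_reward(cognitive_load):
        # Assumed mapping: low auto-associative processing cost -> high reward,
        # with the load normalized to [0, 1].
        return 1.0 - cognitive_load

    def choose_partner(state):
        # Epsilon-greedy selection of the partner to ask for guidance.
        if random.random() < EPSILON:
            return random.choice(PARTNERS)
        return max(Q[state], key=Q[state].get)

    def update(state, partner, cognitive_load, next_state):
        # Standard Q-learning update driven by the internal reward signal.
        r = internal_reward(cognitive_load)
        best_next = max(Q[next_state].values())
        Q[state][partner] += ALPHA * (r + GAMMA * best_next - Q[state][partner])

    # One illustrative interaction step: ask a partner, observe the cognitive
    # load its suggestion produced, and update that partner's value.
    s = 0
    p = choose_partner(s)
    update(s, p, cognitive_load=0.3, next_state=1)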
Trustworthiness assessment in multimodal human-robot interaction based on cognitive load
Conference object (IEEE, 2022). Kırtay, M.; Öztop, E.; Kuhlen, A. K.; Asada, M.; Hafner, V. V.

In this study, we extend our robot trust model to a multimodal setting in which the Nao robot leverages audio-visual data to perform a sequential multimodal pattern recall task while interacting with a human partner who follows one of three guiding strategies: reliable, unreliable, or random. The humanoid robot is equipped with a multimodal auto-associative memory module that processes audio-visual patterns to extract cognitive load (i.e., computational cost) and an internal reward module that performs cost-guided reinforcement learning. After the interactive experiments, the robot associates the low cognitive load (i.e., high cumulative reward) incurred during an interaction with the high trustworthiness of the partner's guiding strategy. At the end of the experiment, we give the robot a free choice to select a trustworthy instructor and show that it forms trust in the reliable partner. In a second setting of the same experiment, we endow the robot with an additional, simple theory-of-mind module to assess the instructor's efficacy in helping the robot perform the task. Our results show that the robot's performance improves when it factors the instructor assessment into its action decisions.
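The trust-formation step shared by the first and third papers can be sketched in the same spirit: the cognitive load accumulated under each partner's guidance is mapped to a trust score (low load, high trust), optionally weighted by a simple theory-of-mind estimate of the instructor's efficacy, and the partner with the highest score wins the free choice. The data, the trust mapping, and the efficacy measure below are hypothetical assumptions, not results from the papers.

    from statistics import mean

    # Hypothetical per-episode cognitive-load logs under each guiding strategy
    # (lower load means the guidance was more helpful).
    load_log = {
        "reliable":   [0.21, 0.18, 0.25],
        "unreliable": [0.74, 0.69, 0.71],
        "random":     [0.48, 0.55, 0.51],
    }

    # Assumed theory-of-mind efficacy estimate: the fraction of guidance
    # requests that actually lowered the robot's cognitive load.
    helpful_ratio = {"reliable": 14 / 15, "unreliable": 4 / 15, "random": 8 / 15}

    def trust(loads):
        # Assumed mapping: high trustworthiness <-> low average cognitive load.
        return 1.0 - mean(loads)

    # Trust score weighted by the theory-of-mind efficacy estimate, as in the
    # second experimental setting of the third paper.
    scores = {p: trust(l) * helpful_ratio[p] for p, l in load_log.items()}

    # Free choice: the robot selects the partner it trusts most.
    chosen = max(scores, key=scores.get)
    print(chosen)  # expected: 'reliable'

Multiplying trust by the efficacy estimate is only one plausible way to "factor in" the instructor assessment; the abstracts leave the exact combination rule unspecified.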