Publication: Modeling robot trust based on emergent emotion in an interactive task
dc.contributor.author | Kırtay, M. | |
dc.contributor.author | Öztop, Erhan | |
dc.contributor.author | Asada, M. | |
dc.contributor.author | Hafner, V. V. | |
dc.contributor.department | Computer Science | |
dc.contributor.ozuauthor | ÖZTOP, Erhan | |
dc.date.accessioned | 2023-05-19T13:41:21Z | |
dc.date.available | 2023-05-19T13:41:21Z | |
dc.date.issued | 2021 | |
dc.description.abstract | Trust is an essential component in human-human and human-robot interactions. The factors that play potent roles in these interactions have attracted considerable attention in robotics. However, studies that aim at developing a computational model of robot trust in interaction partners remain relatively limited. In this study, we extend our emergent emotion model to propose that the robot's trust in the interaction partner (i.e., trustee) can be established by the effect of the interactions on the computational energy budget of the robot (i.e., trustor). Concretely, we show how high-level emotions (e.g., wellbeing) of an agent can be modeled by the computational cost of perceptual processing (e.g., visual stimulus processing for visual recalling) in a decision-making framework. To realize this approach, we endow the Pepper humanoid robot with two modules: an auto-associative memory that extracts the computational energy required to perform visual recalling, and an internal reward mechanism guiding model-free reinforcement learning to yield computational energy cost-aware behaviors. With this setup, the robot interacts with online instructors following different guiding strategies, namely reliable, less reliable, and random. Through interaction with the instructors, the robot associates cumulative reward values, based on the cost of perceptual processing, with each instructor to evaluate them and determine which one should be trusted. Overall, the results indicate that the robot can differentiate the guiding strategies of the instructors. Additionally, in the case of free choice, the robot trusts the reliable instructor, which increases the total reward (and therefore reduces the computational energy, i.e., cognitive load, required to perform the next task). | en_US |
dc.description.sponsorship | Deutsche Forschungsgemeinschaft | |
dc.identifier.doi | 10.1109/ICDL49984.2021.9515645 | en_US |
dc.identifier.isbn | 978-172816242-3 | |
dc.identifier.uri | http://hdl.handle.net/10679/8281 | |
dc.identifier.uri | https://doi.org/10.1109/ICDL49984.2021.9515645 | |
dc.language.iso | eng | en_US |
dc.publicationstatus | Published | en_US |
dc.publisher | IEEE | en_US |
dc.relation.ispartof | 2021 IEEE International Conference on Development and Learning (ICDL) | |
dc.relation.publicationcategory | International | |
dc.rights | restrictedAccess | |
dc.subject.keywords | Emotions | en_US |
dc.subject.keywords | HRI | en_US |
dc.subject.keywords | Internal reward | en_US |
dc.subject.keywords | Trust | en_US |
dc.subject.keywords | Visual recalling | en_US |
dc.title | Modeling robot trust based on emergent emotion in an interactive task | en_US |
dc.type | conferenceObject | en_US |
dspace.entity.type | Publication | |
relation.isOrgUnitOfPublication | 85662e71-2a61-492a-b407-df4d38ab90d7 | |
relation.isOrgUnitOfPublication.latestForDiscovery | 85662e71-2a61-492a-b407-df4d38ab90d7 |
Files
License bundle
- Name:
- license.txt
- Size:
- 1.45 KB
- Format:
- Item-specific license agreed upon to submission