Show simple item record

dc.contributor.author: Kırtay, M.
dc.contributor.author: Öztop, Erhan
dc.contributor.author: Asada, M.
dc.contributor.author: Hafner, V. V.
dc.date.accessioned: 2023-05-19T13:41:21Z
dc.date.available: 2023-05-19T13:41:21Z
dc.date.issued: 2021
dc.identifier.isbn: 978-172816242-3
dc.identifier.uri: http://hdl.handle.net/10679/8281
dc.identifier.uri: https://ieeexplore.ieee.org/document/9515645
dc.description.abstract: Trust is an essential component of human-human and human-robot interactions. The factors that play potent roles in these interactions have been an attractive topic in robotics. However, studies that aim to develop a computational model of robot trust in interaction partners remain relatively limited. In this study, we extend our emergent emotion model to propose that the robot's trust in an interaction partner (i.e., the trustee) can be established through the effect of the interactions on the computational energy budget of the robot (i.e., the trustor). Concretely, we show how high-level emotions (e.g., wellbeing) of an agent can be modeled by the computational cost of perceptual processing (e.g., visual stimulus processing for visual recalling) in a decision-making framework. To realize this approach, we endow the Pepper humanoid robot with two modules: an auto-associative memory that extracts the computational energy required to perform visual recalling, and an internal reward mechanism guiding model-free reinforcement learning to yield computational-energy-cost-aware behaviors. With this setup, the robot interacts with online instructors employing different guiding strategies, namely reliable, less reliable, and random. Through interaction with the instructors, the robot associates cumulative reward values, based on the cost of perceptual processing, with each instructor to evaluate them and determine which one should be trusted. Overall, the results indicate that the robot can differentiate the guiding strategies of the instructors. Additionally, given a free choice, the robot trusts the reliable instructor, who increases the total reward and thereby reduces the computational energy (cognitive load) required to perform the next task.
dc.description.sponsorship: Deutsche Forschungsgemeinschaft
dc.language.iso: eng
dc.publisher: IEEE
dc.relation.ispartof: 2021 IEEE International Conference on Development and Learning (ICDL)
dc.rights: restrictedAccess
dc.title: Modeling robot trust based on emergent emotion in an interactive task
dc.type: Conference paper
dc.publicationstatus: Published
dc.contributor.department: Özyeğin University
dc.contributor.authorID: (ORCID 0000-0002-3051-6038 & YÖK ID 45227) Öztop, Erhan
dc.contributor.ozuauthor: Öztop, Erhan
dc.identifier.doi: 10.1109/ICDL49984.2021.9515645
dc.subject.keywords: Emotions
dc.subject.keywords: HRI
dc.subject.keywords: Internal reward
dc.subject.keywords: Trust
dc.subject.keywords: Visual recalling
dc.relation.publicationcategory: Conference Paper - International - Institutional Academic Staff


Files in this item

There are no files associated with this item.

This item appears in the following Collection(s)
