Publication:
ACNMP: skill transfer and task extrapolation through learning from demonstration and reinforcement learning via representation sharing

dc.contributor.author: Akbulut, M. T.
dc.contributor.author: Öztop, Erhan
dc.contributor.author: Xue, H.
dc.contributor.author: Tekden, A. E.
dc.contributor.author: Şeker, M. Y.
dc.contributor.author: Uğur, E.
dc.contributor.department: Computer Science
dc.contributor.ozuauthor: ÖZTOP, Erhan
dc.date.accessioned: 2024-03-06T04:58:52Z
dc.date.available: 2024-03-06T04:58:52Z
dc.date.issued: 2020
dc.description.abstract: To equip robots with dexterous skills, an effective approach is to first transfer the desired skill via Learning from Demonstration (LfD) and then let the robot improve it through self-exploration via Reinforcement Learning (RL). In this paper, we propose a novel LfD+RL framework, Adaptive Conditional Neural Movement Primitives (ACNMP), that allows efficient policy improvement in novel environments and effective skill transfer between different agents. This is achieved by exploiting the latent representation learned by the underlying Conditional Neural Process (CNP) model, and by training the model simultaneously with supervised learning (SL), to acquire the demonstrated trajectories, and with RL, to discover new trajectories. Through simulation experiments, we show that (i) ACNMP enables the system to extrapolate to situations where pure LfD fails; (ii) simultaneous training through SL and RL preserves the shape of the demonstrations while adapting to novel situations, owing to the representations shared by both learners; (iii) ACNMP achieves order-of-magnitude better sample efficiency than existing RL approaches when extrapolating reaching tasks; (iv) ACNMPs can implement skill transfer between robots with different morphologies, with competitive learning speeds and, importantly, fewer assumptions than state-of-the-art approaches. Finally, we show the real-world suitability of ACNMPs through real robot experiments involving obstacle avoidance, pick-and-place, and pouring actions.
dc.description.sponsorship: Horizon 2020 Framework Programme ; Core Research for Evolutional Science and Technology ; Osaka University ; TÜBİTAK
dc.identifier.endpage: 1907
dc.identifier.issn: 2640-3498
dc.identifier.scopus: 2-s2.0-85175852693
dc.identifier.startpage: 1896
dc.identifier.uri: http://hdl.handle.net/10679/9264
dc.identifier.volume: 155
dc.language.iso: eng
dc.publicationstatus: Published
dc.publisher: ML Research Press
dc.relation.ispartof: Proceedings of Machine Learning Research
dc.relation.project: info:eu-repo/grantAgreement/EC/H2020/731761
dc.relation.publicationcategory: International
dc.rights: openAccess
dc.subject.keywords: Deep learning
dc.subject.keywords: Learning from demonstration
dc.subject.keywords: Reinforcement learning
dc.subject.keywords: Representation learning
dc.title: ACNMP: skill transfer and task extrapolation through learning from demonstration and reinforcement learning via representation sharing
dc.type: conferenceObject
dc.type.subtype: Conference paper
dspace.entity.type: Publication
relation.isOrgUnitOfPublication: 85662e71-2a61-492a-b407-df4d38ab90d7
relation.isOrgUnitOfPublication.latestForDiscovery: 85662e71-2a61-492a-b407-df4d38ab90d7

Files

Original bundle

Name:
ACNMP skill transfer and task extrapolation through learning from demonstration and reinforcement learning via representation sharing.pdf
Size:
3.83 MB
Format:
Adobe Portable Document Format
Description:

License bundle

Name:
license.txt
Size:
1.45 KB
Format:
Item-specific license agreed to upon submission
Description:
