Publication: Reinforcement learning to adjust parametrized motor primitives to new situations
dc.contributor.author | Kober, J. | |
dc.contributor.author | Wilhelm, A. | |
dc.contributor.author | Öztop, Erhan | |
dc.contributor.author | Peters, J. | |
dc.contributor.department | Computer Science | |
dc.contributor.ozuauthor | ÖZTOP, Erhan | |
dc.date.accessioned | 2014-05-31T12:31:58Z | |
dc.date.available | 2014-05-31T12:31:58Z | |
dc.date.issued | 2012-11 | |
dc.description.abstract | Humans adapt learned movements to new situations very quickly by generalizing behaviors acquired in similar situations. In contrast, robots currently often need to re-learn the complete movement. In this paper, we propose a method that learns to generalize parametrized motor plans by adapting a small set of global parameters, called meta-parameters. We employ reinforcement learning to learn the meta-parameters required to deal with the current situation, which is described by states. We introduce an appropriate reinforcement learning algorithm based on a kernelized version of reward-weighted regression. To show its feasibility, we evaluate this algorithm on a toy example and compare it to several previous approaches. Subsequently, we apply the approach to three robot tasks, i.e., the generalization of throwing movements in darts, of hitting movements in table tennis, and of throwing balls, where the tasks are learned on several different real physical robots, i.e., a Barrett WAM, a BioRob, the JST-ICORP/SARCOS CBi and a Kuka KR 6. | en_US |
dc.description.sponsorship | European Community | en_US |
dc.identifier.doi | 10.1007/s10514-012-9290-3 | |
dc.identifier.endpage | 379 | |
dc.identifier.issn | 1573-7527 | |
dc.identifier.issue | 4 | |
dc.identifier.scopus | 2-s2.0-84868358933 | |
dc.identifier.startpage | 361 | |
dc.identifier.uri | http://hdl.handle.net/10679/364 | |
dc.identifier.uri | https://doi.org/10.1007/s10514-012-9290-3 | |
dc.identifier.volume | 33 | |
dc.identifier.wos | 000307766800002 | |
dc.language.iso | eng | en_US |
dc.peerreviewed | yes | en_US |
dc.publicationstatus | published | en_US |
dc.publisher | Springer Science+Business Media | en_US |
dc.relation | info:eu-repo/grantAgreement/EC/FP7/270327 | en_US |
dc.relation.ispartof | Autonomous Robots | |
dc.relation.publicationcategory | International Refereed Journal | |
dc.rights | info:eu-repo/semantics/openAccess | |
dc.subject.keywords | Skill learning | en_US |
dc.subject.keywords | Motor primitives | en_US |
dc.subject.keywords | Reinforcement learning | en_US |
dc.subject.keywords | Meta-parameters | en_US |
dc.subject.keywords | Policy learning | en_US |
dc.title | Reinforcement learning to adjust parametrized motor primitives to new situations | en_US |
dc.type | Article | en_US |
dspace.entity.type | Publication | |
relation.isOrgUnitOfPublication | 85662e71-2a61-492a-b407-df4d38ab90d7 | |
relation.isOrgUnitOfPublication.latestForDiscovery | 85662e71-2a61-492a-b407-df4d38ab90d7 |
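The abstract above refers to a kernelized form of reward-weighted regression that maps a situation (state) to the meta-parameters of a motor primitive. As a rough, hypothetical sketch only (not the authors' implementation; the Gaussian kernel, the ridge term `lam`, and the toy data are assumptions made for illustration), such a reward-weighted kernel regressor could look like:

```python
import numpy as np

def gaussian_kernel(X, Y, bandwidth=1.0):
    # Pairwise squared Euclidean distances between rows of X and Y
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def fit(states, meta_params, rewards, bandwidth=1.0, lam=1e-2):
    """Reward-weighted kernel regression from situation states to meta-parameters.

    states:      (N, d_s) array of situation descriptors
    meta_params: (N, d_m) array of meta-parameters tried in those situations
    rewards:     (N,)     positive rewards; higher reward -> sample trusted more
    """
    K = gaussian_kernel(states, states, bandwidth)
    # Treat inverse reward as a per-sample cost on the regularizer, so
    # high-reward samples are fit more tightly than low-reward ones.
    C = np.diag(1.0 / np.asarray(rewards, dtype=float))
    alpha = np.linalg.solve(K + lam * C, meta_params)
    return {"states": states, "alpha": alpha, "bandwidth": bandwidth}

def predict(model, query_states):
    k = gaussian_kernel(query_states, model["states"], model["bandwidth"])
    return k @ model["alpha"]

# Toy usage: learn a meta-parameter (e.g. a release velocity) as a function of
# a 1-D state (e.g. target distance) from noisy, reward-weighted samples.
rng = np.random.default_rng(0)
S = rng.uniform(0.0, 2.0, size=(30, 1))
theta = 3.0 * S + rng.normal(0.0, 0.2, size=(30, 1))
r = np.exp(-np.abs(theta - 3.0 * S)).ravel()   # reward: closer to ideal -> higher
model = fit(S, theta, r, bandwidth=0.3)
print(predict(model, np.array([[1.0]])))       # roughly 3.0
```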
Files
Original bundle
- Name: Reinforcement learning to adjust parametrized motor primitives to new situations.pdf
- Size: 2.64 MB
- Format: Adobe Portable Document Format
- Description: Reinforcement Learning to Adjust Parametrized Motor Primitives to New Situations
License bundle
- Name: license.txt
- Size: 1.71 KB
- Description: Item-specific license agreed to upon submission