Authors: Kober, J.; Wilhelm, A.; Öztop, Erhan; Peters, J.
Date accessioned: 2014-05-31
Date available: 2014-05-31
Date issued: 2012-11
ISSN: 1573-7527
Handle: http://hdl.handle.net/10679/364
DOI: https://doi.org/10.1007/s10514-012-9290-3
Abstract: Humans manage to adapt learned movements very quickly to new situations by generalizing learned behaviors from similar situations. In contrast, robots currently often need to re-learn the complete movement. In this paper, we propose a method that learns to generalize parametrized motor plans by adapting a small set of global parameters, called meta-parameters. We employ reinforcement learning to learn the meta-parameters required to deal with the current situation, described by states. We introduce an appropriate reinforcement learning algorithm based on a kernelized version of the reward-weighted regression. To show its feasibility, we evaluate this algorithm on a toy example and compare it to several previous approaches. Subsequently, we apply the approach to three robot tasks, i.e., the generalization of throwing movements in darts, of hitting movements in table tennis, and of throwing balls, where the tasks are learned on several different real physical robots, i.e., a Barrett WAM, a BioRob, the JST-ICORP/SARCOS CBi, and a Kuka KR 6.
Language: eng
Rights: info:eu-repo/semantics/openAccess
Title: Reinforcement learning to adjust parametrized motor primitives to new situations
Type: Article
Volume: 33
Issue: 4
Pages: 361-379
WOS ID: 000307766800002
Keywords: Skill learning; Motor primitives; Reinforcement learning; Meta-parameters; Policy learning
Scopus ID: 2-s2.0-84868358933
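The abstract refers to a reinforcement learning algorithm based on a kernelized version of reward-weighted regression, which predicts meta-parameters for a new state from previously sampled (state, meta-parameter, reward) triples. Below is a minimal, hedged sketch of such a cost-regularized kernel regression: samples with higher reward receive a smaller regularization penalty and therefore influence the prediction more. The function names, the RBF kernel choice, and the cost-equals-inverse-reward convention are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def rbf_kernel(A, B, bw=1.0):
    # Squared-exponential kernel between the state rows of A and B.
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d / (2.0 * bw ** 2))

def crkr_predict(states, metas, rewards, query, lam=0.5, bw=1.0):
    """Sketch of a cost-regularized kernel regression (hypothetical API).

    states:  (n, d) array of observed situations
    metas:   (n, m) array of meta-parameters used in those situations
    rewards: (n,)   array of obtained rewards (assumed > 0)
    query:   (d,)   new state for which meta-parameters are predicted
    """
    K = rbf_kernel(states, states, bw)
    # Cost matrix: low reward -> high cost -> stronger regularization,
    # so poorly rewarded samples are down-weighted in the fit.
    C = np.diag(1.0 / np.asarray(rewards, dtype=float))
    k = rbf_kernel(query[None, :], states, bw)  # shape (1, n)
    return (k @ np.linalg.solve(K + lam * C, metas)).ravel()
```

With uniform rewards and small regularization this reduces to ordinary kernel ridge regression; the reward-dependent diagonal is what lets successful rollouts dominate the generalization.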