Authors: Ahmetoglu, A.; Uğur, E.; Asada, M.; Öztop, Erhan
Date available: 2023-08-17
ISSN: 0169-1864
Handle: http://hdl.handle.net/10679/8715
DOI: https://doi.org/10.1080/01691864.2021.2019613

Abstract: Abstraction is an important aspect of intelligence which enables agents to construct robust representations for effective and efficient decision making. Although deep neural networks are proven to be effective learning systems due to their ability to form increasingly complex abstractions at successive layers, these abstractions are mostly distributed over many neurons, making the reuse of a learned skill costly and obscuring the insights that can be obtained from the emergent representations. To avoid designer bias and unsparing resource use, we propose to exploit neural response dynamics to form compact representations for use in skill transfer. For this, we consider two competing methods based on (1) the maximum information compression principle and (2) the notion that abstract events tend to generate slowly changing signals, and apply them to the neural signals generated during task execution. Concretely, in our simulation experiments, we apply either principal component analysis (PCA) or slow feature analysis (SFA) to the signals collected from the last hidden layer of a deep neural network while it performs a source task, and use these features for skill transfer in a new (target) task. We then compare the generalization and learning performance of these alternatives with the baselines of skill transfer with the full layer output and the no-transfer setting. Our experimental results on a simulated tabletop robot arm navigation task show that units created with SFA are the most successful for skill transfer. Both SFA and PCA require fewer resources than the usual skill transfer in which full layer outputs are used in the new task learning, and many of the units formed show a localized response reflecting end-effector-obstacle-goal relations. Finally, the SFA units with the lowest eigenvalues resemble symbolic representations that correlate highly with high-level features such as joint angles and end-effector position, which may be thought of as precursors for fully symbolic systems.

Language: eng
Rights: info:eu-repo/semantics/openAccess
Title: High-level features for resource economy and fast learning in skill transfer
Type: Article
Volume: 36; Issue: 5-6; Pages: 291-303
WOS: 000742298200001
DOI: 10.1080/01691864.2021.2019613
Keywords: Reinforcement learning; Symbol emergence; Transfer learning
Scopus: 2-s2.0-85122870929
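
The abstract describes extracting compact features from last-hidden-layer activations with either PCA (compression-based) or SFA (slowness-based) and reusing them in a target task. The following is a minimal sketch of that feature-extraction step, not the authors' code: the linear SFA routine, the array shapes, and the file name `source_task_hidden_activations.npy` are illustrative assumptions.

```python
# Sketch: compute PCA- and SFA-based features from activations recorded at the
# last hidden layer of a source-task policy network (assumed shape: T x D,
# one row per control step, time-ordered).
import numpy as np
from sklearn.decomposition import PCA


def linear_sfa(signals, n_components):
    """Minimal linear slow feature analysis.

    Whiten the signals, then keep the directions whose temporal derivative
    has the smallest variance (slowest features first).
    """
    X = signals - signals.mean(axis=0)
    pca = PCA(whiten=True)
    Z = pca.fit_transform(X)                      # whitened signals
    dZ = np.diff(Z, axis=0)                       # finite-difference derivative
    cov_dot = dZ.T @ dZ / len(dZ)                 # covariance of the derivative
    eigvals, eigvecs = np.linalg.eigh(cov_dot)    # eigenvalues in ascending order
    W = eigvecs[:, :n_components]                 # slowest directions first
    return Z @ W, eigvals[:n_components]


# Hypothetical usage: activations recorded while the source-task network runs.
activations = np.load("source_task_hidden_activations.npy")  # assumed (T, D) array

pca_features = PCA(n_components=8).fit_transform(activations)    # compression-based
sfa_features, slowness = linear_sfa(activations, n_components=8)  # slowness-based

# Either feature set would then be provided as additional inputs to the
# target-task learner, in place of the full layer output.
```

In this sketch the SFA eigenvalues measure slowness, so the components with the lowest eigenvalues correspond to the slowly changing, more abstract signals discussed in the abstract.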