Computer Science
Permanent URI for this collection: https://hdl.handle.net/10679/43
Browsing by Author "Ahmetoğlu, A."
Now showing 1 - 2 of 2
Conference Object · Publication · Metadata only
Deep multi-object symbol learning with self-attention based predictors (IEEE, 2023)
Ahmetoğlu, A.; Öztop, Erhan; Uğur, E.; Computer Science

This paper proposes an architecture that can learn symbolic representations from the continuous sensorimotor experience of a robot interacting with a varying number of objects. Unlike previous works, this work aims to remove constraints on the learned symbols, such as a fixed number of interacted objects or pre-defined symbolic structures. The proposed architecture can learn symbols for single objects and for relations between them in a unified manner. The architecture is an encoder-decoder network with a binary activation layer followed by self-attention layers. Experiments are conducted in a robotic manipulation setup with a varying number of objects. The results show that the robot successfully encodes the interaction dynamics between a varying number of objects using the discovered symbols. We also show that the discovered symbols can be used for planning to reach symbolic goal states by training a higher-level neural network.

Conference Object · Publication · Metadata only
Hierarchical mixtures of generators for adversarial learning (IEEE, 2019)
Ahmetoğlu, A.; Alpaydın, Ahmet İbrahim Ethem; Computer Science

Generative adversarial networks (GANs) are deep neural networks that allow us to sample from an arbitrary probability distribution without explicitly estimating the distribution. A generator takes a latent vector as input and transforms it into a valid sample from the distribution, while a discriminator is trained to distinguish such fake samples from true samples of the distribution; at the same time, the generator is trained to produce fakes that the discriminator cannot tell apart from the true samples. Instead of learning a single global generator, a recent line of work trains multiple generators, each responsible for one part of the distribution. In this work, we review such approaches and propose the hierarchical mixture of generators, inspired by the hierarchical mixture of experts model, which learns a tree structure implementing a hierarchical clustering, with soft splits in the decision nodes and local generators in the leaves. Since the generators are combined softly, the whole model is continuous and can be trained with gradient-based optimization, just like the original GAN model. Our experiments on five image data sets, namely MNIST, FashionMNIST, UTZap50K, Oxford Flowers, and CelebA, show that the proposed model generates samples of high quality and diversity in terms of popular GAN evaluation metrics. The learned hierarchical structure also supports knowledge extraction.
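To make the architecture in the 2023 abstract above more concrete, the following is a minimal, hypothetical PyTorch sketch of an encoder-decoder with a binary activation bottleneck followed by self-attention layers. It is not the authors' implementation: the layer sizes, the straight-through binarization trick, and the effect-prediction decoder head are all illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of an encoder-decoder with a binary
# bottleneck followed by self-attention, assuming per-object feature vectors
# as input. All dimensions below are arbitrary choices for illustration.
import torch
import torch.nn as nn


class StraightThroughBinary(nn.Module):
    """Binarize activations in the forward pass; use the sigmoid's gradient
    in the backward pass (straight-through estimator, an assumed choice)."""
    def forward(self, x):
        soft = torch.sigmoid(x)
        hard = (x > 0).float()
        return hard + soft - soft.detach()


class SymbolEncoderDecoder(nn.Module):
    def __init__(self, obj_dim=8, sym_dim=16, n_heads=4, effect_dim=8):
        super().__init__()
        # Per-object encoder producing discrete (binary) symbols.
        self.encoder = nn.Sequential(
            nn.Linear(obj_dim, 64), nn.ReLU(),
            nn.Linear(64, sym_dim), StraightThroughBinary(),
        )
        # Self-attention lets the symbols of different objects interact,
        # so relations are not tied to a fixed object count.
        self.attn = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=sym_dim, nhead=n_heads,
                                       batch_first=True),
            num_layers=2,
        )
        # Decoder predicts the effect of the interaction per object.
        self.decoder = nn.Linear(sym_dim, effect_dim)

    def forward(self, objects):           # objects: (batch, n_objects, obj_dim)
        symbols = self.encoder(objects)   # binary code per object
        context = self.attn(symbols)      # relational mixing across objects
        return self.decoder(context), symbols


model = SymbolEncoderDecoder()
effects, symbols = model(torch.randn(2, 5, 8))  # 5 objects; any count works
print(effects.shape, symbols.shape)
```

Because the self-attention layers operate over a set of per-object codes, the same weights accept any number of objects, which is what frees the learned symbols from a fixed object count.

Similarly, below is a minimal sketch in the spirit of the 2019 abstract's hierarchical mixture of generators: soft sigmoid splits at the internal nodes of a binary tree route each latent vector toward local generators at the leaves, and the output is the convex combination of the leaf samples weighted by the soft path probabilities. Again, this is an illustration under assumptions (tree depth, gating on the latent vector, layer sizes), not the paper's code.

```python
# Minimal sketch (not the paper's implementation) of a hierarchical mixture
# of generators: a complete binary tree with soft splits in the decision
# nodes and local generators in the leaves, combined differentiably.
import torch
import torch.nn as nn


class HierarchicalMixtureGenerator(nn.Module):
    def __init__(self, latent_dim=32, out_dim=784, depth=2):
        super().__init__()
        n_leaves = 2 ** depth
        # One sigmoid gate per internal node (heap-style node indexing).
        self.gates = nn.ModuleList(
            [nn.Linear(latent_dim, 1) for _ in range(n_leaves - 1)]
        )
        # One small local generator per leaf.
        self.leaves = nn.ModuleList([
            nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                          nn.Linear(128, out_dim), nn.Tanh())
            for _ in range(n_leaves)
        ])
        self.depth = depth

    def leaf_probs(self, z):
        # Probability of reaching each leaf is the product of the soft
        # split decisions along its root-to-leaf path.
        probs = torch.ones(z.size(0), 1, device=z.device)
        for d in range(self.depth):
            level = []
            for i in range(2 ** d):
                g = torch.sigmoid(self.gates[2 ** d - 1 + i](z))
                level += [probs[:, i:i + 1] * g, probs[:, i:i + 1] * (1 - g)]
            probs = torch.cat(level, dim=1)
        return probs                       # (batch, n_leaves), rows sum to 1

    def forward(self, z):
        w = self.leaf_probs(z)             # soft responsibilities per leaf
        samples = torch.stack([leaf(z) for leaf in self.leaves], dim=1)
        return (w.unsqueeze(-1) * samples).sum(dim=1)


gen = HierarchicalMixtureGenerator()
x = gen(torch.randn(4, 32))                # 4 latent vectors -> 4 samples
print(x.shape)                             # torch.Size([4, 784])
```

Since the leaf weights come from differentiable sigmoid gates, gradients from a standard GAN discriminator loss flow through the entire tree, which is what lets the mixture be trained end to end like an ordinary generator.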