Person:
ALPAYDIN, Ahmet Ibrahim Ethem

First Name: Ahmet Ibrahim Ethem
Last Name: ALPAYDIN

Publication Search Results

Now showing 1 - 3 of 3
  • Article | Publication
    Dropout regularization in hierarchical mixture of experts
    (Elsevier, 2021-01-02) Alpaydın, Ahmet İbrahim Ethem; Computer Science; ALPAYDIN, Ahmet Ibrahim Ethem
    Dropout is a very effective method for preventing overfitting and has become the go-to regularizer for multi-layer neural networks in recent years. The hierarchical mixture of experts is a hierarchically gated model that defines a soft decision tree: leaves correspond to experts, and decision nodes correspond to gating models that softly choose among their children, so the model defines a soft hierarchical partitioning of the input space. In this work, we propose a variant of dropout for the hierarchical mixture of experts that is faithful to the tree hierarchy defined by the model, as opposed to the flat, unit-wise independent application of dropout used with multi-layer perceptrons. We show on synthetic regression data and on the MNIST, CIFAR-10, and SSTB datasets that our proposed dropout mechanism prevents overfitting in trees with many levels, improving generalization and providing smoother fits.
    (A toy sketch of this tree-structured dropout idea appears after this list.)
  • Conference Object | Publication
    Distributed decision trees
    (Springer, 2022) Irsoy, O.; Alpaydın, Ahmet İbrahim Ethem; Computer Science; ALPAYDIN, Ahmet Ibrahim Ethem
    In a budding tree, every node is part internal node and part leaf. This allows representing the tree in a continuous parameter space and training it with backpropagation, like a neural network. Unlike a traditional tree, whose construction comprises two distinct stages of growing and pruning, “bud” nodes grow into subtrees or are pruned back dynamically during learning. In this work, we extend the budding tree and propose the distributed tree, where the children use different and independent splits; hence, multiple paths in the tree can be traversed at the same time. In a traditional tree, the learned representations are local; that is, the activation makes a soft selection among all the root-to-leaf paths of the tree. The ability to combine multiple paths gives the distributed tree the power of a distributed representation, as in a traditional perceptron layer. Our experimental results show that distributed trees perform comparably to or better than budding and traditional hard trees.
    (A sketch of the independent, per-child gating appears after this list.)
  • Conference Object | Publication
    Hierarchical mixtures of generators for adversarial learning
    (IEEE, 2019) Ahmetoğlu, A.; Alpaydın, Ahmet İbrahim Ethem; Computer Science; ALPAYDIN, Ahmet Ibrahim Ethem
    Generative adversarial networks (GANs) are deep neural networks that allow us to sample from an arbitrary probability distribution without explicitly estimating the distribution. There is a generator that takes a latent vector as input and transforms it into a valid sample from the distribution. There is also a discriminator that is trained to discriminate such fake samples from true samples of the distribution; at the same time, the generator is trained to generate fakes that the discriminator cannot tell apart from the true samples. Instead of learning a global generator, a recent approach involves training multiple generators, each responsible for one part of the distribution. In this work, we review such approaches and propose the hierarchical mixture of generators, inspired by the hierarchical mixture of experts model, which learns a tree structure implementing a hierarchical clustering with soft splits in the decision nodes and local generators in the leaves. Since the generators are combined softly, the whole model is continuous and can be trained using gradient-based optimization, just like the original GAN model. Our experiments on five image datasets, namely MNIST, FashionMNIST, UTZap50K, Oxford Flowers, and CelebA, show that our proposed model generates samples of high quality and diversity in terms of popular GAN evaluation metrics. The learned hierarchical structure also enables knowledge extraction.
    (A simplified sketch of a gated mixture of generators appears after this list.)
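
To make the first entry ("Dropout regularization in hierarchical mixture of experts") concrete, here is a minimal PyTorch sketch of a soft decision tree in which dropout removes an entire subtree at a gating node, so the noise respects the tree hierarchy rather than dropping units independently. This is an illustrative sketch, not the paper's implementation: the class name, sigmoid gating, linear experts, and the drop probability p_drop are all assumptions.

```python
import torch
import torch.nn as nn

class SoftTreeNode(nn.Module):
    """One node of a soft decision tree / hierarchical mixture of experts.

    Internal nodes hold a sigmoid gate that softly mixes two subtrees;
    leaves hold a linear expert. The hierarchical dropout sketched here
    drops a whole subtree with probability p_drop instead of dropping
    individual units, so the perturbation follows the tree structure.
    (Illustrative sketch; not the authors' exact mechanism.)
    """

    def __init__(self, in_dim, out_dim, depth, p_drop=0.2):
        super().__init__()
        self.p_drop = p_drop
        if depth == 0:                              # leaf: a local expert
            self.expert = nn.Linear(in_dim, out_dim)
            self.subtrees = None
        else:                                       # internal: gate + children
            self.gate = nn.Linear(in_dim, 1)
            self.subtrees = nn.ModuleList([
                SoftTreeNode(in_dim, out_dim, depth - 1, p_drop),
                SoftTreeNode(in_dim, out_dim, depth - 1, p_drop),
            ])

    def forward(self, x):
        if self.subtrees is None:
            return self.expert(x)
        g = torch.sigmoid(self.gate(x))             # soft routing weight in (0, 1)
        if self.training and torch.rand(1).item() < self.p_drop:
            keep = torch.randint(2, (1,)).item()    # drop one subtree...
            return self.subtrees[keep](x)           # ...route all mass to the other
        return g * self.subtrees[0](x) + (1 - g) * self.subtrees[1](x)

tree = SoftTreeNode(in_dim=16, out_dim=1, depth=3)  # 8 leaf experts
y = tree(torch.randn(8, 16))                        # -> shape (8, 1)
```

Routing all the mass to the surviving child is one plausible choice; rescaling the survivor's contribution, as standard dropout does, would be another.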
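For "Distributed decision trees", the contrast with the soft tree above is that the two children get different and independent gates, so the gate values need not sum to one and several root-to-leaf paths can be active at once. Again a hedged sketch under assumed names and sizes, not the paper's code.

```python
import torch
import torch.nn as nn

class DistributedTreeNode(nn.Module):
    """Sketch of a distributed tree node.

    A soft tree gives its two children the complementary weights g and
    1 - g; here each child has its own independent sigmoid gate, so
    multiple paths can be traversed simultaneously, yielding a
    distributed rather than a local representation.
    """

    def __init__(self, in_dim, out_dim, depth):
        super().__init__()
        if depth == 0:
            self.leaf = nn.Linear(in_dim, out_dim)
            self.subtrees = None
        else:
            self.gates = nn.Linear(in_dim, 2)       # one gate per child
            self.subtrees = nn.ModuleList([
                DistributedTreeNode(in_dim, out_dim, depth - 1),
                DistributedTreeNode(in_dim, out_dim, depth - 1),
            ])

    def forward(self, x):
        if self.subtrees is None:
            return self.leaf(x)
        g = torch.sigmoid(self.gates(x))            # independent; no sum-to-one constraint
        return (g[:, :1] * self.subtrees[0](x)
                + g[:, 1:] * self.subtrees[1](x))

model = DistributedTreeNode(in_dim=16, out_dim=1, depth=3)
out = model(torch.randn(8, 16))                     # -> shape (8, 1)
```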
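For "Hierarchical mixtures of generators", the paper places local generators at the leaves of a tree with soft gates at the internal nodes. The flat, one-level gated mixture below is a deliberately simplified sketch (layer sizes and names are assumptions) of the key property: the output is a softly weighted, fully differentiable combination of local generators, so the whole model can be trained end-to-end with the usual GAN objective.

```python
import torch
import torch.nn as nn

class MixtureOfGenerators(nn.Module):
    """Flat gated mixture of K local generators (simplified sketch).

    The paper's model is hierarchical (soft splits at decision nodes,
    generators at the leaves); this one-level version keeps the core
    idea that samples are a soft, differentiable combination of the
    local generators' outputs.
    """

    def __init__(self, latent_dim, out_dim, n_generators=4):
        super().__init__()
        self.gate = nn.Linear(latent_dim, n_generators)
        self.generators = nn.ModuleList(
            nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                          nn.Linear(128, out_dim))
            for _ in range(n_generators)
        )

    def forward(self, z):
        w = torch.softmax(self.gate(z), dim=1)            # soft assignment of z
        outs = torch.stack([g(z) for g in self.generators], dim=1)
        return (w.unsqueeze(-1) * outs).sum(dim=1)        # soft combination

gen = MixtureOfGenerators(latent_dim=64, out_dim=784)     # e.g. flattened 28x28 images
fake = gen(torch.randn(16, 64))                           # -> shape (16, 784)
```

Because the combination weights are a softmax of the gate's output, gradients flow into both the gate and every generator, just as in the original GAN generator.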