Publication:
Distributed decision trees


Type

conferenceObject

Access

restrictedAccess

Publication Status

Published

Abstract

In a budding tree, every node is part internal node and part leaf. This allows representing the tree in a continuous parameter space and training it with backpropagation, like a neural network. Unlike a traditional tree, whose construction consists of two distinct stages of growing and pruning, a budding tree's “bud” nodes grow into subtrees or are pruned back dynamically during learning. In this work, we extend the budding tree and propose the distributed tree, in which the children use different and independent splits; hence, multiple paths in a tree can be traversed at the same time. In a traditional tree, the learned representations are local, that is, an input makes a soft selection among all the root-to-leaf paths in the tree, but the ability to combine multiple paths gives the distributed tree the power of a distributed representation, as in a traditional perceptron layer. Our experimental results show that distributed trees perform comparably to, or better than, budding and traditional hard trees.
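
The snippet below is a minimal sketch of the idea described in the abstract, assuming the usual soft-tree formulation of budding trees: each node mixes a learned leaf value with its children's responses through a "budding" parameter, and in the distributed variant the two children are gated by separate, independent sigmoid splits, so both subtrees can be active for the same input. The names (gamma, rho, w_left, w_right) and the initialization are illustrative assumptions, not the authors' implementation.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class DistributedNode:
    # Hypothetical node: part leaf (weight gamma) and part internal node (weight 1 - gamma).
    def __init__(self, dim, depth, max_depth, rng):
        self.gamma = 1.0 if depth == max_depth else 0.5   # degree of "leafness", learnable in [0, 1]
        self.rho = rng.standard_normal()                   # leaf response value
        # A budding tree uses one split w, routing left with g(x) and right with 1 - g(x).
        # A distributed tree gives each child its own independent split,
        # so both children can be (softly) selected for the same input.
        self.w_left = rng.standard_normal(dim) * 0.1
        self.w_right = rng.standard_normal(dim) * 0.1
        self.left = DistributedNode(dim, depth + 1, max_depth, rng) if depth < max_depth else None
        self.right = DistributedNode(dim, depth + 1, max_depth, rng) if depth < max_depth else None

    def response(self, x):
        if self.left is None:                              # pure leaf at the maximum depth
            return self.rho
        g_l = sigmoid(self.w_left @ x)                     # independent gate for the left child
        g_r = sigmoid(self.w_right @ x)                    # independent gate for the right child
        children = g_l * self.left.response(x) + g_r * self.right.response(x)
        return self.gamma * self.rho + (1.0 - self.gamma) * children

# Usage: evaluate a depth-3 tree on a 5-dimensional input.
rng = np.random.default_rng(0)
tree = DistributedNode(dim=5, depth=0, max_depth=3, rng=rng)
x = rng.standard_normal(5)
print(tree.response(x))

Sharing a single split (w_left = w_right) and setting g_r = 1 - g_l would recover the complementary gates of the budding tree. Training, which this sketch omits, would update gamma, rho, and the split weights jointly by backpropagation, as the abstract describes.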

Date

2022

Publisher

Springer
