Distributed decision trees
Type:
Conference paper
Publication Status:
Published
Access:
Restricted access
Abstract
In a budding tree, every node is part internal node and part leaf. This allows representing the tree in a continuous parameter space and training it with backpropagation, like a neural network. Unlike a traditional tree, whose construction is composed of two distinct stages of growing and pruning, "bud" nodes grow into subtrees or are pruned back dynamically during learning. In this work, we extend the budding tree and propose the distributed tree, where the children use different and independent splits; hence, multiple paths in a tree can be traversed at the same time. In a traditional tree, the learned representations are local, that is, activation makes a soft selection among all the root-to-leaf paths in a tree, but the ability to combine multiple paths of the distributed tree gives it the power of a distributed representation, as in a traditional perceptron layer. Our experimental results show that distributed trees perform comparably to or better than budding and traditional hard trees.
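The recursive response described in the abstract can be sketched as follows. This is a minimal illustrative implementation, not the paper's code: a "leafness" parameter (here called `gamma`) interpolates each bud node between a leaf and an internal node, and, in the distributed variant, each child gets its own independent sigmoid gate so that several root-to-leaf paths can be softly active at once. All names (`gamma`, `rho`, `w_left`, `w_right`) and the scalar input/output are simplifying assumptions.

```python
import math


def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))


class BudNode:
    """One node of a distributed soft tree (scalar input/output for brevity).

    gamma in [0, 1] interpolates between acting as a pure leaf (gamma = 1)
    and a pure internal node (gamma = 0). Parameter names are illustrative,
    not the paper's notation.
    """

    def __init__(self, gamma, rho, w_left=None, w_right=None,
                 left=None, right=None):
        self.gamma = gamma      # "leafness" of the bud node
        self.rho = rho          # this node's leaf response value
        self.w_left = w_left    # gating weight for the left child
        self.w_right = w_right  # independent gating weight for the right child
        self.left = left
        self.right = right

    def response(self, x):
        if self.left is None:   # pure leaf: just return its value
            return self.rho
        # Distributed split: each child has its own independent sigmoid
        # gate, so both subtrees can contribute at the same time (unlike
        # a traditional tree, where the two gates would sum to one).
        g_l = sigmoid(self.w_left * x)
        g_r = sigmoid(self.w_right * x)
        internal = (g_l * self.left.response(x)
                    + g_r * self.right.response(x))
        # Blend leaf and internal behaviour by the node's leafness.
        return self.gamma * self.rho + (1.0 - self.gamma) * internal
```

Because every operation is differentiable in `gamma` and the gating weights, the whole tree can be trained end-to-end with backpropagation, as the abstract notes; a gradient step can then push `gamma` toward 0 (grow a subtree) or toward 1 (prune it back).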
Source:
Structural, Syntactic, and Statistical Pattern Recognition, part of the Lecture Notes in Computer Science book series (LNCS, volume 13813)
Date:
2022
Volume:
13813
Publisher:
Springer