Forward Thinking: Building and Training Neural Networks One Layer at a Time
Abstract
We present a general framework for training deep neural networks without backpropagation. This substantially decreases training time and also allows for construction of deep networks with many sorts of learners, including networks whose layers are defined by functions that are not easily differentiated, like decision trees. The main idea is that layers can be trained one at a time, and once they are trained, the input data are mapped forward through the layer to create a new learning problem. The process is repeated, transforming the data through multiple layers, one at a time, rendering a new data set, which is expected to be better behaved, and on which a final output layer can achieve good performance. We call this forward thinking and demonstrate a proof of concept by achieving state-of-the-art accuracy on the MNIST dataset for convolutional neural networks. We also provide a general mathematical formulation of forward thinking that allows for other types of deep learning problems to be considered.
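The layer-wise procedure described above can be sketched in a few lines of code. The snippet below is a minimal illustration, not the paper's MNIST implementation: each hidden layer is fit with a temporary output head, the head is discarded, the trained layer is frozen, and the data are mapped forward through it to define the next learning problem. The helper names (`train_forward_thinking`, `predict_forward_thinking`), the hidden widths, and the use of scikit-learn's `MLPClassifier` and `LogisticRegression` are illustrative assumptions, not part of the paper.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression


def train_forward_thinking(X, y, hidden_widths=(128, 64), seed=0):
    """Greedy layer-wise training: each stage trains one hidden layer with a
    temporary softmax head, freezes it, and pushes the data forward."""
    layers = []
    X_current = X
    for width in hidden_widths:
        # Train a one-hidden-layer network on the current representation.
        stage = MLPClassifier(hidden_layer_sizes=(width,), activation="relu",
                              max_iter=200, random_state=seed)
        stage.fit(X_current, y)
        # Keep only the trained hidden layer; the temporary head is discarded.
        W, b = stage.coefs_[0], stage.intercepts_[0]
        layers.append((W, b))
        # Map the data forward through the frozen layer; the transformed data
        # become the (hopefully better-behaved) problem for the next stage.
        X_current = np.maximum(X_current @ W + b, 0.0)
    # Train the final output layer on the fully transformed data set.
    head = LogisticRegression(max_iter=1000).fit(X_current, y)
    return layers, head


def predict_forward_thinking(layers, head, X):
    """Apply the frozen layers in order, then the final output layer."""
    for W, b in layers:
        X = np.maximum(X @ W + b, 0.0)
    return head.predict(X)
```

Because no gradients flow between stages, each stage could in principle be replaced by a non-differentiable learner (e.g., decision trees), which is the flexibility the abstract points to; the paper's proof of concept instead uses convolutional layers on MNIST.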
- Publication: arXiv e-prints
- Pub Date: June 2017
- DOI: 10.48550/arXiv.1706.02480
- arXiv: arXiv:1706.02480
- Bibcode: 2017arXiv170602480H
- Keywords: Statistics - Machine Learning; Computer Science - Machine Learning