Abstract: Recurrent neural networks (RNNs) are widely used to model sequential data,
but their non-linear dependencies between sequence elements prevent parallelizing
training over sequence length. We show that the training of RNNs with only linear
sequential dependencies can be parallelized over the sequence length using the
parallel scan algorithm, leading to rapid training on long sequences even with
a small minibatch size. We develop a parallel linear recurrence CUDA kernel and
show that it can be applied to immediately speed up training and inference of
several state-of-the-art RNN architectures by up to 9x. We abstract recent work
on linear RNNs into a new framework of linear surrogate RNNs and develop a
linear surrogate model for the long short-term memory unit, the GILR-LSTM, which
utilizes parallel linear recurrence. We extend sequence learning to new
extremely long sequence regimes that were previously out of reach by
successfully training a GILR-LSTM on a synthetic sequence classification task
with a one-million-timestep dependency.
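As a rough illustration of the core idea (not the authors' CUDA kernel), the sketch below evaluates a linear recurrence h_t = λ_t · h_{t-1} + x_t in parallel over the sequence length with an associative scan. It assumes JAX's `jax.lax.associative_scan` as a stand-in for a custom parallel-scan kernel; the function name `parallel_linear_recurrence` is hypothetical.

```python
# Minimal sketch: parallelizing a linear recurrence h_t = lam_t * h_{t-1} + x_t
# over sequence length with an associative scan. Uses jax.lax.associative_scan
# as an assumed stand-in for the paper's custom CUDA kernel.
import jax
import jax.numpy as jnp


def parallel_linear_recurrence(lam, x, h0=0.0):
    """Compute h_t = lam_t * h_{t-1} + x_t for t = 1..T with a parallel scan.

    lam, x: arrays of shape (T, ...) whose entries must not depend on h_{t-1}
    (the "linear sequential dependency" condition from the abstract).
    """
    # Fold the initial state into the first step: h_1 = lam_1 * h0 + x_1.
    x = x.at[0].add(lam[0] * h0)

    # Each step is an affine map h -> a * h + b. Composing two such maps
    # (first left, then right) gives another affine map, and composition is
    # associative, so a parallel scan applies:
    #   (a1, b1) then (a2, b2)  ==  (a2 * a1, a2 * b1 + b2)
    def compose(left, right):
        a1, b1 = left
        a2, b2 = right
        return a2 * a1, a2 * b1 + b2

    _, h = jax.lax.associative_scan(compose, (lam, x))
    return h


# Usage: compare against a sequential reference on a random sequence.
T, d = 16, 4
lam = jax.random.uniform(jax.random.PRNGKey(0), (T, d))
x = jax.random.normal(jax.random.PRNGKey(1), (T, d))

h_parallel = parallel_linear_recurrence(lam, x)

h, h_seq = jnp.zeros(d), []
for t in range(T):
    h = lam[t] * h + x[t]
    h_seq.append(h)
print(jnp.allclose(h_parallel, jnp.stack(h_seq), atol=1e-5))  # True
```

The only requirement is the one stated in the abstract: the recurrence must be linear in the hidden state, with coefficients that depend only on the inputs, so each step is an affine map and the whole sequence can be combined with an associative operator in parallel.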
TL;DR: use parallel scan to parallelize linear recurrent neural nets; train a model on a dependency of length one million.
Keywords: rnn, sequence, parallel, qrnn, sru, gilr, gilr-lstm
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/parallelizing-linear-recurrent-neural-nets/code)
Code: [eamartin/parallelizing_linear_rnns](https://github.com/eamartin/parallelizing_linear_rnns)