Shizhen Xu, Carnegie Mellon University, Tsinghua University; Hao Zhang, Graham Neubig, and Wei Dai, Carnegie Mellon University, Petuum Inc.; Jin Kyu Kim, Carnegie Mellon University; Zhijie Deng, Tsinghua University; Qirong Ho, Petuum Inc.; Guangwen Yang, Tsinghua University; Eric P. Xing, Petuum Inc.
Recent deep learning (DL) models are increasingly adopting dynamic neural network (NN) architectures, where the NN structure changes for every data sample. However, existing DL programming models handle dynamic architectures inefficiently because of: (1) the substantial overhead of repeating dataflow graph construction and processing for every sample; (2) the difficulty of batching execution across multiple samples; (3) the inability to apply graph optimization techniques of the kind used for static graphs. In this paper, we present ``Cavs'', a runtime system that overcomes these bottlenecks and achieves efficient training and inference of dynamic NNs. Cavs represents a dynamic NN as a static vertex function $\mathcal{F}$ and a dynamic, instance-specific graph $\mathcal{G}$. It avoids the overhead of repeated graph construction by declaring and constructing $\mathcal{F}$ only once, and it allows static graph optimization techniques to be applied to the pre-defined operations in $\mathcal{F}$. Cavs performs training and inference by scheduling the execution of $\mathcal{F}$ according to the dependencies in $\mathcal{G}$, naturally exposing opportunities for batched execution across different samples. Experiments comparing Cavs to state-of-the-art frameworks for dynamic NNs (TensorFlow Fold, PyTorch, and DyNet) demonstrate the efficacy of our approach: Cavs achieves nearly an order-of-magnitude speedup when training dynamic NN architectures, and ablation studies verify the effectiveness of our proposed design and optimizations.
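To make the abstract's two-part representation concrete, the following is a minimal, hypothetical sketch (the names `vertex_fn` and `schedule` are illustrative, not the paper's actual API): a vertex function $\mathcal{F}$ is declared once, each sample contributes its own dependency graph $\mathcal{G}$, and the scheduler evaluates $\mathcal{F}$ bottom-up, grouping all ready vertices across samples into one batch per step.

```python
def vertex_fn(child_states, leaf_value):
    # Toy stand-in for the vertex function F: combine child states
    # with the vertex's own input. A real system would execute a
    # declared dataflow subgraph (e.g., a Tree-LSTM cell) here.
    return leaf_value + sum(child_states)

def schedule(samples):
    """Evaluate F over each sample's graph G, batching by readiness.

    Each sample is (children, leaf_values): children[v] lists the
    vertices that vertex v depends on; leaf_values[v] is v's input.
    Returns the root (last vertex) state of every sample.
    """
    states = [dict() for _ in samples]                 # per-sample vertex states
    pending = [set(range(len(s[0]))) for s in samples]
    while any(pending):
        # Collect every vertex, from every sample, whose dependencies
        # are already computed -- these form one batched step.
        batch = []
        for i, (children, _) in enumerate(samples):
            for v in pending[i]:
                if all(c in states[i] for c in children[v]):
                    batch.append((i, v))
        # One (conceptually batched) invocation of F covers all ready
        # vertices, regardless of which sample each came from.
        for i, v in batch:
            children, leaves = samples[i]
            states[i][v] = vertex_fn([states[i][c] for c in children[v]],
                                     leaves[v])
            pending[i].remove(v)
    return [states[i][len(s[0]) - 1] for i, s in enumerate(samples)]
```

Because the scheduler batches by dependency readiness rather than by graph shape, two samples with entirely different tree structures, e.g. `schedule([([[], [], [0, 1]], [1, 2, 0]), ([[], [0]], [5, 1])])`, still share batched steps.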
author = {Shizhen Xu and Hao Zhang and Graham Neubig and Wei Dai and Jin Kyu Kim and Zhijie Deng and Qirong Ho and Guangwen Yang and Eric P. Xing},
title = {Cavs: An Efficient Runtime System for Dynamic Neural Networks},
booktitle = {2018 USENIX Annual Technical Conference (USENIX ATC 18)},
year = {2018},
isbn = {978-1-939133-01-4},
address = {Boston, MA},
pages = {937--950},
url = {https://www.usenix.org/conference/atc18/presentation/xu-shizen},
publisher = {USENIX Association},
month = jul
}