Yuekai Zhao
2021
Multi-split Reversible Transformers Can Enhance Neural Machine Translation
Yuekai Zhao | Shuchang Zhou | Zhihua Zhang
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume
Large-scale transformers have achieved state-of-the-art results on neural machine translation. However, training these increasingly wide and deep models can be tremendously memory intensive. We reduce the memory burden by employing the idea of reversible networks, in which a layer’s input can be reconstructed from its output. We design three types of multi-split based reversible transformers. We also devise a corresponding backpropagation algorithm that does not need to store activations for most layers. Furthermore, we present two fine-tuning techniques, splits shuffle and self ensemble, to boost translation accuracy. Specifically, our best models surpass the vanilla transformer by at least 1.4 BLEU points on three datasets. Our large-scale reversible models achieve 30.0 BLEU on WMT’14 En-De and 43.5 BLEU on WMT’14 En-Fr, beating several very strong baselines with less than half of the training memory.
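The abstract does not spell out the paper's multi-split constructions; the snippet below is only a minimal sketch of the general two-split reversible-residual idea (RevNet-style coupling) that the abstract builds on, showing how a block's input can be recomputed from its output so activations need not be stored. The sublayers `f` and `g` are hypothetical placeholders (e.g., attention or feed-forward modules).

```python
import torch

# Minimal two-split reversible coupling (RevNet-style), illustrating the general
# idea the abstract refers to; NOT the paper's specific multi-split transformer.
# f and g stand in for arbitrary sublayers (e.g., attention / feed-forward).

def forward_block(x1, x2, f, g):
    # y1 = x1 + f(x2); y2 = x2 + g(y1)
    y1 = x1 + f(x2)
    y2 = x2 + g(y1)
    return y1, y2

def inverse_block(y1, y2, f, g):
    # Reconstruct the inputs from the outputs, so x1 and x2 need not be stored
    # during the forward pass.
    x2 = y2 - g(y1)
    x1 = y1 - f(x2)
    return x1, x2

if __name__ == "__main__":
    torch.manual_seed(0)
    f = torch.nn.Linear(8, 8)
    g = torch.nn.Linear(8, 8)
    x1, x2 = torch.randn(2, 8), torch.randn(2, 8)
    y1, y2 = forward_block(x1, x2, f, g)
    r1, r2 = inverse_block(y1, y2, f, g)
    print(torch.allclose(x1, r1, atol=1e-6), torch.allclose(x2, r2, atol=1e-6))
```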
Memory-Efficient Differentiable Transformer Architecture Search
Yuekai Zhao | Li Dong | Yelong Shen | Zhihua Zhang | Furu Wei | Weizhu Chen
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021
2020
Active Learning Approaches to Enhancing Neural Machine Translation
Yuekai Zhao | Haoran Zhang | Shuchang Zhou | Zhihua Zhang
Findings of the Association for Computational Linguistics: EMNLP 2020
Active learning is an efficient approach for mitigating data dependency when training neural machine translation (NMT) models. In this paper, we explore new training frameworks by incorporating active learning into techniques such as transfer learning and iterative back-translation (IBT) under a limited human translation budget. We design a word-frequency-based acquisition function and combine it with a strong uncertainty-based method. The combined method steadily outperforms all other acquisition functions in various scenarios. To the best of our knowledge, we are the first to conduct a large-scale study on actively training Transformers for NMT. Specifically, with a human translation budget of only 20% of the original parallel corpus, we manage to surpass a Transformer trained on the entire parallel corpus on three language pairs.
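The abstract does not give the exact acquisition functions; the snippet below is a generic illustration of the recipe it names, assuming a per-sentence score that mixes a word-frequency rarity term with a model-uncertainty term (e.g., mean token entropy). The mixing weight `alpha` and the helper names are hypothetical, not taken from the paper.

```python
import math
from collections import Counter

# Generic active-learning acquisition sketch: rank unlabeled source sentences by a
# mix of word-frequency rarity and model uncertainty. This illustrates the general
# recipe named in the abstract, not the paper's exact formulation.

def rarity_score(sentence, freq, total):
    # Average negative log relative frequency of the sentence's words (rarer -> higher).
    words = sentence.split()
    return sum(-math.log((freq.get(w, 0) + 1) / (total + 1)) for w in words) / max(len(words), 1)

def combined_score(sentence, freq, total, uncertainty, alpha=0.5):
    # uncertainty: a model-provided score, e.g. mean token entropy from the NMT decoder.
    # alpha is a hypothetical mixing weight between the two criteria.
    return alpha * rarity_score(sentence, freq, total) + (1 - alpha) * uncertainty

def select_batch(pool, uncertainties, budget, alpha=0.5):
    # pool: list of source sentences; uncertainties: parallel list of model scores.
    freq = Counter(w for s in pool for w in s.split())
    total = sum(freq.values())
    scored = [(combined_score(s, freq, total, u, alpha), s) for s, u in zip(pool, uncertainties)]
    return [s for _, s in sorted(scored, reverse=True)[:budget]]
```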
Co-authors
- Zhihua Zhang 3
- Shuchang Zhou 2
- Li Dong 1
- Yelong Shen 1
- Furu Wei 1