Multi-split Reversible Transformers Can Enhance Neural Machine Translation

Yuekai Zhao, Shuchang Zhou, Zhihua Zhang

Machine Translation · Long paper

Zoom-3D: Apr 22 (07:00-08:00 UTC)
Gather-3E: Apr 23 (13:00-15:00 UTC)


Abstract: Large-scale transformers have been shown to achieve state-of-the-art results on neural machine translation. However, training these increasingly wide and deep models can be tremendously memory-intensive. We reduce the memory burden by employing the idea of reversible networks, in which a layer's input can be reconstructed from its output. We design three types of multi-split based reversible transformers. We also devise a corresponding backpropagation algorithm, which does not need to store activations for most layers. Furthermore, we present two fine-tuning techniques, splits shuffle and self ensemble, to boost translation accuracy. Specifically, our best models surpass the vanilla transformer by at least 1.4 BLEU points on three datasets. Our large-scale reversible models achieve 30.0 BLEU on WMT'14 En-De and 43.5 BLEU on WMT'14 En-Fr, beating several very strong baselines with less than half of the training memory.
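As a rough illustration of the reversibility idea the abstract refers to, the sketch below shows a standard two-split RevNet-style coupling layer in PyTorch: the inputs can be exactly reconstructed from the outputs, so activations need not be stored for the backward pass. This is an assumed minimal example, not the paper's multi-split variants or its actual backpropagation algorithm; the sub-functions F and G are placeholder MLPs standing in for transformer sub-layers.

```python
# Minimal sketch of a two-split reversible coupling layer (RevNet-style).
# Illustrative only; the paper's multi-split designs are not reproduced here.
import torch
import torch.nn as nn


class ReversibleCoupling(nn.Module):
    """Forward: y1 = x1 + F(x2); y2 = x2 + G(y1). Inputs are recoverable from outputs."""

    def __init__(self, dim: int):
        super().__init__()
        # F and G stand in for transformer sub-layers (e.g. attention / feed-forward);
        # simple MLPs are used purely for illustration.
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.g = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x1: torch.Tensor, x2: torch.Tensor):
        y1 = x1 + self.f(x2)
        y2 = x2 + self.g(y1)
        return y1, y2

    @torch.no_grad()
    def inverse(self, y1: torch.Tensor, y2: torch.Tensor):
        # Reconstruct the layer inputs from its outputs; a memory-efficient
        # backward pass can recompute activations this way instead of caching them.
        x2 = y2 - self.g(y1)
        x1 = y1 - self.f(x2)
        return x1, x2


if __name__ == "__main__":
    layer = ReversibleCoupling(dim=8)
    x1, x2 = torch.randn(2, 8), torch.randn(2, 8)
    y1, y2 = layer(x1, x2)
    r1, r2 = layer.inverse(y1, y2)
    print(torch.allclose(x1, r1, atol=1e-5), torch.allclose(x2, r2, atol=1e-5))
```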
