%0 Conference Proceedings
%T Improving Sequence-to-Sequence Pre-training via Sequence Span Rewriting
%A Zhou, Wangchunshu
%A Ge, Tao
%A Xu, Canwen
%A Xu, Ke
%A Wei, Furu
%Y Moens, Marie-Francine
%Y Huang, Xuanjing
%Y Specia, Lucia
%Y Yih, Scott Wen-tau
%S Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
%D 2021
%8 November
%I Association for Computational Linguistics
%C Online and Punta Cana, Dominican Republic
%F zhou-etal-2021-improving-sequence
%X In this paper, we propose Sequence Span Rewriting (SSR), a self-supervised task for sequence-to-sequence (Seq2Seq) pre-training. SSR learns to refine the machine-generated imperfect text spans into ground truth text. SSR provides more fine-grained and informative supervision in addition to the original text-infilling objective. Compared to the prevalent text infilling objectives for Seq2Seq pre-training, SSR is naturally more consistent with many downstream generation tasks that require sentence rewriting (e.g., text summarization, question generation, grammatical error correction, and paraphrase generation). We conduct extensive experiments by using SSR to improve the typical Seq2Seq pre-trained model T5 in a continual pre-training setting and show substantial improvements over T5 on various natural language generation tasks.
%R 10.18653/v1/2021.emnlp-main.45
%U https://aclanthology.org/2021.emnlp-main.45
%U https://doi.org/10.18653/v1/2021.emnlp-main.45
%P 571-582