2019
Unsupervised Neural Text Simplification
Sai Surya | Abhijit Mishra | Anirban Laha | Parag Jain | Karthik Sankaranarayanan
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
The paper presents a first attempt at unsupervised neural text simplification that relies only on unlabeled text corpora. The core framework consists of a shared encoder and a pair of attentional decoders, crucially assisted by discrimination-based losses and denoising. The framework is trained on unlabeled text collected from an English Wikipedia dump. Our analysis (both quantitative and qualitative, involving human evaluators) on public test data shows that the proposed model performs text simplification at both the lexical and syntactic levels, competitively with existing supervised methods. It also outperforms viable unsupervised baselines. Adding a small number of labeled pairs improves performance further.
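As a rough illustration of the shared-encoder, dual-decoder arrangement the abstract describes, the sketch below wires a single encoder to two style-specific decoders and applies a denoising corruption to the input. This is a minimal toy, not the paper's actual architecture: the dimensions, the mean-pooled bag-of-embeddings encoder, the single-step decoders, and all function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper).
VOCAB, EMB, HID = 50, 8, 16

emb = rng.normal(scale=0.1, size=(VOCAB, EMB))

# One shared encoder weight matrix used for both decoding directions.
W_enc = rng.normal(scale=0.1, size=(EMB, HID))

# A pair of decoders: one per style (simple / complex), each with its
# own output projection, both reading the same shared encoding.
W_dec_simple = rng.normal(scale=0.1, size=(HID, VOCAB))
W_dec_complex = rng.normal(scale=0.1, size=(HID, VOCAB))

def add_noise(tokens, drop_prob=0.1):
    """Denoising corruption: randomly drop tokens so a decoder must
    reconstruct the clean sequence from a corrupted input."""
    keep = rng.random(len(tokens)) > drop_prob
    noisy = [t for t, k in zip(tokens, keep) if k]
    return noisy if noisy else tokens[:1]

def encode(tokens):
    # Mean-pooled embeddings standing in for the shared encoder.
    h = emb[tokens] @ W_enc
    return np.tanh(h.mean(axis=0))

def decode(h, W_dec):
    # One softmax step over the vocabulary for the chosen style.
    logits = h @ W_dec
    e = np.exp(logits - logits.max())
    return e / e.sum()

sentence = [3, 14, 15, 9, 2]          # toy token ids
h = encode(add_noise(sentence))        # shared representation
p_simple = decode(h, W_dec_simple)     # simple-style next-token dist.
p_complex = decode(h, W_dec_complex)   # complex-style next-token dist.
```

In the paper's full setup, discrimination-based losses would additionally push the two decoders' outputs toward their respective styles; here the sketch only shows the weight-sharing and denoising structure.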