
Concept Equalization to Guide Correct Training of Neural Machine Translation

Kangil Kim, Jong-Hun Shin, Seung-Hoon Na, SangKeun Jung


Abstract
Neural machine translation decoders are usually conditional language models that sequentially generate the words of a target sentence. This approach is limited in finding the best word composition and requires the help of explicit methods such as beam search. To help NMT models learn correct compositional mechanisms, we propose concept equalization, which directly maps distributed representations of source and target sentences onto each other. In an English-to-French translation experiment, concept equalization significantly improved translation quality, by 3.00 BLEU points over a state-of-the-art NMT model.
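As a rough illustration of the idea, the sketch below implements one plausible reading of concept equalization: an auxiliary training loss that pulls a mapped, pooled source-sentence representation toward the pooled target-sentence representation. The class name, mean pooling, linear mapping, and MSE loss are illustrative assumptions, not the paper's actual formulation.

# Hypothetical sketch of concept equalization as an auxiliary loss,
# assuming it aligns pooled source and target sentence representations
# through a learned linear mapping. Names and loss form are assumptions.
import torch
import torch.nn as nn

class ConceptEqualizationLoss(nn.Module):
    def __init__(self, src_dim: int, tgt_dim: int):
        super().__init__()
        # Learned mapping from the source representation space
        # into the target representation space (an assumption).
        self.map = nn.Linear(src_dim, tgt_dim)

    def forward(self, src_states: torch.Tensor, tgt_states: torch.Tensor) -> torch.Tensor:
        # src_states: (batch, src_len, src_dim) encoder hidden states
        # tgt_states: (batch, tgt_len, tgt_dim) decoder hidden states
        src_repr = src_states.mean(dim=1)  # mean-pool over source tokens
        tgt_repr = tgt_states.mean(dim=1)  # mean-pool over target tokens
        # Penalize the distance between the mapped source "concept"
        # vector and the target "concept" vector.
        return nn.functional.mse_loss(self.map(src_repr), tgt_repr)

# Usage (hypothetical): add the equalization term to the usual
# cross-entropy translation loss with a small weight, e.g.
#   eq_loss = ConceptEqualizationLoss(512, 512)
#   loss = ce_loss + 0.1 * eq_loss(encoder_states, decoder_states)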
Anthology ID:
I17-2051
Volume:
Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers)
Month:
November
Year:
2017
Address:
Taipei, Taiwan
Editors:
Greg Kondrak, Taro Watanabe
Venue:
IJCNLP
Publisher:
Asian Federation of Natural Language Processing
Pages:
302–307
URL:
https://aclanthology.org/I17-2051
Cite (ACL):
Kangil Kim, Jong-Hun Shin, Seung-Hoon Na, and SangKeun Jung. 2017. Concept Equalization to Guide Correct Training of Neural Machine Translation. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 302–307, Taipei, Taiwan. Asian Federation of Natural Language Processing.
Cite (Informal):
Concept Equalization to Guide Correct Training of Neural Machine Translation (Kim et al., IJCNLP 2017)
PDF:
https://aclanthology.org/I17-2051.pdf