
Rationale-Enhanced Language Models are Better Continual Relation Learners

Weimin Xiong, Yifan Song, Peiyi Wang, Sujian Li


Abstract
Continual relation extraction (CRE) aims to solve the problem of catastrophic forgetting when learning a sequence of newly emerging relations. Recent CRE studies have found that catastrophic forgetting arises from the model's lack of robustness against future analogous relations. To address this issue, we introduce rationales, i.e., explanations of relation classification results generated by large language models (LLMs), into the CRE task. Specifically, we design a multi-task rationale tuning strategy to help the model learn current relations robustly. We also conduct contrastive rationale replay to further distinguish analogous relations. Experimental results on two standard benchmarks demonstrate that our method outperforms state-of-the-art CRE models.
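To make the multi-task idea concrete, the snippet below is a minimal sketch of jointly optimizing a relation-classification loss and a rationale-prediction loss on the same encoded input. It is not the authors' implementation: the toy encoder, the two heads, the single-token rationale target, and all sizes are hypothetical stand-ins chosen only to keep the example self-contained and runnable.

import torch
import torch.nn as nn

vocab_size, num_relations, hidden = 100, 4, 32  # illustrative sizes only

class RationaleTunedModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Embedding(vocab_size, hidden)        # toy stand-in for a pretrained encoder
        self.cls_head = nn.Linear(hidden, num_relations)       # task 1: relation classification
        self.gen_head = nn.Linear(hidden, vocab_size)          # task 2: (here, one-token) rationale prediction

    def forward(self, input_ids):
        h = self.encoder(input_ids).mean(dim=1)                # mean-pooled sentence representation
        return self.cls_head(h), self.gen_head(h)

model = RationaleTunedModel()
ce = nn.CrossEntropyLoss()

# Hypothetical mini-batch: token ids, gold relation labels, and a
# target token drawn from an LLM-generated rationale.
input_ids = torch.randint(0, vocab_size, (8, 16))
relation_labels = torch.randint(0, num_relations, (8,))
rationale_targets = torch.randint(0, vocab_size, (8,))

cls_logits, gen_logits = model(input_ids)
# Multi-task objective: classification loss plus rationale loss, summed with equal weight here.
loss = ce(cls_logits, relation_labels) + ce(gen_logits, rationale_targets)
loss.backward()

In the paper's setting the rationale loss would be a full sequence-generation loss over LLM-produced explanations, and contrastive rationale replay additionally revisits stored examples of analogous relations; the equal loss weighting above is an assumption of this sketch.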
Anthology ID:
2023.emnlp-main.958
Volume:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
15489–15497
URL:
https://aclanthology.org/2023.emnlp-main.958
DOI:
10.18653/v1/2023.emnlp-main.958
Cite (ACL):
Weimin Xiong, Yifan Song, Peiyi Wang, and Sujian Li. 2023. Rationale-Enhanced Language Models are Better Continual Relation Learners. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 15489–15497, Singapore. Association for Computational Linguistics.
Cite (Informal):
Rationale-Enhanced Language Models are Better Continual Relation Learners (Xiong et al., EMNLP 2023)
PDF:
https://aclanthology.org/2023.emnlp-main.958.pdf
Video:
https://aclanthology.org/2023.emnlp-main.958.mp4