
Learning Prototype Representations Across Few-Shot Tasks for Event Detection

Viet Lai, Franck Dernoncourt, Thien Huu Nguyen


Abstract
We address the sampling bias and outlier issues in few-shot learning for event detection, a subtask of information extraction. We propose to model the relations between training tasks in episodic few-shot learning by introducing cross-task prototypes. We further propose to enforce prediction consistency among classifiers across tasks to make the model more robust to outliers. Our extensive experiments show consistent improvements on three few-shot learning datasets. The findings suggest that our model is more robust when labeled data for novel event types is limited. The source code is available at http://github.com/laiviet/fsl-proact.
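
The two ideas named in the abstract can be illustrated with a small PyTorch sketch: class prototypes are averaged support embeddings (the standard prototypical-network construction), and a consistency term compares the predictions that two tasks' prototype classifiers make on the same queries. This is an assumption-laden illustration rather than the authors' method; the exact formulation is in the paper and the linked repository. The function names, the symmetric-KL consistency term, and the random embeddings (standing in for an encoder's outputs) are hypothetical choices made only for this sketch.

import torch
import torch.nn.functional as F

def prototypes(support_emb, support_labels, n_classes):
    # Average the support embeddings of each class into one prototype
    # (the standard prototypical-network construction).
    return torch.stack([support_emb[support_labels == c].mean(dim=0)
                        for c in range(n_classes)])

def proto_logits(query_emb, protos):
    # Negative squared Euclidean distance to each prototype serves as the logits.
    return -torch.cdist(query_emb, protos) ** 2

def consistency_loss(query_emb, protos_a, protos_b):
    # Symmetric KL divergence between the predictions of two tasks' prototype
    # classifiers on the same queries -- one illustrative way to encourage
    # cross-task prediction consistency.
    log_p = F.log_softmax(proto_logits(query_emb, protos_a), dim=-1)
    log_q = F.log_softmax(proto_logits(query_emb, protos_b), dim=-1)
    return 0.5 * (F.kl_div(log_p, log_q, reduction="batchmean", log_target=True)
                  + F.kl_div(log_q, log_p, reduction="batchmean", log_target=True))

# Toy episode: two tasks over the same 5 event types, 3-shot support, 4 queries.
dim, n_classes = 16, 5
sup_a, lab_a = torch.randn(15, dim), torch.arange(n_classes).repeat(3)
sup_b, lab_b = torch.randn(15, dim), torch.arange(n_classes).repeat(3)
queries = torch.randn(4, dim)
loss = consistency_loss(queries,
                        prototypes(sup_a, lab_a, n_classes),
                        prototypes(sup_b, lab_b, n_classes))
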
Anthology ID:
2021.emnlp-main.427
Volume:
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2021
Address:
Online and Punta Cana, Dominican Republic
Editors:
Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
5270–5277
URL:
https://aclanthology.org/2021.emnlp-main.427
DOI:
10.18653/v1/2021.emnlp-main.427
Cite (ACL):
Viet Lai, Franck Dernoncourt, and Thien Huu Nguyen. 2021. Learning Prototype Representations Across Few-Shot Tasks for Event Detection. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5270–5277, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Learning Prototype Representations Across Few-Shot Tasks for Event Detection (Lai et al., EMNLP 2021)
PDF:
https://aclanthology.org/2021.emnlp-main.427.pdf
Video:
https://aclanthology.org/2021.emnlp-main.427.mp4
Code:
laiviet/fsl-proact