
Correcting Language Model Bias for Text Classification in True Zero-Shot Learning

Feng Zhao, Wan Xianlin, Cheng Yan, Chu Kiong Loo


Abstract
Combining pre-trained language models (PLMs) and manual templates is a common practice for text classification in zero-shot scenarios. However, the performance of this approach is highly volatile, ranging from random guessing to near state-of-the-art results, depending on the quality of the manual templates. In this paper, we show that this instability stems from the fact that language models are biased toward predicting certain label words, and that manual templates can influence this tendency. To address this, we develop a novel pipeline for annotating and filtering a small number of examples from unlabeled data. Moreover, we propose a new method for measuring model bias on label words, which uses unlabeled examples as a validation set when tuning language models. Our approach does not require any pre-labeled examples. Experimental results on six text classification tasks demonstrate that the proposed approach significantly outperforms standard prompt learning in zero-shot settings, achieving absolute improvements of up to 19.7% (13.8% on average). More surprisingly, on IMDB and SST-2, our approach even exceeds all few-shot baselines.
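For intuition, the sketch below illustrates the general idea the abstract describes: estimate how strongly a masked PLM favors each label word by averaging its label-word probabilities over unlabeled examples, then divide out that prior before classifying. This is a minimal sketch only; the template ("It was [MASK]."), the label words, and the mean-probability calibration are illustrative assumptions, not the paper's exact procedure.

```python
# Minimal sketch of label-word bias correction for zero-shot prompt
# classification. Template, label words, and mean-probability
# calibration are illustrative assumptions, not the authors' method.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("roberta-large")
model = AutoModelForMaskedLM.from_pretrained("roberta-large")
model.eval()

# Verbalizer: one single-token label word per class (an assumption;
# multi-token label words would need extra handling).
label_words = {"positive": " great", "negative": " terrible"}
label_ids = [
    tokenizer(word, add_special_tokens=False)["input_ids"][0]
    for word in label_words.values()
]

def label_word_probs(text: str) -> torch.Tensor:
    """Probability the PLM assigns to each label word at the mask slot."""
    prompt = f"{text} It was {tokenizer.mask_token}."
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0]
    probs = torch.softmax(logits[0, mask_pos.item()], dim=-1)
    return probs[label_ids]

# Estimate the model's prior bias toward each label word by averaging
# its probabilities over unlabeled examples (no gold labels needed).
unlabeled = ["The plot dragged on forever.", "A stunning debut film."]
bias = torch.stack([label_word_probs(t) for t in unlabeled]).mean(dim=0)

def classify(text: str) -> str:
    # Divide out the estimated bias before the argmax, so a label word
    # the PLM favors a priori does not dominate predictions.
    corrected = label_word_probs(text) / bias
    return list(label_words)[corrected.argmax().item()]

print(classify("An absolute masterpiece of suspense."))
```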
Anthology ID: 2024.lrec-main.359
Volume: Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Month: May
Year: 2024
Address: Torino, Italia
Editors: Nicoletta Calzolari, Min-Yen Kan, Veronique Hoste, Alessandro Lenci, Sakriani Sakti, Nianwen Xue
Venues: LREC | COLING
Publisher: ELRA and ICCL
Pages: 4036–4046
URL: https://aclanthology.org/2024.lrec-main.359
Cite (ACL): Feng Zhao, Wan Xianlin, Cheng Yan, and Chu Kiong Loo. 2024. Correcting Language Model Bias for Text Classification in True Zero-Shot Learning. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 4036–4046, Torino, Italia. ELRA and ICCL.
Cite (Informal): Correcting Language Model Bias for Text Classification in True Zero-Shot Learning (Zhao et al., LREC-COLING 2024)
PDF: https://aclanthology.org/2024.lrec-main.359.pdf