
Transformer Language Models Handle Word Frequency in Prediction Head

Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, Kentaro Inui


Abstract
The prediction head is a crucial component of Transformer language models. Despite its direct impact on prediction, this component has often been overlooked in analyses of Transformers. In this study, we investigate the inner workings of the prediction head, specifically focusing on its bias parameters. Our experiments with BERT and GPT-2 models reveal that the biases in their word prediction heads play a significant role in the models' ability to reflect word frequency in a corpus, aligning with the logit adjustment method commonly used in long-tailed learning. We also quantify the effect of controlling the biases in practical auto-regressive text generation scenarios; under a particular setting, more diverse text can be generated without compromising text quality.
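The intervention the abstract describes, controlling the bias term in the word prediction head, can be sketched concretely. Below is a minimal illustration (not the authors' code) using the Hugging Face Transformers API with bert-base-uncased: it zeroes the masked-language-modeling head's bias in place and compares the top predictions before and after. If, as the paper reports, this bias encodes corpus word frequency, removing it should shift predictions away from high-frequency words.

# Minimal sketch (not the authors' code): zero BERT's prediction-head bias
# and compare masked-word predictions, using Hugging Face Transformers.
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased").eval()

inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]

with torch.no_grad():
    logits_with_bias = model(**inputs).logits[0, mask_pos]
    # The MLM head computes decoder(x) + bias; zeroing the bias removes the
    # frequency-related offset that the paper attributes to this parameter.
    model.cls.predictions.bias.zero_()
    logits_no_bias = model(**inputs).logits[0, mask_pos]

for name, logits in [("with bias", logits_with_bias),
                     ("bias zeroed", logits_no_bias)]:
    top5 = torch.topk(logits, 5).indices.tolist()
    print(name, tokenizer.convert_ids_to_tokens(top5))

An analogous intervention at generation time (subtracting a bias-like term from the logits of an auto-regressive model) is what the abstract refers to as controlling the biases during text generation; the paper evaluates that setting quantitatively.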
Anthology ID:
2023.findings-acl.276
Volume:
Findings of the Association for Computational Linguistics: ACL 2023
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
4523–4535
URL:
https://aclanthology.org/2023.findings-acl.276
DOI:
10.18653/v1/2023.findings-acl.276
Cite (ACL):
Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, and Kentaro Inui. 2023. Transformer Language Models Handle Word Frequency in Prediction Head. In Findings of the Association for Computational Linguistics: ACL 2023, pages 4523–4535, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Transformer Language Models Handle Word Frequency in Prediction Head (Kobayashi et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-acl.276.pdf
Video:
https://aclanthology.org/2023.findings-acl.276.mp4