
Bias in Opinion Summarisation from Pre-training to Adaptation: A Case Study in Political Bias

Nannan Huang, Haytham Fayek, Xiuzhen Zhang


Abstract
Opinion summarisation aims to distil the salient information and opinions presented in documents such as product reviews, discussion forums, and social media texts into short summaries that enable users to effectively understand the opinions therein. Generating biased summaries risks potentially swaying public opinion. Previous studies focused on bias in opinion summarisation using extractive models, but limited research has paid attention to abstractive summarisation models. In this study, using political bias as a case study, we first establish a methodology to quantify bias in abstractive models, then trace it from the pre-trained models to the task of summarising social media opinions using different models and adaptation methods. We find that most models exhibit intrinsic bias. Using a social media text summarisation dataset and contrasting various adaptation methods, we find that tuning a smaller number of parameters is less biased compared to standard fine-tuning; however, the diversity of topics in the training data used for fine-tuning is critical.
Anthology ID:
2024.eacl-long.63
Volume:
Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
March
Year:
2024
Address:
St. Julian’s, Malta
Editors:
Yvette Graham, Matthew Purver
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
1041–1055
URL:
https://aclanthology.org/2024.eacl-long.63
Cite (ACL):
Nannan Huang, Haytham Fayek, and Xiuzhen Zhang. 2024. Bias in Opinion Summarisation from Pre-training to Adaptation: A Case Study in Political Bias. In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1041–1055, St. Julian’s, Malta. Association for Computational Linguistics.
Cite (Informal):
Bias in Opinion Summarisation from Pre-training to Adaptation: A Case Study in Political Bias (Huang et al., EACL 2024)
PDF:
https://aclanthology.org/2024.eacl-long.63.pdf
Video:
https://aclanthology.org/2024.eacl-long.63.mp4