
SPIN: Sparsifying and Integrating Internal Neurons in Large Language Models for Text Classification

Difan Jiao, Yilun Liu, Zhenwei Tang, Daniel Matter, Jürgen Pfeffer, Ashton Anderson


Abstract
Among the many tasks that Large Language Models (LLMs) have revolutionized is text classification. Current text classification paradigms, however, rely solely on the output of the final layer in the LLM, leaving the rich information contained in internal neurons largely untapped. In this study, we present SPIN: a model-agnostic framework that sparsifies and integrates internal neurons of intermediate LLM layers for text classification. Specifically, SPIN sparsifies internal neurons through linear-probing-based salient neuron selection, layer by layer, which avoids noise from unrelated neurons and keeps the method efficient. The salient neurons across layers are then integrated to serve as multi-layered features for the classification head. Extensive experimental results show that SPIN significantly improves text classification accuracy, efficiency, and interpretability.
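
The pipeline the abstract describes can be illustrated with a minimal sketch: probe each layer's activations with a linear classifier, keep the neurons with the largest probe weights, concatenate the kept neurons across layers, and fit a classification head on the integrated features. The sketch below uses synthetic activations and illustrative choices (top-k selection by absolute probe weight, logistic-regression probes, k = 32); these specifics are assumptions for illustration, not details taken from the paper.

    # Hedged sketch of a SPIN-style pipeline; not the authors' implementation.
    # Assumes per-layer activations for each text are already available, e.g.
    # pooled hidden states from an LLM run with output_hidden_states=True.
    # Synthetic data keeps the sketch self-contained and runnable.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n_texts, n_layers, hidden = 200, 4, 768
    X_layers = rng.normal(size=(n_layers, n_texts, hidden))  # [layer, text, neuron]
    y = rng.integers(0, 2, size=n_texts)                     # binary labels

    k = 32  # neurons kept per layer (illustrative choice)
    selected = []
    for X in X_layers:
        # Linear probe per layer: neurons with large |weight| are treated as salient.
        probe = LogisticRegression(max_iter=1000).fit(X, y)
        salience = np.abs(probe.coef_).sum(axis=0)
        idx = np.argsort(salience)[-k:]      # top-k salient neurons in this layer
        selected.append(X[:, idx])           # sparsify: drop the rest

    # Integrate: cross-layer salient neurons become multi-layered features.
    features = np.concatenate(selected, axis=1)
    head = LogisticRegression(max_iter=1000).fit(features, y)  # classification head
    print(head.score(features, y))

Because only k neurons per layer survive selection, the classification head sees a compact feature vector (n_layers * k dimensions here) rather than the full hidden state, which is where the efficiency and interpretability claims of the abstract come from.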
Anthology ID:
2024.findings-acl.277
Volume:
Findings of the Association for Computational Linguistics: ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
4666–4682
URL:
https://aclanthology.org/2024.findings-acl.277
DOI:
10.18653/v1/2024.findings-acl.277
Cite (ACL):
Difan Jiao, Yilun Liu, Zhenwei Tang, Daniel Matter, Jürgen Pfeffer, and Ashton Anderson. 2024. SPIN: Sparsifying and Integrating Internal Neurons in Large Language Models for Text Classification. In Findings of the Association for Computational Linguistics: ACL 2024, pages 4666–4682, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
SPIN: Sparsifying and Integrating Internal Neurons in Large Language Models for Text Classification (Jiao et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-acl.277.pdf