
MiniLMv2: Multi-Head Self-Attention Relation Distillation for Compressing Pretrained Transformers

Wenhui Wang, Hangbo Bao, Shaohan Huang, Li Dong, Furu Wei


Anthology ID:
2021.findings-acl.188
Volume:
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021
Month:
August
Year:
2021
Address:
Online
Editors:
Chengqing Zong, Fei Xia, Wenjie Li, Roberto Navigli
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
2140–2151
URL:
https://aclanthology.org/2021.findings-acl.188
DOI:
10.18653/v1/2021.findings-acl.188
Cite (ACL):
Wenhui Wang, Hangbo Bao, Shaohan Huang, Li Dong, and Furu Wei. 2021. MiniLMv2: Multi-Head Self-Attention Relation Distillation for Compressing Pretrained Transformers. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 2140–2151, Online. Association for Computational Linguistics.
Cite (Informal):
MiniLMv2: Multi-Head Self-Attention Relation Distillation for Compressing Pretrained Transformers (Wang et al., Findings 2021)
PDF:
https://aclanthology.org/2021.findings-acl.188.pdf
Code:
additional community code
Data:
BookCorpus, CoLA, GLUE, MLQA, MRPC, MultiNLI, QNLI, SQuAD, SST, SST-2