
DoG-Instruct: Towards Premium Instruction-Tuning Data via Text-Grounded Instruction Wrapping

Yongrui Chen, Haiyun Jiang, Xinting Huang, Shuming Shi, Guilin Qi


Abstract
The improvement of LLMs’ instruction-following capabilities relies heavily on the availability of high-quality instruction-response pairs. Unfortunately, current methods for collecting such pairs suffer from either unaffordable labor costs or severe hallucinations in LLM self-generation. To tackle these challenges, this paper proposes a scalable solution: training LLMs to generate instruction-response pairs grounded in human-written documents, rather than relying solely on context-free self-generation. Our method not only exploits the advantages of human-written documents in reducing hallucinations but also uses an LLM to wrap the expression of the documents, bridging the gap between diverse document styles and the standard AI-response style. Experiments demonstrate that our method outperforms existing typical methods on multiple benchmarks. In particular, compared to the best-performing baseline, the LLM trained on our generated dataset achieves a 10% relative improvement on AlpacaEval while using only 1/5 of its training data. Furthermore, a comprehensive manual evaluation validates the quality of the generated data.
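To make the abstract's core idea concrete, below is a minimal sketch of text-grounded instruction wrapping: an LLM is asked to rewrite a human-written document into an instruction-response pair whose response stays grounded in the source text. This is not the authors' released implementation; the prompt template, the model name, and the output parsing are illustrative assumptions (the paper trains a dedicated wrapper LLM rather than prompting a general-purpose API model).

```python
# Hedged sketch of document-grounded instruction wrapping.
# Assumptions (not from the paper): the prompt wording, the use of the
# OpenAI chat API, the placeholder model name, and the output format.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

WRAP_PROMPT = (
    "Below is a human-written document. Rewrite its content as a single "
    "instruction-response pair for instruction tuning. The response must be "
    "grounded in the document; do not add facts the document does not contain.\n\n"
    "Document:\n{document}\n\n"
    "Output format:\nInstruction: <instruction>\nResponse: <response>"
)

def wrap_document(document: str) -> tuple[str, str]:
    """Ask the wrapper LLM to turn one document into an (instruction, response) pair."""
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; the paper uses its own trained wrapper LLM
        messages=[{"role": "user", "content": WRAP_PROMPT.format(document=document)}],
    )
    text = completion.choices[0].message.content
    instruction, _, response = text.partition("Response:")
    return instruction.replace("Instruction:", "").strip(), response.strip()
```

Because every pair is anchored to an existing document, hallucinated responses are less likely than with context-free self-generation, while the wrapping step normalizes heterogeneous document styles into a consistent response style.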
Anthology ID:
2024.naacl-long.230
Volume:
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Kevin Duh, Helena Gomez, Steven Bethard
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
4125–4135
URL:
https://aclanthology.org/2024.naacl-long.230
DOI:
10.18653/v1/2024.naacl-long.230
Cite (ACL):
Yongrui Chen, Haiyun Jiang, Xinting Huang, Shuming Shi, and Guilin Qi. 2024. DoG-Instruct: Towards Premium Instruction-Tuning Data via Text-Grounded Instruction Wrapping. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 4125–4135, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
DoG-Instruct: Towards Premium Instruction-Tuning Data via Text-Grounded Instruction Wrapping (Chen et al., NAACL 2024)
PDF:
https://aclanthology.org/2024.naacl-long.230.pdf