Baize: An open-source chat model with parameter-efficient tuning on self-chat data

C Xu, D Guo, N Duan, J McAuley - arXiv preprint arXiv:2304.01196, 2023 - arxiv.org
Chat models, such as ChatGPT, have shown impressive capabilities and have been rapidly adopted across numerous domains. However, these models are only accessible through a restricted API, creating barriers for new research and progress in the field. We propose a pipeline that can automatically generate a high-quality multi-turn chat corpus by leveraging ChatGPT to engage in a conversation with itself. Subsequently, we employ parameter-efficient tuning to enhance LLaMA, an open-source large language model. The resulting model, named Baize, demonstrates good performance in multi-turn dialogues, with guardrails that minimize potential risks. Furthermore, we propose a new technique, Self-Distill with Feedback, to further improve the performance of the Baize models with feedback from ChatGPT. The Baize models and data are released for research purposes only at https://github.com/project-baize/baize-chatbot. An online demo is also available at https://huggingface.co/spaces/project-baize/chat-with-baize.
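
The self-chat data-collection step can be pictured as follows. This is a minimal sketch of the idea described in the abstract, not the authors' exact implementation: it assumes an OpenAI-style chat-completions client, a generic prompt template of my own wording, and a simple "Human:"/"AI:" transcript format parsed from a single completion. Seed questions, per the paper, would be sampled from sources such as Quora or Stack Overflow.

```python
"""Sketch of a self-chat pipeline: one chat model plays both sides
of a multi-turn dialogue seeded by a question, and the transcript is
parsed into training examples."""
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative template (my own wording, not the paper's exact prompt):
# instruct the model to simulate both roles in one completion.
SELF_CHAT_TEMPLATE = (
    "The following is a conversation between a human and an AI assistant. "
    "You play both sides. Mark each turn with 'Human:' or 'AI:'. "
    "The conversation starts from this topic: {seed}"
)

def generate_self_chat(seed: str, model: str = "gpt-3.5-turbo") -> list[tuple[str, str]]:
    """Ask the chat model to role-play a full dialogue, then parse it."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": SELF_CHAT_TEMPLATE.format(seed=seed)}],
    )
    text = response.choices[0].message.content
    # Parse alternating "Human:" / "AI:" lines into (speaker, utterance) pairs.
    turns = []
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("Human:"):
            turns.append(("Human", line[len("Human:"):].strip()))
        elif line.startswith("AI:"):
            turns.append(("AI", line[len("AI:"):].strip()))
    return turns

if __name__ == "__main__":
    # Hypothetical seed question for illustration.
    for speaker, utterance in generate_self_chat("How can I fine-tune a large language model on one GPU?"):
        print(f"{speaker}: {utterance}")
```

Corpora collected this way would then feed the parameter-efficient tuning step (the paper uses LoRA-style low-rank adapters on LLaMA), which updates only a small fraction of the model's weights.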