[go: up one dir, main page]

CodeAttack: Revealing Safety Generalization Challenges of Large Language Models via Code Completion

Qibing Ren, Chang Gao, Jing Shao, Junchi Yan, Xin Tan, Wai Lam, Lizhuang Ma


Abstract
The rapid advancement of Large Language Models (LLMs) has brought about remarkable generative capabilities but also raised concerns about their potential misuse. While strategies like supervised fine-tuning and reinforcement learning from human feedback have enhanced their safety, these methods primarily focus on natural languages, which may not generalize to other domains. This paper introduces CodeAttack, a framework that transforms natural language inputs into code inputs, presenting a novel environment for testing the safety generalization of LLMs. Our comprehensive studies on state-of-the-art LLMs including GPT-4, Claude-2, and Llama-2 series reveal a new and universal safety vulnerability of these models against code input: CodeAttack bypasses the safety guardrails of all models more than 80% of the time. We find that a larger distribution gap between CodeAttack and natural language leads to weaker safety generalization, such as encoding natural language input with data structures. Furthermore, we give our hypotheses about the success of CodeAttack: the misaligned bias acquired by LLMs during code training, prioritizing code completion over avoiding the potential safety risk. Finally, we analyze potential mitigation measures. These findings highlight new safety risks in the code domain and the need for more robust safety alignment algorithms to match the code capabilities of LLMs.
Anthology ID:
2024.findings-acl.679
Volume:
Findings of the Association for Computational Linguistics: ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
Findings
SIG:
Publisher:
Association for Computational Linguistics
Note:
Pages:
11437–11452
Language:
URL:
https://aclanthology.org/2024.findings-acl.679
DOI:
10.18653/v1/2024.findings-acl.679
Bibkey:
Cite (ACL):
Qibing Ren, Chang Gao, Jing Shao, Junchi Yan, Xin Tan, Wai Lam, and Lizhuang Ma. 2024. CodeAttack: Revealing Safety Generalization Challenges of Large Language Models via Code Completion. In Findings of the Association for Computational Linguistics: ACL 2024, pages 11437–11452, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
CodeAttack: Revealing Safety Generalization Challenges of Large Language Models via Code Completion (Ren et al., Findings 2024)
Copy Citation:
PDF:
https://aclanthology.org/2024.findings-acl.679.pdf