
Combating Security and Privacy Issues in the Era of Large Language Models

Muhao Chen, Chaowei Xiao, Huan Sun, Lei Li, Leon Derczynski, Anima Anandkumar, Fei Wang


Abstract
This tutorial provides a systematic summary of the risks and vulnerabilities of large language models (LLMs) in terms of security, privacy, and copyright, together with the most recent solutions to address those issues. We will discuss a broad thread of studies that seek to answer the following questions: (i) How do we unravel the adversarial threats that attackers may exploit at training time of LLMs, especially those that may arise in recent paradigms of instruction tuning and RLHF? (ii) How do we guard LLMs against malicious attacks at inference time, such as attacks based on backdoors and jailbreaking? (iii) How do we ensure privacy protection of user information and LLM decisions in Language Model as-a-Service (LMaaS)? (iv) How do we protect the copyright of an LLM? (v) How do we detect and prevent cases where personal or confidential information is leaked during LLM training? (vi) How should we make policies to guard against improper usage of LLM-generated content? In addition, we will conclude the discussion by outlining emergent challenges in the security, privacy, and reliability of LLMs that deserve timely investigation by the community.
Anthology ID:
2024.naacl-tutorials.2
Volume:
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 5: Tutorial Abstracts)
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Rui Zhang, Nathan Schneider, Snigdha Chaturvedi
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
8–18
URL:
https://aclanthology.org/2024.naacl-tutorials.2
DOI:
10.18653/v1/2024.naacl-tutorials.2
Cite (ACL):
Muhao Chen, Chaowei Xiao, Huan Sun, Lei Li, Leon Derczynski, Anima Anandkumar, and Fei Wang. 2024. Combating Security and Privacy Issues in the Era of Large Language Models. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 5: Tutorial Abstracts), pages 8–18, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
Combating Security and Privacy Issues in the Era of Large Language Models (Chen et al., NAACL 2024)
PDF:
https://aclanthology.org/2024.naacl-tutorials.2.pdf