
Make Text Unlearnable: Exploiting Effective Patterns to Protect Personal Data

Xinzhe Li, Ming Liu


Abstract
This paper addresses the ethical concerns arising from the use of unauthorized public data in deep learning models and proposes a novel solution. Specifically, building on the work of Huang et al. (2021), we extend their bi-level optimization approach to generate unlearnable text using a gradient-based search technique. Although effective, this approach faces practical limitations: it requires batches of instances and knowledge of the model architecture, neither of which is readily available to ordinary users who can access only their own data. Furthermore, even with semantic-preserving constraints, unlearnable noise can alter the text's semantics. To address these challenges, we extract simple patterns from unlearnable text produced by bi-level optimization and demonstrate that the data remains unlearnable for unknown models. Additionally, these patterns are neither instance- nor dataset-specific, allowing users to readily apply them to text classification and question-answering tasks, even if only a small proportion of users implement them on their public content. We also open-source our code for generating unlearnable text and assessing unlearnable noise, to benefit the public and future studies.
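The bi-level scheme the abstract builds on can be illustrated with a minimal sketch. Everything below is a hedged toy, not the authors' implementation: the bag-of-words logistic-regression surrogate, the toy vocabulary and batch sizes, and the single outer iteration of greedy token substitution are all illustrative assumptions. The idea it demonstrates is error-minimizing noise in the style of Huang et al. (2021): the inner problem fits a surrogate model, and the outer problem perturbs each instance so the surrogate's training loss *decreases*, making the data look "already learned".

```python
import numpy as np

# Hedged sketch of error-minimizing noise for text via greedy token
# substitution. The surrogate model, sizes, and single outer iteration
# are simplifying assumptions, NOT the paper's actual implementation.

rng = np.random.default_rng(0)
VOCAB, SEQ, N = 20, 6, 32  # toy vocabulary, sequence length, batch size

def featurize(batch):
    """Bag-of-words counts for a batch of token-id sequences."""
    feats = np.zeros((len(batch), VOCAB))
    for i, seq in enumerate(batch):
        for t in seq:
            feats[i, t] += 1.0
    return feats

def train_logreg(X, y, steps=200, lr=0.1):
    """Inner problem: fit a surrogate logistic-regression classifier."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def batch_loss(batch, y, w):
    """Mean binary cross-entropy of the surrogate on a token batch."""
    p = 1.0 / (1.0 + np.exp(-featurize(batch) @ w))
    eps = 1e-9
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

# Toy labelled data: random token sequences with binary labels.
data = rng.integers(0, VOCAB, size=(N, SEQ))
labels = rng.integers(0, 2, size=N).astype(float)

# One outer iteration: fix the surrogate, then for each instance keep the
# single token substitution that most decreases the training loss.
w0 = train_logreg(featurize(data), labels)
loss0 = batch_loss(data, labels, w0)
poisoned = data.copy()
for i in range(N):
    best_loss, best_edit = batch_loss(poisoned, labels, w0), None
    for pos in range(SEQ):
        for tok in range(VOCAB):
            cand = poisoned.copy()
            cand[i, pos] = tok
            loss = batch_loss(cand, labels, w0)
            if loss < best_loss:
                best_loss, best_edit = loss, (pos, tok)
    if best_edit is not None:  # keep only loss-decreasing substitutions
        poisoned[i, best_edit[0]] = best_edit[1]

loss1 = batch_loss(poisoned, labels, w0)
print(f"surrogate loss before noise: {loss0:.4f}, after: {loss1:.4f}")
```

Because only loss-decreasing substitutions are accepted, the perturbed batch always attains a surrogate loss no higher than the clean batch; the paper's contribution is that simple patterns distilled from such perturbations transfer to unknown models without per-instance optimization.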
Anthology ID:
2023.trustnlp-1.22
Volume:
Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023)
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anaelia Ovalle, Kai-Wei Chang, Ninareh Mehrabi, Yada Pruksachatkun, Aram Galstyan, Jwala Dhamala, Apurv Verma, Trista Cao, Anoop Kumar, Rahul Gupta
Venue:
TrustNLP
Publisher:
Association for Computational Linguistics
Pages:
249–259
URL:
https://aclanthology.org/2023.trustnlp-1.22
DOI:
10.18653/v1/2023.trustnlp-1.22
Cite (ACL):
Xinzhe Li and Ming Liu. 2023. Make Text Unlearnable: Exploiting Effective Patterns to Protect Personal Data. In Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023), pages 249–259, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Make Text Unlearnable: Exploiting Effective Patterns to Protect Personal Data (Li & Liu, TrustNLP 2023)
PDF:
https://aclanthology.org/2023.trustnlp-1.22.pdf