Training DNNs Resilient to Adversarial and Random Bit-Flips by Learning Quantization Ranges

Published: 06 Nov 2023, Last Modified: 17 Sept 2024. Accepted by TMLR (CC BY 4.0).
Abstract: Promoting robustness in deep neural networks (DNNs) is crucial for their reliable deployment in uncertain environments, such as low-power settings or in the presence of adversarial attacks. In particular, bit-flip weight perturbations in quantized networks can significantly degrade performance, underscoring the need to improve DNN resilience. In this paper, we introduce a training mechanism that learns the quantization range of different DNN layers to enhance DNN robustness against bit-flip errors on the model parameters. The proposed approach, called weight clipping-aware training (WCAT), minimizes the quantization range while preserving performance, striking a balance between the two. Our experimental results on different models and datasets show that DNNs trained with WCAT can tolerate high levels of noise while keeping accuracy close to that of the baseline model. Moreover, we show that our method significantly enhances DNN robustness against adversarial bit-flip attacks. Finally, when considering the energy-reliability trade-off inherent in on-chip SRAM memories, we observe that WCAT consistently improves the Pareto frontier of test accuracy and energy consumption across diverse models.
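To make the core idea concrete, below is a minimal, hypothetical PyTorch sketch of clipping-aware quantized training with a learnable per-layer quantization range. It is not the authors' implementation (see the linked code repository for that); the module name `LearnedRangeQuantLinear`, the `range_penalty` coefficient, and the straight-through quantizer are illustrative assumptions that reflect the abstract's description: the clipping range is a trainable parameter, and its magnitude is penalized so that bit-flip perturbations map to smaller weight changes.

```python
# Hypothetical sketch of weight clipping-aware training with a learnable
# quantization range. Assumptions (not from the paper): module/parameter names,
# the specific range penalty, and the straight-through estimator details.
import torch
import torch.nn as nn


class LearnedRangeQuantLinear(nn.Module):
    """Linear layer whose weights are clipped to a learnable range [-alpha, alpha]
    and uniformly quantized to `bits` bits with a straight-through estimator."""

    def __init__(self, in_features, out_features, bits=8, range_penalty=1e-4):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02)
        self.bias = nn.Parameter(torch.zeros(out_features))
        # Learnable clipping (quantization) range for this layer.
        self.alpha = nn.Parameter(torch.tensor(1.0))
        self.bits = bits
        self.range_penalty = range_penalty

    def quantize(self, w):
        alpha = self.alpha.abs().clamp(min=1e-3)
        # Clip weights to [-alpha, alpha] (works with a tensor-valued bound).
        w_clipped = torch.max(torch.min(w, alpha), -alpha)
        scale = alpha / (2 ** (self.bits - 1) - 1)
        w_q = torch.round(w_clipped / scale) * scale
        # Straight-through estimator: forward uses w_q, gradients flow through w_clipped.
        return w_clipped + (w_q - w_clipped).detach()

    def forward(self, x):
        return nn.functional.linear(x, self.quantize(self.weight), self.bias)

    def range_loss(self):
        # Penalize the quantization range: a smaller range means a flipped
        # high-order bit corresponds to a smaller weight perturbation.
        return self.range_penalty * self.alpha.abs()
```

In this sketch, the training objective would combine the task loss with the per-layer range penalties, e.g. `loss = criterion(model(x), y) + sum(m.range_loss() for m in model.modules() if isinstance(m, LearnedRangeQuantLinear))`, which captures the trade-off between shrinking the quantization range and preserving accuracy described in the abstract.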
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: We have prepared a finalized version with an updated publication date and a direct link to the associated code repository.
Code: https://github.com/kmchiti/WCAT
Assigned Action Editor: ~Naigang_Wang1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 1497