Comment on "Biologically inspired protection of deep networks from adversarial attacks"
Abstract
A recent paper suggests that Deep Neural Networks can be protected from gradient-based adversarial perturbations by driving the network activations into a highly saturated regime. Here we analyse such saturated networks and show that the attacks fail due to numerical limitations in the gradient computations. A simple stabilisation of the gradient estimates enables successful and efficient attacks. Thus, it has yet to be shown that the robustness observed in highly saturated networks is not simply due to numerical limitations.
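The failure mode described here can be illustrated numerically: in a saturated unit the analytic gradient is tiny but non-zero, yet its naive floating-point evaluation underflows to exactly zero, so a gradient-based attack receives no direction information. The sketch below demonstrates this and one possible stabilisation via a log-domain evaluation. This abstract does not specify the paper's actual stabilisation procedure, so the log-domain trick here is an illustrative assumption, not the authors' method.

```python
import numpy as np

# Pre-activation deep in the saturated regime of a sigmoid unit.
z = np.float32(-200.0)

# Naive evaluation: exp(z) underflows to 0 in float32, so
# sigmoid'(z) = s * (1 - s) evaluates to exactly 0.0 and the
# attack's gradient signal is numerically dead.
s = np.exp(z) / (np.float32(1.0) + np.exp(z))
naive_grad = s * (np.float32(1.0) - s)
print(naive_grad)  # 0.0

# Stabilised evaluation (illustrative assumption): compute
# log sigmoid'(z) = log sigmoid(z) + log sigmoid(-z) in the log
# domain, where the value stays finite and gradient magnitudes
# remain comparable across units.
log_grad = -(np.logaddexp(0.0, -z) + np.logaddexp(0.0, z))
print(log_grad)    # approx. -200.0, i.e. grad approx. exp(-200) > 0
```

With the naive evaluation, the sign of the gradient is lost everywhere in the saturated regime; a stabilised estimate of this kind restores a usable search direction, which is what makes gradient-based attacks efficient again.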
- Publication:
- arXiv e-prints
- Pub Date:
- April 2017
- DOI:
- 10.48550/arXiv.1704.01547
- arXiv:
- arXiv:1704.01547
- Bibcode:
- 2017arXiv170401547B
- Keywords:
- Statistics - Machine Learning;
- Computer Science - Machine Learning;
- Quantitative Biology - Neurons and Cognition
- E-Print:
- 4 pages, 3 figures