Non-Lipschitz Attack: A More Sparse Adversarial Attack via Non-Lipschitz $ℓ_p$ Regularization

Authors

  • Xuan Lin
  • Haidong Xie
  • Chunlin Wu
  • Xueshuang Xiang

DOI:

https://doi.org/10.4208/csiam-am.SO-2022-0005

Keywords:

Sparse adversarial attack, $ℓ_p \ (0< p <1)$ regularization, lower bound theory, support shrinkage, ADMM.

Abstract

Deep neural networks are considerably vulnerable to adversarial attacks. Among these, sparse attacks mislead image classifiers with sparse, pixel-level perturbations that alter only a few pixels, and hold much potential for physical-world applications. Existing sparse attacks are mostly based on $ℓ_0$ optimization, and few theoretical results are available for these works. In this paper, we propose a novel sparse attack approach named the non-Lipschitz attack (NLA). For the proposed $ℓ_p \ (0< p <1)$ regularization attack model, we derive a lower bound theory that yields a support inclusion analysis. Building on these results, we naturally extend previous works to an iterative algorithm with support-shrinking and thresholding strategies, together with an efficient ADMM inner solver. Experiments show that our NLA method outperforms comparative attacks on several datasets with different networks in both targeted and untargeted scenarios. NLA achieves a 100% attack success rate in almost all cases, while perturbing roughly 14% fewer pixels on average than the recent $ℓ_0$ attack FMN-$ℓ_0$.
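The interplay of a loss-driven update with a sparsity-promoting thresholding step can be illustrated with a toy sketch. The code below is *not* the paper's NLA algorithm or its ADMM solver: it attacks a simple linear scorer and uses plain hard thresholding as a stand-in for the zeroing effect that the non-Lipschitz $ℓ_p \ (0<p<1)$ penalty has on small perturbation components. All function names and parameters here are illustrative assumptions.

```python
import numpy as np

def hard_threshold(delta, tau):
    """Zero out perturbation entries whose magnitude is below tau.

    A crude surrogate for the sparsity-promoting effect of a
    non-Lipschitz l_p (0 < p < 1) penalty: small components are
    driven exactly to zero, shrinking the perturbation's support.
    """
    out = delta.copy()
    out[np.abs(out) < tau] = 0.0
    return out

def toy_sparse_attack(x, w, b, steps=100, lr=0.1, tau=0.05):
    """Toy untargeted attack on a linear scorer f(x) = w @ x + b.

    Alternates a gradient step that pushes the score toward the
    decision boundary with a thresholding step that shrinks the
    support of the perturbation, loosely mimicking the iterative
    support-shrinkage strategy described in the abstract.
    """
    delta = np.zeros_like(x)
    sign = np.sign(w @ x + b)  # which side of the boundary x is on
    for _ in range(steps):
        # for a linear scorer, the gradient of f w.r.t. the input is w
        delta -= lr * sign * w
        delta = hard_threshold(delta, tau)
        if np.sign(w @ (x + delta) + b) != sign:
            break  # the predicted sign flipped: attack succeeded
    return delta
```

Because the score depends mostly on the first coordinate in the usage below, the thresholding step keeps the perturbation concentrated there, so the returned `delta` flips the prediction while touching only one of the four coordinates.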

Published

2023-10-26


Section

Articles

How to Cite

Non-Lipschitz Attack: A More Sparse Adversarial Attack via Non-Lipschitz $ℓ_p$ Regularization. (2023). CSIAM Transactions on Applied Mathematics, 4(4), 797-819. https://doi.org/10.4208/csiam-am.SO-2022-0005