Optimization for Adversarial Robustness Evaluations and Implications from the Solution Patterns


Empirical evaluation of deep learning models against adversarial perturbations entails solving nontrivial constrained optimization problems. The numerical algorithms commonly used in practice to solve these problems rely predominantly on projected gradient methods and mostly handle perturbations modeled by the ℓ1, ℓ2, and ℓ∞ distance metrics. In this paper, we introduce a novel algorithmic framework that blends a general-purpose constrained-optimization solver, PyGRANSO with Constraint-Folding (PWCF), which can add reliability and generality to the state-of-the-art (SOTA) algorithms (e.g., AutoAttack). Regarding reliability, PWCF provides solutions with stationarity measures to assess solution quality, and is generally free from delicate hyperparameter tuning. Regarding generality, PWCF can handle much more general perturbation models (e.g., those defined by any piecewise-differentiable metric), which are inaccessible to existing projected gradient methods. With PWCF, we further explore the distinct solution patterns found by various combinations of losses, perturbation models, and optimization algorithms used in robustness evaluation, and discuss the possible implications of these patterns for current robustness evaluation and adversarial training.
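To illustrate the projected gradient methods the abstract refers to, here is a minimal sketch of one PGD ascent step under an ℓ∞ perturbation model. This is a generic illustration, not code from the paper; the function name, NumPy usage, and the [0, 1] input range are assumptions for the sketch.

```python
import numpy as np

def pgd_linf_step(x_adv, grad, x_orig, eps, step_size):
    """One projected-gradient ascent step for an l_inf-bounded attack.

    Illustrative sketch only: takes a signed gradient step on the attack
    loss, then projects back onto the l_inf ball of radius eps around
    the original input, and clips to a valid [0, 1] input range.
    """
    x_adv = x_adv + step_size * np.sign(grad)          # ascent step
    x_adv = np.clip(x_adv, x_orig - eps, x_orig + eps)  # l_inf projection
    return np.clip(x_adv, 0.0, 1.0)                     # stay in input domain
```

The simple sign-and-clip projection here is exactly what ties such methods to the ℓ∞ (and similarly ℓ1, ℓ2) metrics; more general piecewise-differentiable perturbation metrics lack such closed-form projections, which is the gap PWCF is designed to address.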

Under review at International Journal of Computer Vision (IJCV)
Buyun Liang
Computer and Information Science Ph.D. Student