Robustness of Deep Networks
In this talk, we will present tools to assess the robustness of image classifiers to a diverse set of perturbations, ranging from adversarial perturbations to random noise. In particular, we will propose a semi-random noise regime that generalizes both the random and adversarial noise regimes. We will provide theoretical bounds on the robustness of classifiers in this general regime, which depend on the curvature of the classifier's decision boundary. In the second part of the talk, we will show the surprising existence of universal perturbation images that cause most natural images to be misclassified by state-of-the-art deep neural network classifiers.
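To make the notion of an adversarial perturbation concrete, the following is a minimal sketch (not the method from the talk) for the special case of a linear binary classifier f(x) = w·x + b: the smallest L2 perturbation that flips the prediction is the orthogonal projection of x onto the decision boundary. The weights `w`, bias `b`, and input `x` below are arbitrary illustrative values.

```python
import numpy as np

# Hypothetical linear binary classifier f(x) = w.x + b (illustration only).
w = np.array([2.0, -1.0])
b = 0.5

def f(x):
    return w @ x + b

x = np.array([1.0, 1.0])        # f(x) = 1.5 > 0, so x is classified positive

# Minimal L2 adversarial perturbation for a linear classifier:
# move x orthogonally toward the hyperplane w.x + b = 0.
r = -f(x) / np.dot(w, w) * w
x_adv = x + 1.001 * r           # tiny overshoot so x_adv crosses the boundary

print(np.sign(f(x)), np.sign(f(x_adv)))  # 1.0 -1.0
```

For curved (nonlinear) decision boundaries, as in deep networks, this closed form no longer holds, which is why curvature enters the robustness bounds discussed in the talk.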