Developed a fast verifier (two to three orders of magnitude faster than the state of the art) to evaluate the robustness of neural networks to adversarial examples. The computational speedup enabled:
1) Verification of properties on convolutional and residual networks with over 100,000 ReLUs
2) Computation (for the first time) of the exact adversarial accuracy of a classifier subject to bounded perturbations (formalized in the sketch after this list)
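For concreteness, here is one standard way to formalize exact adversarial accuracy, assuming an l-infinity threat model with radius epsilon (the norm and radius are assumptions; the text above says only "bounded perturbations"): a sample counts as correct only if the classifier f predicts the true label at every point in the perturbation ball.

```latex
% Exact adversarial accuracy over a test set {(x_i, y_i)}_{i=1}^{N}.
% The l-infinity ball of radius \epsilon is an assumed threat model.
\mathrm{acc}_{\mathrm{adv}}(f)
  = \frac{1}{N} \sum_{i=1}^{N}
    \mathbf{1}\!\left[\, f(x') = y_i \ \text{for all } x' \ \text{with} \ \lVert x' - x_i \rVert_{\infty} \le \epsilon \,\right]
```

A complete verifier decides the inner "for all" exactly for each sample, which is what makes the resulting accuracy exact rather than an upper or lower bound.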
Across all networks and datasets considered, the verifier certified more samples than state-of-the-art methods and found more adversarial examples than a strong first-order attack (a sketch of such an attack appears below). Analysis of the factors affecting verification time led to follow-on work on a regularizer that further reduces verification time.
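As a point of reference for the first-order baseline, below is a minimal sketch of a projected-gradient-style attack in PyTorch. The specific attack (PGD), its hyperparameters, and the [0, 1] pixel range are assumptions for illustration; the original text says only "a strong first-order attack".

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.1, alpha=0.01, steps=40):
    """Search the l-infinity ball of radius eps around x for misclassified points.

    All hyperparameter defaults here are illustrative assumptions.
    """
    x = x.detach()
    # Random start inside the ball, clipped to the (assumed) valid pixel range.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        (grad,) = torch.autograd.grad(loss, x_adv)
        # Take a signed gradient ascent step, then project back into the ball.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()

# Example usage (hypothetical model and data):
# x_adv = pgd_attack(model, images, labels, eps=8 / 255, alpha=2 / 255, steps=40)
# attacked = model(x_adv).argmax(dim=1) != labels  # samples where the attack succeeded
```

Because such an attack is an incomplete search, it can miss adversarial examples that exist; a complete verifier either certifies a sample or produces a counterexample, which is how it can find strictly more adversarial examples than the attack.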