[RWS+19] Wenjie Ruan, Min Wu, Youcheng Sun, Xiaowei Huang, Daniel Kroening, and Marta Kwiatkowska. Global Robustness Evaluation of Deep Neural Networks with Provable Guarantees for the Hamming Distance. In Proc. International Joint Conference on Artificial Intelligence (IJCAI-19), 2019.
Downloads: pdf (1.53 MB) | bib
Abstract. Deployment of deep neural networks (DNNs) in safety-critical systems requires provable guarantees for their correct behaviours. We compute the maximal radius of a safe norm ball around a given input, within which there are no adversarial examples for a trained DNN. We define global robustness as an expectation of the maximal safe radius over a test dataset, and develop an algorithm to approximate the global robustness measure by iteratively computing its lower and upper bounds. Our algorithm is the first efficient method for the Hamming (L0) distance, and we hypothesise that this norm is a good proxy for a certain class of physical attacks. The algorithm is anytime, i.e., it returns intermediate bounds and robustness estimates that are gradually, but strictly, improved as the computation proceeds; tensor-based, i.e., the computation is conducted over a set of inputs simultaneously to enable efficient GPU computation; and has provable guarantees, i.e., both the bounds and the robustness estimates can converge to their optimal values. Finally, we demonstrate the utility of our approach by applying the algorithm to a set of challenging problems.
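To illustrate the global robustness measure defined in the abstract, the sketch below estimates it as the mean of per-input safe-radius bounds over a test set. This is a minimal sketch, not the paper's provable, anytime, tensor-based algorithm: the per-input maximal safe Hamming (L0) radius is only bounded from above here by random search for adversarial examples, and the names `model` (a callable returning the predicted class) and `dataset` (an iterable of (input, label) pairs) are hypothetical.

```python
# Minimal illustrative sketch, NOT the paper's algorithm: it shows the
# definition of global robustness as the expectation of the maximal safe
# L0 (Hamming) radius over a test set. The per-input radius is bounded
# from above by random search for adversarial examples; the paper instead
# computes provable lower AND upper bounds tensor-wise on a GPU.
# `model` and `dataset` are hypothetical stand-ins.
import numpy as np

def safe_radius_upper_bound(model, x, label, max_radius=10, trials=200, seed=0):
    """Search for adversarial examples at Hamming distance k = 1..max_radius.
    Finding one at distance k proves the maximal safe radius is at most k - 1.
    Finding none proves nothing; we then return max_radius as an uninformative cap."""
    rng = np.random.default_rng(seed)
    flat = x.reshape(-1)
    for k in range(1, max_radius + 1):
        for _ in range(trials):
            idx = rng.choice(flat.size, size=k, replace=False)
            x_adv = flat.copy()
            x_adv[idx] = rng.random(k)                    # overwrite k input dimensions
            if model(x_adv.reshape(x.shape)) != label:    # label changed => unsafe at radius k
                return k - 1
    return max_radius

def global_robustness_estimate(model, dataset, max_radius=10):
    """Global robustness: mean of the per-input safe-radius bounds over the test set."""
    bounds = [safe_radius_upper_bound(model, x, y, max_radius) for x, y in dataset]
    return float(np.mean(bounds))
```

In contrast to this heuristic, the algorithm described in the paper tightens both a lower and an upper bound for each input as computation proceeds, so the global robustness estimate is bracketed with provable guarantees.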
