[CKL+19] L. Cardelli, M. Kwiatkowska, L. Laurenti, A. Patane. Robustness Guarantees for Bayesian Inference with Gaussian Processes. In Proc. Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19). To appear. 2019. [pdf] [bib]
Downloads: pdf (635 KB), bib
Abstract. Bayesian inference and Gaussian processes are widely used in applications ranging from robotics and control to biological systems. Many of these applications are safety-critical and require a characterization of the uncertainty associated with the learning model and formal guarantees on its predictions. In this paper we define a robustness measure for Bayesian inference against input perturbations, given by the probability that, for a test point and a compact set in the input space containing the test point, the prediction of the learning model remains δ-close for all points in the set, for δ > 0. Such measures can be used to provide formal guarantees for the absence of adversarial examples. By employing the theory of Gaussian processes, we derive tight upper bounds on the resulting robustness by utilising the Borell-TIS inequality, and we propose algorithms for their computation. We evaluate our techniques on two examples: a GP regression problem and a fully-connected deep neural network, where we rely on weak convergence to GPs to study adversarial examples on the MNIST dataset.
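To make the quantity in the abstract concrete: for a GP posterior over a compact set T containing a test point x*, the robustness measure is the probability that the sampled function stays within δ of its value at x* everywhere on T. The sketch below is a minimal Monte Carlo illustration of that probability, not the paper's method (the paper derives analytic upper bounds via the Borell-TIS inequality); all function names, kernel parameters, and the toy data are assumptions made for the example.

```python
import numpy as np

# Hypothetical squared-exponential kernel; lengthscale and variance are
# illustrative choices, not values from the paper.
def rbf_kernel(a, b, ell=0.3, var=1.0):
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / ell) ** 2)

# Standard GP regression posterior (mean and covariance) on test inputs.
def gp_posterior(X_train, y_train, X_test, noise=1e-2):
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_train, X_test)
    Kss = rbf_kernel(X_test, X_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    return mu, Kss - v.T @ v

# Monte Carlo estimate of P( |f(x) - f(x*)| <= delta for all x in T ),
# the δ-closeness probability described in the abstract.
def robustness_mc(mu, cov, star_idx, delta, n_samples=20000, seed=0):
    rng = np.random.default_rng(seed)
    jitter = 1e-9 * np.eye(len(mu))
    f = rng.multivariate_normal(mu, cov + jitter, size=n_samples)
    worst_dev = np.abs(f - f[:, [star_idx]]).max(axis=1)
    return (worst_dev <= delta).mean()

# Toy regression problem: T is a small interval around the test point x* = 0.
X = np.array([-1.0, 0.0, 1.0])
y = np.sin(X)
T = np.linspace(-0.1, 0.1, 11)
mu, cov = gp_posterior(X, y, T)
print(robustness_mc(mu, cov, star_idx=5, delta=0.5))
```

A sampling estimate like this converges slowly and gives no guarantee; the point of the paper's Borell-TIS-based bounds is to replace it with a certified upper bound on the probability of a δ-violation.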
