[VSLK24] Jon Vadillo, Roberto Santana, Jose A. Lozano and Marta Kwiatkowska. Uncertainty-Aware Explanations Through Probabilistic Self-Explainable Neural Networks. Technical report arXiv:2403.13740. Paper under submission. 2024. https://arxiv.org/abs/2403.13740
Downloads: pdf (12.26 MB), bib
Abstract. The lack of transparency of Deep Neural Networks continues to be a limitation that severely undermines their reliability and usage in high-stakes applications. A promising approach to overcome this limitation is Prototype-Based Self-Explainable Neural Networks (PSENNs), whose predictions rely on the similarity between the input at hand and a set of prototypical representations of the output classes, offering therefore a deep, yet transparent-by-design, architecture. So far, such models have been designed by considering pointwise estimates for the prototypes, which remain fixed after the learning phase of the model. In this paper, we introduce a probabilistic reformulation of PSENNs, called Prob-PSENN, which replaces point estimates for the prototypes with probability distributions over their values. This not only provides a more flexible framework for end-to-end learning of the prototypes, but also captures the explanatory uncertainty of the model, which is a missing feature in previous approaches. In addition, since the prototypes determine both the explanation and the prediction, Prob-PSENNs allow us to detect when the model is making uninformed or uncertain predictions, and to obtain valid explanations for them. Our experiments demonstrate that Prob-PSENNs provide more meaningful and robust explanations than their non-probabilistic counterparts, thus enhancing the explainability and reliability of the models.
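
To illustrate the core idea in the abstract, the following is a minimal sketch in PyTorch of a prototype-based classifier whose prototypes are samples from learned Gaussian distributions rather than fixed point estimates, so that disagreement across sampled prototypes surfaces predictive and explanatory uncertainty. All names here (ProbPSENNSketch, n_samples, the encoder choice) are illustrative assumptions, not the paper's actual architecture or API; see the arXiv report for the authors' formulation.

```python
# Hypothetical sketch of a probabilistic prototype-based classifier.
# Not the paper's implementation; names and layer choices are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ProbPSENNSketch(nn.Module):
    """Classifies by similarity to class prototypes whose values are drawn
    from learned Gaussian distributions (mean + log-variance per class)."""

    def __init__(self, latent_dim: int, n_classes: int):
        super().__init__()
        # Placeholder encoder mapping inputs to a latent space; a real model
        # would use a deep encoder suited to the data (e.g. a CNN for images).
        self.encoder = nn.Sequential(nn.Flatten(), nn.LazyLinear(latent_dim))
        # One prototype distribution per class: learnable mean and log-variance.
        self.proto_mu = nn.Parameter(torch.randn(n_classes, latent_dim))
        self.proto_logvar = nn.Parameter(torch.zeros(n_classes, latent_dim))

    def forward(self, x: torch.Tensor, n_samples: int = 10):
        z = self.encoder(x)                        # (batch, latent_dim)
        std = torch.exp(0.5 * self.proto_logvar)   # (n_classes, latent_dim)
        probs = []
        for _ in range(n_samples):
            # Draw one full prototype set via the reparameterization trick.
            protos = self.proto_mu + std * torch.randn_like(std)
            # Similarity = negative squared distance to each class prototype.
            logits = -torch.cdist(z, protos) ** 2  # (batch, n_classes)
            probs.append(F.softmax(logits, dim=-1))
        probs = torch.stack(probs)                 # (n_samples, batch, n_classes)
        # Mean prediction plus its spread across prototype samples: a large
        # spread flags an uninformed or uncertain prediction.
        return probs.mean(dim=0), probs.std(dim=0)
```

In this sketch, a wide prototype distribution makes the sampled prototypes, and hence the class probabilities, vary from draw to draw; thresholding the returned standard deviation is one simple way to detect the uncertain predictions the abstract refers to.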
