Abstract
This paper describes a robust and computationally feasible method to train Neural Networks and quantify the uncertainty of their predictions. Specifically, we propose a backpropagation algorithm for Neural Networks with interval predictions. In order to maintain numerical stability we propose minimising the maximum of the batch of errors at each step. Our approach can accommodate incertitude in the training data, and therefore adversarial examples from a commonly used attack model can be trivially accounted for. We present results on a test function example and a more realistic engineering test case. The reliability of the predictions of these networks is guaranteed by the non-convex Scenario approach to chance-constrained optimisation, which takes place after training and is hence robust to the performance of the optimiser. A key result is that, by using minibatches of size M, the complexity of the proposed approach scales as O(M⋅Niter) and does not depend upon the number of training data points, in contrast to other Interval Predictor Model methods. In addition, troublesome penalty function methods are avoided. To the authors' knowledge this contribution presents the first computationally feasible approach for dealing with convex set based epistemic uncertainty in huge datasets.
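The core training idea — take a minibatch of size M, compute the prediction errors, and descend on the worst (maximum) error so that per-iteration cost is O(M) regardless of dataset size — can be illustrated with a minimal sketch. This is not the paper's algorithm: it uses a hand-rolled linear interval predictor (centre `w*x + b` with a constant half-width `h`) and plain subgradient descent rather than a neural network with backpropagation, and all parameter names and step sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y = 2x + 1 + small noise (illustrative, not from the paper)
N = 1000
X = rng.uniform(-1.0, 1.0, size=(N, 1))
y = 2.0 * X[:, 0] + 1.0 + rng.normal(0.0, 0.05, size=N)

# Interval predictor: linear centre c(x) = w*x + b, constant half-width h
w, b, h = 0.0, 0.0, 1.0
lr = 0.05
M = 32          # minibatch size
n_iter = 2000   # per-iteration cost depends only on M, not on N

for _ in range(n_iter):
    idx = rng.choice(N, size=M, replace=False)
    xb, yb = X[idx, 0], y[idx]
    centre = w * xb + b
    # Minimise the MAXIMUM error over the minibatch: find the worst point
    err = np.abs(yb - centre)
    i = int(np.argmax(err))
    # Subgradient of the max absolute error w.r.t. (w, b) at the worst point
    sign = np.sign(centre[i] - yb[i])
    w -= lr * sign * xb[i]
    b -= lr * sign
    # Track the half-width towards the current worst-case error
    h += lr * (err[i] - h)

lower = (w * X[:, 0] + b) - h
upper = (w * X[:, 0] + b) + h
coverage = np.mean((y >= lower) & (y <= upper))
```

The subgradient step touches only the single worst point in the minibatch, so each iteration costs O(M) to evaluate and O(1) to update, giving the O(M⋅Niter) scaling the abstract describes; in the paper the reliability of the resulting interval is then certified post hoc via the Scenario approach rather than by the empirical `coverage` computed here.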
| Original language | English |
| --- | --- |
| Pages (from-to) | 338-351 |
| Number of pages | 14 |
| Journal | Neural Networks |
| Volume | 118 |
| Early online date | 24 Jul 2019 |
| DOIs | |
| Publication status | Published - 31 Oct 2019 |
Keywords
- imprecise probability
- interval predictor models
- machine learning
- neural networks
- uncertainty quantification