Efficient training of interval neural networks for imprecise training data

Jonathan Sadeghi, M. de Angelis, Edoardo Patelli

Research output: Contribution to journal › Article

4 Citations (Scopus)

Abstract

This paper describes a robust and computationally feasible method to train Neural Networks and quantify their uncertainty. Specifically, we propose a backpropagation algorithm for Neural Networks with interval predictions. In order to maintain numerical stability we propose minimising the maximum of the batch of errors at each step. Our approach can accommodate incertitude in the training data, and therefore adversarial examples from a commonly used attack model can be trivially accounted for. We present results on a test function example and a more realistic engineering test case. The reliability of the predictions of these networks is guaranteed by the non-convex Scenario approach to chance constrained optimisation, which takes place after training and is hence robust to the performance of the optimiser. A key result is that, by using minibatches of size M, the complexity of the proposed approach scales as O(M·N_iter), and does not depend upon the number of training data points, in contrast to other Interval Predictor Model methods. In addition, troublesome penalty function methods are avoided. To the authors' knowledge this contribution presents the first computationally feasible approach for dealing with convex set based epistemic uncertainty in huge datasets.
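The core ideas in the abstract — interval-valued predictions and a training step that minimises the maximum per-sample error in each minibatch — can be illustrated with a short sketch. This is not the authors' implementation: the feature map, the softplus radius parameterisation, the width penalty, and the finite-difference gradients are all illustrative assumptions; the point is that each step touches only a minibatch of size M, so the per-step cost is independent of the total number of training points.

```python
import numpy as np

# Hypothetical sketch: interval regression trained with a min-max batch loss.
# Model: interval prediction [c(x) - r(x), c(x) + r(x)], with a linear centre
# c(x) = wc @ phi(x) and a non-negative radius r(x) = softplus(wr @ phi(x)).
rng = np.random.default_rng(0)

def phi(x):
    # simple polynomial features (an assumption, not the paper's architecture)
    return np.stack([np.ones_like(x), x, x**2], axis=-1)

def softplus(z):
    # numerically stable softplus, keeps the radius non-negative
    return np.log1p(np.exp(-np.abs(z))) + np.maximum(z, 0.0)

def predict(wc, wr, x):
    f = phi(x)
    c = f @ wc
    r = softplus(f @ wr)
    return c - r, c + r

def sample_loss(wc, wr, x, y, width_weight=0.1):
    # per-sample error: how far y falls outside the interval, plus a
    # penalty on interval width so the bounds stay tight
    lo, hi = predict(wc, wr, x)
    violation = np.maximum(y - hi, 0.0) + np.maximum(lo - y, 0.0)
    return violation + width_weight * (hi - lo)

def batch_step(wc, wr, xb, yb, lr=1e-2, eps=1e-5):
    # Minimise the MAXIMUM per-sample loss in the batch: only the worst
    # sample drives the update (a subgradient of the max). Gradients are
    # taken by central finite differences purely for brevity.
    losses = sample_loss(wc, wr, xb, yb)
    i = int(np.argmax(losses))
    x_i, y_i = xb[i:i + 1], yb[i:i + 1]
    g_c, g_r = np.zeros_like(wc), np.zeros_like(wr)
    for j in range(wc.size):
        e = np.zeros_like(wc)
        e[j] = eps
        g_c[j] = (sample_loss(wc + e, wr, x_i, y_i)
                  - sample_loss(wc - e, wr, x_i, y_i))[0] / (2 * eps)
        g_r[j] = (sample_loss(wc, wr + e, x_i, y_i)
                  - sample_loss(wc, wr - e, x_i, y_i))[0] / (2 * eps)
    return wc - lr * g_c, wr - lr * g_r

# toy data: noisy sine on [0, 1]
x = rng.uniform(0.0, 1.0, 256)
y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=x.size)

wc, wr = np.zeros(3), np.zeros(3)
M = 32  # minibatch size; cost per step is O(M), independent of dataset size
for step in range(2000):
    idx = rng.choice(x.size, M, replace=False)
    wc, wr = batch_step(wc, wr, x[idx], y[idx])

lo, hi = predict(wc, wr, x)
print("mean interval width:", float(np.mean(hi - lo)))
print("empirical coverage:", float(np.mean((y >= lo) & (y <= hi))))
```

In the paper, a reliability guarantee on the trained interval is obtained afterwards via the non-convex Scenario approach to chance-constrained optimisation; the empirical coverage printed above is only a sanity check, not that guarantee.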

Original language: English
Pages (from-to): 338-351
Number of pages: 14
Journal: Neural Networks
Volume: 118
Early online date: 24 Jul 2019
DOIs
Publication status: Published - 31 Oct 2019

Keywords

  • imprecise probability
  • interval predictor models
  • machine learning
  • neural networks
  • uncertainty quantification

