Abstract
In this paper we attempt to build upon past work on Interval Neural Networks and provide a robust way to train Deep Neural Networks and quantify the uncertainty of their predictions. Specifically, we propose a backpropagation algorithm for Neural Networks with constant-width interval predictions. To maintain numerical stability, we propose minimising the maximum of the batch of errors at each step. Our approach can accommodate incertitude in the training data, and therefore adversarial examples from a commonly used attack model can be trivially accounted for. We present preliminary results on a test function example. The reliability of the predictions of these networks is guaranteed by the non-convex Scenario approach to chance-constrained optimisation. A key result is that, by using minibatches of size M, the complexity of our approach scales as O(M N_iter), and does not depend upon the number of training data points, as it does with other Interval Predictor Model methods.
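The abstract describes the training scheme only at a high level. The sketch below is one plausible reading of it in PyTorch under stated assumptions, not the authors' implementation: the small tanh network, the minibatch size, the penalty weight `lam`, and the learned `log_half_width` parameter are illustrative choices. It trains a network to predict an interval centre with a constant (input-independent) half-width by minimising the maximum interval violation over each minibatch, so the per-step cost depends on the minibatch size M rather than the full training set.

```python
# Minimal sketch: constant-width interval network trained by minimising
# the worst-case violation in each minibatch. Illustrative assumptions
# only; not the paper's code.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic 1-D regression data standing in for a test function.
x = torch.linspace(-3.0, 3.0, 256).unsqueeze(1)
y = torch.sin(x) + 0.1 * torch.randn_like(x)

# Small fully connected network predicting the interval centre.
center_net = nn.Sequential(
    nn.Linear(1, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 1),
)
# Constant (input-independent) half-width, learned jointly with the centre.
log_half_width = torch.zeros(1, requires_grad=True)

opt = torch.optim.Adam(list(center_net.parameters()) + [log_half_width], lr=1e-2)

M = 32           # minibatch size
n_iter = 2000    # number of gradient steps; cost scales as O(M * n_iter)
lam = 1e-2       # assumed weight trading interval width against coverage

for it in range(n_iter):
    idx = torch.randint(0, x.shape[0], (M,))
    xb, yb = x[idx], y[idx]
    centre = center_net(xb)
    half_width = torch.exp(log_half_width)
    # Violation is positive when a point lies outside [centre - h, centre + h].
    violation = (yb - centre).abs() - half_width
    # Minimise the maximum violation in the minibatch (soft minimax step),
    # plus a penalty keeping the constant width small.
    loss = torch.relu(violation).max() + lam * half_width.squeeze()
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    h = torch.exp(log_half_width).item()
print(f"learned constant half-width: {h:.3f}")
```

The constant half-width keeps the prediction band the same size everywhere, which matches the abstract's "constant width predictions"; a reliability guarantee in the style of the Scenario approach would then be obtained from how many training points fall outside the final band, which is not shown here.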
Original language | English |
---|---|
Pages | 137-146 |
Number of pages | 10 |
Publication status | Published - 18 Jul 2018 |
Event | 8th International Workshop on Reliable Computing: "Computing with Confidence", Institute for Risk and Uncertainty, University of Liverpool, Liverpool, United Kingdom. Duration: 16 Jul 2018 → 18 Jul 2018. Conference number: 8th. http://rec2018.uk/ https://riskinstitute.uk/events/rec2018/ |
Conference
Conference | 8th International Workshop on Reliable Computing |
---|---|
Abbreviated title | REC2018 |
Country/Territory | United Kingdom |
City | Liverpool |
Period | 16/07/18 → 18/07/18 |
Internet address | http://rec2018.uk/ https://riskinstitute.uk/events/rec2018/ |
Keywords
- machine learning
- imprecise probability
- uncertainty quantification
- neural network
- interval predictor models
- deep neural networks