Uncertainty propagation through radial basis function networks part I: regression networks

D. Chetwynd, K. Worden, G. Manson, S.G. Pierce

Research output: Contribution to conference › Paper

Abstract

Radial Basis Function (RBF) networks are examples of a versatile artificial neural network paradigm which lends itself equally well to problems of classification and regression. Training the networks can be accomplished by a number of textbook techniques. The objective of the current paper is to explore how uncertainty propagates through such networks. In this, the first of two papers, the regression problem is addressed. The RBF networks are trained with crisp data but interval output weights, in such a way that the regression model predicts an interval rather than a crisp value. This technique, as developed for the more common Multi-Layer Perceptron (MLP) network, allows the user to investigate Ben-Haim’s concept of opportunity.
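The idea of crisp training with interval output weights can be illustrated with a short sketch. Because Gaussian basis activations are nonnegative, an interval on each output weight propagates directly to an interval on the prediction. The function names, the least-squares fit, and the fixed ±0.05 weight inflation below are illustrative assumptions, not the implementation described in the paper.

```python
import numpy as np

def rbf_design(X, centres, width):
    """Gaussian basis activations; nonnegative by construction."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def train_crisp(X, y, centres, width, reg=1e-6):
    """Textbook (regularised least-squares) fit of crisp output weights."""
    Phi = rbf_design(X, centres, width)
    return np.linalg.solve(Phi.T @ Phi + reg * np.eye(Phi.shape[1]), Phi.T @ y)

def predict_interval(X, centres, width, w_lo, w_hi):
    """Interval prediction: activations are nonnegative, so the lower and
    upper weight vectors give the lower and upper output bounds."""
    Phi = rbf_design(X, centres, width)
    return Phi @ w_lo, Phi @ w_hi

# Illustrative usage: fit crisp weights, then widen them to an interval.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, (50, 1))
y = np.sin(3.0 * X[:, 0])
centres = np.linspace(-1.0, 1.0, 8)[:, None]
w = train_crisp(X, y, centres, 0.4)
lo, hi = predict_interval(X, centres, 0.4, w - 0.05, w + 0.05)
```

The crisp prediction is guaranteed to lie inside `[lo, hi]` at every input, since each activation is nonnegative and each weight interval contains the crisp weight.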
Original language: English
Pages: 923-928
Number of pages: 5
Publication status: Published - 2005
Event: Eurodyn 2005: 6th International Conference on Structural Dynamics - Paris, France
Duration: 4 Sep 2005 - 7 Sep 2005

Conference

Conference: Eurodyn 2005: 6th International Conference on Structural Dynamics
Country: France
City: Paris
Period: 4/09/05 - 7/09/05

Keywords

  • multi-layer perceptron network
  • Radial Basis Function
  • Ben-Haim’s concept of opportunity
  • artificial neural network

