Radial Basis Function (RBF) networks are a versatile artificial neural network paradigm that lends itself equally well to problems of classification and regression, and they can be trained by a number of textbook techniques. The objective of the current paper is to explore how uncertainty propagates through such networks. In this, the first of two papers, the regression problem is addressed. The RBF networks are trained with crisp data but interval output weights, in such a way that a regression model predicts an interval rather than a crisp value. This technique, as developed for the more common Multi-Layer Perceptron (MLP) network, allows the user to investigate Ben-Haim’s concept of opportunity.
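The idea described in the abstract can be illustrated with a minimal sketch: an RBF regression network is fitted to crisp data by least squares, and the crisp output weights are then replaced by symmetric intervals, so every prediction becomes an interval. The Gaussian basis functions, the least-squares training, and the fixed half-width `delta` are assumptions for illustration only; the paper's actual training scheme may differ.

```python
import numpy as np

def rbf_design(X, centers, width):
    # Gaussian activations phi_j(x) = exp(-||x - c_j||^2 / (2 * width^2))
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

rng = np.random.default_rng(0)
X = np.linspace(0.0, 1.0, 50)[:, None]
y = np.sin(2.0 * np.pi * X[:, 0]) + 0.1 * rng.standard_normal(50)

# Fixed, evenly spaced centres and width (an assumption for this sketch)
centers = np.linspace(0.0, 1.0, 8)[:, None]
Phi = rbf_design(X, centers, width=0.15)

# Crisp training: ordinary least-squares output weights
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

# Interval output weights [w - delta, w + delta].  Because the Gaussian
# activations are non-negative, the interval prediction bounds follow
# directly from the weight bounds.
delta = 0.05
y_lo = Phi @ (w - delta)
y_hi = Phi @ (w + delta)
```

Each input now maps to an interval `[y_lo, y_hi]` that brackets the crisp prediction `Phi @ w`, which is the kind of interval-valued output the abstract describes.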
Number of pages: 5
Publication status: Published - 2005
Event: Eurodyn 2005: 6th International Conference on Structural Dynamics, Paris, France
Duration: 4 Sep 2005 → 7 Sep 2005
- multi-layer perceptron network
- Radial Basis Function
- Ben-Haim’s concept of opportunity
- artificial neural network
Chetwynd, D., Worden, K., Manson, G., & Pierce, S. G. (2005). Uncertainty propagation through radial basis function networks part I: regression networks. 923-928. Paper presented at Eurodyn 2005: 6th International Conference on Structural Dynamics, Paris, France.