Ordinal neural networks without iterative tuning

Francisco Fernández-Navarro, Annalisa Riccardi, Sante Carloni

Research output: Contribution to journal › Article › peer-review

37 Citations (Scopus)

Abstract

Ordinal regression (OR) is an important branch of supervised learning that lies between multiclass classification and regression. In this paper, the traditional classification scheme of neural networks is adapted to learn ordinal ranks. The proposed model imposes monotonicity constraints on the weights connecting the hidden layer with the output layer. To do so, the weights are transcribed using padding variables. This reformulation leads to the so-called inequality constrained least squares (ICLS) problem. Its numerical solution can be obtained by several iterative methods, such as trust region or line search algorithms. In this proposal, the optimum is instead determined analytically, from the closed-form solution of the ICLS problem derived via the Karush-Kuhn-Tucker conditions. Furthermore, following the guidelines of the extreme learning machine framework, the weights connecting the input and hidden layers are randomly generated, so the final model estimates all of its parameters without iterative tuning. The proposed model achieves performance competitive with state-of-the-art neural network methods for OR.
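A minimal sketch of the idea described in the abstract, not the authors' implementation: the function names (`fit_ordinal_elm`, `predict`), the cumulative "y ≤ j" target encoding, and the use of SciPy's bounded least-squares routine `lsq_linear` in place of the paper's closed-form KKT solution are all assumptions made for illustration. What the sketch does preserve is the structure of the method: random, untuned input-to-hidden weights (extreme learning machine style), output weights reparameterised with nonnegative padding variables so they are monotone across ordinal classes, and a single inequality-constrained least squares solve for all output parameters.

```python
# Illustrative sketch only (hypothetical names and encoding); the paper's
# analytic KKT solution is replaced here by scipy.optimize.lsq_linear.
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(0)

def fit_ordinal_elm(X, y, n_hidden=20, n_classes=4):
    n, d = X.shape
    # ELM step: input-to-hidden weights are drawn at random and never tuned.
    W = rng.normal(size=(d, n_hidden))
    b = rng.normal(size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # hidden activations

    # Cumulative target encoding (assumed): T[i, j] = 1 iff y_i <= j,
    # so each row is non-decreasing across the ordinal classes.
    T = (y[:, None] <= np.arange(n_classes)[None, :]).astype(float)

    # Padding-variable reparameterisation: beta_j = beta_1 + sum_{k<=j} delta_k
    # with delta_k >= 0, which makes the output weights monotone across
    # classes. Stack the per-class least-squares blocks into one system A z = t.
    blocks = []
    for j in range(n_classes):
        row = [H] + [H if k <= j else np.zeros_like(H)
                     for k in range(1, n_classes)]
        blocks.append(np.hstack(row))
    A = np.vstack(blocks)
    t = T.T.reshape(-1)                              # targets, class-major

    # Inequality-constrained least squares: beta_1 free, all deltas >= 0.
    lb = np.concatenate([np.full(n_hidden, -np.inf),
                         np.zeros(n_hidden * (n_classes - 1))])
    z = lsq_linear(A, t, bounds=(lb, np.inf)).x

    # Recover the monotone output-weight matrix, one column per class.
    parts = z.reshape(n_classes, n_hidden)
    B = np.cumsum(parts, axis=0).T                   # shape (n_hidden, J)
    return W, b, B

def predict(X, W, b, B):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    S = H @ B                                        # scores, monotone per row
    hit = S >= 0.5
    # First threshold crossed gives the predicted rank; default to top class.
    return np.where(hit.any(axis=1), hit.argmax(axis=1), B.shape[1] - 1)

# Usage on synthetic ordinal data (ranks 0..3 from a thresholded latent score):
X = rng.normal(size=(200, 5))
y = np.digitize(X @ rng.normal(size=5), bins=[-1.0, 0.0, 1.0])
W, b, B = fit_ordinal_elm(X, y)
print((predict(X, W, b, B) == y).mean())             # training accuracy
```

Note the design point the abstract emphasises: nothing above is trained iteratively. The hidden layer is random, and the constrained least-squares step is a one-shot solve; in the paper that solve is itself closed-form via the KKT conditions rather than the generic bounded solver used in this sketch.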
Original language: English
Pages (from-to): 2075-2085
Number of pages: 11
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 25
Issue number: 11
DOIs
Publication status: Published - 21 Feb 2014
Externally published: Yes

Keywords

  • pattern classification
  • iterative methods
  • adaptation models
  • encoding
  • biological neural networks
