Novel segmented stacked autoencoder for effective dimensionality reduction and feature extraction in hyperspectral imaging

Jaime Zabalza, Jinchang Ren*, Jiangbin Zheng, Huimin Zhao, Chunmei Qing, Zhijing Yang, Peijun Du, Stephen Marshall

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

332 Citations (Scopus)
279 Downloads (Pure)

Abstract

Stacked autoencoders (SAEs), as part of the deep learning (DL) framework, have recently been proposed for feature extraction in hyperspectral remote sensing. With the help of hidden nodes in deep layers, a high-level abstraction is achieved for data reduction whilst maintaining the key information of the data. However, because the hidden nodes in an SAE must simultaneously deal with hundreds of spectral features from the hypercube as inputs, the complexity of the process increases and the abstraction and performance are limited. As such, a segmented SAE (S-SAE) is proposed, in which the original features are partitioned into smaller data segments that are separately processed by different, smaller SAEs. This results in reduced complexity yet improved efficacy of data abstraction and higher classification accuracy.
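To illustrate the segmentation idea described above, the following is a minimal sketch in Python, assuming PyTorch. The segment count, the single-hidden-layer encoder per segment, and all training settings are illustrative assumptions, not the paper's exact configuration; a full stacked version would greedily train additional encoding layers within each segment.

# Minimal sketch of a segmented stacked autoencoder (S-SAE).
# Assumptions: PyTorch, 200 spectral bands, 4 contiguous segments,
# one hidden layer per segment SAE (the paper's actual architecture
# and hyperparameters may differ).
import torch
import torch.nn as nn

class SmallSAE(nn.Module):
    """A small autoencoder handling one spectral segment only."""
    def __init__(self, n_in, n_hidden):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(n_hidden, n_in), nn.Sigmoid())

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def train_sae(sae, x, epochs=50, lr=1e-2):
    """Train by minimising reconstruction error on the segment."""
    opt = torch.optim.Adam(sae.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        recon, _ = sae(x)
        loss_fn(recon, x).backward()
        opt.step()
    return sae

# Hyperspectral pixels: n_samples x n_bands (random data for illustration).
n_samples, n_bands, n_segments, n_hidden = 256, 200, 4, 10
X = torch.rand(n_samples, n_bands)

# Split the spectral dimension into contiguous segments and train one
# small SAE per segment, instead of a single large SAE over all bands.
segments = torch.chunk(X, n_segments, dim=1)
features = []
for seg in segments:
    sae = train_sae(SmallSAE(seg.shape[1], n_hidden), seg)
    with torch.no_grad():
        features.append(sae.encoder(seg))

# Concatenating the per-segment codes gives the reduced representation.
X_reduced = torch.cat(features, dim=1)
print(X_reduced.shape)  # torch.Size([256, 40]): n_segments * n_hidden per pixel

In this sketch each small SAE sees only about 50 input bands rather than all 200, which is the reduction in per-network complexity that the abstract attributes to segmentation; the concatenated segment codes then serve as the extracted features for classification.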

Original language: English
Pages (from-to): 1-10
Number of pages: 10
Journal: Neurocomputing
Volume: 185
Early online date: 23 Dec 2015
DOIs
Publication status: Published - 12 Apr 2016

Keywords

  • data reduction
  • deep learning (DL)
  • hyperspectral remote sensing
  • segmented stacked autoencoder (S-SAE)
