Multiscale spatial-spectral convolutional network with image-based framework for hyperspectral imagery classification

Ximin Cui, Ke Zheng, Lianru Gao, Bing Zhang, Dong Yang, Jinchang Ren

Research output: Contribution to journal › Article



Jointly exploiting spatial and spectral information is widely applied in hyperspectral image (HSI) classification. In particular, convolutional neural networks (CNNs) have gained attention in recent years for their detailed feature representation. However, most CNN-based HSI classification methods use image patches as classifier input, which limits the range of usable spatial neighborhood information and reduces processing efficiency in both training and testing. To overcome this problem, we propose an efficient and straightforward image-based classification framework. Based on this framework, we propose a multiscale spatial-spectral CNN for HSIs (HyMSCN) that integrates fused features from multiple receptive fields with multiscale spatial features at different levels. The fused features are extracted by a lightweight block, termed the multiple receptive field feature block (MRFF), which contains several types of dilated convolution. By fusing multiple receptive field features and multiscale spatial features, HyMSCN achieves a comprehensive feature representation for classification. Experimental results on three real hyperspectral images demonstrate the efficiency of the proposed framework and the superior classification performance of the proposed method.
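The abstract describes fusing features obtained under multiple receptive fields via dilated convolutions. As a rough illustration of that idea only (the paper's actual MRFF architecture is not detailed here, so the function names, the zero-padding scheme, and the stack-style fusion below are all assumptions), the sketch applies parallel 3×3 convolutions with different dilation rates to a single band and collects the branch outputs:

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation):
    """Same-size 2D dilated convolution of a single-band image.

    Zero-pads the input so the output matches the input shape; a larger
    dilation rate samples the same 3x3 kernel over a wider neighborhood,
    enlarging the receptive field without adding parameters.
    """
    kh, kw = kernel.shape
    pad = dilation * (kh // 2)  # assumes odd kernel size
    xp = np.pad(x, pad)
    h, w = x.shape
    out = np.zeros((h, w), dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * xp[i * dilation:i * dilation + h,
                                     j * dilation:j * dilation + w]
    return out

def multi_receptive_field_block(x, kernels, dilations=(1, 2, 3)):
    """Illustrative MRFF-style block: parallel dilated branches.

    Each branch sees a different receptive field; the outputs are stacked
    into a feature volume for later fusion (the paper's exact fusion rule
    is not specified in the abstract).
    """
    feats = [dilated_conv2d(x, k, d) for k, d in zip(kernels, dilations)]
    return np.stack(feats, axis=0)  # shape: (branches, H, W)

# Example: three branches over a toy 7x7 band.
band = np.arange(49, dtype=float).reshape(7, 7)
k = np.ones((3, 3)) / 9.0  # simple averaging kernel for illustration
fused = multi_receptive_field_block(band, [k, k, k])
print(fused.shape)  # (3, 7, 7)
```

In a real CNN the kernels would be learned per branch and the stacked features passed through further convolutions; the point of the sketch is only how dilation rates give one block several receptive fields at once.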
Original language: English
Article number: 2220
Number of pages: 21
Journal: Remote Sensing
Issue number: 19
Publication status: Published - 23 Sep 2019


  • hyperspectral image classification
  • convolutional neural network
  • multiscale spatial-spectral features
  • spatial neighbor feature extraction
  • dilation convolution
  • feature pyramid

