Feature fusion of hyperspectral and LiDAR data for classification of remote sensing data from urban area

Wenzhi Liao, Rik Bellens, Sidharta Gautama, Wilfried Philips, Sebastian Van Der Linden (Editor), Tobias Kuemmerle (Editor), Katja Janson (Editor)

Research output: Contribution to conference › Paper › peer-review

Abstract

Nowadays, we have very diverse sensor technologies and image processing algorithms that allow us to measure different aspects of objects on the Earth (spectral characteristics in hyperspectral (HS) images, height in Light Detection And Ranging (LiDAR) data, and geometry through image processing techniques such as morphological profiles). It is clear that no single technology is sufficient for reliable classification. Because remote sensing data from urban areas capture a mix of man-made structures and natural materials, different objects may be made of the same material (e.g. roofs and roads made of the same asphalt). On the other hand, objects with the same geometry or elevation may belong to different classes. Stacking different features together is widely applied in the fusion of multi-sensor data for classification. Such methods first apply feature extraction to each individual data source, then concatenate all the features into one stacked vector for classification. Despite the simplicity of such feature fusion methods (simply concatenating several kinds of features), the resulting systems may perform no better (or even worse) than systems using single features. This is because the information contained in different features is not equally represented or measured: the element values of different features can be significantly unbalanced. Furthermore, the data produced by stacking several kinds of features may contain redundant information. Last but not least, the increased dimensionality of the stacked features, together with the limited number of labeled samples in many real applications, poses the problem of the curse of dimensionality and, as a consequence, the risk of overfitting the training data. We propose a graph-based fusion method that couples dimensionality reduction with the fusion of the spectral information (of the original HS image) and morphological features computed on both the HS and LiDAR data. Our method takes into account the properties of the different data sources and takes full advantage of the spectral, spatial, and elevation information through a fusion graph. Experimental results on the fusion of hyperspectral and LiDAR data from the 2013 IEEE GRSS Data Fusion Contest show the effectiveness of the proposed method. Compared to methods using only a single feature and to stacking all the features together, our method improves overall classification accuracy by more than 10% and 5%, respectively. Moreover, our approach won the “Best Paper Challenge” of the 2013 IEEE GRSS Data Fusion Contest: http://hyperspectral.ee.uh.edu/?page_id=795.
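The abstract's building blocks can be illustrated with short sketches. First, the geometric features: a morphological profile stacks openings and closings of an image band at increasing structuring-element sizes. The sketch below uses plain grey-scale opening/closing from scikit-image; the literature (and this line of work) typically uses openings/closings by reconstruction, so the helper name and the radii here are illustrative assumptions, not the paper's exact construction.

```python
# Minimal sketch of a morphological profile (hypothetical helper; plain
# openings/closings stand in for the by-reconstruction variants common
# in the literature).
import numpy as np
from skimage.morphology import opening, closing, disk

def morphological_profile(band, radii=(1, 2, 4, 8)):
    """Stack openings and closings of a 2-D band with disks of increasing
    radius, capturing object geometry at multiple scales."""
    feats = []
    for r in radii:
        se = disk(r)                      # circular structuring element
        feats.append(opening(band, se))   # removes bright details smaller than se
        feats.append(closing(band, se))   # removes dark details smaller than se
    return np.stack(feats, axis=-1)       # shape (H, W, 2 * len(radii))
```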
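Second, the feature-stacking baseline the abstract criticizes simply concatenates per-source features into one long vector. A minimal sketch follows, assuming per-pixel feature matrices and an SVM classifier (the classifier choice and all names are assumptions for illustration); per-source scaling is included because, as the abstract notes, the raw value ranges of spectra, morphological profiles, and elevation can be significantly unbalanced.

```python
# Feature-stacking baseline: scale each source, then concatenate.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def stack_features(hs_spectra, hs_morph, lidar_morph):
    """Concatenate per-pixel features from each source.

    hs_spectra  : (n_pixels, n_bands)  hyperspectral reflectances
    hs_morph    : (n_pixels, n_mp_hs)  morphological profile of HS bands
    lidar_morph : (n_pixels, n_mp_li)  morphological profile of the LiDAR DSM
    """
    # Scale each block separately so no single source dominates the stack.
    blocks = [StandardScaler().fit_transform(b)
              for b in (hs_spectra, hs_morph, lidar_morph)]
    return np.hstack(blocks)

# Hypothetical usage:
# X = stack_features(hs, mp_hs, mp_lidar)   # high-dimensional stacked vector
# clf = SVC(kernel='rbf').fit(X[train_idx], y[train_idx])
```

Even with per-source scaling, the stack remains high-dimensional and redundant, which is exactly the curse-of-dimensionality risk the abstract points to.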
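Finally, the graph-based fusion idea can be sketched as: build a kNN graph per source, combine them into one fusion graph (here by elementwise intersection, so two pixels stay connected only if they are neighbors in every source), then solve an LPP-style generalized eigenproblem that projects the stacked features onto a low-dimensional space preserving the fused neighborhoods. This is a sketch of the general idea under stated assumptions, not the authors' exact formulation; k, n_components, and all function names are illustrative.

```python
# Graph-based fusion sketch: per-source kNN graphs -> fused graph ->
# locality-preserving projection of the stacked features.
import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import kneighbors_graph

def fusion_graph(sources, k=20):
    """Elementwise product of symmetrized kNN adjacency matrices: an edge
    survives only if the two pixels are neighbors in every source."""
    W = None
    for X in sources:
        A = kneighbors_graph(X, k, mode='connectivity').toarray()
        A = np.maximum(A, A.T)            # symmetrize the kNN graph
        W = A if W is None else W * A     # keep edges common to all sources
    return W

def graph_fusion_projection(X, W, n_components=30):
    """LPP-style embedding: solve X^T L X a = lambda X^T D X a and keep the
    eigenvectors with the smallest eigenvalues."""
    D = np.diag(W.sum(axis=1))
    L = D - W                                     # Laplacian of the fused graph
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-6 * np.eye(X.shape[1])   # small ridge for stability
    vals, vecs = eigh(A, B)                       # ascending eigenvalues
    return X @ vecs[:, :n_components]

# Hypothetical usage, fusing spectral, HS-morphological, and LiDAR features:
# W = fusion_graph([hs, mp_hs, mp_lidar])
# Z = graph_fusion_projection(np.hstack([hs, mp_hs, mp_lidar]), W)
```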
Original language: English
Pages: 34
Number of pages: 1
Publication status: Published - 2014
Event: 5th Workshop of the EARSeL Special Interest Group on Land Use and Land Cover - Berlin, Germany
Duration: 17 Mar 2014 → 18 Mar 2014

Conference

Conference: 5th Workshop of the EARSeL Special Interest Group on Land Use and Land Cover
Country/Territory: Germany
City: Berlin
Period: 17/03/14 → 18/03/14

Keywords

  • LiDAR
  • hyperspectral images
  • image classification
  • remote sensing
