A comparison on multiple level features for fusion hyperspectral and LiDAR data

Wenzhi Liao, Aleksandra Pizurica, Renbo Luo, Wilfried Philips

Research output: Contribution to conference › Paper › peer-review

2 Citations (Scopus)

Abstract

Remotely sensed images contain a wealth of information. Alongside diverse sensor technologies that measure different aspects of objects on the Earth (spectral characteristics in hyperspectral (HS) images, height in Light Detection And Ranging (LiDAR) data), advanced image processing algorithms have been developed to mine relevant information from multisensor remote sensing data for Earth observation. However, automatic interpretation of remotely sensed images remains very difficult. In this paper, we compare features at multiple levels for the fusion of HS and LiDAR data for urban area classification. Experimental results on the HS and LiDAR data of the 2013 IEEE GRSS Data Fusion Contest demonstrate that middle-level morphological attribute features outperform high-level deep learning features. Compared to methods using raw data fusion and deep learning fusion, the graph-based fusion method [4] improved overall classification accuracy by 8%.
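
As a rough illustration of what "middle-level morphological attribute features" can look like in practice, the sketch below (not the pipeline of this paper or of the graph-based method [4]) computes simple area-attribute profiles on the first principal components of an HS cube and on a LiDAR digital surface model, then stacks them into a per-pixel feature vector for a standard classifier. The array names, area thresholds, and the choice of a random-forest classifier are illustrative assumptions.

```python
# Minimal sketch: middle-level morphological attribute features for
# HS + LiDAR feature-level fusion (illustrative, not the authors' method).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from skimage.morphology import area_opening, area_closing

def attribute_profile(band, areas=(100, 500, 2000)):
    """Stack area openings/closings of one band: a simple area-attribute profile."""
    feats = [band]
    for a in areas:
        feats.append(area_opening(band, area_threshold=a))
        feats.append(area_closing(band, area_threshold=a))
    return np.stack(feats, axis=-1)          # shape (H, W, 1 + 2 * len(areas))

def fuse_features(hs_cube, lidar_dsm, n_pcs=3):
    """hs_cube: (H, W, B) hyperspectral image; lidar_dsm: (H, W) height map."""
    H, W, B = hs_cube.shape
    # Project the spectral bands onto the first principal components.
    pcs = PCA(n_components=n_pcs).fit_transform(
        hs_cube.reshape(-1, B)).reshape(H, W, n_pcs)
    # Attribute profiles on each PC and on the LiDAR DSM, stacked per pixel.
    profiles = [attribute_profile(pcs[..., i]) for i in range(n_pcs)]
    profiles.append(attribute_profile(lidar_dsm))
    return np.concatenate(profiles, axis=-1).reshape(H * W, -1)

# Hypothetical usage with labelled training pixels (indices in train_idx):
# X = fuse_features(hs_cube, lidar_dsm)
# clf = RandomForestClassifier(n_estimators=200).fit(X[train_idx], y[train_idx])
# pred = clf.predict(X)
```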
Original language: English
Number of pages: 4
DOIs
Publication status: Published - 11 May 2017
Event: 2017 International Joint Urban Remote Sensing Event (JURSE 2017) - Dubai, United Arab Emirates
Duration: 6 Mar 2017 - 8 Mar 2017

Conference

Conference: 2017 International Joint Urban Remote Sensing Event (JURSE 2017)
Abbreviated title: JURSE 2017
Country/Territory: United Arab Emirates
City: Dubai
Period: 6/03/17 - 8/03/17

Keywords

  • urban remote sensing
  • graph fusion
  • deep learning
  • hyperspectral
  • LiDAR
  • laser radar
  • machine learning
  • data integration
  • feature extraction
  • geophysical image processing
  • image classification
