Abstract
Remotely sensed images contain a wealth of information. Alongside diverse sensor technologies that allow us to measure different aspects of objects on the Earth (spectral characteristics in hyperspectral (HS) images, height in Light Detection And Ranging (LiDAR) data), advanced image processing algorithms have been developed to mine relevant information from multisensor remote sensing data for Earth observation. However, automatic interpretation of remotely sensed images remains very difficult. In this paper, we compare features at multiple levels for the fusion of HS and LiDAR data in urban area classification. Experimental results on the fusion of HS and LiDAR data from the 2013 IEEE GRSS Data Fusion Contest demonstrate that middle-level morphological attribute features outperform high-level deep learning features. Compared with methods based on raw data fusion and deep learning fusion, the graph-based fusion method [4] improved the overall classification accuracy by 8%.
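The abstract contrasts raw-data fusion with middle-level morphological features for HS/LiDAR classification. The sketch below is a minimal illustration of that general idea, not the authors' pipeline: it uses plain morphological profiles (openings/closings at increasing scales) as a simplified stand-in for morphological attribute features, random arrays in place of the contest data, and an SVM classifier. The function name `morphological_profile`, the array shapes, the radii, and the classifier settings are all illustrative assumptions.

```python
# Hypothetical sketch: middle-level feature fusion of HS and LiDAR data.
# Morphological profiles are computed on the leading principal components of
# the HS cube and on the LiDAR elevation, stacked, and classified per pixel.
import numpy as np
from skimage.morphology import opening, closing, disk
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def morphological_profile(band, radii=(1, 3, 5)):
    """Stack openings and closings of a 2-D band at increasing scales."""
    return np.dstack([op(band, disk(r)) for r in radii for op in (opening, closing)])

# Synthetic stand-ins for co-registered data (rows x cols x bands).
rows, cols = 64, 64
hs_cube = np.random.rand(rows, cols, 144).astype(np.float32)   # hyperspectral cube
lidar_dsm = np.random.rand(rows, cols).astype(np.float32)      # LiDAR elevation

# Reduce the HS cube to a few principal components before filtering.
pcs = PCA(n_components=2).fit_transform(hs_cube.reshape(-1, 144))
pcs = pcs.reshape(rows, cols, 2)

# Middle-level features: profiles on each PC and on the LiDAR surface.
features = np.dstack(
    [morphological_profile(pcs[..., i]) for i in range(pcs.shape[-1])]
    + [morphological_profile(lidar_dsm)]
)

# Pixel-wise classification on the fused feature stack (placeholder labels).
X = features.reshape(-1, features.shape[-1])
y = np.random.randint(0, 5, size=X.shape[0])
clf = SVC(kernel="rbf").fit(X[:500], y[:500])
print(clf.predict(X[:5]))
```

In a real experiment, the feature stack and labels would come from the co-registered HS/LiDAR scene and its ground truth; the graph-based fusion of [4] would replace the simple feature stacking shown here.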
| Original language | English |
| --- | --- |
| Number of pages | 4 |
| DOIs | |
| Publication status | Published - 11 May 2017 |
| Event | 2017 International Joint Urban Remote Sensing Event (JURSE 2017) - Dubai, United Arab Emirates. Duration: 6 Mar 2017 → 8 Mar 2017 |
Conference

| Conference | 2017 International Joint Urban Remote Sensing Event (JURSE 2017) |
| --- | --- |
| Abbreviated title | JURSE 2017 |
| Country/Territory | United Arab Emirates |
| City | Dubai |
| Period | 6/03/17 → 8/03/17 |
Keywords
- urban remote sensing
- graph fusion
- deep learning
- hyperspectral
- LiDAR
- laser radar
- machine learning
- data integration
- feature extraction
- geophysical image processing
- image classification