Feature fusion of hyperspectral and LiDAR data for classification of remote sensing data from urban area

Wenzhi Liao, Rik Bellens, Sidharta Gautama, Wilfried Philips, Sebastian Van Der Linden (Editor), Tobias Kuemmerle (Editor), Katja Janson (Editor)

Research output: Contribution to conference › Paper

Abstract

Today, diverse sensor technologies and image processing algorithms allow us to measure different aspects of objects on the Earth's surface: spectral characteristics in hyperspectral (HS) images, height in Light Detection And Ranging (LiDAR) data, and geometry through image processing techniques such as morphological profiles. No single technology is sufficient for reliable classification. Because remote sensing data from urban areas cover a mix of man-made structures and natural materials, different objects may be made of the same material (e.g. roofs and roads made of the same asphalt), while objects with the same geometry or elevation may belong to different classes. Stacking different features together is widely applied in the fusion of multi-sensor data for classification: these methods first apply feature extraction to each individual data source and then concatenate all the features into one stacked vector for classification. Despite the simplicity of such feature fusion (simply concatenating several kinds of features), the resulting system may perform no better, or even worse, than one using a single feature type. This is because the information contained in different features is not equally represented or measured: the element values of different features can be significantly unbalanced. Furthermore, the data obtained by stacking several kinds of features may contain redundant information. Last, but not least, the increased dimensionality of the stacked features, combined with the limited number of labeled samples available in many real applications, poses the curse of dimensionality and, as a consequence, the risk of overfitting the training data.
We propose a graph-based fusion method that couples dimensionality reduction with fusion of the spectral information (of the original HS image) and morphological features computed on both the HS and LiDAR data. Our method takes into account the properties of the different data sources and takes full advantage of the spectral, spatial and elevation information through a fusion graph. Experimental results on the fusion of hyperspectral and LiDAR data from the 2013 IEEE GRSS Data Fusion Contest show the effectiveness of the proposed method: compared to methods using only a single feature type or stacking all features together, our method improves overall classification accuracy by more than 10% and 5%, respectively. Moreover, our approach won the “Best Paper Challenge” of the 2013 IEEE GRSS Data Fusion Contest: http://hyperspectral.ee.uh.edu/?page_id=795.
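For illustration, the feature stacking baseline discussed in the abstract amounts to a plain concatenation of per-source feature vectors. The dimensions and value ranges below are hypothetical, chosen only to show how the scales of the stacked blocks can be badly unbalanced:

```python
import numpy as np

# Toy illustration of feature stacking for multi-sensor fusion.
# All sizes are hypothetical: 144 spectral bands (HS), plus
# morphological profiles computed on the HS and LiDAR data.
n_pixels = 1000
spectral = np.random.rand(n_pixels, 144)           # HS reflectance in [0, 1]
morph_hs = np.random.rand(n_pixels, 90) * 255.0    # e.g. 8-bit grey-level range
morph_lidar = np.random.rand(n_pixels, 45) * 50.0  # e.g. heights in metres

# Stacking: concatenate all features into one long vector per pixel.
stacked = np.hstack([spectral, morph_hs, morph_lidar])
print(stacked.shape)  # (1000, 279)

# The blocks live on very different scales, which is one reason naive
# stacking can underperform a single feature type without normalisation.
print(spectral.std(), morph_hs.std(), morph_lidar.std())
```

A classifier trained on `stacked` would be dominated by the largest-valued block unless each block is normalised first, and the 279-dimensional vectors aggravate the curse of dimensionality when labeled samples are scarce.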
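One plausible way to realise a fusion graph is sketched below. This is a minimal illustration, not necessarily the paper's exact formulation: the brute-force kNN construction, the AND-style elementwise combination of the per-modality graphs, the toy data with a shared latent structure, and the Laplacian-eigenmap-style embedding are all illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def knn_graph(X, k=15):
    """Binary, symmetric k-nearest-neighbour adjacency matrix (brute force)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    idx = np.argsort(d2, axis=1)[:, :k]
    n = X.shape[0]
    W = np.zeros((n, n))
    W[np.repeat(np.arange(n), k), idx.ravel()] = 1.0
    return np.maximum(W, W.T)  # symmetrise

rng = np.random.default_rng(0)
n = 200
# Toy multi-modal data sharing a latent structure, so that per-modality
# neighbourhoods agree for pixels of the same (hidden) class.
latent = rng.standard_normal((n, 2))
spectral = np.hstack([latent, latent]) + 0.1 * rng.standard_normal((n, 4))
spatial = latent + 0.1 * rng.standard_normal((n, 2))
elevation = latent[:, :1] + 0.1 * rng.standard_normal((n, 1))

# Fuse the per-modality graphs: the elementwise product keeps an edge only
# when two pixels are neighbours in *every* modality (an AND-style fusion).
W = knn_graph(spectral) * knn_graph(spatial) * knn_graph(elevation)

# Laplacian-eigenmap-style embedding on the fused graph; any standard
# classifier can then be trained on the low-dimensional embedding.
D = np.diag(W.sum(axis=1))
L = D - W
vals, vecs = eigh(L, D + 1e-6 * np.eye(n))  # small ridge keeps D invertible
embedding = vecs[:, 1:11]                   # drop the trivial constant mode
print(embedding.shape)  # (200, 10)
```

The elementwise product makes the fused graph stricter than any single-modality graph, so pixels are embedded close together only when all of the spectral, spatial and elevation evidence agrees; this couples dimensionality reduction with fusion in one step, in the spirit of the method described above.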

Conference

Conference: 5th Workshop of the EARSeL Special Interest Group on Land Use and Land Cover
Country: Germany
City: Berlin
Period: 17/03/14 - 18/03/14


Keywords

  • LiDAR
  • hyperspectral images
  • image classification
  • remote sensing

Cite this

Liao, W., Bellens, R., Gautama, S., Philips, W., Linden, S. V. D. (Ed.), Kuemmerle, T. (Ed.), & Janson, K. (Ed.) (2014). Feature fusion of hyperspectral and LiDAR data for classification of remote sensing data from urban area. p. 34. Paper presented at 5th Workshop of the EARSeL Special Interest Group on Land Use and Land Cover, Berlin, Germany.
Permanent link: http://hdl.handle.net/1854/LU-4364973