Deep background subtraction of thermal and visible imagery for pedestrian detection in videos

Yijun Yan, Huimin Zhao, Fu-Jen Kao, Valentin Masero Vargas, Sophia Zhao, Jinchang Ren

Research output: Contribution to conference › Paper


Abstract

In this paper, we introduce an efficient framework to subtract the background from both visible and thermal imagery for pedestrian detection in urban scenes. We use a deep neural network (DNN) to train the background subtraction model. To train the DNN, we first generate an initial background map and then employ a randomly selected 5% of the video frames, the background map, and manually segmented ground truth. We then apply a cognition-based post-processing step to further smooth the foreground detection result. We evaluate our method against our previous work and 11 recent, widely cited methods on three challenging video sequences selected from the publicly available color-thermal benchmark dataset OTCBVS. The promising results show that the proposed DNN-based approach can successfully detect pedestrians with well-preserved shapes in most scenes, regardless of illumination changes and occlusion.
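The pipeline described above (an initial background map, a DNN trained on a small random subset of frames paired with that map and ground-truth masks, and a smoothing post-processing step) can be sketched roughly as follows. This is a minimal illustration under our own assumptions, not the authors' implementation: the network (BGSubNet), the temporal-median background estimate, and the majority-filter post-processing are hypothetical stand-ins for the components named in the abstract.

```python
# Minimal sketch of the abstract's pipeline; all names are hypothetical.
import numpy as np
import torch
import torch.nn as nn


def initial_background_map(frames: np.ndarray) -> np.ndarray:
    """Temporal median over all frames as a simple initial background estimate."""
    return np.median(frames, axis=0)


class BGSubNet(nn.Module):
    """Small fully convolutional net: input = frame stacked with the background
    map, output = per-pixel foreground probability."""

    def __init__(self, in_channels: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)


def train(frames, masks, bg, epochs=5, sample_ratio=0.05, device="cpu"):
    """Train on a random ~5% subset of frames, each paired with the background
    map and its manually segmented ground-truth mask."""
    t, h, w, c = frames.shape
    idx = np.random.choice(t, max(1, int(sample_ratio * t)), replace=False)

    # Stack each sampled frame with the background map along the channel axis.
    x = np.concatenate([frames[idx], np.repeat(bg[None], len(idx), axis=0)], axis=-1)
    x = torch.from_numpy(x).float().permute(0, 3, 1, 2) / 255.0
    y = torch.from_numpy(masks[idx]).float().unsqueeze(1)

    model = BGSubNet(in_channels=2 * c).to(device)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCELoss()

    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x.to(device)), y.to(device))
        loss.backward()
        opt.step()
    return model


def postprocess(prob_map: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Stand-in for the cognition-based post-processing: threshold the
    foreground probabilities, then apply a 3x3 majority filter to smooth."""
    mask = (prob_map > threshold).astype(np.uint8)
    padded = np.pad(mask, 1)
    votes = sum(padded[i:i + mask.shape[0], j:j + mask.shape[1]]
                for i in range(3) for j in range(3))
    return (votes >= 5).astype(np.uint8)
```

In this sketch the visible and thermal channels are assumed to be stacked in a single frame array; the actual paper's fusion of the two modalities and its cognition-based smoothing may differ.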
Original language: English
Number of pages: 10
Publication status: Published - 7 Jul 2018
Event: 9th International Conference on Brain Inspired Cognitive Systems - Xi'an, China
Duration: 7 Jul 2018 – 8 Jul 2018

Conference

Conference: 9th International Conference on Brain Inspired Cognitive Systems
Country: China
City: Xi'an
Period: 7/07/18 – 8/07/18

Keywords

  • deep neural network (DNN)
  • video salient objects
  • pedestrian detection/tracking

Cite this

Yan, Y., Zhao, H., Kao, F-J., Vargas, V. M., Zhao, S., & Ren, J. (2018). Deep background subtraction of thermal and visible imagery for pedestrian detection in videos. Paper presented at 9th International Conference on Brain Inspired Cognitive Systems, Xi'an, China.
Yan, Yijun ; Zhao, Huimin ; Kao, Fu-Jen ; Vargas, Valentin Masero ; Zhao, Sophia ; Ren, Jinchang. / Deep background subtraction of thermal and visible imagery for pedestrian detection in videos. Paper presented at 9th International Conference on Brain Inspired Cognitive Systems, Xi'an, China. 10 p.
@conference{c7c795351cae4b83b2d02701bf8aa9cc,
title = "Deep background subtraction of thermal and visible imagery for pedestrian detection in videos",
abstract = "In this paper, we introduce an efficient framework to subtract the background from both visible and thermal imagery for pedestrian detection in urban scenes. We use a deep neural network (DNN) to train the background subtraction model. To train the DNN, we first generate an initial background map and then employ a randomly selected 5{\%} of the video frames, the background map, and manually segmented ground truth. We then apply a cognition-based post-processing step to further smooth the foreground detection result. We evaluate our method against our previous work and 11 recent, widely cited methods on three challenging video sequences selected from the publicly available color-thermal benchmark dataset OTCBVS. The promising results show that the proposed DNN-based approach can successfully detect pedestrians with well-preserved shapes in most scenes, regardless of illumination changes and occlusion.",
keywords = "deep neural network (DNN), video salient objects, pedestrian detection/tracking",
author = "Yijun Yan and Huimin Zhao and Fu-Jen Kao and Vargas, {Valentin Masero} and Sophia Zhao and Jinchang Ren",
year = "2018",
month = "7",
day = "7",
language = "English",
note = "9th International Conference on Brain Inspired Cognitive Systems ; Conference date: 07-07-2018 Through 08-07-2018",

}

Yan, Y, Zhao, H, Kao, F-J, Vargas, VM, Zhao, S & Ren, J 2018, 'Deep background subtraction of thermal and visible imagery for pedestrian detection in videos', Paper presented at 9th International Conference on Brain Inspired Cognitive Systems, Xi'an, China, 7/07/18 – 8/07/18.

Deep background subtraction of thermal and visible imagery for pedestrian detection in videos. / Yan, Yijun; Zhao, Huimin; Kao, Fu-Jen; Vargas, Valentin Masero; Zhao, Sophia; Ren, Jinchang.

2018. Paper presented at 9th International Conference on Brain Inspired Cognitive Systems, Xi'an, China.

Research output: Contribution to conference › Paper

TY - CONF

T1 - Deep background subtraction of thermal and visible imagery for pedestrian detection in videos

AU - Yan, Yijun

AU - Zhao, Huimin

AU - Kao, Fu-Jen

AU - Vargas, Valentin Masero

AU - Zhao, Sophia

AU - Ren, Jinchang

PY - 2018/7/7

Y1 - 2018/7/7

N2 - In this paper, we introduce an efficient framework to subtract the background from both visible and thermal imagery for pedestrian detection in urban scenes. We use a deep neural network (DNN) to train the background subtraction model. To train the DNN, we first generate an initial background map and then employ a randomly selected 5% of the video frames, the background map, and manually segmented ground truth. We then apply a cognition-based post-processing step to further smooth the foreground detection result. We evaluate our method against our previous work and 11 recent, widely cited methods on three challenging video sequences selected from the publicly available color-thermal benchmark dataset OTCBVS. The promising results show that the proposed DNN-based approach can successfully detect pedestrians with well-preserved shapes in most scenes, regardless of illumination changes and occlusion.

AB - In this paper, we introduce an efficient framework to subtract the background from both visible and thermal imagery for pedestrian detection in urban scenes. We use a deep neural network (DNN) to train the background subtraction model. To train the DNN, we first generate an initial background map and then employ a randomly selected 5% of the video frames, the background map, and manually segmented ground truth. We then apply a cognition-based post-processing step to further smooth the foreground detection result. We evaluate our method against our previous work and 11 recent, widely cited methods on three challenging video sequences selected from the publicly available color-thermal benchmark dataset OTCBVS. The promising results show that the proposed DNN-based approach can successfully detect pedestrians with well-preserved shapes in most scenes, regardless of illumination changes and occlusion.

KW - deep neural network (DNN)

KW - video salient objects

KW - pedestrian detection/tracking

M3 - Paper

ER -

Yan Y, Zhao H, Kao F-J, Vargas VM, Zhao S, Ren J. Deep background subtraction of thermal and visible imagery for pedestrian detection in videos. 2018. Paper presented at 9th International Conference on Brain Inspired Cognitive Systems, Xi'an, China.