Robustness of deep neural networks for micro-Doppler radar classification

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

With the great capabilities of deep classifiers for radar data processing come the risks of learning dataset-specific features that do not generalize well. In this work, the robustness of two deep convolutional architectures, trained and tested on the same data, is evaluated. When standard training practice is followed, both classifiers exhibit sensitivity to subtle temporal shifts of the input representation, an augmentation that carries minimal semantic content. Furthermore, the models are extremely susceptible to adversarial examples. Both the sensitivity to small temporal shifts and the susceptibility to adversarial examples result from the models overfitting on features that do not generalize well. As a remedy, it is shown that training on adversarial examples and temporally augmented samples can reduce this effect and lead to models that generalize better. Finally, models operating on the cadence-velocity diagram representation rather than Doppler-time are demonstrated to be naturally more immune to adversarial examples.
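The shift-invariance argument behind the cadence-velocity diagram (CVD) can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: it assumes a circular temporal shift and computes the CVD as the FFT magnitude taken along the time axis of a Doppler-time spectrogram. Under these assumptions, a temporal shift alters the Doppler-time input seen by a classifier while leaving the CVD magnitude unchanged, which is one intuition for why CVD-based models are less sensitive to such perturbations.

```python
import numpy as np

def temporal_shift(spectrogram, shift):
    """Circularly shift a Doppler-time spectrogram along the time axis.

    spectrogram: 2-D array of shape (doppler_bins, time_bins).
    shift: number of time bins to shift by (positive = delay).
    """
    return np.roll(spectrogram, shift, axis=1)

def cadence_velocity_diagram(spectrogram):
    """Cadence-velocity diagram: magnitude of the FFT taken along
    the time axis of the Doppler-time spectrogram."""
    return np.abs(np.fft.fft(spectrogram, axis=1))

# A circular time shift changes the Doppler-time representation, but
# by the Fourier shift theorem only the phase of the time-axis FFT
# changes, so the CVD magnitude is identical.
rng = np.random.default_rng(0)
dt = rng.random((64, 128))          # toy 64 Doppler bins x 128 time bins
shifted = temporal_shift(dt, 5)

cvd_original = cadence_velocity_diagram(dt)
cvd_shifted = cadence_velocity_diagram(shifted)
print(np.allclose(cvd_original, cvd_shifted))
```

Note that the paper's temporal-shift augmentation need not be circular; for a non-circular crop-and-shift the CVD is only approximately invariant, but the same intuition applies.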
Original language: English
Title of host publication: 2022 23rd International Radar Symposium (IRS)
Place of Publication: Piscataway, N.J.
Publisher: IEEE
Pages: 1-6
Number of pages: 6
ISBN (Electronic): 9788395602054
DOIs
Publication status: Published - 14 Sept 2022
Event: International Radar Symposium 2022 - Gdansk, Poland
Duration: 12 Sept 2022 - 14 Sept 2022
Conference number: 2022
https://mrw2022.org/irs/0/

Conference

Conference: International Radar Symposium 2022
Abbreviated title: IRS 2022
Country/Territory: Poland
City: Gdansk
Period: 12/09/22 - 14/09/22
Internet address

Keywords

  • micro-Doppler
  • model robustness
  • generalization
  • adversarial examples

