Robustness of deep neural networks for micro-Doppler radar classification

Research output: Contribution to conference › Paper › peer-review


With the great capabilities of deep classifiers for radar data processing come the risks of learning dataset-specific features that do not generalize well. In this work, the robustness of two deep convolutional architectures, trained and tested on the same data, is evaluated. When standard training practice is followed, both classifiers exhibit sensitivity to subtle temporal shifts of the input representation, an augmentation that carries minimal semantic content. Furthermore, the models are extremely susceptible to adversarial examples. Both the sensitivity to small temporal shifts and the susceptibility to adversarial examples result from the models overfitting to features that do not generalize well. As a remedy, it is shown that training on adversarial examples and temporally augmented samples can reduce this effect and lead to models that generalize better. Finally, models operating on the cadence-velocity diagram representation rather than Doppler-time are demonstrated to be naturally more immune to adversarial examples.
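The two input manipulations discussed above can be sketched in a few lines of NumPy. The snippet below (function names and the toy spectrogram shape are illustrative, not taken from the paper) applies a circular temporal shift to a Doppler-time map and converts it to a cadence-velocity diagram (CVD) by taking the FFT magnitude along the time axis. It also illustrates one plausible reason a CVD-based model could be less sensitive to temporal shifts: the FFT magnitude is exactly invariant to circular shifts, since a circular shift only multiplies each frequency bin by a phase factor. Note this exact invariance holds for circular shifts; real temporal shifts of finite recordings are only approximately circular.

```python
import numpy as np

def temporal_shift(spectrogram, shift):
    """Circularly shift a Doppler-time spectrogram along the time axis.

    spectrogram: 2-D array of shape (doppler_bins, time_bins).
    """
    return np.roll(spectrogram, shift, axis=1)

def cadence_velocity_diagram(spectrogram):
    """Cadence-velocity diagram: FFT magnitude along the time axis."""
    return np.abs(np.fft.fft(spectrogram, axis=1))

# Toy Doppler-time map (64 Doppler bins x 128 time bins).
rng = np.random.default_rng(0)
spec = rng.standard_normal((64, 128))

shifted = temporal_shift(spec, 7)

# The CVD is unchanged by the circular temporal shift,
# because the shift only alters the phase of each FFT bin.
print(np.allclose(cadence_velocity_diagram(spec),
                  cadence_velocity_diagram(shifted)))  # True
```

A Doppler-time classifier sees `spec` and `shifted` as different inputs, whereas a CVD classifier sees identical ones, which is consistent with the paper's observation that the CVD representation is naturally more robust.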
Original language: English
Number of pages: 6
Publication status: Accepted/In press - 9 Jun 2022
Event: International Radar Symposium 2022 - Gdansk, Poland
Duration: 12 Sep 2022 - 14 Sep 2022
Conference number: 2022


Conference: International Radar Symposium 2022
Abbreviated title: IRS 2022


Keywords:
  • micro-Doppler
  • model robustness
  • generalization
  • adversarial examples


