With the great capabilities of deep classifiers for radar data processing come the risks of learning dataset-specific features that do not generalize well. In this work, the robustness of two deep convolutional architectures, trained and tested on the same data, is evaluated. When standard training practice is followed, both classifiers are sensitive to subtle temporal shifts of the input representation, a perturbation that carries minimal semantic content. Furthermore, the models are extremely susceptible to adversarial examples. Both sensitivities result from the models overfitting to features that do not generalize well. As a remedy, it is shown that training on adversarial examples and temporally augmented samples reduces this effect and yields models that generalize better. Finally, models operating on the cadence-velocity diagram representation, rather than the Doppler-time representation, are demonstrated to be naturally more immune to adversarial examples.
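The two perturbations discussed in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the helper names (`temporal_shift`, `fgsm_linear`), the toy linear classifier, and all parameter values are assumptions for demonstration. A temporal shift circularly rolls a Doppler-time map along its time axis, and an FGSM-style adversarial perturbation steps the input in the sign of the loss gradient.

```python
import numpy as np

def temporal_shift(spectrogram, max_shift, rng):
    """Circularly shift a Doppler-time map along its time axis (axis 1).

    `spectrogram` is a 2-D array (Doppler bins x time bins); the shift
    amount is drawn uniformly from [-max_shift, max_shift]. The values
    themselves are untouched, so the semantic content barely changes.
    """
    shift = int(rng.integers(-max_shift, max_shift + 1))
    return np.roll(spectrogram, shift, axis=1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_linear(x, y, w, eps):
    """FGSM-style perturbation of input x for a toy linear logistic model w.

    The gradient of the logistic loss w.r.t. the input is (p - y) * w,
    so the attack adds eps * sign of that gradient to increase the loss.
    """
    p = sigmoid(w @ x)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

rng = np.random.default_rng(0)
spec = rng.normal(size=(64, 128))           # toy 64x128 Doppler-time map
spec_aug = temporal_shift(spec, max_shift=10, rng=rng)

w = np.array([1.0, -2.0])                   # toy linear classifier weights
x = np.array([0.5, 0.5])
x_adv = fgsm_linear(x, y=1, w=w, eps=0.1)   # pushes the logit down for y=1
```

Training on such shifted and perturbed samples, as the abstract proposes, amounts to mixing `spec_aug`-style and `x_adv`-style inputs into each minibatch so the model cannot rely on exact temporal alignment or on brittle, non-generalizing features.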
|Number of pages||6|
|Publication status||Accepted/In press - 9 Jun 2022|
|Event||International Radar Symposium 2022 - Gdansk, Poland|
|Duration||12 Sep 2022 → 14 Sep 2022|
|Conference number||2022|
|Conference||International Radar Symposium 2022|
|Abbreviated title||IRS 2022|
|Period||12/09/22 → 14/09/22|
Keywords
- model robustness
- adversarial examples
Fingerprint
Dive into the research topics of 'Robustness of deep neural networks for micro-Doppler radar classification'. Together they form a unique fingerprint.
- 1 active project: Research Studentship - Internally Allocated (1/10/19 → 1/04/23)