Prototype distance ratio sampling for generalised few shot object detection

Alessandro Lekkas, Marc Roper, Andrew Abel*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Few-Shot Learning has emerged as a topic that aims to maximise DNN performance from very few samples. In Generalised Few-Shot Learning, a model must learn new few-shot classes while retaining the large-scale classes it was originally trained on; learning the new classes typically degrades performance on the base ones. In this work, we identify and explore the parallels between Generalised Few-Shot Object Detection (G-FSOD) and Continual Learning (CL), focusing on two areas in particular: gradient manipulation methods and sampling strategies. Through extensive experimentation we demonstrate that gradient manipulation methods do not improve on existing techniques, and in fact harm performance unless the gradients are averaged. Our investigation of sampling strategies considers several aspects: the impact of removing the base limit and the effectiveness of different distance measures (with respect to a class prototype) for sample selection. Our experiments reveal illuminating insights into their impact on Average Precision on the COCO and VOC datasets. Consequently, we suggest that G-FSOD research focus on the replay aspect and investigate other sampling strategies.
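The sample-selection idea behind the title can be illustrated with a small sketch. The following is an assumed, simplified reading of prototype distance ratio sampling, not the paper's exact formulation: each class prototype is the mean feature vector of its samples, and each sample is scored by the ratio of its distance to its own prototype versus its distance to the nearest other-class prototype. All function names here are hypothetical.

```python
import numpy as np

def prototype_distance_ratio_scores(features, labels):
    """Score each sample by the ratio of (distance to its own class prototype)
    to (distance to the nearest other-class prototype).

    Illustrative sketch only; the paper's exact ratio and distance measure
    may differ. Lower scores mark samples that sit close to their own
    prototype and far from every other class prototype.
    """
    classes = np.unique(labels)
    # Class prototypes: mean feature vector per class.
    prototypes = {c: features[labels == c].mean(axis=0) for c in classes}
    scores = np.empty(len(features))
    for i, (x, y) in enumerate(zip(features, labels)):
        d_own = np.linalg.norm(x - prototypes[y])
        d_other = min(np.linalg.norm(x - prototypes[c])
                      for c in classes if c != y)
        scores[i] = d_own / (d_other + 1e-12)  # eps guards against /0
    return scores

def sample_by_ratio(features, labels, k):
    """Select the k samples with the lowest distance-ratio scores."""
    scores = prototype_distance_ratio_scores(features, labels)
    return np.argsort(scores)[:k]
```

Under this reading, a sampling strategy for replay would keep, per class, the samples whose ratio scores are lowest, i.e. the most prototypical and least ambiguous ones; other distance measures can be dropped in by replacing the norm.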

Original language: English
Pages (from-to): 1-7
Number of pages: 7
Journal: Pattern Recognition Letters
Volume: 204
Early online date: 17 Mar 2026
Publication status: E-pub ahead of print - 17 Mar 2026

Keywords

  • generalised few-shot learning
  • continual learning
  • object detection
  • sampling
  • benchmarks
