Domain randomisation and CNN-based keypoint-regressing pose initialisation for relative navigation with uncooperative finite-symmetric spacecraft targets using monocular camera images

Karl Martin Kajak, Christie Maddock, Heike Frei, Kurt Schwenk

Research output: Contribution to journal › Article › peer-review

4 Citations (Scopus)
27 Downloads (Pure)

Abstract

Vision-based relative navigation technology is a key enabler of several areas of the space industry, such as on-orbit servicing, space debris removal, and formation flying. A particularly demanding scenario is navigating relative to a non-cooperative target that offers no navigational aid and is unable to stabilise its attitude. This research integrates a convolutional neural network (CNN) and an EPnP solver into a pose initialisation system. The system's performance is benchmarked on images gathered from the European Proximity Operations Simulator (EPOS 2.0) laboratory. A synthetic dataset is generated using Blender as the rendering engine. A segmentation-based pose estimation CNN is trained on the synthetic dataset, and the resulting pose estimation performance is evaluated on a set of real images gathered from the cameras of the EPOS 2.0 robotic close-range relative navigation laboratory. It is demonstrated that a CNN-based pose estimation pipeline trained on synthetic images can perform successfully in a close-range visual relative navigation setting on real camera images of a 6-facet symmetric spacecraft.
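For illustration only, the following is a minimal sketch (not the authors' implementation) of the final pose-recovery step the abstract describes: given 2D keypoints regressed by a CNN and their known 3D locations on the target model, an EPnP solver such as OpenCV's returns the relative position and attitude. All keypoint coordinates, model geometry, and camera intrinsics below are placeholder assumptions.

```python
import numpy as np
import cv2

# 3D keypoint locations on the target spacecraft, in the body frame (metres).
# In practice these come from the known CAD geometry; values here are placeholders.
model_points = np.array([
    [ 0.5,  0.5,  0.0],
    [-0.5,  0.5,  0.0],
    [-0.5, -0.5,  0.0],
    [ 0.5, -0.5,  0.0],
    [ 0.0,  0.0,  0.7],
    [ 0.0,  0.0, -0.7],
], dtype=np.float64)

# 2D keypoints (pixels) as predicted by the CNN for one image -- placeholder values.
image_points = np.array([
    [612.0, 388.0],
    [501.0, 395.0],
    [498.0, 512.0],
    [609.0, 505.0],
    [555.0, 430.0],
    [560.0, 470.0],
], dtype=np.float64)

# Pinhole camera intrinsics -- placeholder calibration; distortion assumed negligible.
K = np.array([
    [2000.0,    0.0, 640.0],
    [   0.0, 2000.0, 512.0],
    [   0.0,    0.0,   1.0],
])
dist_coeffs = np.zeros(5)

# EPnP recovers the rotation (Rodrigues vector) and translation of the target
# relative to the camera from the 2D-3D correspondences.
ok, rvec, tvec = cv2.solvePnP(
    model_points, image_points, K, dist_coeffs, flags=cv2.SOLVEPNP_EPNP
)
if ok:
    R, _ = cv2.Rodrigues(rvec)  # 3x3 rotation matrix
    print("Relative position (m):", tvec.ravel())
    print("Relative attitude (rotation matrix):\n", R)
```

The resulting pose hypothesis would then serve as the initialisation for a downstream relative navigation filter, as discussed in the article.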
Original language: English
Pages (from-to): 2824-2844
Number of pages: 21
Journal: Advances in Space Research
Volume: 72
Issue number: 7
Early online date: 17 Feb 2023
DOIs
Publication status: Published - 1 Oct 2023

Keywords

  • close-range relative navigation
  • pose estimation
  • symmetric uncooperative target
  • monocular camera
  • convolutional network
  • domain randomisation
