Segmentation-driven spacecraft pose estimation for vision-based relative navigation in space

Karl Martin Kajak, Christie Maddock, Heike Frei, Kurt Schwenk

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution book


Abstract

Vision-based relative navigation technology is a key enabler of several areas of the space industry such as on-orbit servicing, space debris removal, and formation flying. A particularly demanding scenario is navigating relative to a
non-cooperative target that does not offer any navigational aid and is unable to stabilize its attitude. Previously, the state-of-the-art in vision-based relative navigation has relied on image processing and template matching
techniques. However, outside of the space industry, state-of-the-art object pose estimation techniques are dominated by convolutional neural networks (CNNs). This is due to CNNs' flexibility towards arbitrary pose estimation targets, their ability to exploit whatever target features are available, and their robustness to varied lighting conditions, damage to targets, occlusions, and other effects that might interfere with the image. How the unique advantages of CNNs can best be exploited for visual relative navigation is still relatively unexplored. This research aims to integrate a state-of-the-art CNN-based pose estimation architecture into a relative navigation system. The system's navigation performance is benchmarked on realistic images gathered from the European Proximity Operations Simulator 2.0 (EPOS 2.0) robotic hardware-in-the-loop laboratory. A synthetic dataset is generated using Blender as the rendering engine. A segmentation-based 6D pose estimation CNN is trained on the synthetic dataset, and the resulting pose estimation performance is evaluated on a set of real images gathered from the cameras of the EPOS 2.0 robotic close-range relative navigation laboratory. It is demonstrated that a CNN-based pose estimation pipeline trained on synthetic images can successfully perform in a close-range visual navigation setting on real camera images of a spacecraft, though with some limitations that still have to be overcome before the system is ready for operation. Furthermore, it does so with a symmetric target, a common difficulty for neural networks in a pose estimation setting.
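
As a rough illustration of the two technical steps the abstract describes, consider the following Python sketches. The first mimics synthetic dataset generation with domain randomization in Blender (run inside Blender's Python interpreter via the bpy module); the object names, pose ranges, and light settings are assumptions for illustration, not details taken from the paper.

    # Sketch: domain-randomized synthetic image generation in Blender.
    # Assumes a scene containing a spacecraft model named "Spacecraft",
    # a sun lamp named "Sun", and an active camera at the origin looking
    # down -Z (all hypothetical names/conventions).
    import math
    import random
    import bpy

    scene = bpy.context.scene
    target = bpy.data.objects["Spacecraft"]
    light = bpy.data.objects["Sun"]

    for i in range(1000):
        # Random attitude: an uncooperative target may tumble freely.
        target.rotation_euler = [random.uniform(0.0, 2.0 * math.pi) for _ in range(3)]
        # Random range along the camera boresight (close-range regime).
        target.location = (0.0, 0.0, -random.uniform(5.0, 30.0))
        # Domain randomization of illumination: direction and intensity.
        light.rotation_euler = [random.uniform(0.0, 2.0 * math.pi) for _ in range(3)]
        light.data.energy = random.uniform(0.5, 5.0)
        scene.render.filepath = f"//renders/img_{i:05d}.png"
        bpy.ops.render.render(write_still=True)
        # Ground-truth pose labels would be exported alongside each frame.

The second sketch shows the usual final stage of a segmentation-driven pose estimator: once the CNN has predicted the 2D image locations of known 3D model keypoints (the CNN itself is not shown), the 6D pose is recovered with PnP and RANSAC. The function name, thresholds, and keypoint conventions are likewise illustrative assumptions.

    # Sketch: 6D pose recovery from CNN-predicted 2D keypoints.
    import numpy as np
    import cv2

    def recover_pose(pred_keypoints_2d, model_keypoints_3d, camera_matrix):
        """Return rotation matrix R, translation tvec, and PnP inliers.

        pred_keypoints_2d  : (N, 2) CNN-predicted image coordinates
        model_keypoints_3d : (N, 3) matching points on the spacecraft model
        camera_matrix      : (3, 3) intrinsics of the navigation camera
        """
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(
            model_keypoints_3d.astype(np.float32),
            pred_keypoints_2d.astype(np.float32),
            camera_matrix,
            distCoeffs=None,          # assume undistorted images
            reprojectionError=4.0,    # inlier threshold in pixels
            flags=cv2.SOLVEPNP_EPNP,
        )
        if not ok:
            raise RuntimeError("PnP failed to find a consistent pose")
        R, _ = cv2.Rodrigues(rvec)    # rotation vector -> 3x3 matrix
        return R, tvec, inliers

RANSAC is a natural choice here because a symmetric target can make individual keypoint predictions ambiguous; the robust fit discards keypoints that disagree with the consensus pose.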
Original language: English
Title of host publication: IAF Astrodynamics Symposium 2021
Number of pages: 12
ISBN (Electronic): 9781713843078
Publication status: Published - 25 Oct 2021
Event: IAF Astrodynamics Symposium 2021 at the 72nd International Astronautical Congress, IAC 2021 - Dubai, United Arab Emirates
Duration: 25 Oct 2021 - 29 Oct 2021

Publication series

Name: Proceedings of the International Astronautical Congress, IAC
Volume: C1
ISSN (Print): 0074-1795

Conference

Conference: IAF Astrodynamics Symposium 2021 at the 72nd International Astronautical Congress, IAC 2021
Country/Territory: United Arab Emirates
City: Dubai
Period: 25/10/21 - 29/10/21

Funding

This research was funded under the EU H2020 MSCA ITN Stardust-R, grant agreement No. 813644.

Keywords

  • close-range relative navigation
  • convolutional neural network
  • domain randomization
  • monocular camera
  • pose estimation
  • symmetric uncooperative target
