Segmentation-driven spacecraft pose estimation for vision-based relative navigation in space

Karl Martin Kajak, Christie Maddock, Heike Frei, Kurt Schwenk

Research output: Contribution to conference › Proceeding › peer-review

Abstract

Vision-based relative navigation technology is a key enabler of several areas of the space industry, such as on-orbit servicing, space debris removal, and formation flying. A particularly demanding scenario is navigating relative to a non-cooperative target that offers no navigational aids and is unable to stabilize its attitude. Previously, the state of the art in vision-based relative navigation has relied on image processing and template matching techniques. Outside the space industry, however, state-of-the-art object pose estimation is dominated by convolutional neural networks (CNNs). This is due to CNNs' flexibility towards arbitrary pose estimation targets, their ability to exploit whatever target features are available, and their robustness to varied lighting conditions, damage to targets, occlusions, and other effects that might interfere with the image. The use of CNNs for visual relative navigation is still relatively unexplored in terms of how their unique advantages can best be exploited. This research aims to integrate a state-of-the-art CNN-based pose estimation architecture into a relative navigation system. The system's navigation performance is benchmarked on realistic images gathered from the European Proximity Operations Simulator 2.0 (EPOS 2.0) robotic hardware-in-the-loop laboratory. A synthetic dataset is generated using Blender as a rendering engine. A segmentation-based 6D pose estimation CNN is trained on the synthetic dataset, and the resulting pose estimation performance is evaluated on a set of real images gathered from the cameras of the EPOS 2.0 robotic close-range relative navigation laboratory. It is demonstrated that a synthetic-image-trained, CNN-based pose estimation pipeline can successfully perform in a close-range visual navigation setting on real camera images of a spacecraft, though with some limitations that must still be overcome before the system is ready for operation. Furthermore, it is able to do so with a symmetric target, a common difficulty for neural networks in a pose estimation setting.
Original language: English
Number of pages: 12
Publication status: Published - 25 Oct 2021
Event: 72nd International Astronautical Congress - Dubai World Trade Centre, Dubai, United Arab Emirates
Duration: 25 Oct 2021 - 29 Oct 2021
https://iac2021.org/
https://www.iafastro.org/events/iac/iac-2021/

Conference

Conference: 72nd International Astronautical Congress
Abbreviated title: IAC 2021
Country/Territory: United Arab Emirates
City: Dubai
Period: 25/10/21 - 29/10/21

Keywords

  • close-range relative navigation
  • pose estimation
  • symmetric uncooperative target
  • monocular camera
  • convolutional neural network (CNN)
  • domain randomization
