Segmentation-driven spacecraft pose estimation for vision-based relative navigation in space
Abstract
Vision-based relative navigation technology is a key enabler of several areas of the space industry, such as on-orbit servicing, space debris removal, and formation flying. A particularly demanding scenario is navigating relative to a non-cooperative target that offers no navigational aid and is unable to stabilize its attitude. Previously, the state of the art in vision-based relative navigation has relied on image processing and template matching techniques. Outside of the space industry, however, state-of-the-art object pose estimation is dominated by convolutional neural networks (CNNs), owing to their flexibility towards arbitrary pose estimation targets, their ability to exploit whatever target features are available, and their robustness to varied lighting conditions, damage to targets, occlusions, and other effects that might interfere with the image. How these unique advantages can best be exploited for visual relative navigation is still relatively unexplored. This research integrates a state-of-the-art CNN-based pose estimation architecture into a relative navigation system. The system's navigation performance is benchmarked on realistic images gathered from the European Proximity Operations Simulator 2.0 (EPOS 2.0) robotic hardware-in-the-loop laboratory. A synthetic dataset is generated using Blender as a rendering engine, a segmentation-based 6D pose estimation CNN is trained on this dataset, and the resulting pose estimation performance is evaluated on a set of real images gathered from the cameras of the EPOS 2.0 robotic close-range relative navigation laboratory. It is demonstrated that a CNN-based pose estimation pipeline trained on synthetic images can perform successfully in a close-range visual navigation setting on real camera images of a spacecraft, though with some limitations that still have to be overcome before the system is ready for operation. Furthermore, it is able to do so with a symmetric target, a common difficulty for neural networks in a pose estimation setting.
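For context on the synthetic dataset generation step, the sketch below shows one way domain-randomized training images can be rendered with Blender's Python API (bpy). The object and light names ("Target", "Sun"), the pose sampling ranges, and the output path are illustrative assumptions, not the paper's actual rendering setup.

```python
# Hypothetical sketch: domain-randomized rendering of a spacecraft mock-up
# with Blender's Python API. Assumes an existing .blend scene containing a
# mesh object named "Target" and a sun light whose data block is named "Sun".
import math
import random

import bpy

scene = bpy.context.scene
target = bpy.data.objects["Target"]  # spacecraft model (assumed name)
sun = bpy.data.lights["Sun"]         # key light standing in for the Sun

NUM_IMAGES = 10_000

for i in range(NUM_IMAGES):
    # Randomize the target's 6D pose relative to the fixed camera.
    target.location = (
        random.uniform(-0.5, 0.5),   # lateral offsets [m] (assumed range)
        random.uniform(-0.5, 0.5),
        random.uniform(2.0, 10.0),   # close-range distance [m] (assumed)
    )
    target.rotation_euler = tuple(
        random.uniform(0.0, 2.0 * math.pi) for _ in range(3)
    )

    # Randomize illumination strength to mimic harsh on-orbit lighting.
    sun.energy = random.uniform(1.0, 20.0)

    # Render the frame; in practice the ground-truth pose
    # (target.matrix_world) would be saved alongside as the training label.
    scene.render.filepath = f"//renders/img_{i:06d}.png"
    bpy.ops.render.render(write_still=True)
```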
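The abstract does not detail the pose recovery step, but segmentation-driven pipelines of this kind typically have the CNN predict the 2D image locations of known 3D model keypoints and then solve a Perspective-n-Point (PnP) problem for the 6D pose. The self-contained sketch below illustrates that final step with OpenCV, using an assumed box-shaped keypoint model and made-up camera intrinsics; it demonstrates the general technique, not the paper's implementation.

```python
# Hypothetical sketch: 6D pose recovery from CNN-predicted 2D keypoints via
# RANSAC PnP. The keypoint model, intrinsics, and noise level are assumptions.
import cv2
import numpy as np

# 3D keypoints: corners of a 1 m x 1 m x 0.5 m body, in the target frame.
kp3d = np.array(
    [[x, y, z] for x in (-0.5, 0.5) for y in (-0.5, 0.5) for z in (-0.25, 0.25)],
    dtype=np.float64,
)

# Assumed pinhole intrinsics (focal length 800 px, 640x480 image).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Ground-truth pose, used here only to synthesize stand-in "CNN predictions".
rvec_gt = np.array([0.1, -0.4, 0.2])     # axis-angle rotation [rad]
tvec_gt = np.array([0.05, -0.02, 5.0])   # ~5 m range [m]
kp2d, _ = cv2.projectPoints(kp3d, rvec_gt, tvec_gt, K, None)
kp2d = kp2d.reshape(-1, 2) + np.random.normal(0.0, 1.0, (len(kp3d), 2))

# RANSAC makes the solve robust to keypoint outliers, e.g. from occlusions
# or ambiguous views of a symmetric target.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(kp3d, kp2d, K, None)
print(ok, rvec.ravel(), tvec.ravel())
```

A RANSAC-based solver is a natural choice here because individual keypoint predictions can be corrupted by occlusions or, for a symmetric target, by ambiguous views.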
Original language | English |
---|---|
Title of host publication | IAF Astrodynamics Symposium 2021 |
Number of pages | 12 |
ISBN (Electronic) | 9781713843078 |
Publication status | Published - 25 Oct 2021 |
Event | IAF Astrodynamics Symposium 2021 at the 72nd International Astronautical Congress, IAC 2021, Dubai, United Arab Emirates, 25 Oct 2021 → 29 Oct 2021 |
Publication series
Name | Proceedings of the International Astronautical Congress, IAC |
---|---|
Volume | C1 |
ISSN (Print) | 0074-1795 |
Conference
Conference | IAF Astrodynamics Symposium 2021 at the 72nd International Astronautical Congress, IAC 2021 |
---|---|
Country/Territory | United Arab Emirates |
City | Dubai |
Period | 25/10/21 → 29/10/21 |
Funding
This research was funded under EU H2020 MSCA ITN Stardust-R, grant agreement 813644.
Keywords
- close-range relative navigation
- convolutional neural network
- domain randomization
- monocular camera
- pose estimation
- symmetric uncooperative target
Projects
Stardust-R (Stardust Reloaded) H2020 MSCA ITN 2018
Vasile, M. (Principal Investigator), Feng, J. (Co-investigator), Fossati, M. (Co-investigator), Maddock, C. (Co-investigator), Minisci, E. (Co-investigator) & Riccardi, A. (Co-investigator)
European Commission - Horizon Europe + H2020
1/01/19 → 31/12/22
Project: Research