Visual pose estimation system for autonomous rendezvous of spacecraft

Mark A. Post, Xiu T. Yan, Junquan Li, Craig Clark

Research output: Contribution to conference › Paper

Abstract

In this work, a tracker spacecraft equipped with a short-range vision system is tasked with visually identifying a target spacecraft and determining its relative angular velocity and relative linear velocity using only visual information from onboard cameras. Focusing on methods that are feasible for implementation on relatively simple spacecraft hardware, we locate and track objects in three-dimensional space using conventional high-resolution cameras, saving cost and power compared to laser or infrared ranging systems. Identification of the target is done by means of visual feature detection and tracking across rapid, successive frames, taking the perspective matrix of the camera system into account, and building feature maps in three dimensions over time. Features detected in two-dimensional images are matched and triangulated to provide three-dimensional feature maps using structure-from-motion techniques. This methodology allows one, two, or more cameras with known baselines to be used for triangulation, with more images resulting in higher accuracy. Triangulated points are organized by means of orientation histogram descriptors and used to identify and track parts of the target spacecraft over time. This allows some estimation of the target spacecraft's motion even if parts of the spacecraft are obscured or in shadow. The state variables with respect to the camera system are extracted as a relative rotation quaternion and relative translation vector for the target. Robust tracking of the state variables for the target spacecraft is accomplished by an embedded adaptive unscented Kalman filter. In addition to estimation of the target quaternion from visual information, the adaptive filter can also identify when tracking errors have occurred by measurement of the residual.
Significant variations in lighting can be tolerated as long as the movement of the satellite is consistent with the system model, and illumination changes slowly enough for state variables to be estimated periodically. Inertial measurements over short periods of time can then be used to determine the movement of both the tracker and target spacecraft. In addition, with a sufficient number of features tracked, the center of mass of the target can be located. This method is tested using laboratory images of spacecraft movement with a simulated spacecraft movement model. Varying conditions are applied to demonstrate the effectiveness and limitations of the system for online estimation of the movement of a target spacecraft at close range.
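The triangulation step the abstract describes — matched 2D features from cameras with known baselines lifted into a 3D feature map — can be sketched with the standard linear (DLT) two-view method. This is a minimal illustration, not the paper's implementation; the calibration matrix, baseline, and test point below are hypothetical values chosen for the example.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one feature matched in two views.

    P1, P2 : 3x4 camera projection matrices (known calibration/baseline)
    x1, x2 : 2D pixel coordinates of the matched feature in each image
    Returns the 3D point in the common camera reference frame.
    """
    # Each view contributes two linear constraints on the homogeneous point.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Least-squares solution is the right singular vector of A with the
    # smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Hypothetical stereo pair: reference camera at the origin and a second
# camera offset by a 0.5 m baseline along x, both with 800 px focal length.
K = np.diag([800.0, 800.0, 1.0])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])

X_true = np.array([0.2, -0.1, 4.0])          # a point on the "target"
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
X_est = triangulate_point(P1, P2, x1, x2)
print(np.allclose(X_est, X_true))            # exact in the noise-free case
```

With more than two cameras (or more frames, as in structure-from-motion), each additional view simply appends two more rows to `A`, which is why the abstract notes that more images yield higher accuracy.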

Conference

Conference: 13th Symposium on Advanced Space Technologies in Robotics and Automation (ASTRA 2015)
Abbreviated title: ASTRA 2015
Country: Netherlands
City: Noordwijk
Period: 11/05/15 – 13/05/15


Keywords

  • satellite
  • vision
  • pose estimation
  • spacecraft navigation

Cite this

Post, M. A., Yan, X. T., Li, J., & Clark, C. (2015). Visual pose estimation system for autonomous rendezvous of spacecraft. 1-9. Paper presented at 13th Symposium on Advanced Space Technologies in Robotics and Automation (ASTRA 2015), Noordwijk, Netherlands.
Conference links:

  • http://robotics.estec.esa.int/ASTRA/Astra2015/index.html
  • http://www.congrexprojects.com/2015-events/15a07/introduction