Abstract
This paper presents the design and prototyping of a miniaturised camera system with integrated deep-learning neural network capabilities, developed within a framework for implementing autonomous data processing onboard small satellites and nanosatellites. The framework targets low-resource algorithms developed in other sectors, including autonomous vehicles and commercial machine learning. For proof of concept, the system has been initially trained for real-time cloud detection and classification, looking 1 minute ahead of the satellite to enable responsive decision making for Earth observation and telecommunication applications. The design has been miniaturised and modularised to allow accommodation on small satellite and nanosatellite systems, and flight-representative and heritage components have been selected for prototyping. Compatibility of the autonomy framework with ECSS and CCSDS standards and with existing off-the-shelf flight software was evaluated. A simulator to facilitate end-to-end testing of the system has been developed using existing data sets as input, incorporating distortions to test robustness. Results show that a competitive low-power (< 2 W) system can be delivered, with the processing chain taking less than 5 seconds from image capture to input into the onboard planner, consistent with continuous real-time decision making.
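The abstract does not detail the onboard processing chain, but the following minimal sketch illustrates the kind of end-to-end check it describes: distorting an existing data-set tile (as in the simulator) and timing a lightweight cloud-cover classification against the < 5 second capture-to-planner budget. The helper functions `add_distortions` and `classify_cloud_cover` are hypothetical placeholders, not the authors' flight software.

```python
# Illustrative sketch only: a stand-in for an end-to-end timing check
# (capture -> distortion -> classification -> handover to onboard planning).
import time
import numpy as np

def add_distortions(tile, rng):
    """Apply simple sensor-like distortions (noise, gain drift) to a test tile."""
    noisy = tile + rng.normal(0.0, 5.0, tile.shape)   # additive sensor noise
    gained = noisy * rng.uniform(0.9, 1.1)            # illumination / gain drift
    return np.clip(gained, 0.0, 255.0)

def classify_cloud_cover(tile):
    """Placeholder low-resource classifier: brightness threshold per 64x64 block.
    A real onboard system would run a small trained CNN here instead."""
    blocks = tile.reshape(8, tile.shape[0] // 8, 8, tile.shape[1] // 8).mean(axis=(1, 3))
    return (blocks > 180.0).mean()   # fraction of blocks flagged as cloudy

rng = np.random.default_rng(0)
tile = rng.uniform(0.0, 255.0, (512, 512))   # stand-in for a captured frame

start = time.perf_counter()
cloud_fraction = classify_cloud_cover(add_distortions(tile, rng))
elapsed = time.perf_counter() - start

print(f"cloud fraction: {cloud_fraction:.2f}, chain time: {elapsed:.3f} s (budget < 5 s)")
```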
Original language | English
---|---
Title of host publication | 4S Symposium 2018
Subtitle of host publication | The Symposium on Small Satellites for Earth Observation
Place of publication | Noordwijk
Number of pages | 9
Publication status | Published - 28 May 2018
Keywords
- onboard operations
- miniaturised camera systems
- deep learning
- neural networks
- nanosatellites
- autonomous vehicles