What are the particular assurance challenges of working on space systems that have such tight constraints? Does this provide lessons for other domains?

Space provides a novel and challenging environment for the deployment of a robotic system. It is impossible to test a space system in a fully representative environment before launch, so tools such as hardware-in-the-loop simulation must be relied upon heavily. This challenge is only compounded by recent developments in autonomous space systems. In many cases, such autonomy is driven by visual data acquired from onboard optical instruments and passed through neural networks to extract information about features on the Earth and in the local orbital environment. Training data for such activities is very limited, particularly when considering the specific features of interest and the properties of the capturing instrument. This, combined with the limited processing power of onboard computers, restricts the performance of any neural network deployed on board, in both accuracy and speed. That in turn affects the safety of the autonomous satellite: risks to the spacecraft itself, to its data (a key concern of end users), and even, in some applications, to life on the ground.
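As a purely hypothetical illustration of that accuracy-versus-speed trade-off (not code from the ACTIONS project), the sketch below shows one common way of fitting a vision network onto a power-constrained onboard computer: post-training quantisation in PyTorch, followed by a rough latency check on a synthetic frame. The choice of model, image size and timing approach are all assumptions made for the example.

    import time
    import torch
    import torchvision

    # Hypothetical stand-in for an onboard vision model; the real instrument,
    # features of interest and network architecture are mission-specific.
    model = torchvision.models.mobilenet_v2(weights=None).eval()

    # Post-training dynamic quantisation: one common way to reduce the memory
    # and compute cost of a network before deployment to a low-power onboard
    # computer, at some cost in accuracy that must then be re-verified.
    quantised = torch.quantization.quantize_dynamic(
        model, {torch.nn.Linear}, dtype=torch.qint8
    )

    # Rough latency check on a single synthetic frame from the instrument.
    frame = torch.rand(1, 3, 224, 224)
    with torch.no_grad():
        start = time.perf_counter()
        quantised(frame)
        print(f"Inference time: {time.perf_counter() - start:.3f} s")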

As satellite autonomy is still in its infancy, the tools and resources required to assure the safety of an autonomous satellite adequately are relatively immature. We have spent a significant amount of time defining and characterising a representative mission for investigation during our AAIP project, ACTIONS. We are now simulating this mission with representative flight hardware in the loop, so that we can truly understand the impact of low-power neural network performance on mission behaviour and results. This simulation requires a close coupling of orbital mechanics, spacecraft dynamics and the visual inputs to the simulated instrument. It is only by doing this work that we can test the neural network in a realistic context.
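To make that coupling concrete, here is a deliberately simplified, purely illustrative loop (not the ACTIONS simulator, whose models and interfaces are far richer): an orbit propagator and attitude model drive a simulated instrument, the rendered frame is passed to the onboard network, and the network's output feeds the spacecraft's decision logic. Every function name, state representation and the time step are assumptions for the sketch; in practice the network would run on the representative flight hardware.

    import numpy as np

    DT = 1.0  # simulation time step in seconds (assumed)

    def propagate_orbit(state, dt):
        # Placeholder orbital mechanics step; a real simulator integrates
        # the equations of motion (two-body plus perturbations).
        return state

    def propagate_attitude(attitude, dt):
        # Placeholder spacecraft attitude dynamics step.
        return attitude

    def render_instrument_frame(state, attitude):
        # Stand-in for the simulated optical instrument: returns an image
        # consistent with the current orbital position and pointing.
        return np.random.rand(224, 224, 3)

    def onboard_network(frame):
        # Stand-in for the low-power neural network under test; in the
        # real campaign this runs on representative flight hardware.
        return {"feature_detected": bool(frame.mean() > 0.5)}

    def decide_action(inference):
        # Stand-in for onboard autonomy reacting to the network output.
        return "capture" if inference["feature_detected"] else "idle"

    state, attitude = np.zeros(6), np.zeros(4)
    for step in range(100):
        state = propagate_orbit(state, DT)
        attitude = propagate_attitude(attitude, DT)
        frame = render_instrument_frame(state, attitude)
        inference = onboard_network(frame)
        action = decide_action(inference)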

Such work has lessons for other domains, primarily those where testing in the intended operating environment is infeasible or impossible, such as deep-sea missions or robots used in the nuclear industry. In these cases, simulations of sufficient fidelity can be used to model the operating environment and provide the relevant stimuli (image data and environmental disturbances) to test the autonomous system's response fully and build confidence in its safety.

Murray Ireland
Responsive Operations Lead
Craft Prospect

Principal investigator of the ACTIONS project

Contact us

Assuring Autonomy International Programme

assuring-autonomy@york.ac.uk
+44 (0)1904 325345
Institute for Safe Autonomy, University of York, Deramore Lane, York YO10 5GH
