Assuring autonomy in space: improving the utilisation of satellites through the safe introduction of autonomy.

The ACTIONS project focused on fire detection carried out autonomously by a machine learning (ML) component onboard a satellite. The project demonstrator generated a fire detection alert to emergency response services on the ground, with confidence that the data was accurate, truthful and timely.


Project report

The full project report describes how the team worked to assure the safety of satellite autonomy to improve emergency response to wildfires.

Final project report

The challenge

Small satellites have limited resources and sparse opportunities for data capture. Autonomy offers significant improvements in the utilisation and timeliness of service to the end-users of such systems. In an autonomous in-orbit fire detection and near-real-time emergency response application, these capabilities include:

  • Rapid tagging and filtering of data - prioritise data which is confidently believed to include wildfire
  • Alert generation - extract the salient data (detection time, location and size of the detected fire); see the sketch after this list
  • Verification data generation - create ancillary data products such as image thumbnails or augmented visualisations
  • Data reduction - selective compression to retain only valuable regions of data at full quality
  • Responsive tasking - monitor wildfire on subsequent passes
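
As a concrete illustration of the tagging and alert-generation steps, the sketch below shows one plausible shape for an onboard fire alert. This is a minimal sketch, not the project's implementation: the field names, the detection data structure and the 0.9 confidence threshold are all assumptions made for illustration.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class FireAlert:
    """Hypothetical alert payload; fields mirror the salient data listed above."""
    detection_time_utc: str
    latitude_deg: float
    longitude_deg: float
    estimated_area_km2: float
    confidence: float

def make_alert(detection: dict, confidence_threshold: float = 0.9):
    """Generate an alert only for detections confidently believed to be wildfire.

    Low-confidence detections are not alerted; they could instead be tagged
    and retained for later downlink and ground-based review.
    """
    if detection["confidence"] < confidence_threshold:
        return None
    return FireAlert(
        detection_time_utc=datetime.now(timezone.utc).isoformat(),
        latitude_deg=detection["lat"],
        longitude_deg=detection["lon"],
        estimated_area_km2=detection["area_km2"],
        confidence=detection["confidence"],
    )
```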

But what level of trust can be placed in algorithmic operators, as opposed to ground-based human operators, when onboard autonomy is introduced to satellite missions? With data analysis and operational decision-making responsibilities moved upstream, and only intermittent ground station contact available for verification, it is critical that these autonomous activities are rigorously assured and can be trusted within reasonable limits.

The research

Using autonomous in-orbit fire detection to support wildfire emergency response as the driving application, this project considered the safety assurance of ML algorithms onboard small satellites:

  • System design - model-based systems engineering (MBSE) was followed to develop, document and communicate the requirements and behaviour of the system. The team used the Capella tool to capture system behaviour and model the dataflow through the system, identifying the failure modes associated with the functional flow of the system.
  • System safety requirements - both missed detections and the misdirection of emergency services to attend non-fires pose a risk; the team defined four system safety requirements in response.
  • ML safety requirements - the system safety requirements were allocated to, and interpreted for, the ML component.
  • ML assurance - the AAIP's AMLAS process was used to assure the safety of the ML component. The team found that the assurance artefacts generated when following AMLAS are valuable for communicating with customers and partners and for building trust in the ML component.
  • Hardware-in-the-loop (HIL) simulation testing - the ML component was deployed in a simulated environment with the target hardware in the loop, across a set of defined operational scenarios; a sketch of such a scenario test loop appears after this list.
  • Evaluation - processing results, mission results and burnt area detection.
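
To make the HIL testing step concrete, the sketch below shows one way a scenario-based evaluation loop might be structured. It is a sketch only: `run_inference`, the scenario format and the pass criterion are assumptions for illustration, not details of the project's test campaign.

```python
def evaluate_scenarios(run_inference, scenarios, max_false_alarm_rate=0.01):
    """Run the deployed ML component over labelled scenario frames.

    `run_inference` is a hypothetical wrapper around the onboard component;
    each scenario maps a name to a sequence of frames carrying an image and
    a `contains_fire` ground-truth label. Returns per-scenario recall and
    false alarm rate against an assumed pass criterion.
    """
    results = {}
    for name, frames in scenarios.items():
        true_pos = false_pos = missed = 0
        for frame in frames:
            detected = run_inference(frame.image)
            if detected and frame.contains_fire:
                true_pos += 1
            elif detected and not frame.contains_fire:
                false_pos += 1
            elif frame.contains_fire:
                missed += 1
        n = max(len(frames), 1)
        results[name] = {
            "recall": true_pos / max(true_pos + missed, 1),
            "false_alarm_rate": false_pos / n,
            "passed": false_pos / n <= max_false_alarm_rate,
        }
    return results
```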

The results

The project delivered a demonstration system for autonomous wildfire detection and reporting, which the team tested in a realistic mission simulator. They also developed and tested a commercial application of the ACTIONS mission, in which data products generated onboard are used for ground-based burnt area detection to support the recovery of wildfire-affected areas.
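
Burnt area mapping from multispectral imagery is commonly based on the differenced Normalised Burn Ratio (dNBR), computed from near-infrared (NIR) and shortwave-infrared (SWIR) bands before and after a fire. The project report describes the mission's actual data products; the sketch below shows the standard dNBR technique as one plausible basis for the ground-based processing, with a commonly cited severity threshold rather than a project-specific value.

```python
import numpy as np

def nbr(nir: np.ndarray, swir: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Normalised Burn Ratio: (NIR - SWIR) / (NIR + SWIR)."""
    return (nir - swir) / (nir + swir + eps)

def burnt_area_mask(nir_pre, swir_pre, nir_post, swir_post,
                    dnbr_threshold=0.27):
    """Flag pixels whose pre-to-post drop in NBR indicates burning.

    dNBR = NBR_prefire - NBR_postfire; values above roughly 0.27 are
    conventionally classed as at least moderate-low burn severity.
    """
    dnbr = nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)
    return dnbr >= dnbr_threshold
```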


What are the particular assurance challenges of working on space systems that have such tight constraints? Does this provide lessons for other domains?

Find out more from project PI, Murray Ireland

Project partners

Contact us

Assuring Autonomy International Programme

assuring-autonomy@york.ac.uk
+44 (0)1904 325345
Institute for Safe Autonomy, University of York, Deramore Lane, York YO10 5GH