Lydia Gauerhof: Bosch case study

Lydia Gauerhof is a Research Engineer within corporate research at Robert Bosch GmbH and an AAIP Fellow. She's passionate about bringing together industry and academia.

Lydia's link to AAIP began when she worked with Professor Dr Simon Burton, an AAIP Fellow. Lydia and Simon were collaborating on ways to safely introduce machine learning to different applications in the automotive domain.

There was no ISO standard for AI and we both [Simon Burton and I] recognised the need for consensus between academia and industry.

Early work

Lydia first began working with Richard Hawkins and Ibrahim Habli on the paper "Confidence arguments for evidence of performance in machine learning for highly automated driving functions" (Burton, S., Gauerhof, L., Hawkins, R.D., Habli, I., and Sethy, B.).

We then welcomed Lydia to AAIP as a Fellow, which provided a direct connection to our York team, leading to the publication "Assuring the safety of machine learning for pedestrian detection at crossings" (Gauerhof, L., Hawkins, R.D., Picardi, C., Paterson, C., Hagiwara, Y., and Habli, I.). Lydia then worked as part of the expert team that reviewed the draft Assurance of Machine Learning for Autonomous Systems (AMLAS) guidance.

I was glad to be part of the team. It really helped me to gain a better and broader understanding as AAIP has strong cross-domain knowledge, not just in the automotive sector, but also learning, for example, from medical devices.

Further collaboration

To share the AMLAS guidance once published, Bosch convened two workshops facilitated by AAIP. The workshops were well received, and the guidance was then shared across different departments. "Universities focus more on methods, Bosch focuses more on products. Of course, methods are needed for development and here is where the value of universities comes in through their cross-domain expertise."

For Lydia, AMLAS provides a common resource and approach that can be shared across project teams, and it can serve as a credible basis for discussions about assuring safety. The alternative is to develop a bespoke safety case, which is typically well understood only by those who created it.

AMLAS is important as it provides the know-how to develop a safety argument. The structure really works; it is very clear for people who come from an AI rather than a safety background. There will be special details that are outside of the guidance which are domain specific. We can use the guidance as a sanity check, asking for example, ‘did AMLAS use the same argument?’

Find out more

Contact us

Assuring Autonomy International Programme

assuring-autonomy@york.ac.uk
+44 (0)1904 325345
Institute for Safe Autonomy, University of York, Deramore Lane, York YO10 5GH