
Staff spotlight: Dr Kate Preston


Posted on Thursday 21 August 2025

Dr Kate Preston is a Research Associate in Safe Autonomy with a specialism in human factors and ergonomics. In our latest staff spotlight, Dr Preston explains how her specialism is a critical ingredient in both the development and use of AI technologies, particularly in critical areas such as healthcare.

1. Can you tell us about your areas of research?

At the centre I work as a Research Associate in Safe and Human-centred AI, which is a pretty good description of my area of research. In short, I’m interested in ensuring that AI technology is not just safe technically, but is also safe for use in its chosen context. For me, that means bringing a human-centred approach into every stage of the AI development lifecycle. 

To do this I draw a lot from the discipline of Human Factors/Ergonomics which focuses on taking a systems approach to understanding interactions, and how work is done within a sociotechnical context. For the development of AI technology this can mean understanding its impact on current work practices, or how it will interact with other elements such as other tools and technologies or regulations presented by governmental bodies. 

2. What led you to start work in Human Factors/Ergonomics?

I think a lot of people who work in Human Factors/Ergonomics would say that they fell into the discipline. This is true for me as well. I started out, as many do, in Psychology, where I had a great experience of research during my undergraduate dissertation. This eventually led me to take a job as a research assistant in pharmacy health service research, which is where I had my first glimpse of the world of Human Factors/Ergonomics. However, it wasn't until I started my PhD that I really began to apply the discipline. 

During my PhD I focused on how Human Factors/Ergonomics could support the development of AI technology in healthcare. This is where I learnt the importance of applying a systems approach from the outset of development. My PhD also introduced me to the Chartered Institute of Ergonomics and Human Factors, and the special interest group in AI and Digital Health which I now co-chair alongside Professor Mark Sujan. 

All in all, I'm glad I ‘fell’ into this world, as I believe the discipline is crucial for ensuring that new innovations such as AI technology are safe and effective for use in their chosen context. 

3. Why is the human factors/ergonomics discipline so important to consider and include around the development and assurance of AI technologies?

AI technology in its current form is not set to replace everyday work, but to augment it. In healthcare, for example, it can support clinicians by conducting routine phone calls, while in maritime it can help support the process of pilotage. In healthcare, though, integrated AI will need to interact with a number of different elements in the sociotechnical context, such as clinicians and patients, technologies including electronic health records, and the guidelines and protocols associated with procedures.  

Therefore, we can’t think of AI in isolation, focusing solely on its technological development and assurance; we must also look at how it will interact with the wider sociotechnical context. This is why the discipline of Human Factors/Ergonomics is so important: it allows for an understanding of these interactions. 

4. One of your areas of interest is organisational readiness for AI technologies, can you tell us more about what this is and why it’s important?

The importance of having sufficient organisational readiness for AI technology was one of my PhD’s key conclusions, and it is now an area I advocate for regularly. In simple terms, organisational readiness is about whether an organisation, such as a hospital, is actually prepared to adopt AI technology. This means having factors such as the right infrastructure, technologies, processes and skills in place before the technology is developed. 

But why does this matter? Because without sufficient readiness, even the most impressive AI technologies can fail once integrated. In healthcare, this failure could be because the new AI cannot pull data from the electronic health record, meaning clinicians would need to do this manually. This could be a very time-consuming task, which may stop clinicians from actually using the technology. 

To figure out how to ensure sufficient organisational readiness, a systems approach can be taken, which looks at the interactions and at what needs to be in place before the AI is introduced. This way, when AI technology is ready to be integrated, it is more likely to succeed. I am taking this systems approach in my work at the CfAA, particularly in a project with Ufonia, where I am developing an in-depth understanding of the context where Dora (an AI-powered telephone assistant developed by Ufonia) will be integrated. Having this understanding should help ensure sufficient organisational readiness. 

5. What do you foresee as the next emerging issue in your field and how are you preparing for this?

I think there are two big challenges on the horizon. 

The first is the rise of ambient AI: technologies that work in the background without constant user input. A good example of this is ‘ambient scribes’, which can be used in healthcare to listen to a consultation between a clinician and patient and then generate notes automatically. In theory, this frees up clinicians to focus solely on the conversation, rather than also juggling note-taking. However, these systems do bring new challenges, like missing key details, generating inaccurate information or disrupting established workflows. At the CfAA, we’re exploring how a human-centred, systems-focused approach can help identify and address these challenges. 

The second challenge is figuring out how to demonstrate human-centred safety alongside current traditional safety processes. Progress has already been made here with the development of the BIG Argument, which highlights the importance of understanding the context where AI will be integrated. We are also working on a human-centred assurance framework, which should provide a clear, evidence-based approach for showing that AI has been developed from a human-centred perspective. 

6. Finally, where can we find you outside of the CfAA?

I play a lot of Scrabble. Put me in a coffee shop on a rainy day with a game of Scrabble and I'll be happy for hours. When not playing Scrabble, I'm normally making something crafty, or, when the Scottish weather permits it, enjoying a trip out on my paddleboard or a nice walk (especially if it ends with coffee and Scrabble).