Spotlight profile: Dr Nathan Hughes
Dr Nathan Hughes’ work on understanding how people make decisions with technology is crucial for building upon the research into human-centred autonomy undertaken at the CfAA. Learn why their investigations into how and why humans make decisions are so important in designing safe AI systems across distinct domains.

How has your interest in human decision-making fed into your current role as Research Associate in Human-centred Explainable AI?
My interest in human decision-making began whilst studying for my integrated master’s in psychology at the University of York. My final-year project looked at how people use social space as a way to show their trust towards autonomous vehicles - the main hypothesis being that if people trust autonomous vehicles, they will drive closer to them. This is similar to how people use social space in everyday life; if you trust someone, you tend to stand closer together. Unfortunately I couldn’t find any evidence of the effect in my project, but it did spark my continued enjoyment of trying to understand how people make decisions with technology. Mostly, that humans are not very predictable in how much they will or will not trust autonomous technology!
For my PhD I investigated how people experience decision-making in open-world video games such as Skyrim or The Witcher 3, where players can freely explore and interact with the environment. Here I had the opportunity to really dig into how humans make decisions in a game setting where they had free rein to do anything they wanted at any time. I had the chance to apply a lot of fascinating theories about human decision-making during my thesis, such as how people put their goals and motivations into action during gameplay, and how we can measure these behaviours. These experiences helped me understand not only how we can measure interactions with technology at the action level, but also how we can link these actions to more abstract concepts that cannot be measured directly, such as goals and values (i.e. the reasons behind the actions). This is important in my current work, because concepts like trust and the understandability of AI are complex, but there are patterns in behaviour we can observe that show these concepts in action.
What have you learnt from working across more than one domain?
Over the years I have had the chance to work in a number of domains, including autonomous cars, gaming, civil aviation, and healthcare. I have learnt it is always important to understand the specific context of a domain when we want to introduce new ways for humans to work alongside AI. We do not want to step on any toes, as it were, by interfering with existing (and safety-critical!) processes that ensure a system remains safe during operation. For example, if we wanted to design a system to help prevent drivers from accidentally applying the brakes, it would not be good if this feature also prevented them from stopping in an emergency! By collaborating with a wide range of teams and people within these domains, we reduce the chance of making these kinds of choices with unintended consequences.
I have also learnt that decision-making is different when the intended user of an AI system is an expert in their field. Expert human decision-makers - such as Air Traffic Controllers (ATCOs) and Intensive Care Unit (ICU) consultants - are so highly trained that they tend to mentally group the variables they take into account when making a decision. This allows them to form high-level strategies they can use quickly and efficiently, which is great for their performance. However, it makes it more difficult to design technology for them that will work alongside these complex strategies. It may be more difficult, but it is also more rewarding when you gain insights into how the human mind works.
Overall, whilst each of these domains has unique challenges, all of them fundamentally involve humans interacting with complex technologies that give rise to a variety of experiences and behaviours. In this way, they are more alike than they may at first seem. We can gain insights about human behaviours to inform the design of new AI technologies, whilst also taking into account how systems currently work. This prevents us from designing technology that does not meet the user’s needs, or that is out of place in the context of the other work the user has to do.
Why do you think the link between human-computer interaction (HCI) and safety is so important?
At the end of the day, we build technology to help people. If we don’t keep our intended users in mind when building an AI system, it’s likely they will either not use it, or find ways to make it work for them that we didn’t anticipate. In the first case, it would be a little pointless to build a system nobody wants. In the second case, and especially in safety-critical environments like healthcare, this could be dangerous. From my background in psychology and my ongoing work in HCI, I have found that one part of assuring a system will be safe is understanding and making evidence-based assumptions about how people will interact with the AI. If our findings do not match how people act in real life, we may not be able to guarantee the safe use of an AI system’s design. For this reason, understanding human decision-making when using an AI is critical to making sure we design a system that is usable and understandable to the people who use it. For example, my work with ICU consultants found that knowing what data an AI uses was important to them. This is because consultants need to know that the data reflect what they consider to be the important factors when making decisions about a patient, in order to make sense of any output from the AI system.
Keeping intended users at the core of AI system development is known as user-centred design, and it is one of the main ways we can apply an HCI lens to developing safe AI systems. In doing so, we are more likely to design a system that can be interacted with predictably, and, in turn, safely. Building on the previous example, ICUs are places full of data where a patient’s status is constantly updating, presenting lots of information an AI system could use. When talking to ICU consultants, some flagged the importance of an AI system being able to handle this incoming data in real time, as otherwise it may make suggestions based on out-of-date data. Knowing how old the data used to make a prediction are, and displaying this to consultants for them to inspect, was explored as one potential user-centred design solution.
What are some of the most interesting findings that have come out of your projects and why?
I’ve learnt so many fascinating things about how humans interact with AI during my projects. For me, the most interesting ones tend to be about understanding what it is about the current way we do things that influences how people perceive and want to use AI. For example, when talking to ATCOs, I learnt that they face a rather unique problem in comparison to other safety-critical domains. Namely, there is no way to stop the planes in the sky. This is in contrast to many other systems that are able to base their safety arguments on the fallback of, “if the AI system goes wrong, things can be stopped and passed back to humans until it can be resolved.” However, this can’t be so in air traffic control, because the planes will all still need to be safely guided back to the ground. Therefore, building AI systems for aviation can be even trickier than in other domains like autonomous cars, because there are no hard shoulders to pull into if things go wrong!
Another interesting finding concerns the ways in which ICU consultants make decisions about patients under their care. There has been high demand from policymakers, stakeholders and researchers alike to build AI systems for ICUs, because the work is so critical and, as the name implies, intensive. It would be great if we could build systems to support healthcare professionals, but it is not an easy task. ICU consultants talk frequently about a need to ‘eyeball’ the patient to make decisions about their treatment. However, this eyeballing is not done by looking at healthcare data in the way that AI systems do. This means the human clinicians are using extra information that the AI system is not, which can make it more difficult to trust the AI. As I found from working with ICU clinicians, they would still want to see the patient before making a decision, and may not trust the AI system on its own to be correct. This puts restrictions on how an AI system could be used in ICUs, as it would need to work alongside the clinician’s ability to make decisions about a patient by inspecting them visually.
Finally, where can we find you when you’re not working?
Outside of work I am a performance poet, so I can usually be found at an open mic night around York (and beyond!). I also enjoy writing fiction and poetry in coffee shops, listening to progressive rock music, and helping out at the local LGBTQ+ cafe.