If we want to use AI to make better decisions on the fire ground, we should first understand how people make decisions. Let’s talk about that, and about four elephants in the room.
This article summarizes a couple of points from a paper I have written with Dr Richard Gasaway and Gerke Spaling. The initial working paper won a ‘Best working paper’ award at the Hybrid Human-Artificial Intelligence Conference in 2022. A follow-up paper is still in progress; in the meantime, I have presented on this topic in various venues to draw attention to the human challenges of using AI.
A fear of mine set me on a journey to use more and better information during incidents:
"I'm afraid that something will happen to me, one of my colleagues or the people we serve, and later we will find out that we had the information to prevent this from happening, but could not get it to the right person, in the right form on time."
The rise of open data around 2008 was the source of this fear. I recognized that a lot of information about the area I worked in was available out there on the web, but we were not using it.
The environment
The real challenge lies in interacting with this information in the environment we work in: a high-risk, high-consequence and time-compressed environment. Working in such an environment puts your brain in a different operational mode. Instead of weighing all your options with their pros and cons, your brain operates almost fully automatically, making decisions based on your experience and training.
This means that you use automated mental models to gather information about an incident scene and assemble a picture. That picture helps you create an understanding of what is going on. Based on this understanding, you try to predict the best course of action to tackle the incident. This whole process takes place almost automatically, allowing you to respond quickly and decisively.
For the relatively simple day-to-day incidents, this works mostly fine. The real problems start when incidents become more complex.
The statistical trend is that incidents are becoming less frequent, but the ones that do occur tend to be more complex.
Complex incidents expose human factors
The more complex incidents share a common thread in their reviews: human factors play a critical role in how the incident is handled. An important part of human factors is situational awareness, the understanding of your surroundings. The definition of situational awareness according to Situational Awareness Matters is:
The ability to perceive and understand what is happening around you in relation to how time is passing, and then be able to accurately predict future events in time to avoid bad outcomes.
Earlier I described how we perceive and understand the world. This process is highly automated, but not free from problems: there are more than 120 (!) barriers to situational awareness. So as the complexity of an incident increases, your brain is not really assisting you; it actually makes it harder to obtain proper situational awareness. The use of technology is itself one of those barriers, and yet AI is advertised as the solution to these problems.
Can AI really help?
This is where the four elephants enter the show.
Predicting what will happen next is one of the big promises of AI. The claim is that if you feed enough data into an AI system, it will be able to tell you what will happen next on a fire scene, or which piece of information is relevant. And although that is exactly what we want, the reality is probably a bit more sobering.
Data quality
It is no secret that fire services around the world struggle to produce quality data about their operations. For an AI system to be properly trained, it needs a large amount of good-quality data. Both the amount and the quality of the data currently available are problematic at best. There is no proper data model standard in common use, and on top of that we barely have proper definitions for the elements we record. The result is the classic garbage-in, garbage-out problem.
Outliers
Even if data were available in good quality and sufficient amounts, the reality is that the fire service responds to outliers. Every incident is unique in its own way; if there were a common theme across many incidents, it would result in policies to mitigate them.
The current focus on alternative energy, both in building installations and in mobility, will create new challenges for the fire service. And since the fire service is an all-hazards agency, climate change will bring new and unknown problems for you to solve. There is a fair chance that the combination of factors relevant to the next complex incident you attend is unique to that incident: an outlier.
Personal context
As a first responder, you have your own experience and training, which results in a very personal way of making decisions. The environment in which you need to make these decisions puts your brain in an almost fully automated decision mode. So even if the data quality were good enough to accurately help in these outlier situations, it would be very challenging to integrate that help into the automated decision-making mode of your brain.
This requires the AI system to have detailed knowledge about you, the first responder it is trying to help. And beyond the poor data collection at scale, data gathering at the personal level, who was where and did what on the fire ground, is close to non-existent. There is simply nothing for an AI system to learn from!
Legal and ethical aspects
If we send you out to an incident with an AI system as your assistant, have we considered the ethical and legal implications of that assistant? What if you make a fatal decision based on the advice of that AI system? Who is responsible? That is a legal aspect no one seems to talk about yet. And what are the effects on you when you become aware of the fatal outcome of a decision you made based on AI input? An ethical aspect that is probably talked about even less.
Maybe we should focus on where things go wrong
So the promise that AI can help us make better decisions sounds great, but it ignores some challenges that are clearly present. Could there be an alternative way in which AI could help?
If you look at incidents where first responders get into trouble, you will find a common thread: “We did not expect that.” If you consider the way first responders make their decisions and build their situational awareness, it is easy to see where that problem comes from. The picture they created, which was the basis for their decisions, was flawed. A common cause of this flawed situational awareness is the assumption that they understand the situation without really checking what is going on.
What if there were an AI system that asks you, the first responder, questions that prompt you to really check what is going on? This has been coined ‘Adjunct AI’ in recent research. Instead of telling you exactly what to do, which is nearly impossible, it would serve as an assistant, an adjunct, that sparks your curiosity to check what is actually going on.
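To make this idea a little more concrete, here is a minimal, hypothetical sketch of what an adjunct assistant could look like. The incident types, check names and questions below are my own illustrative assumptions, not something prescribed by the research; the point is the design choice that the system returns verification questions about gaps in the picture, rather than recommended actions.

```python
# Hypothetical sketch of an 'Adjunct AI': instead of recommending actions,
# it surfaces questions that prompt the responder to verify their picture.
# The incident types, check names and questions are illustrative only.
from dataclasses import dataclass, field


@dataclass
class Incident:
    type: str                                             # e.g. "structure_fire"
    observations: set[str] = field(default_factory=set)   # checks already done


# Verification questions, keyed by the check that is still missing.
VERIFICATION_QUESTIONS = {
    "structure_fire": {
        "360_walkaround": "Has anyone completed a 360 of the building, and what did they see?",
        "occupancy_confirmed": "How do you know whether anyone is still inside?",
        "utilities_located": "Where are the gas and electricity shut-offs?",
    },
    "hazmat": {
        "substance_identified": "What confirms the identification of the substance?",
        "wind_direction_checked": "When was the wind direction last checked?",
    },
}


def adjunct_prompts(incident: Incident) -> list[str]:
    """Return questions for checks not yet reflected in the incident picture."""
    questions = VERIFICATION_QUESTIONS.get(incident.type, {})
    return [q for check, q in questions.items() if check not in incident.observations]


if __name__ == "__main__":
    scene = Incident(type="structure_fire", observations={"360_walkaround"})
    for question in adjunct_prompts(scene):
        print("-", question)
```

Whether the questions come from a simple rule set like this or from a model trained on incident data, the design choice is the same: the assistant challenges your picture of the incident instead of replacing your judgement.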
Although this is just one direction we could take to tame the elephants, it is important that we start having this discussion in the fire service in the first place. What I do see is that, amid all the enthusiasm for technology, these elephants are rarely discussed.
Making it human again
Only a small portion of the challenges of AI in emergency response is technical in nature.
Human factors are the bigger issue when we use technology: it is genuinely hard to tap into the decision-making process and influence the actions of first responders.
We all know that first responders should wear seatbelts in firetrucks, but do you?
When adopting AI in the fire service, besides taming the four elephants, we need to make sure we focus on the human factors first.