“Rather than perceiving time as a continuum, we tend to think about our lives as episodes, creating story arcs from the notable incidents or chapters in our lives.” - Katy Milkman
“What you call ‘root cause’ is simply the place where you stop looking any further.” - Sidney Dekker

In order to explore improved ways of learning from incidents and near-misses, let’s look back at some of the common fallacies and errors that can lead us to learn the wrong lessons. After an incident has occurred, those charged with investigating and learning from it may be inherently biased. Being aware of these biases may be one of the best strategies for pursuing authentic learning. Before we consider any of the specific biases below, however, we should start with an incredibly influential, overarching bias that affects almost anyone reviewing an incident after it has occurred: hindsight bias.
Hindsight Bias
Hindsight bias is exactly what it sounds like - the tendency to look back at an incident through the lens of hindsight, with knowledge of the adverse outcome, and to believe that the event was foreseeable. This can look like people saying things such as “It was an accident waiting to happen” or “How could they not see it coming?” and, as a result, judging the participants in the incident more harshly. Anytime we look back at an incident after it has occurred, knowing how it turned out, we are potentially subject to the powerful influence of hindsight bias.
Understanding hindsight bias helps us see how the following biases and fallacies can further affect our ability to objectively analyze an incident after the fact. The four examples that follow are inspired by the work of Johan Bergström, a professor of Human Factors at Lund University.
Counterfactual Reasoning
Once you learn about counterfactual reasoning, it will start to appear everywhere you look, even in reports from well-established, mature organizations with long-standing risk management systems and personnel. Counterfactual reasoning occurs when we analyze and critique events that never actually happened - often in the form of steps that the people involved in the incident failed to take. The problem with this approach is that the actual incident gets compared to a parallel universe, with the insinuation that things would have gone differently if the preferred steps had been followed, which biases the investigation and analysis. Describing things that didn’t happen, labeling them as failures, and then linking them to the outcome frames the entire event in a way that perpetuates the hindsight bias we are already feeling. Hypothetical examples include:
“Despite being aware of incoming weather conditions, the mountaineering instructors failed to adjust their plans for the day, did not call in to headquarters for a weather report, and as a result, exposed their students to hazardous conditions when the storm caught them on the exposed ridge crest.”
“The field guides ignored signs of distress in the student group and failed to maintain control of the situation as prescribed in their training, leading directly to the altercation between two students.”
“Despite being clearly instructed on multiple occasions about the no swimming policy, campers chose to wade into the river anyway. Failure by the counselors to be strict about rules in the days leading up to the river incident created a loose attitude towards safety in the campers, which led to three campers being swept downstream. Had the counselors been more strict in enforcing rules earlier, this event could have been prevented.”
In these examples, rather than analyzing what did occur, the incident analysis focuses on a parallel universe in which completely different things should have happened. The implication is that if X had happened, then Y would not have happened. The problem, of course, is that we don’t actually know that to be true. This short-circuits learning and leaps straight to solving the problem, and it pits the people involved in the incident against an alternate reality in which everything went the way it was “supposed” to go. When the powerful influence of hindsight bias further amplifies this effect, the result is usually to ascribe blame to individuals.
Mechanistic Reasoning
Mechanistic reasoning occurs whenever we suggest that incidents are caused by malfunctioning components in an otherwise well-functioning, safe system - like a well-oiled machine with a defective sprocket. Mechanistic reasoning can lead to root cause analysis that searches for the broken component.
The way mechanistic reasoning usually goes is to provide a list of things that were working correctly, so as to exclude them from any further consideration - like a doctor performing a differential diagnosis: if we rule out potential causes one by one, then whatever is left must be the faulty component.
Incidents don’t routinely occur in linear, predictable ways. If we were working on repairing toaster ovens or lawnmowers, then mechanistic reasoning might be more applicable; but when we apply mechanistic reasoning to complex situations, we end up oversimplifying our own process of understanding, yielding simple “solutions” to what are in fact complex issues. An example of mechanistic reasoning in an Outdoor/Experiential Education program follows.
After a canoe capsizes in a Class II rapid on a backcountry expedition at a summer camp, program directors sit down to review the incident and attempt to learn from it. They determine that they had issued the proper paddles and life jackets, used well-maintained boats, loaded the boats properly, that the water levels were acceptable for the trip, that the students all knew the needed paddle strokes, and that the staff were not under the influence of drugs or alcohol when the incident occurred. All that remained on their list of possible causes was instructor error. Investigating this line of inquiry more thoroughly, they learned that one of the instructors confessed to “being a bit lackadaisical” in his mindset toward the rapids in question, as the group had already made it safely through much harder rapids earlier in the same trip. Program directors landed on “instructor carelessness” as the cause of the incident and determined that they would impress on staff the importance of being careful at all times in future training. (This was the last trip of the year, so there would be no further opportunities to retrain staff this season.)
The problem with this analysis, of course, is that it settles for a very simple story and proposes a simple solution that targets the low-hanging fruit of “complacency” by telling people to care more and try harder. But no one woke up that morning and decided to capsize a canoe that day. Telling people - especially front-line staff - to care more and try harder is not, ultimately, a very effective risk management intervention. Human error will be present in almost any post-incident analysis - according to Todd Conklin, you will find it 100% of the time. But if we stop looking there, we miss significant opportunities for deeper learning.
Mechanistic reasoning can operate in the opposite direction as well, focusing on something like equipment as the sole reason an incident occurred. I recently read a US Forest Service analysis of a chainsaw near-miss (“nicked chaps”) incident that focused exclusively on equipment as both the cause of, and the solution to, the incident, without considering the system factors and human factors that contributed to it, or that could lead it to recur. Focusing on (or fixing) the equipment alone does not solve the underlying issues.
Normative Language
Another common trap we see in incident analysis is normative language, which occurs whenever an incident investigator injects their own values into their description of someone else’s actions or inactions. Examples could include:
Instructors mismanaged their students’ layers of clothing for the wet, cold conditions, leading to hypothermia in several cases. By the time the entire group was cold and shivering, it was too late to adequately warm some students, and they had to return to camp earlier than planned.
Another way we see normative language in incident analysis arises from investigator speculation about the causes of actions or inactions, as follows:
Crew leaders’ insufficient supervision of crew members at night, and the subsequent missing persons, likely resulted from inexperience, complacency in their front-country setting, and knowledge that they could easily call 911 if needed. Ironically, being in a frontcountry setting made this crew less safe than they would have been in a backcountry campsite. To their credit, leaders appropriately called for help once they realized that some of the crew members were unaccounted for.
The investigator here is making a series of leaps and ultimately acting more like a judge of the incident than a curious analyst - identifying which actions were appropriate and inappropriate, and creating causal links based on those judgments.
Cherry-Picking
Cherry-picking, also known as confirmation bias, occurs whenever an investigator seeks out and highlights data that support a belief they already hold. Remember Hollnagel’s concept of WYLFWYF - “what you look for is what you find.” Cherry-picking is a natural human tendency in incident review, and perhaps in life in general. Interestingly, recent studies have shown that this is a cognitive bias that can be actively unlearned, but only through intentional effort.
To understand cherry-picking, we need look no further than how news has shifted from objective reporting to a type of entertainment produced for profit or for political purposes. Whether on the extreme right or the extreme left, cable news channels cherry-pick clips, stories, or parts of stories, present them out of their larger context, and use them to perpetuate a pre-existing narrative about one party or another. For example, were the events in Washington, DC on January 6, 2021 acts of patriotism, or insurrection? People’s worldviews around that event are powerfully influenced by which pieces of it, and which narratives around them, their news sources allow them to see.
Another example of cherry-picking: in this post, I have selected just a handful of the traps and biases that investigators may fall victim to. There are many more, too many for the needs and intent of this piece, so I have cherry-picked those that serve my focus here. Having reviewed these biases, we will now shift to a discussion of what might be a better way to learn from incidents and near-misses: Sensemaking.
Sensemaking
Sensemaking, defined by Karl Weick in The Social Psychology of Organizing, is “the process by which people give meaning to experience.” Weick drew from the field of organizational psychology to introduce the concept, which we all use in our daily decision-making.
Outdoor/Experiential Education programs can apply sensemaking to the process of reviewing and learning from incidents and near-misses after they occur. Essential questions to ask include:
How did people’s decisions and actions (or inactions) make sense to them at the time?
What did people see and experience that led them to respond as they did?
What surprised them about the situation they found themselves in?
What made the situation difficult, hard to interpret, or complicated?
What pressures were they feeling that contributed to the choices they made?
For example, let’s consider a situation in which two very experienced mountaineering instructors, along with a new instructor, end up working together on a student climb of a long, steep snow couloir in the North Cascades of Washington State. It’s a strenuous, meticulous ascent involving rope team travel, many transitions from one section to the next, and a final ascent up a short, almost-vertical headwall of snow.
The final stretch of climbing is so steep that both senior instructors wonder if they should belay it rather than have students slide their Prusiks (friction hitches) up a fixed line - but neither speaks up, each feeling confident that if there were a real concern, the other would raise it. Everyone is tired, it’s getting warm and late in the day to still be on the steep, exposed snow, and they are feeling time pressure to get on top of the peak and begin the long descent down the back side to return to camp in the basin below.
After several students have made it safely to the top, one student struggles to ascend the steep snow while sliding their Prusik hitch up to protect themselves. They climb above the Prusik, now tight against their harness belay loop, and eventually blow out the snow steps the other students had used to ascend the headwall, falling 5-6 feet (with slack and rope stretch) into the moat between the snow and rock, banging into the rock wall and twisting their knee. After being put on belay from above, the student is eventually able to climb to the top. The rest of the students are also belayed safely to the top.
When we apply the questions above to this situation, a useful understanding of what happened emerges:
How did the decisions and actions (or inactions) make sense to people at the time?
Each instructor felt that the fixed line setup was questionable given the steepness of the terrain, but that the other instructor would speak up if there were any concerns. Each felt that they would have probably belayed the students from the beginning if the other instructor hadn't been there to tacitly “approve” the fixed line approach.
What did people see and experience that led them to respond as they did?
Each instructor was feeling a sense of urgency to get the students up the couloir and down the snowy descent before the day got any warmer. This sense of urgency contributed to their desire to do the (quicker) fixed line rather than the (slower) belayed approach.
What surprised them about the situation they found themselves in?
They were surprised at how much the students struggled to manage their Prusik while moving their feet up pre-kicked steps. As long-time colleagues and friends, they were also surprised at how each one was so deferential to the other that neither one spoke up with their concern at the time about the fixed line strategy.
What made the situation difficult, hard to interpret, or complicated?
Neither instructor felt that the situation was particularly hard to interpret or complicated, but both felt some reluctance to change the plan; neither one wanted to be the one to do it.
What pressures were they feeling that contributed to the choices they made?
Both of the instructors were aware of the time of day, the warming snowpack, and the long descent they needed to do to return to camp, so were feeling time pressure to move the students as quickly and simply as possible through the headwall and down the back side of the peak.
This line of questioning can reveal new ways of seeing incidents and near-misses, or other challenges at work, more from the worker’s perspective. Sometimes what we learn resonates with Todd Conklin’s question: Do people make bad choices, or do they HAVE bad choices? The next logical step is to ask, “How can we, as an organization, help people have better choices available to them than they had (or realized they had) in this situation?”
Implications for Outdoor/Experiential Education Programs
Teach your staff and leadership team about the preceding common fallacies: hindsight bias, counterfactual reasoning, mechanistic reasoning, normative language, and cherry-picking.
When reading incident reports shared by the media or by peer organizations, look for the common fallacies and consider if there may be other lessons to be learned from these events.
Look for elements in your incident reporting forms, systems, and organizational routines that may amplify the common fallacies (for example, some organizations ask staff to identify in a sentence “why the incident occurred,” an invitation to mechanistic thinking).
If your organization has external incident reporting requirements (such as those required by an accrediting body, etc.), be careful about the ways in which fulfilling these requirements may drive you towards the common fallacies, seeking out a quick, simple story instead of deeper learning.
The more serious an incident is, the more we may be impelled towards the fallacies as a way of understanding and documenting what we learned from it. I have read many quarterly and annual risk management reports from organizations that rely on normative language (such as “instructors failed to accurately assess the conditions”).
Teach and practice sensemaking after an incident occurs. Rather than asking “How could they have been so stupid?” ask “How did their decisions make sense to them at the time?”