
Imagine a pilot is taking a familiar flight along a known route when the weather takes a turn for the worse. She knows that flying through the storm comes with some serious risks – and according to her training, she should take a detour or return. But she has flown the same route before, in similar weather, and she didn’t experience any problems then. Should she continue? Or should she turn back?

If you believe that she is safe to fly on, then you have fallen for a cognitive quirk known as the “outcome bias”. Studies have shown that we often judge the quality of a decision or behaviour by its end result, while ignoring the many other factors – including pure chance – that might have contributed to success or failure. This can render us oblivious to potentially catastrophic errors in our thinking.

In this example, the decision to take the previous flight was itself very risky – and the pilot may have only avoided an accident through a combination of lucky circumstances. But thanks to the outcome bias, she might ignore this possibility and assume that either the dangers had been overrated, or that it was her extraordinary skill that got her through, leading her to feel even happier taking the risk again in the future. And the more she does it, the less concerned about the danger she becomes.

Besides encouraging ever riskier decision-making, the outcome bias can lead us to overlook incompetence and unethical behaviour in our colleagues. And the consequences can be truly terrifying, with studies suggesting that it has contributed to many famous catastrophes, including the crash of Nasa’s Columbia shuttle and the Deepwater Horizon oil spill.

The end, not the means

Like much of our understanding of human irrationality, the outcome bias was first observed in the 1980s, with a seminal study of medical decision-making.

Participants were given descriptions of various medical scenarios, including the risks and benefits of the procedures on offer, and were then asked to rate the quality of the doctors’ judgement.

The participants were told about a doctor’s choice to offer a patient a heart bypass, for instance – potentially adding many more years of good health, but with a small chance of death during the operation. Perhaps predictably, the participants judged the doctor’s decision far more harshly if they were told the patient had subsequently died than if they were told the patient had lived – even though the benefits and risks were exactly the same in each case.

The outcome bias is so deeply ingrained that it’s easy to understand why the participants felt the doctor should be punished for the patient’s death. Yet their reasoning is not logical: there was no better way for the doctor to have weighed up the evidence, and at the time of making the decision there was every chance the operation would be a success. Once you know about the tragedy, however, it’s hard to escape the nagging feeling that the doctor was nevertheless at fault – leading the participants to question his competence.

“We just have a hard time dissociating the random events that, along with the quality of the decision, jointly contribute to the outcome,” explains Krishna Savani at the Nanyang Technological University in Singapore.

The finding, published in 1988, has been replicated many times, showing that negative results lead us to blame someone for events that were clearly beyond their control, even when we know all the facts that excuse their decision-making. And we now know that the opposite is also true: thanks to the outcome bias, a positive result can lead us to ignore flawed decision-making that should be kept in check, giving people a free pass for unacceptable behaviour.

In one experiment by Francesca Gino at Harvard Business School, participants were told a story about a scientist who fudged their results to prove the efficacy of a drug they were testing. Gino found that the participants were less critical of the scientist’s behaviour if the drug turned out to be safe and effective than if it turned out to have dangerous side effects. Ideally, of course, you would judge both situations equally harshly – since an employee who behaves so irresponsibly could be a serious liability in the future.

Such flawed thinking is a serious issue when it comes to decisions like promotions. It means that an investor, say, could be rewarded for a lucky streak in their performance even when there is clear evidence of incompetent or unethical behaviour, since their boss is unable to disconnect their decision-making from their results. Conversely, it shows how a failure can subtly harm your reputation even if there is clear evidence that you acted appropriately based on the information at hand.

“It’s a big problem that people are either being praised, or being blamed, for events that were largely determined by chance,” says Savani. “And this is relevant for government policy makers, for business managers – for anyone who's making a decision.”

The outcome bias may even affect our understanding of sport. Arturo Rodriguez at the University of Chile recently examined pundits’ ratings of footballers on Goal.com. In games that had to be decided by penalty shootouts, he found that the result of those few short minutes at the end of the game swayed the experts’ judgements of the players’ performance throughout the whole match. Crucially, that was true even for players who had not taken part in the shootout. “The result of the shoot-out had a significant impact on the individual evaluation of the players – even if they didn’t participate in it,” Rodriguez says. They could simply bask in the victory of others.

Near misses

The outcome bias’s most serious consequences, however, concern our perceptions of risk.

One study of general aviation, for instance, examined pilots’ evaluations of flying in perilous weather conditions with poor visibility. It found that pilots were more likely to underestimate the dangers of a flight if they had just heard that another pilot had successfully made it through the same route. In reality, the first pilot’s success is no guarantee of a safe passage for the second flight – they may have only made it through by luck – but thanks to the outcome bias, the pilots overlooked this fact.

Catherine Tinsley, at Georgetown University, has found a similar pattern in people’s responses to natural disasters like hurricanes. If someone weathers one storm unscathed, they become less likely to purchase flood insurance before the next disaster, for instance.

Tinsley’s later research suggests that this phenomenon may explain many organisational failings and catastrophes too. The crash of Nasa’s Columbia shuttle was caused by foam insulation breaking off an external tank during the launch, creating debris that punched a hole through the wing of the orbiter. Foam had broken off on many previous flights, however – but by lucky circumstance it had never before caused enough damage to bring down a shuttle.

Inspired by these findings, Tinsley’s team asked participants to consider a hypothetical mission with a near miss and to rate the project leader’s competence. She found that emphasising factors like safety and the organisation’s visibility made people more likely to recognise the event as a warning sign of a potential danger. The participants were also more conscious of the latent danger if they were told they would have to explain their judgement to a senior manager. Given these findings, organisations should emphasise everyone’s responsibility for spotting latent risks and reward people for reporting them.

Savani agrees that we can protect ourselves from the outcome bias. He has found, for instance, that priming people to think more carefully about the context surrounding a decision or behaviour can render them less susceptible to it. The aim should be to consider the particular circumstances in which the decision was made and to recognise the factors, including chance, that might have contributed to the end result.

One way to do this is to engage in counterfactual thinking when assessing your own or someone else’s performance, he says. What factors might have led to a different outcome? And would you still rate the decision or process the same way if that had occurred?

Consider the case of the scientist who fudged their drug results. Even if the drug proved safe in the end, imagining the worst-case scenario – with patient deaths – would make you more conscious of the risks they were taking. Similarly, if you were the pilot who chose to fly in unsuitable conditions, you might look back at each flight to examine any risks you took and to think through how they might have played out in different circumstances.

Whether you are an investor, a pilot or a Nasa scientist, these strategies to avoid the outcome bias will help prevent a chance success from blinding you to dangers in front of your eyes. Life is a gamble, but you can at least stack the odds in your favour, rather than allowing your mind to lull you into a false sense of security.
