Don’t trust the autopilot

On June 1, 2009, Air France Flight 447 from Rio to Paris crashed into the Atlantic, killing all 228 people on board.

Nine months earlier, 25,000 people had lost their jobs at Lehman Brothers, a 158-year-old US bank forced to file for bankruptcy. Its collapse sparked turmoil in the financial markets that is still being felt today.

So how are these two events linked?

Both were caused, in part, by what is known as the ‘autopilot problem’, and it lies at the heart of a study carried out by academics in partnership with MS Amlin and reported in a series of White Papers.

In the case of the Air France tragedy, the plane was on autopilot with the pilots acting only as overseers. When a technical failure dropped them into an emergency, they didn’t have time to identify the problem before the plane hit the waters of the Atlantic Ocean.

As for Lehman Brothers, the financial models used to measure risk turned out to be flawed. As the authors of the MS Amlin study point out: “The collapse in confidence, credit and liquidity was far greater than the models ever imagined, while the credit agencies gave AAA ratings to investments that turned out to be worthless.

“Bankers and investors had to come up with some other way of assessing the true risk of investments and corporations, lest credit dry up completely and the crisis become self-sustaining. They failed and the world was set for many years of recession.”

The financial crisis that followed led some in the insurance industry to ask whether the same thing could happen in their own market. After all, underwriters use models to calculate risk; indeed, they are encouraged by the regulators to do so.

But what if, like the pilots of Air France Flight 447, they become too reliant on models to make decisions, becoming overseers of the situation, rather than actively engaging their own skill, judgement and experience?

What if this reliance becomes the norm over time? Is there a risk that human skills, sharply honed over many years, become blunt and obsolete?

And if the models are flawed, could there be a systemic failure in the insurance market, similar to the 2008 financial crisis?

Anders Sandberg, a respected neuroscientist, and one of the White Paper’s authors, looked in detail at how underwriters interact with the models, and how they arrive at their decisions.

“The underwriter’s job is a lot more than moving numbers around spreadsheets,” he says. “They also need to maintain their ability to question the results the models are giving them.

“At some point there has to be a reality check, but it’s a complex problem that has many facets to it.”

Anders and his colleagues narrow down the ‘autopilot problem’ to four main elements.

Loss of situational awareness – in the Air France accident, the pilots didn’t realise the instruments had failed. They didn’t notice the discrepancy between the readings and the reality because they hadn’t been flying the plane.

In the financial crisis, the true value of the assets being traded was buried under many layers of calculations, and traders had little idea of the worth of what they were buying and selling.

Skills degradation – maintaining expert status requires practice. Experts must be put into situations where they receive frequent feedback about their actions. Without this feedback, expertise and skills degrade and performance deteriorates. In other words, if you’re over-reliant on a model, you can end up losing the skills needed to arrive at rational, considered decisions.

Human error: misplaced trust and complacency – put simply, humans are prone to take ‘cognitive shortcuts’ where they can. Rather than slowly working our way through the pros and cons, we prefer shortcuts that cut out some of the heavy lifting. Models only exacerbate this problem.

And because most autopilot users experience only the successes, never the failures, they place too much trust in the models, believing these will always deliver the outcomes they need.

Unreliable modal estimates – the most likely estimate a model gives, known as the ‘modal estimate’, is often itself known to be inaccurate. Yet it is frequently this inaccurate estimate that gets used. This illustrates a core feature of human decision-making: far from being rational and objective, our decisions are heavily influenced by assumptions, biases and feelings. In the case of modal estimates, people feel more psychologically comfortable trusting something that looks like a precise numerical figure, even if that precision is in fact spurious.
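To see why this matters in practice, here is a minimal sketch (our own illustration with invented numbers, not taken from the White Paper) of how far a modal estimate can sit from the figures that actually drive risk, assuming a simple lognormal loss distribution:

```python
# Hypothetical illustration: for a skewed loss distribution such as a lognormal,
# the single "most likely" outcome (the mode) sits well below the mean and far
# below the tail losses that drive insurance risk.
import numpy as np

mu, sigma = 0.0, 1.0                # assumed lognormal parameters (illustrative only)
mode = np.exp(mu - sigma**2)        # most likely single outcome: ~0.37
mean = np.exp(mu + sigma**2 / 2)    # expected loss: ~1.65
p99 = np.exp(mu + sigma * 2.326)    # approximate 1-in-100 loss: ~10.2

print(f"mode = {mode:.2f}, mean = {mean:.2f}, 99th percentile = {p99:.2f}")
# Anchoring on the mode (~0.37) understates the expected loss roughly 4.5-fold
# and the 1-in-100 loss by more than an order of magnitude, even though the
# model itself is perfectly "correct".
```

The point is not the specific numbers, which are invented, but that a precise-looking single figure can sit far below the losses an underwriter actually needs to worry about.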

Models have become the source of ‘standard wisdom’ within the insurance industry. For an individual underwriter, deviating from this ‘group think’ is perceived as a significant personal risk: going out on a limb and getting it wrong is seen as carrying serious personal and reputational consequences.

The autopilot problem is not limited to airline pilots and financial markets. Driverless cars will soon be a reality, and doctors already use algorithms to help them make diagnoses. This is clearly an issue that affects everyone.

Models are here to stay in the insurance industry. Making their use safer and more reliable is the next step for the MS Amlin working party team. And it’s the topic of our next article in this series on the systemic risk of modelling.

Download our Systemic Risk of Modelling research here.
