MS Amlin explores the systemic risk of modelling

Recent hurricanes have brought widespread attention to the insurance industry and the final bill it faces for the rebuilding operation. Here at MS Amlin we are working to understand the potential problems of relying too heavily on models to assess risk, and whether that reliance might pose a threat to the system as a whole.

The devastation left in the wake of hurricanes Irma and Harvey has reminded us all of the power of nature and its impact on people’s lives and livelihoods.

It’s also put the insurance industry under the spotlight.

The full extent of the damage, and therefore the final cost for insurers and reinsurers, is unlikely to be known for some time, but it’s certain that the losses will be significant, and they may well have a longer-term impact on the insurance market as a whole.

It will inevitably reignite the debate about how much climate change is to blame for the force and frequency of such weather-related catastrophes. And, of course, whether we only have ourselves to blame for the changing climate.

Less likely to be debated is the social role insurance plays in our lives. After all, it is insurance that oils the wheels of the whole recovery process.  

So it’s more important than ever that we have a robust, sustainable insurance industry that can cope with the demands made on it.

Flawed tools
We all know what happened when the banking industry took risks it didn’t understand, using tools that were flawed. The effects of the 2008 financial crisis are still being felt today.

Given the critical role insurance plays – not only in financial markets, but more broadly in people’s lives – it’s crucial that we have an insurance market that works.

One of the lessons of 2008 is the danger of over-reliance on models, and it is that lesson which prompted MS Amlin to commission research into the relationship between models and the underwriters who use them.

The aim has been to deepen our understanding of the systemic risk of modelling. Our ground-breaking work with the Future of Humanity Institute at the Oxford Martin School laid bare the potential risks of over-reliance on models.

After all, as our original report pointed out “the catalyst for this financial meltdown was the bursting of the US housing bubble which peaked in 2004, causing the value of securities tied to the US housing market to plummet and damage financial institutions on a global scale.”

The value of these securities was based on modelling assumptions that were fundamentally flawed. And since everyone was using the same models, the consequences spread across the market.

What sets this work apart is that we used real-world tasks, performed by the people who carry them out every day – the underwriters.

Understanding the relationship between the humans – in our case, our underwriters – and the computer models was one of the cornerstones of the original research.

Cognitive bias
Cognitive bias is a widely observed human tendency suspected to operate during the information processing and decision-making that characterise underwriting. Anders Sandberg, one of the authors of the original report, conducted what we believe to be the first pilot study to explore whether the phenomenon can be observed in real-world insurance decisions.

The study set out with three key questions:

  • To what extent do underwriters rely on modelled output as opposed to their own judgement when estimating expected loss-on-line?
  • To what extent do underwriters rely on market price when estimating risk?
  • To what extent does anchoring occur when underwriters estimate risk?

Five-part study
Underwriters were invited to take part in the five-part study. Given the material standard in a broker’s submission – modelled output, loss and claims history, market price and so on – they were asked to estimate loss costs under varying conditions: with and without access to the modelled output, with and without access to market prices, and with varying amounts of other information.

As well as trying to understand how much underwriters rely on models and market price, the study looked for evidence of anchoring bias. It also tested for recognition-primed decision-making (making a fast judgement of a situation rather than deliberating carefully) by giving participants either five minutes or 60 minutes to analyse a scenario.
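
To make the reliance and anchoring questions concrete, here is a minimal sketch in Python – using invented figures and an illustrative metric, not the study’s actual analysis – of how far an estimate moves towards the modelled output once that output is shown:

    from statistics import mean

    # Hypothetical (estimate without the model, estimate with the model,
    # modelled output) triples for the same scenario, as expected
    # loss-on-line percentages. All figures are invented for illustration.
    responses = [
        (4.0, 5.5, 6.0),
        (3.2, 3.4, 7.0),
        (5.0, 5.8, 6.5),
    ]

    def anchoring_index(without, with_model, model):
        """0 = the model changed nothing; 1 = the estimate moved fully to it."""
        gap = model - without
        return (with_model - without) / gap if gap else 0.0

    print(f"Mean pull towards the model: {mean(anchoring_index(*r) for r in responses):.2f}")

A value near zero would suggest underwriters hold to their own judgement; a value near one would suggest the modelled output acts as an anchor.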

Also under the spotlight were consistency – some scenarios were repeated, with other scenarios in between, and the results compared – and consensus: how much variation there was between different underwriters’ answers to the same scenario.
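
For illustration only – again with invented numbers rather than study data – consistency and consensus could be summarised along these lines:

    from statistics import mean, stdev

    # Consistency: how closely an underwriter's two answers to the same
    # repeated scenario agree. Hypothetical (first pass, second pass) pairs.
    repeats = {"A": (4.0, 4.4), "B": (6.0, 5.1), "C": (3.5, 3.6)}
    drift = {u: abs(a - b) / mean((a, b)) for u, (a, b) in repeats.items()}
    print("Relative drift per underwriter:", drift)

    # Consensus: how much answers to one scenario vary across the panel.
    panel = [4.2, 5.5, 3.9, 6.1]
    print(f"Coefficient of variation across the panel: {stdev(panel) / mean(panel):.2f}")

Low drift would point to consistent individual judgement; a low coefficient of variation would point to consensus across the market.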

What did we learn?
Firstly, underwriters seemed to rely less on modelled output than was feared. They also tended to ignore market price when estimating risk, at least in the scenarios they worked on here.

When underwriters had enough evidence to base their decisions on, they tended to avoid anchoring bias – that is, taking the modelled output as the starting point for their assessment of risk.

Particularly encouraging was the evidence that underwriters are generally skilled enough to avoid many of the widely observed pitfalls caused by cognitive bias.

More work needed
We believe the evaluation of systemic risk needs a research programme that develops these ideas further, exploring the relationship between the models and the people who use them in insurance decision-making. Pertinent topics include how company methods and culture affect the way underwriters use models, and how underwriters’ decisions are influenced by internal and external factors.

Only then can we understand the potential risk posed to the whole insurance system by this delicate relationship between models and humans.
