by Judith Curry
Roger Pielke Jr. has a very interesting post on uncertainty in catastrophe modeling. The basis for the post is an interview with Karen Clark, who developed the first catastrophe model and is worried that these models are being given more credit and influence than they deserve.
Background
A good overview of catastrophe modeling is provided by this document from RMS.
From the web page of Karen Clark & Co.:
Catastrophe Risk
While it’s virtually impossible to predict when or where the next catastrophe will occur, companies with exposure to loss need to be prepared for the types of events that could occur. They need to know the full range of possible future loss scenarios. They need tools to assess and manage their catastrophe risk.
Catastrophe Models
Many companies rely on catastrophe models to assess and manage catastrophe risk. Catastrophe models are very detailed and complex and they incorporate the science underlying the occurrences of catastrophes and the engineering knowledge to estimate the damage caused by catastrophic events. The models use statistical techniques to generate large samples of hypothetical future events. For each of these events, the models simulate the intensities by location and then estimate the damage to the exposed properties at each affected location.
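The simulation loop described above — generate large samples of hypothetical events, simulate intensities, estimate damage to exposed properties — can be sketched as a minimal Monte Carlo model. Everything below is illustrative: the event rate, intensity distribution, and vulnerability curve are made-up assumptions for the sketch, not values from any commercial model.

```python
import math
import random

random.seed(7)

# Hypothetical parameters -- illustrative assumptions, not real model values.
ANNUAL_EVENT_RATE = 1.7        # expected catastrophe events per year
EXPOSED_VALUE = 250_000_000    # insured property value at one location ($)

def poisson(lam):
    """Sample an annual event count (Knuth's algorithm)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= random.random()
    return k - 1

def simulate_year():
    """One hypothetical year: draw events, simulate intensity, estimate damage."""
    loss = 0.0
    for _ in range(poisson(ANNUAL_EVENT_RATE)):
        intensity = random.weibullvariate(1.0, 1.5)   # hazard intensity
        damage = min(1.0, 0.05 * intensity ** 2)      # assumed vulnerability curve
        loss += damage * EXPOSED_VALUE
    return loss

# Generate a large sample of hypothetical years, then read off the range of
# outcomes rather than a single deterministic answer.
losses = sorted(simulate_year() for _ in range(10_000))
median_loss = losses[len(losses) // 2]
loss_1_in_100 = losses[int(0.99 * len(losses))]
print(f"median annual loss: ${median_loss:,.0f}")
print(f"1-in-100-year loss: ${loss_1_in_100:,.0f}")
```

Note that the output is a distribution of simulated annual losses; the "1-in-100-year loss" is simply the 99th percentile of that distribution, which is how such models express probabilities of outcomes rather than point predictions.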
The models are based on many assumptions and each assumption has not one ‘correct’ value, but rather a range of scientifically valid values. The scientists and engineers who work on the different models make different scientific judgments at different points in time about how to implement these assumptions. This means there is quite a bit of variability and uncertainty inherent in the models and hence the model loss estimates. Catastrophe models do not produce deterministic answers, but rather ranges of possible outcomes along with estimated probabilities of those outcomes. The simulated outcomes along with their estimated probabilities will naturally differ between the different models.
All of this doesn’t mean the models should not be used. The models are still the best and most sophisticated tools for catastrophe risk assessment. It just means the models need to be used with the limitations clearly in mind. The models are just one part of the risk assessment and management process, and they need to be supplemented and validated using independent information and benchmarks.
Interview in the Insurance Journal
The article can be found [here]. Video of the full interview is [here].
The introduction to the article states:
The need for insurers to understand catastrophe losses cannot be overestimated. Clark’s own research indicates that nearly 30 percent of every homeowner’s insurance premium dollar is going to fund catastrophes of all types. “[T]he catastrophe losses don’t show any sign of slowing down or lessening in any way in the near future,” says Clark, who today heads her own consulting firm, Karen Clark & Co., in Boston.
While catastrophe losses themselves continue to grow, the catastrophe models have essentially stopped growing. While some of today’s modelers claim they have new scientific knowledge, Clark says that in many cases the changes are actually due to “scientific unknowledge” — which she defines as “the things that scientists don’t know.”
But don’t the models have to go where the numbers take them? If that is what is indicated, isn’t that what they should be recommending?
Clark: Well, the problem is the models have actually become over-specified. What that means is that we are trying to model things that we can’t even measure. The further problem is that the loss estimates are highly sensitive to small changes in those assumptions, so there is a huge amount of uncertainty: even minor changes in these assumptions can lead to large swings in the loss estimates. We simply don’t know what the right measures are for these assumptions. That’s what I meant… when I talked about unknowledge.
There are a lot of things that scientists don’t know and they can’t even measure them. Yet we are trying to put that in the model. So that’s really what dictates a lot of the volatility in the loss estimates, versus what we actually know, which is very much less than what we don’t know.
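Clark's point about sensitivity can be made concrete with a toy calculation. The vulnerability exponent below is a hypothetical stand-in for one of these hard-to-measure assumptions; all the numbers are illustrative, not drawn from any actual model.

```python
# A hypothetical vulnerability curve: damage fraction vs. hazard intensity.
# The exponent stands in for an assumption that cannot be measured directly.
def damage_ratio(intensity, exponent):
    return min(1.0, 0.02 * intensity ** exponent)

EXPOSED = 100_000_000   # insured value ($), illustrative
SEVERE = 5.0            # intensity of a severe tail event

base = damage_ratio(SEVERE, 2.0) * EXPOSED
bumped = damage_ratio(SEVERE, 2.2) * EXPOSED   # exponent nudged by 10%
print(f"loss with exponent 2.0: ${base:,.0f}")
print(f"loss with exponent 2.2: ${bumped:,.0f}")
print(f"swing: {100 * (bumped / base - 1):.0f}%")
```

With these made-up numbers, a 10 percent change in a single exponent moves the severe-event loss estimate by roughly 38 percent — the kind of large swing from a minor assumption change that Clark describes.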
Given the uncertainties, how do you advise insurance companies to use these models?
Clark: I would love to see companies keep them in the context of these are tools, but they are only one small piece of the catastrophe risk assessment and management process. They give a view of the risk, but they don’t give the total view. And companies should be bringing other approaches and other credible information into their process. They need to get more insight by using all sources of credible information and not just a model …
So the point is one, they’re one tool, they’re part of the risk assessment process. They should not be used to tell the whole story, and they should not be used as providing a number. One of the biggest problems with model usage today is taking point estimates from the model and thinking that that’s an answer. There is an enormous amount of uncertainty around those point estimates. So companies need to use other credible information to get more insight into their risk and to understand the risk better, what their potential future losses could be.
Companies should not be lulled into a false sense of security by all the scientific jargon which sounds so impressive, because in reality, as we’ve already discussed, the science underlying the models is highly uncertain and it consists of a lot of research and theories, but very few facts.
So companies need to keep this in mind. They need to be skeptical of the numbers. They need to question the numbers. And they should not even use the numbers out of a model if they don’t look right or if they have just changed by 100 percent.
What types of additional information should they be using? Is there still a role for underwriters?
Clark: … One, what other information should they be looking at from a scientific point of view? They should be looking at information such as what historical events have happened in a particular geographic region. What would the losses be today if these events were to reoccur? What would the industry losses be? What would my losses be? What future scenarios can I imagine based on what’s actually happened and what would the losses be from those scenarios? …
What kind of events have happened in this area, what do we know about those events? And then what does that tell us about what future events could be and what my losses could be? So that’s some important information they should be looking at from the scientific point of view.
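One common way to implement the benchmark Clark describes is to "trend" historical losses to present-day exposure, i.e. ask what a past event would cost if it recurred now. The event list and growth rate below are hypothetical placeholders for illustration, not actual industry loss data.

```python
# Hypothetical historical events in one region: (year, nominal industry loss, $M).
# Placeholder figures for illustration -- not actual loss data.
events = [(1938, 400), (1965, 1_200), (1992, 15_000)]

EXPOSURE_GROWTH = 1.05   # assumed annual growth in exposed property value
TARGET_YEAR = 2014

trended_losses = {}
for year, nominal_loss in events:
    # Scale the nominal loss by cumulative exposure growth since the event.
    trended = nominal_loss * EXPOSURE_GROWTH ** (TARGET_YEAR - year)
    trended_losses[year] = trended
    print(f"{year}: ${nominal_loss:,}M then -> about ${trended:,.0f}M if it recurred today")
```

This kind of simple, transparent calculation against actual historical events is exactly the sort of independent benchmark Clark recommends using alongside model output.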
Is the bottom line that cat models are a great tool but they’re not the be all and end all?
Clark: Right. How I like to express it now is that they are a great all-around tool, but they’re not the best tool for all purposes. And it’s time — given the importance of catastrophe losses that we look at some other approaches and some other ways that we can assess and manage the risk. … We need to start thinking outside the black box and introducing some of these new approaches.
JC comments: Karen Clark definitely knows what she is talking about. To put catastrophe modeling for the insurance sector into the context of the climate problem: these models are essentially five-year forecasts of catastrophes such as hurricanes, floods, and earthquakes. The uncertainty of catastrophes on the longer timescales of interest in the climate debate has to be substantially greater. I definitely like her concept of “scientific unknowledge.”
