Uncertainty in Catastrophe Modeling

by Judith Curry

Roger Pielke Jr. has a very interesting post on uncertainty in catastrophe modeling.  The basis for the post is an interview with Karen Clark.  Karen Clark developed the first catastrophe model, and is worried that these models are being given more credit and influence than they deserve.


A good overview of catastrophe modeling is provided by this document from RMS.

From the web page of Karen Clark & Co.:

Catastrophe Risk

While it’s virtually impossible to predict when or where the next catastrophe will occur, companies with exposure to loss need to be prepared for the types of events that could occur. They need to know the full range of possible future loss scenarios. They need tools to assess and manage their catastrophe risk.

Catastrophe Models

Many companies rely on catastrophe models to assess and manage catastrophe risk. Catastrophe models are very detailed and complex and they incorporate the science underlying the occurrences of catastrophes and the engineering knowledge to estimate the damage caused by catastrophic events. The models use statistical techniques to generate large samples of hypothetical future events. For each of these events, the models simulate the intensities by location and then estimate the damage to the exposed properties at each affected location.
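The simulation loop described above can be sketched in miniature. Everything in this toy version — the three-location portfolio, the event-frequency weights, and the vulnerability curve — is an invented illustrative assumption, not any vendor's actual model:

```python
import random

# A miniature version of the cat-model loop: generate hypothetical events,
# simulate intensities by location, estimate damage, aggregate annual losses.
portfolio = {"coastal": 500e6, "inland": 300e6, "urban": 800e6}  # insured values

def vulnerability(intensity):
    """Fraction of value lost at a given hazard intensity (made-up damage curve)."""
    return min(1.0, max(0.0, intensity - 0.3) ** 2)

def simulate_year(rng):
    """One hypothetical year: a random number of events, each with a random footprint."""
    losses = 0.0
    n_events = rng.choices([0, 1, 2, 3], weights=[25, 35, 25, 15])[0]  # crude frequency model
    for _ in range(n_events):
        peak = rng.uniform(0.2, 1.0)  # peak hazard intensity of this event
        for site, value in portfolio.items():
            local = peak * rng.uniform(0.3, 1.0)  # intensity falls off away from the track
            losses += value * vulnerability(local)
    return losses

rng = random.Random(42)
annual_losses = sorted(simulate_year(rng) for _ in range(10_000))
pml_100 = annual_losses[int(0.99 * len(annual_losses))]  # ~1-in-100-year annual loss
print(f"Simulated 100-year loss: ${pml_100 / 1e6:.0f}M")
```

The output is not a single "answer" but a distribution of 10,000 simulated years, from which exceedance probabilities such as the 1-in-100-year loss are read off — exactly the probabilistic framing Clark describes.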

The models are based on many assumptions and each assumption has not one ‘correct’ value, but rather a range of scientifically valid values. The scientists and engineers who work on the different models make different scientific judgments at different points in time about how to implement these assumptions. This means there is quite a bit of variability and uncertainty inherent in the models and hence the model loss estimates. Catastrophe models do not produce deterministic answers, but rather ranges of possible outcomes along with estimated probabilities of those outcomes. The simulated outcomes along with their estimated probabilities will naturally differ between the different models.

All of this doesn’t mean the models should not be used. The models are still the best and most sophisticated tools for catastrophe risk assessment. It just means the models need to be used with the limitations clearly in mind. The models are just one part of the risk assessment and management process, and they need to be supplemented and validated using independent information and benchmarks.

Interview in the Insurance Journal

The article can be found [here].  Video of the full interview is [here].

The introduction to the article states:

The need for insurers to understand catastrophe losses cannot be overestimated. Clark’s own research indicates that nearly 30 percent of every homeowner’s insurance premium dollar is going to fund catastrophes of all types. “[T]he catastrophe losses don’t show any sign of slowing down or lessening in any way in the near future,” says Clark, who today heads her own consulting firm, Karen Clark & Co., in Boston.

While catastrophe losses themselves continue to grow, the catastrophe models have essentially stopped growing. While some of today’s modelers claim they have new scientific knowledge, Clark says that in many cases the changes are actually due to “scientific unknowledge” — which she defines as “the things that scientists don’t know.”

But don’t the models have to go where the numbers take them? If that is what is indicated, isn’t that what they should be recommending?

Clark: Well, the problem is the models have actually become over-specified. What that means is that we are trying to model things that we can’t even measure. The further problem with that is that these assumptions that we are trying to model, the loss estimates are highly sensitive to small changes in those assumptions. So there is a huge amount of uncertainty. So just even minor changes in these assumptions, can lead to large swings in the loss estimates. We simply don’t know what the right measures are for these assumptions. That’s what I meant… when I talked about unknowledge.

There are a lot of things that scientists don’t know and they can’t even measure them. Yet we are trying to put that in the model. So that’s really what dictates a lot of the volatility in the loss estimates, versus what we actually know, which is very much less than what we don’t know.
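Clark's sensitivity point can be illustrated with a toy calculation. The Pareto severity tail and every parameter below are invented purely for illustration — they come from no real model — but the mechanism is the one she describes: a small change in one unmeasurable assumption swings the tail loss estimate dramatically.

```python
# Toy illustration: the estimated 100-year loss under a Pareto severity tail,
# as a function of an assumed tail-thickness parameter alpha. All numbers
# are invented for illustration.
def loss_at_return_period(alpha, annual_rate=2.0, years=100, scale=10e6):
    """Loss size exceeded about once per `years` years under a Pareto severity tail."""
    per_event_exceedance = 1.0 / (annual_rate * years)
    return scale * per_event_exceedance ** (-1.0 / alpha)

for alpha in (1.1, 1.0, 0.9):  # +/-10% around an assumed tail parameter
    print(f"alpha={alpha}: 100-year loss = ${loss_at_return_period(alpha) / 1e6:,.0f}M")
```

Here a 10% change in the assumed tail parameter moves the 100-year loss estimate from roughly $1.2 billion to roughly $3.6 billion — a factor of three from an assumption nobody can measure.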

Given the uncertainties, how do you advise insurance companies to use these models?

Clark: I would love to see companies keep them in the context of these are tools, but they are only one small piece of the catastrophe risk assessment and management process. They give a view of the risk, but they don’t give the total view. And companies should be using other approaches and other credible information to bring into their process. They need to get more insight by using all sources of credible information and not just a model …

So the point is one, they’re one tool, they’re part of the risk assessment process. They should not be used to tell the whole story, and they should not be used as providing a number. One of the biggest problems with model usage today is taking point estimates from the model and thinking that that’s an answer. There is an enormous amount of uncertainty around those point estimates. So companies need to use other credible information to get more insight into their risk and to understand the risk better, what their potential future losses could be.

Companies should not be lulled into a false sense of security by all the scientific jargon which sounds so impressive, because in reality, as we’ve already discussed, the science underlying the models is highly uncertain and it consists of a lot of research and theories, but very few facts.

So companies need to keep this in mind. They need to be skeptical of the numbers. They need to question the numbers. And they should not even use the numbers out of a model if they don’t look right or if they have just changed by 100 percent.

What types of additional information should they be using? Is there still a role for underwriters?

Clark: … One, what other information they should be looking at from a scientific point of view? They should be looking at information such as what historical events have happened in a particular geographic region. What would the losses be today if these events were to reoccur? What would the industry losses be? What would my losses be? What future scenarios can I imagine based on what’s actually happened and what would the losses be from those scenarios? …

What kind of events have happened in this area, what do we know about those events? And then what does that tell us about what future events could be and what my losses could be? So that’s some important information they should be looking at from the scientific point of view.

Is the bottom line that cat models are a great tool but they’re not the be all and end all?

Clark: Right. How I like to express it now is that they are a great all-around tool, but they’re not the best tool for all purposes. And it’s time — given the importance of catastrophe losses that we look at some other approaches and some other ways that we can assess and manage the risk. … We need to start thinking outside the black box and introducing some of these new approaches.

JC comments:  Karen Clark definitely knows what she is talking about.  To put the catastrophe modeling for the insurance sector into context of the climate problem, catastrophe models for the insurance sector are five year forecasts of catastrophes such as hurricanes, floods, earthquakes, etc.  The uncertainty of catastrophes on longer timescales of interest in the climate debate has to be substantially greater.  I definitely like her concept of “scientific unknowledge.”

66 responses to “Uncertainty in Catastrophe Modeling”

  1. Thanks, Professor Curry, for this topic.

    When it comes to Earth’s climate, few topics would be more pertinent.

    Measurements have shown that Earth’s heat source is the unstable remains of a supernova, hidden inside an opaque, glowing ball of waste products (H and He) that are mistakenly referred to as the Sun [“Neutron Repulsion”, The APEIRON Journal, in press, 19 pages (2011)].


    It is impossible to model for catastrophic events with input information based on lock-step, consensus opinions promoted by NASA and the space science community during the “space age”.

  2. “What that means is that we are trying to model things that we can’t even measure”

    The theory is all the proof that is needed. It does not even matter that the two most important elements to that theory can not even be measured. They somehow are able to convince themselves that they can input values representing unmeasurable data into their computer models and come up with a projection of a future world when they are incapable of measuring the one that they actually live in.


  3. John Kannarr

    I found the graphs and tables in or referenced by Pielke’s site to be a great set of antidotes to the news reports we are already hearing about yesterday’s tornado storms as being unprecedented, climate-change-caused catastrophe:
    U.S. Tornado Deaths 1940 – 2010:
    U.S. Tornado Fatalities 1880 – 2000: http://www.nssl.noaa.gov/users/brooks/public_html/tornado/deathraw.gif
    Fatalities from Various Weather Event Categories 1940 – 2009:

    Of course, there are a variety of interacting factors hidden within these numbers, including better detection and warning systems and better construction features in recent decades that have reduced the incidence of harm, offset by the greater spread of development now encroaching on previously open spaces and thus producing larger areas containing vulnerable populations, with greater likelihood of people being in the way of the paths of weather systems.

    • The use of Roger Pielke’s charts, under the nonsensical header “Weather is not Climate Unless People Die?” to refute claims of rising tornado frequency/intensity, or their possible connection to climate change is …? I dunno, maybe “breathtaking”?

      Tornado tracking and warning is immeasurably better today than in 1974, not to mention 1952 or ’28 when there was nothing but looking outside to spot the funnel cloud. To substitute hurricane fatalities for any objective measure of frequency and intensity of hurricanes is just a bit less absurd than making some argument about the weather/climate system based on drownings at sea being down since 1800.

      While I don’t know what trending of measurable frequency and intensity of hurricanes would show, I do know this argument from fatalities is an absurd one, and would make me tend to skip over the next argument I see by the same person.

      • Brandon Shollenberger

        One important thing to consider is where the recent tornadoes struck. Then consider something like the May 3rd tornado in Oklahoma years back. That tornado cut a mile-wide swath of destruction, and it only caused a handful of deaths.

        Even if increased carbon dioxide levels increased tornado frequency/strength (and there is no evidence it does), reducing emissions would not be a particularly effective way of limiting casualties. It is far better to focus on improving forecasting, warning systems and the like.

      • Some people have tornado shelters. I don’t know how well these work against F5 tornados. It’s pretty difficult to imagine tar being ripped off the roadway and pipes being stripped out of house slabs, unless you’ve seen it.

        Having said that, doppler radar is a very useful system. With multiple tornados along an interstate, you can drive right between them if you have to. Tornados at night were the only ones that made me nervous – you can’t see them, you can’t see the wall cloud / etc, so you don’t have a clue where they are.

      • Brandon Shollenberger

        Some personal storm shelters might be good enough for F5 tornadoes, but I wouldn’t advise trying to use them. I was in Edmond when that tornado struck (my house was not even a mile from where it struck), and the idea of taking shelter from it just seems crazy given the damage I saw. A storm shelter would be better than nothing, but if possible, I’d get out of the area.

        In any event, the news stations were able to give a half hour warning to get out of the area. It bothers me to think about how many people died recently in that weather because they lived in areas which lacked technology many people take for granted.

      • John Kannarr

        Sort of like the absurdity of using tree ring widths as proxies for temperatures when the exact correlations with numerous possible variables like rainfall, temperature, etc. are unclear, and certainly when the direct correlation between tree ring widths and the single variable temperature is falsified in recent decades?

    • I’d like to add something that I’ve observed about how folks react to Tornado warnings. Tornadoes differ from most kinds of disasters in that their path of destruction is relatively narrow and unpredictable. Your neighbor’s house can be destroyed but yours not touched. That allows a high level of complacency to develop in those that live in Tornado prone areas. Yep, most of the time we can get early warning, provided we bother to pay attention to the warning sources. Then, when the warning happens, folks are left with the choice of leaving their homes with a possible Tornado coming to go to a place that might just as easily be in its path. Most folks just play the odds.

      The fact that we have much better warning systems in place now should not be assumed to save many more people from Tornadoes than before those warnings were available. People knew the signs of approaching Tornadoes long before Television weather reports and SAME weather radios. They may actually have been somewhat more aware simply because they knew they had to rely on their own senses and actions to save themselves from harm.

  4. Will J. Richardson

    You must remember that Insurance Companies are rate controlled by State Governments. Insurers use the projections of the CAT models to justify the large reserves which the models state are necessary to pay claims when catastrophes occur. That necessity for large reserves then justifies higher premiums and more money for the Insurance Companies to lend and invest. To Insurance Companies, over-specification of CAT Models is a feature, not a bug.

    • Will J. Richardson


      I musta missed that lecture in class.

      Insurers _want_ to be handcuffed by large reserves compared to income?

      Seems to run contrary to what some say.



      Insurers overstate their reserves systemically, pretty much indicating they hate to hold larger reserves, and the government is aware of this deficit to the extent that tax subsidies to insurers, ie tax subsidies to those who live outside the insurable frontier, are never far from opening up a black hole in the Treasury.

      Respectfully, I believe you may have the cart and the horse inverted there, Will.

      • No financial institution (like an insurance company) wants to carry larger reserves than necessary. It’s a balance between the risk of becoming insolvent and the foregone profits when having money sitting around.

      • Harold

        I don’t want to be seen to be speaking out of both sides of my mouth at the same time, however there are a few cases of financial institutions finding themselves making more money out of the large mounds of cash they handle than their supposed principal business.

        This particularly affects, for example, the banking industry in Canada, where huge barriers to entry, substantial statutory protections that decrease competition, and the ability to house large concentrations of investment expertise relative to the smaller Canadian investment marketplace have led to a situation where some banks really do prefer to trade rather than serve their customers.

        There’s some fire to Will J. Richardson’s smoke; however, it isn’t as substantiated in the case of the US insurance industry, and overall the claim he makes doesn’t fly, so far as the very slim research I’ve done could confirm.

  5. I laughed at least. Early in my career I struggled through a paper that described a 14 compartment carbon model of Chesapeake Bay. I was very pleased with myself that I had come to some understanding of the paper until I reached the final sentence. This said that the model was far too simple and that they were working on a model 3 times larger. In reality – typically estuarine trophic models are much simpler – but even then rely on literature values, gut feeling, guesses, etc as most of the data is almost always unknown or indeed unknowable.

    Flood modelling is a little different. Typically in Australia there are reasonable pluviograph data for rainfall and the occasional stream gauging station. Rainfall/runoff models are used to generate flood flows and these are calibrated to stream gauge data. One- or two-dimensional equations can then be used to determine flood heights based on stream morphology.

    The problem arises when calculating floods for a specific flood frequency. In Australia, synthetically derived rainfall data are used to model, say, a 1-in-100-year storm using the calibration data obtained from real storms. The bigger the catchment, the longer the critical design storm duration and the lower the average intensity.

    The problem is that this does not reflect real storm patterns at all. In real storms, intense storm cells scud across the landscape and are not widely dispersed average rainfalls of lower intensity at all. So while the overall flooding estimate at the mouth of the catchment might be reasonable, the catastrophic effects upstream occur from intense and compact storm cells in local catchments.

    This can be seen in recent flooding near Brisbane where intense storms with local record intensities caused most of the deaths that occurred in only 2 towns. This needs a rethink.
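    The averaged-design-storm problem described above can be made concrete with a toy calculation. The subcatchment areas, runoff coefficients and rainfall depths below are invented illustrative values, not any agency's actual method:

```python
# Toy contrast between an averaged design storm and a compact real storm cell.
# Subcatchment areas, runoff coefficients and depths are invented for illustration.
subcatchments = [(50.0, 0.4), (30.0, 0.5), (20.0, 0.6)]  # (area km2, runoff coeff)
total_area = sum(area for area, _ in subcatchments)

uniform_depth = 100.0                      # mm everywhere: the synthetic design storm
total_volume = uniform_depth * total_area  # mm*km2, held fixed in both scenarios

# Same total rainfall volume, but concentrated: 20 mm on two subcatchments and
# the remainder dumped on the smallest one by an intense storm cell.
cell_depths = [20.0, 20.0, 0.0]
used = sum(d * area for d, (area, _) in zip(cell_depths, subcatchments))
cell_depths[2] = (total_volume - used) / subcatchments[2][0]  # 420 mm locally

def outlet_runoff(depths):
    """Rational-method-style proxy for runoff volume at the catchment mouth."""
    return sum(d * area * coeff for d, (area, coeff) in zip(depths, subcatchments))

print("outlet runoff, uniform storm:", outlet_runoff([uniform_depth] * 3))
print("outlet runoff, storm cell:   ", outlet_runoff(cell_depths))
print("local depth vs design depth: ", cell_depths[2] / uniform_depth)  # 4.2x
```

    With identical total rainfall, the two catchment-mouth estimates are broadly comparable (4700 vs 5740 in these units), while the small subcatchment receives over four times the design depth — exactly the upstream catastrophe an averaged design storm misses.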

    Commiserations on the recent US weather disasters. Apparently, you can blame it on La Nina too.

    • “The problem is that this does not reflect real storm patterns at all. In real storms intense storm cells scud across the landscape and are not widely dispersed average rainfalls of lower intensity at all.”

      Karen Clark’s approach is to run different storms with different storm tracks, and use detail model of response (damage). It’s still statistical in nature, but running different tracks could allow identification of unusual weaknesses that wash out in the averaged damage model – it would just be an intermediate result. As far as I know, this isn’t being done. I do know that modeling for the east coast of the US is being done by the government to redefine the hurricane storm / tide surge boundaries. They’re turning out to be much more inland in places like Maine, since significant hurricanes almost never make it that far up the coast.

  6. As has been pointed out here and at R Pielke jr’s blog, at least some players in the cat modeling industry have been a wee bit cynical.
    Reinsurance rates have been far too high since the 2004 hurricane season, according to credible reports.
    What were presented as serious models were tossed-together marketing tools designed to justify huge prices to the reinsurance customers.
    We have all been paying for this, and we can in no small part look to the AGW community for providing credibility to anything to do with gullibly accepting a lot of Sturm und Drang cataclysmic predictions.
    This is just one of the ways that AGW costs us all far more than the reality of CO2.

    • My understanding (having looked at the insurance risk modeling area some time ago) is that the conventional model approach vastly underestimated the likely costs, but the new approach (above) was close, so everyone shifted to the newer modeling approach. Loss expectations increased.

      • I did a quick recheck, and Karen Clark’s latest analysis concludes the modeling by three companies is likely overestimating future losses by ~30+%, compared to historical losses over a decade.

    • Any attempt to “model” where guesstimates are placed on various parameters will be subject to bias that favors those paying for or with a stake in the model prediction/projection.

      In economics/business/politics, arguments over fractions of a percent can be worth millions of dollars.

  7. Dr. Curry

    I’m not sure if this topic isn’t too large a bite for the typical reader to chew on and digest without significantly more guidance.

    Noting, for example, the double dimensional effect of increase in forcings on the area under the insurable frontier curve, one should understand that as risk factors increase linearly, the area of insurable, viable, human endeavor decreases as a square for any single risk profile.

    There are multiple overlapping and disjoint risk profiles, and this topic covers I think only the one catastrophic category of hurricanes (which you have shown in an earlier topic to have probability and uncertainty connected to CO2E levels, no?).

    The higher the CO2E, the more truncated the viable space of human endeavor, those sustainable profitable ventures that draw raw resources and create from them economic utility.

    This is the important topic in the CO2E debate, this is where there is the least understanding and most good to be gotten from study and science and the role of CO2E in perturbations and of the spatiotemporal chaos of climate and of our access to resources under that constrained frontier, and the economies rooted in that shrinking atoll surrounded by a sea of rising uncertainty.

    Cool that you present the topic, and kudos, but please, more.

  8. “Companies should not be lulled into a false sense of security by all the scientific jargon which sounds so impressive…”

    Seems I’ve heard this refrain before regarding the calculation of risk for mortgage backed securities.

  9. Seems like TEPCO could have used some of this.

  10. And in this vain, I find it encouraging that at least some media outlets aren’t doing what you would expect the media to do, and attributing the tornadoes to climate change:


    Pre-climategate, it would have been a foregone conclusion that the tornadoes were proof of climate change.

    • Err… vein (morning on the west coast).

    • Unfortunately ChE, some people seem to have no shame or morals. They will look at every opportunity to blow up every extreme weather event and catastrophe to ” Climate Change “. The usual culprits are already attributing the tornadoes to ” Climate Change ”


      • And ” Climate Scientists ” attributed these kinds of storms to ” Global Cooling ” in 1974.


        Go figure.

      • the 1974 issue of Time also says:

        Since the 1940s the mean global temperature has dropped about 2.7° F. Although that figure is at best an estimate, it is supported by other convincing data.

        Compare this with the global records today. The cooling from 1940-1970 has all but been eliminated from the current records in a rewriting of history that Stalin would be proud of.

      • Dr.Curry,

        Do you really think now that the problem with Climate Science is ” communication “?

        Do you support the statements what Trenberth, Mann and Schmidt seem to have made with respect to the recent tornadoes?

      • Tom Fuller said it beautifully in a post at Watts Up With That about this

        ” I swear they must have meetings to talk about this stuff. And I’m sure they must ask each other, “What’s the stupidest thing we could even think about doing?”

        And then they do it.”

      • And this is another excellent piece from Dr.Roy Spencer about the tornadoes


        His last sentence says it all

        ” Anyone who claims more tornadoes are caused by global warming is either misinformed, pandering, or delusional.”

      • now here’s the thing – do I believe a blog by someone I’ve never heard of as the last word OR do I look at what the IPCC has been saying since the 1990s, in their analysis of all the peer reviewed papers produced on the topic. No, the last sentence on his blog proves that these papers have been written by misinformed or delusional people, or pandering individuals. I guess that ends that debate.

      • It is not the papers that are in question, but the IPCC selection and interpretation of them. The IPCC is a political organization that produces artfully crafted advocacy documents. I suppose one could call that pandering but I do not use that term. It is an honest political difference. I read the same papers and come to an opposite conclusion.

        do I believe a blog by someone I’ve never heard of

        Do you believe in the UAH satellite temperature data?
        Because it comes to you courtesy of that same someone you’ve never heard of.

      • So how many peer-reviewed papers did the IPCC analysis produce stating that tornadoes are caused by Global Warming?

      • And this is what meteorologists and climatologists said about the tornadoes


        This is what David Imy from NOAA storm prediction center said

        “We knew it was going to be a big tornado year,” he said. But the key to that tip-off was unrelated to climate change: “It is related to the natural fluctuations of the planet.”

        And Piers Corbyn also predicted this weather event well in advance


      • Peter317, quote the whole sentence: ” as the last word ”. I don’t mean don’t trust someone’s work if you don’t know them, BUT I don’t think that the last word to end a debate will come from one person, and it’s ESPECIALLY unlikely to happen if I have never heard of them. I will assume good faith on your part as my wording could have been clearer, so apologies.
        Venter – my point is that the last IPCC report claimed that there is a high probability that a higher level of atmospheric CO2 will lead to more extreme weather events in terms of frequency and intensity (many papers make this claim – I suggest you consult your copy of the report or, if it’s not handy, a trawl using Google Scholar and the search terms CO2, extreme and tornado should do). The exact number of events is impossible to determine, the same as the age at which a heavy smoker will die from lung cancer (they might not). Still, I don’t think many people doubt the link.

      • Paul Haynes,
        Additionally, Dr. Spencer lives in Alabama, which is right in the middle of tornado territory, and was personally affected by these tornadoes. I’d trust what he says as the last word over someone who’s probably never even seen a tornado in real life.

      • peter317
        how would living in Alabama give someone special knowledge about the relationship between CO2 and the strength and frequency of tornado-type events on the global scale? I’m sure he is a very nice man and I hope no one in his family has been injured in a tornado, but his view that anyone who differs in opinion about this issue is misinformed, pandering, or delusional is a bit strong in my view.

      • Venter, I have been too busy to keep up with things, do you have any links to what they are saying? Re attribution of extreme events, see my previous post http://judithcurry.com/2011/01/15/attribution-of-extreme-events/

      • Dear Dr.Curry,

        Here’s the link


        I read that the earlier thread about attribution of extreme events. Your view points are fair.

        It is said that even before death toll is assessed and people have not even been buried, people like Mann, Trenberth and Schmidt come out making such statements. Is there no dignity left in this field?

        This again comes to an issue which we discussed earlier. Shouldn’t the major climate science community say ” enough is enough ” and issue a rebuttal to such statements stating that they have no basis in science?

        Who’ll bell the cat here?

      • Venter, they hang the bell on themselves, and are ringing it louder and louder. This upside down null is as necessary for their health as upside down Tiljanders.

        That’s an All Star Team, there, those three. Maybe it should read Ill Starred Team. I think they are increasingly finding no other playmates on the ice. Others think the ice is too thin. Oh, and the ominous cracks.

      • Awake, climate cats.
        Do not send to know the truth.
        Bells, bells, be louder.

      • I’m on it, new post forthcoming

  11. IIRC, Karen Clark developed the modeling techniques that are now widely used and hailed as (almost) industry savers.

    • The models are only as good as the people doing the expert analysis on them and what the goals of the models are. It would be interesting to hear how much better (statistically and technically) her models are than simple statistical analysis techniques and real world results.

      In this case, success is ultimately measured by its contribution to the profit of the insurance company and that’s what the goals are too. Overcharging is likely not ideal, since your competitor with lower prices may win your customers over, but undercharging would be worse for the company. It would surprise me if internal corporate politics come into play for her as well.

      In research science the goal often is to grab the attention of the public.

      • excellent point on over/undercharging. Also, of course, reputation comes into play in the insurance market and there are considerable barriers to entry for new companies, mainly based on scale, so there is scope for supernormal profits. In the UK established companies like supermarkets have entered the insurance market, but this is still at the low end of the market.

  12. Risk is only part of the story as mentioned. Beyond risk, which is just the likelihood of the occurrence, one must then look at consequences, if the event eventuates, and the cost of prevention before or repair after the fact.

    Managing all that properly means trying to prevent the things that are cheaper to prevent than fix and preparing to respond to things that are more expensive to prevent than fix given the statistical likelihood.

    All that would likely appear in the “engineering quality report”.
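    The prevent-versus-prepare comparison above boils down to weighing each hazard's annualised expected loss against its prevention cost. A minimal sketch, with every probability and cost invented for illustration (and prevention cost treated as an annualised figure for a like-for-like comparison):

```python
# Sketch of the prevent-vs-prepare decision: prevent a hazard only when
# prevention costs less than the annual expected repair bill. All numbers
# are invented for illustration.
hazards = [
    # (name, annual probability, repair cost if it occurs, annualised prevention cost)
    ("roof_uplift",  0.02,   5_000_000,  60_000),
    ("minor_flood",  0.10,     200_000, 500_000),
    ("quake_damage", 0.005, 20_000_000, 150_000),
]

decisions = {}
for name, prob, repair, prevent in hazards:
    expected_repair = prob * repair  # annual expected loss if nothing is done
    decisions[name] = "prevent" if prevent < expected_repair else "prepare"
    print(f"{name}: expected annual loss ${expected_repair:,.0f} "
          f"vs prevention ${prevent:,.0f} -> {decisions[name]}")
```

    On these made-up numbers, only the roof-uplift hazard is worth preventing outright; for the others, preparing to respond is the cheaper strategy despite their larger headline repair costs.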

  13. Keep in mind that those models that do not meet the expectations of the model builder are rejected by the model builder in a process similar to “selection of the fittest”.

    For example: A climate model that predicts a doubling of CO2 will result in no change in temperature will be rejected by the model builder and “fixed”. A model that predicts a doubling of CO2 will result in a 10C increase in temperature will be rejected by the model builder and “fixed”.

    How is a model “fixed”? Not by changing the physical laws within the model, but rather changing the weightings.

    For example, how important is water vapor, or aerosols, or evaporation rates, or convection or land use? No one knows the true answer to these questions, so the models assign weights to these, and the weights are adjusted so that the models will hindcast AND deliver a future projection that is within the model builder’s range of expectations.

    This last point is the critical issue because it gives rise to the “experimenter-expectancy effect” that is well recognized in animal studies. We know from animal studies that you need to isolate any organism capable of learning from the experimenter through double blind controls.

    What climate science has missed is that climate models are machine learning programs. These programs are subject to the “experimenter-expectancy effect” if proper controls are not used during the model training process.

    This means double blind controls. That the model builder cannot have access to the output of the model during the training process. Only after the model is fully trained and no more adjustments will be made can the model builder see the results of the model.

    This isolation is not done in climate science, which results in models that are not predicting future climate. Rather, they are predicting a match to the model builder’s expectations of what will happen. This generates a feedback loop between the model and the model builder that reinforces the ego of the model builder, to the point of an addiction-dependency. The model becomes more real to the model builder than reality.

    • John Kannarr

      Excellent analysis!

    • I recall seeing some example climate model runs a while back in which each run would faithfully track the recent known historical temperatures, and then, once it moved beyond the known record, would vary from flat to sky-high. That the model builder could then choose one as the “best” or most accurate seemed a scientifically ludicrous proposition.

      • ferd berple

        That is exactly what happens with models. As soon as you step off the known training data, they go haywire and predict that anything could happen. Why? Because the models reflect the reality of the future: anything could happen.

      • I naively thought that a model soundly based in physics, including models of multiple systems that all contain elements of negative feedback, would be intrinsically stable, just like our planet. So if one models a mishmash of processes incorrectly to give the “right” answer with historical data, it is no surprise there are zero clues about the future until the missing bits are modelled too.
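The double-blind protocol described in comment 13 can be sketched as a “sealed holdout” evaluation. This is a minimal toy example under invented assumptions (the data, the one-parameter slope model, and the 80/20 split are all made up for illustration, not any real climate model): the builder may tune only against the training set, and the sealed set is scored exactly once, after all adjustments stop.

```python
# Toy sketch of a sealed-holdout ("double-blind") training protocol.
# All data and the model are invented for illustration.
import random

random.seed(0)

# Synthetic observations: y = 2x + small noise
data = [(x, 2.0 * x + random.gauss(0, 0.1)) for x in range(100)]
random.shuffle(data)

# The sealed set is never inspected while tuning
train, sealed = data[:80], data[80:]

def fit_slope(points):
    # Least-squares slope through the origin
    num = sum(x * y for x, y in points)
    den = sum(x * x for x, _ in points)
    return num / den

def mse(slope, points):
    return sum((y - slope * x) ** 2 for x, y in points) / len(points)

# Tuning phase: the builder may look only at training error
slope = fit_slope(train)
train_err = mse(slope, train)

# Evaluated exactly once, after all adjustments have stopped
final_err = mse(slope, sealed)
```

The point of the design is that `final_err` cannot feed back into the builder’s choices, so the sealed score measures prediction rather than the builder’s expectations.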

  14. Ferd, spot on.

    My field is healthcare, and I’m involved all the time with GMPs, in-vitro studies, in-vivo studies and double-blind, randomised, placebo-controlled human clinical studies. I work constantly with universities worldwide on product development, research, etc. I’m involved in the techno-commercial side. I liaise with agencies like the FDA and other regulatory authorities for approvals. I’ve published a couple of papers in peer-reviewed journals in the healthcare field, and I’ve seen the amount of scrutiny the data, methods and results are subjected to before publication in this field.

    So when I compare the data collection, collation, analysis, results and disclosure requirements in my field with what climate science is getting away with, it is a joke.

    The kinds of data-handling practices employed by climate science would get them thrown out at first base in my industry.

  15. I am continually amazed by model builders who do not recognize the limits of machine learning models. One of the very first models you are typically asked to build in computer science is a model that predicts the model builder.

    For example, the model builder is given a choice to select even/odd, true/false, 0/1, etc. This is done repeatedly while the model watches the model builder. Then the model builder’s choice is hidden from the model, and the model is asked to predict what the builder will choose next.

    If you have done a good job of building the model, it will predict the model builder’s next choice better than 50% of the time. The model will outperform chance; it will demonstrate what a naive observer would regard as ESP.

    This simple example can then be expanded to predict what shoppers will purchase, what gamblers will bet on, etc. etc., as a means of using machine learning to make money. It can also predict which answers will be most acceptable to the model builder, the IPCC and those that provide grant money.

    Models that do a poor job of predicting “acceptable” answers will be eliminated and replaced by models that provide “acceptable” answers. Over time the models become like “yes” men reporting to the CEO – the model builder. What you end up with are models that perform very well at predicting the answers that people expect to see.
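The choice-prediction game in comment 15 can be sketched in a few lines. This is a toy simulation under stated assumptions (the biased “builder” with a 70% habit and the majority-vote predictor are both invented for illustration): even a trivial model beats chance against any chooser with a habit, because it only has to exploit the bias.

```python
# Toy sketch: predicting a "model builder's" binary choices.
# The biased chooser and the majority-vote predictor are invented
# for illustration; real human choosers are biased in subtler ways.
import random

random.seed(1)

# Simulated builder with a habit: picks 1 about 70% of the time
choices = [1 if random.random() < 0.7 else 0 for _ in range(1000)]

correct = 0
ones = 0
for i, c in enumerate(choices):
    # Predict the majority choice seen so far (ties go to 1)
    prediction = 1 if ones * 2 >= i else 0
    correct += (prediction == c)
    ones += c

accuracy = correct / len(choices)  # well above 0.5 for any biased chooser
```

A perfectly random chooser would hold the predictor to 50%; the “ESP” effect appears only because the chooser is not random, which is the commenter’s point about models learning their builders.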

  16. Here is how you create a false result in machine learning models, including climate models. Say you want to achieve a CO2 sensitivity of 3C per doubling of CO2:

    What you do is take historical temperatures for the past 150 years, then splice a synthetic (artificial) data set onto them, covering the next 150 years into the future, with temperature going up at 3C per doubling of CO2.

    You then train the model over the entire 300 years, so that the weights give you a reasonable fit over the whole period. Then you remove the artificial (future) dataset. The resulting model will recreate the past accurately and continue to predict the future you have trained it to predict, giving the level of CO2 sensitivity you built into the future data.

    The key to making this approach work is to allow for lots of input parameters in your model. Each parameter has its weighting, and very small changes in the weighting of one parameter versus another can have dramatic effects over time. By introducing a very small increase in the error rate in the first 150 years of data, virtually at the level of the noise, you can custom-build just about any model prediction you want.

    The classic example of this is linear programming models, where very small machine round-off errors lead to huge errors in the final result. To solve these models you need to iterate back through them to reduce the errors and converge on the solution.

    Creating “artificial” models works this technique in reverse. By introducing very small errors into the weights of the model, you can create almost any answer you desire in the future. By keeping the errors small enough, spread over a large number of input parameters, the technique is virtually impossible to detect.

    Now you might argue that no reputable climate modeler would do this. However, this is exactly what happens when people build models, just in a more subtle fashion. They run the model numerous times, adjusting the weights until the model delivers the answer they expect. This is the model they then keep.

    The effect is that they have trained the model to predict what they expect, exactly as though they had used an artificial (future) dataset for the training in the manner I’ve laid out. The difference is that they are (perhaps) not aware of what is happening, while I’ve laid it out so you can recognize how the cheat takes place.

    • Thanks. That nicely explains the model runs I have seen and why the models are so unstable.
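The splice-and-train procedure in comment 16 can be demonstrated with a deliberately crude toy (the flat noisy “history”, the chosen warming rate, and an ordinary least-squares trend standing in for the model are all invented for illustration, and far simpler than any real climate model): the fitted parameter inherits a warming rate from the synthetic future that the history alone never contained, and it keeps projecting that rate after the synthetic data is discarded.

```python
# Toy sketch of the splice-and-train trick: the fitted trend carries
# the synthetic future's built-in rate even though the real "history"
# is trendless. All numbers are invented for illustration.
import random

random.seed(2)

def slope(points):
    """Ordinary least-squares slope of (t, y) points."""
    n = len(points)
    mt = sum(t for t, _ in points) / n
    my = sum(y for _, y in points) / n
    num = sum((t - mt) * (y - my) for t, y in points)
    den = sum((t - mt) ** 2 for t, _ in points)
    return num / den

# 150 years of flat history with observational noise
history = [(t, random.gauss(0.0, 0.2)) for t in range(150)]

# 150 years of synthetic future rising at a chosen rate
rate = 0.02  # degrees per year: the answer we want "learned"
future = [(t, rate * (t - 150)) for t in range(150, 300)]

trend_from_history = slope(history)       # ~0: no trend in real data
trained_trend = slope(history + future)   # inherits the built-in rate

# The synthetic future is now discarded, but the trained parameter
# still projects a warming the history alone never contained.
```

A one-parameter line only absorbs part of the spliced rate; the commenter’s point is that a model with many tunable weights can absorb essentially all of it while still hindcasting the flat history, which makes the splice far harder to detect.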

  17. Fascinating thread. For once, rather than being an amateur amongst professionals, I am on home territory here, not as a cat modeller but as an underwriter, albeit retired.
    My take from the inside is that Karen is broadly right, in that the models are overspecified, with numerous parameters of questionable relevance that can be, and have been, tweaked to produce accurate hindcasts but concealing systemic flaws that have consistently undermined their predictive capability. Our crude internal models still generally produce higher expected catastrophe losses than the market leaders’ because we take the view that they have underestimated the combined effects of population shift, affluence and changes in insurance coverage in place.
    However, although the models drive the amount of cover insurers buy, and pricing by a considerable number of reinsurers (disparagingly known by more cynical peers as “model jockeys”), the most significant driver of pricing is simple supply and demand. Because there is a huge need for windstorm cover in Florida and earthquake cover in California, prices in these areas are relatively higher, and in areas where the aggregate exposures are lower, competition rapidly drives prices down. However, as reinsurers pool losses globally, prices will have to go up to pay for Australia, Chile, Japan, etc., as the non-US component of their portfolio is haemorrhaging.
    The industry is hugely competitive, often insanely so, which is why it goes through violent cycles of 8 years for property insurance, where claims emerge quickly, and 16 years for liability. As a whole it is far too stupid to pull off a successful profiteering strategy. So although as buyers you may be frustrated and annoyed by the instability of pricing, there is very little excess profit made by the industry over a cycle; it is just that at certain points in the cycle, when loss activity has been low and reserves are understated, it looks that way. You can also be reassured that in spite of all the noise made by insurers about climate change generating increased cat losses (Swiss Re is a particularly bad offender), it has little or no effect on actual pricing.

  18. Karen is mostly responding to insurance executive concerns about maximum claims and increases to commercial and business premiums under the significantly upgraded RMS model introduced earlier this year, since it reports increased wind risk. Since insurance is one of the most competitive and politicized businesses, the considerations tend to be short-sighted and profit-driven.

    Since previous short-term hurricane models may have overestimated risk, but the older longer-term modelling technique is ill-suited to understanding the implications of climate change and is known to produce too-conservative estimates, Karen’s recommendation to nervous executives is to compensate by planning gradual rate increases rather than upfront hikes in the areas identified as most vulnerable. Insurers, not hurricane modelers, set the rates. Objectively, her recommendations to insurers may or may not be good advice, and may leave businesses and property owners – and the insurance sector – wide open over the longer term.

    Her interview underscores the need for business sectors to develop their own policies based on the most current science – not politics.

    Regardless, it should not be mistaken for the critical climate-related issues facing the broader public and the rest of the world, or for decision-making and management related to advancing emissions regulation, disaster reduction, or adaptation.

    • Martha, the main issue is the link between climate change and weather catastrophes. Even if you accept the basic tenets of AGW, linking this to catastrophes (on timescales of years to decades) has little to no skill, based upon the experiences of catastrophe modelers.

      • In reality, the main issue is the narrow conclusions you draw from the nature of the uncertainties and the policy-relevant issues based in the science.

    • Martha,
      Do you miss all points this badly?
      If the insurance industry is not getting good risk models, then no one is.
