by Judith Curry
So, what can we expect for the 2012 Atlantic hurricane season? All of the seasonal forecasts are coming in for a near or below normal year. But already, we have seen two named storms, before the official start of the hurricane season on June 1.
What do I think the 2012 season holds?
A number of groups publicly issue seasonal hurricane forecasts. The main customer for such forecasts is the reinsurance sector. The timing for purchases is December, March, and end of May; hence seasonal forecasters put out their forecasts on this schedule. Bill Gray and Phil Klotzbach have recently abandoned their December forecasts since they have found they have no skill (something I have been saying for years). There is little skill in March, but by the end of May and certainly in June, there is some predictability for the coming season.
So what do the latest (end of May) seasonal forecasts say?
NOAA’s Climate Prediction Center says there’s a 70 percent chance of nine to 15 named storms (with top winds of 39 mph or higher), of which four to eight will strengthen to a hurricane (with top winds of 74 mph or higher) and of those one to three will become major hurricanes (with top winds of 111 mph or higher, ranking Category 3, 4 or 5). Based on the period 1981-2010, an average season produces 12 named storms with six hurricanes, including three major hurricanes.
Gray and Klotzbach anticipate that the 2012 Atlantic basin hurricane season will have reduced activity compared with the 1981-2010 climatology. The tropical Atlantic has anomalously cooled over the past several months, and it appears that the chances of an El Niño event this summer and fall are relatively high. We anticipate a below-average probability for major hurricanes making landfall along the United States coastline and in the Caribbean. Summary: 10 named storms, 4 hurricanes, 2 major hurricanes.
Weatherbug (Earth Networks): The Atlantic hurricane season will see a total of 11 to 13 named storms form in the Atlantic Hurricane Basin. The 30-year average is about 12 storms. Six or seven of these storms could become hurricanes, and two to four are predicted to become major hurricanes with possible winds in excess of 111 mph. The long-term average is about six hurricanes and three major hurricanes. The potential for a U.S. landfall appears to be near normal for the 2012 Atlantic Hurricane Season.
Weatherbell (Bastardi, Maue et al.) predict: The total number of storms and the ACE Index in the Atlantic are expected to be down for the 2012 Atlantic Hurricane Season. However, the potential for damage from landfalling storms is high. This is because the pattern is such that development of over 50% of the storms may be within 300 miles of the US coast. Long-tracked storms that turn out to sea, as we saw last year (with the exception of Irene), are not anticipated this year. Instead, storms that develop close to the coast are expected, which may reach maximum intensity as they reach shore. This would mean a tougher forecast year, in spite of fewer storms than the past couple of years. The earlier part of the season, June-August, may favor development near and just off the Southeast coast. Storms such as Belle (1976) or Bob (1991) come to mind in this case. During the mid and latter part of the season, the Gulf may be at a higher risk from the African waves that can make it across the Atlantic and develop late. One of the analog years is 2002, which is the year of Isidore and Lili.
TSR predicts a near normal year.
WSI /Weather Channel: This preseason forecast calls for 11 named storms, 6 hurricanes and 2 major hurricanes (Category 3 or higher on the Saffir-Simpson Hurricane Wind Scale). These forecast numbers are below the long-term average from 1950-2011 (12 named storms, 7 hurricanes, 3 major hurricanes) and well below the averages for the current active era from 1995-2011 (15 named storms, 8 hurricanes, 4 major hurricanes).
If there are other seasonal forecasts that you have spotted, let me know.
JC evaluation of these forecasts: the probabilistic approach of NOAA is far preferable to the deterministic approach of Gray and Klotzbach. The emphasis on named storms is rather pointless: the threshold for naming a storm is somewhat arbitrary; tropical depressions and tropical storms rarely cause much damage; and it is the number of landfalling hurricanes that really matters in terms of socioeconomic impacts. The more qualitative approach of Weatherbell is also a good approach, and I like that they separate out the dynamics of the early part of the season from the peak part of the season. WSI’s comparison with other years during the current active phase (e.g. since 1995) makes more sense than comparing to the last 30 or 60 years.
Do I think any of these forecasts are likely to be correct? Some background on how I reason about this.
Seasonal predictability of Atlantic hurricanes
For background on this topic, see a report I wrote last year for a client in the reinsurance sector: Assessment of strategies used to project U.S. landfalling hurricanes. An excerpt (refers to figures in the document):
Interannual and multidecadal modes of climate variability have long been known to have an influence on Atlantic hurricane activity. Several studies have explored the impact of different climate indices on Atlantic hurricane activity, including Atlantic and tropical sea surface temperatures, El Nino-Southern Oscillation (ENSO), North Atlantic Oscillation (NAO), West African monsoon, Atlantic Multidecadal Oscillation (AMO), Atlantic Meridional Mode (AMM), Madden-Julian Oscillation (MJO), Quasi-Biennial Oscillation, and the solar cycle. In the analysis provided here, we focus on the ENSO, AMO and PDO because of their potential for longer-range predictability.
The El Nino Southern Oscillation (ENSO) dominates the interannual variability of North Atlantic hurricanes. Recent research by Kim, Webster, Curry (2009) highlighted the impact of the increasingly frequent Modoki El Nino on Atlantic hurricane activity. The Modoki is associated with central Pacific warming, rather than with eastern Pacific warming that characterizes the canonical El Nino.
Kim, Webster and Curry (2009) demonstrated that the two distinctly different forms of tropical Pacific Ocean warming have substantially different impacts on the frequency and tracks of North Atlantic tropical cyclones. There is a clear difference between the number of cyclones forming during EPW (El Nino) and EPC (La Nina) events, but there is almost as large a difference between the EPW (El Nino) and CPW (Modoki) events.
The location of the tropical Pacific warming (central or eastern) also affects the location of cyclogenesis and the tracks of tropical cyclones. During an EPW (El Nino), track density is reduced over most of the North Atlantic, with a concentration in the western and Caribbean regions. The tracks during a CPW (Modoki) event differ markedly from those occurring during an EPW event: track density for CPW increases across the Caribbean, the Gulf of Mexico, and the U.S. east coast, but it decreases in the central and western North Atlantic. During an EPC (La Nina) event, large increases in track density occur across the entire North Atlantic.
We are currently in the warm phase of the AMO and the cool phase of the PDO. The total number of Atlantic hurricanes has strong interannual and interdecadal variability, but the highest numbers are characterized by warm AMO and cool PDO. The AMO and PDO also provide a signal regarding the location of the landfalls:
- Atlantic coast: more frequent landfalls during warm AMO, and cool PDO
- Florida coast: more frequent landfalls during warm AMO
- Gulf coast: no strong multidecadal signal
2012 regimes and teleconnection modes
The lower than normal prediction for 2012 is primarily associated with the anticipation of an El Nino. A summary of the latest ENSO forecasts is provided here. A discussion of the various forecasts from the IRI page is:
Although most of the set of dynamical and statistical model predictions issued during late April and early May 2012 predict continuation of neutral ENSO conditions through the middle of northern summer (i.e., June-August), slightly more than half of the models predict development of El Nino conditions around the July-September season, continuing through the remainder of 2012. Still, a sizable 40-45% of the models predict a continuation of ENSO-neutral conditions throughout 2012. Most of the models predicting El Nino development are dynamical, while most of those predicting persistence of neutral conditions are statistical. It is clear that uncertainty remains regarding the ENSO state during the second half of 2012.
Individual model performance can be found here. For CFAN’s ENSO forecasts, we mainly look at ECMWF (secondarily at NCEP). While sometimes ENSO is predictable 6+ months in advance, a key issue in ENSO forecasts is the springtime predictability barrier, whereby forecasts across Mar, Apr, May have low skill. Predictability picks up for forecasts initialized later in May or Jun.
A good article assessing ENSO predictability was just published in BAMS, Skill of Real-Time Predictions of ENSO During 2002-2011: Is our skill increasing?
Bottom line ENSO forecast: many forecasters think we are headed for El Nino by late summer/autumn. I agree that the most likely scenario is a positive ENSO index, but it is not clear whether it will stay in neutral territory or make it to El Nino. Note: there is no sign of a Modoki (central Pacific warming), which is more predictable than ENSO.
With regards to AMO and PDO, the AMO index is currently moderately positive, the PDO index is currently moderately negative.
In terms of analogue years for 2012:
- weakly positive AMO: 2011, 2003, 1999, 1967, 1961, 1955, 1954, 1951, 1949
- of these, negative PDO: 2011, 1999, 1967, 1955, 1954, 1951, 1949
- El Nino years (late summer) for warm phase of the AMO: 2009, 2002, 1997, 1957, 1951
By these criteria, 1951 is the best analogue year for North Atlantic hurricanes. According to Wikipedia, in 1951 there were 10 total storms, 8 hurricanes, and 5 major hurricanes. The first hurricane of the season, Able, was the earliest major hurricane in Atlantic hurricane history. There were no U.S. landfalls that year, although major Hurricane Charlie struck Jamaica and caused considerable damage and loss of life. Apart from Able, the other hurricanes formed in the prime part of the hurricane season (Aug, Sept) and apparently originated from African easterly waves.
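The analogue-year screen above is just a successive intersection of year sets. A minimal sketch, using the year lists quoted in the post:

```python
# Year lists as quoted above (weakly positive AMO years, negative PDO
# years, and late-summer El Nino years during the warm AMO phase).
warm_amo = {2011, 2003, 1999, 1967, 1961, 1955, 1954, 1951, 1949}
negative_pdo = {2011, 1999, 1967, 1955, 1954, 1951, 1949}
el_nino_warm_amo = {2009, 2002, 1997, 1957, 1951}

# A year qualifies as an analogue only if it appears in all three lists.
analogues = warm_amo & negative_pdo & el_nino_warm_amo
print(sorted(analogues))  # [1951]
```

With these particular lists, only 1951 survives all three cuts, which is why it emerges as the best analogue.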
The positive AMO index is especially relevant for the Cape Verde type storms that form from African easterly waves during the peak of the hurricane season (Aug/Sept). The frequency of AEWs is moderately tied to the AMO index, and also to the AMM, which is the tropical expression of the AMO. Whereas the AMO index is moderately positive right now and is likely to stay that way through the summer, the AMM SST index is significantly negative. Generally, the AMM and AMO are very strongly correlated. However, there’s still plenty of time for this to change, especially given how sensitive Gulf of Guinea SSTs are at this time of year. Therefore, the AMO may not be as important this season in terms of its impact on Cape Verde type hurricanes from African easterly waves. The conflicting signs of the AMO and AMM suggest that we are likely to see average to slightly lower than normal AEW activity during the peak part of the hurricane season.
Early 2012 activity
See the Wikipedia for a summary of the season so far (two named storms in late May). Several factors have contributed to the early activity in late May (as per James Belanger, CFAN’s lead hurricane forecaster):
- Easterly wave genesis and trajectories in the southern Caribbean have been shifted northward relative to climatology this year. In fact, the first two East Pacific TCs appear to be entirely due to easterly waves generated in-situ in that basin.
- High-latitude blocking in the eastern U.S. has been unusually persistent which has allowed a series of weak upper-level cut-off lows to penetrate beneath the ridges and into the Gulf of Mexico/SE U.S.
- The interaction between the cut-off lows along with easterly waves has favored the transient-trough interaction TC pathway to occur. This process is facilitated by high-latitude ridging as it provides a region where relatively weak deep-layer shear is present.
- In the early part of the season, we may see additional activity with the formation of tropical and subtropical storms.
- Even if the number of named storms turns out to be average or below average, we may still see an average to above average number of hurricanes and major hurricanes, particularly in the main part of the hurricane season (Aug/Sept).
- The Gulf of Mexico is relatively more vulnerable than, say, in the last few years, owing to El Nino tracks (which tend to stay away from the Atlantic coast) and the warm temperatures in the Gulf.
- Assuming that we are headed for an El Nino, a key issue is when the transition occurs. If in July, then ENSO’s impact on the hurricanes will be very significant. But if in late August, then the impact will be much smaller. Note, changes in the Walker circulation and downstream wind shear impacts on the North Atlantic (that impact hurricanes) typically take on the order of one month to manifest once a moderate ENSO state is established.
I’ve already seen the first “get used to having more hurricanes because of global warming” post of the season on another site…
You are exactly right, cirby
The AGW debate isn’t about climate.
It is about restoring integrity to government science, constitutional rights of citizens, and constitutional limits on government.
It is about tearing down the invisible wall of deceit that world leaders built in 1945-46 to save the world.
The invisible wall of deceit that instead enslaved us.
With kind regards,
Oliver K. Manuel
Former NASA Principal
Investigator for Apollo
Of course, without any apparent end in sight for global temperature increase, I think the climate scientists are onto something.
lolwot’s famous words:
…at least until 1998:
Here’s what 3 of the “climate scientists” (and “Team” members) have to say:
Kevin Trenberth has referred to this unexplained “lack of warming” (despite continued increase in human GHGs) as a “travesty”.
Phil Jones has referred to it as “no statistically significant warming”
But we have to wait 3 more years before this slight cooling trend is “statistically significant”, according to Ben Santer.
You only get flat since 1998 because you’ve started your trend 0.2C above the end of the 1970-1998 global warming trend line:
If global warming really stopped in 1998 temperature should have followed the blue line, not the pink line:
Temperature followed the pink line, so global warming did not stop in 1998:
Yes, “climate scientists are onto something.”
A gravy train of federal grant funds for perpetuating misinformation.
Since the invisible wall of deceit was built to save the world from destruction by “nuclear fires” in 1945-46, astronomy, astrophysics, climatology, cosmology, geology, meteoritics, nuclear, particle, planetary, solar and space sciences were compromised before Climategate emails and documents exposed the tip of this cancerous growth on government science in November of 2009.
World leaders are not to blame for trying to save the world. Their intentions were noble in building the Berlin wall, the Iron Curtain, and the invisible wall of deceit in government science. But their mode of implementation enslaved, rather than ennobled, citizens.
The wall dividing Berlin (1961-1989) is gone.
The Iron Curtain fell and the USSR dissolved in 1991.
The invisible wall of deceit that grew out-of-sight on governments after 1945-46 is now coming down!
With kind regards,
Oliver K. Manuel
Former NASA Principal
Investigator for Apollo
“Since the invisible wall of deceit was built to save the world from destruction by “nuclear fires” in 1945-46”
If you claim global warming stopped in 1998 how do you explain the UAH satellite warming trend from 1979-present being *greater* than the warming trend from 1979-1998?
If the rate of warming since 1979 has increased in the last 14 years it cannot be true that warming stopped in 1998.
A monkey throwing darts can pick numbers that are just as reliable. Why we pay dollars when we can be paying bananas for the same thing is beyond me.
You should get used to the idea that what happens in the future will be bounded by the same limits of what has happened during the past ten thousand years, again and again and again and again. There were times of more and less hurricanes. This will continue. There will be times of more and less hurricanes. There were times of a little warmer than now and of a lot cooler than now. That will happen again.
and again and again and again……….
“Tonight’s forecast: Dark!…With scattered light toward morning.” –George Carlin.
I think this is true for most of the ‘imaginative’ predictions of future chaos and calamity. It is very easy to imagine ‘new’ and ‘extraordinary’ rather than ‘a bit more than usual’ or ‘a bit less than usual’. With usual being only our most recent experience.
Judith – I like the ‘but be prepared to be surprised’ caveat.
It’s the quintessential modifier for those talking about predictions. It even covers the ‘nothing happened’ scenario given that most people are predicting ‘something happening’.
It is much more subtle, though, than ‘expect weather’ or ‘we actually don’t know’.
Until a black swan event comes along, which they are apt to do. But in looking at the bigger picture, one might ask what might be a bit different during this interglacial period that would have it stand out from all the other interglacials over the past several million years. Anything that might alter the “business as usual” periodicity of climate fluctuations? Any unusual forcing that did not exist during the past dozen or so interglacials that exists now?
What, like something wonderful enough to prevent us sliding back into the horrors of another glacial period? Something that might ensure warmth and conditions conducive for life to teem and thrive for thousands of years? The most fortuitous but wonderful thing ever achieved by human activity?
Easiest geoengineering is to add potent GHGs to get the world to warm should we slide into an ice age. Not so easy to do the reverse.
Cover the ice cap with black carbon. We want to use something we know works.
Anthropogenic warming plus nuclear winter equals anthropogenic just right. It’s a LOT easier to cool the planet than to warm it.
Not only is anthrogeoengineering easier for cooling than warming, Nature is also more likely to cool the globe, near, middle and long term, than warm it. So erring on the cooling side, by humanity, is much more fraught with peril than erring on the warming side, which is a blessing, and plant food to boot.
A swift boot for Humin Supreme. That growth’s not the cornucopia of global warming and enhanced carbon atmosphere, that’s the catastrophe. Or apostrophe. Or sumpin’.
Note trend in “Historical Tropical Cyclone Activity Graphics”
Maue, Dr. Ryan N. “Historical Tropical Cyclone Activity Graphics.” Global Tropical Cyclone Activity Update, May 28, 2012. http://policlimate.com/tropical/index.html
Global Hurricane Frequency — Dr. Ryan N. Maue — Updated May 1, 2012 — 12 month running sums
“Figure: Global Hurricane Frequency (all & major) — 12-month running sums. The top time series is the number of global tropical cyclones that reached at least hurricane-force (maximum lifetime wind speed exceeds 64-knots). The bottom time series is the number of global tropical cyclones that reached major hurricane strength (96-knots+). Adapted from Maue (2011) GRL.”
That’s the problem with climate science. Focus should be on understanding the past; instead it’s on what the clients want, namely a reasoned forecast so they can justify their allocations and fees.
Analysis of the past is only seen as instrumental to forecasting.
I expected biology, I’ve found drug research.
As a point of interest, I believe that the last major hurricane to make landfall on the USA was Katrina, in August 2005. This is the longest time period, since records began, between such landfalls. It will be interesting to see how long this record period turns out to be.
I know it was “only” a cat 2 but I was in Houston when Ivan nailed us and it seemed pretty major to me. If you’ve been through one you never want it to happen again.
I mean Ike. See I’m trying to block it out of my memory. :)
I never heard a Hurricane Andrew survivor have a brain fart and call it Anthony or Arliss…
Brain farts R me.
Most everything that Crip says is undocumented and incorrect.
Yes, it was a long time, like within a month that another Category 5 storm hit the USA.
These are all anecdotes anyways, but the issue is that Crip says anything that comes into his head.
Sorry, my memory was wrong. Wilma was the last major hurricane to make landfall on the USA.
From the evidence of historical records, there appear to have been much greater periods of storminess during the LIA than during the modern slightly warmish period we are experiencing today. Presumably that is because greater energy differences were created during the extreme warm/cold periods of the LIA than by the moderate changes we experience today.
I have never understood why this is not acknowledged more often. A heat engine’s power comes not from its absolute temperature, but from the temperature difference between the heat source and the heat sink.
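The comment’s point is essentially the Carnot picture: the maximum fraction of heat a reversible engine can convert to work depends on the contrast between source and sink temperatures, not on either temperature alone. A toy illustration (the temperatures are made-up round numbers for clarity, not real hurricane values):

```python
def carnot_efficiency(t_hot_k, t_cold_k):
    """Maximum fraction of heat convertible to work between two
    reservoirs at the given absolute temperatures (kelvin)."""
    return 1.0 - t_cold_k / t_hot_k

# Same warm reservoir, different cold reservoirs: the larger the
# temperature contrast, the more work the engine can extract.
print(round(carnot_efficiency(300.0, 200.0), 3))  # 0.333
print(round(carnot_efficiency(300.0, 250.0), 3))  # 0.167
```

Doubling the temperature gap here doubles the extractable fraction, which is the sense in which storminess tracks contrasts rather than absolute warmth.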
Some months ago I came across a simple method (based on the current Arctic data) to estimate the probability of the N. Atlantic’s hurricane activity more than a decade in advance:
I predict a major increase in the number and strength of landfalling hurricanes this year in the US.
I predict a major decrease in the number and strength of landfalling hurricanes this year in the US.
I predict the number and strength of landfalling hurricanes this year in the US will stay about the same.
Prepare to be surprised! :)
What a crock . . . every time we have six puffs of wind and a drop in the barometer, the authorities give it a name and the media goes into “Storm Watch” hysteria and fear mongering.
No names until it hits hurricane levels.
“But be prepared to be surprised.”
This should be the subtitle of every climate science article.
So it’s been about 7 years since a nasty landfall? The longer it goes without, the greater the odds are of one hitting. I’m going to go out on a limb here and say there might be some bad storms this season. Then again, there might not :) Who was it that said hurricanes were going to be way worse and WAY stronger about 7 years ago? I forget the guy :)
We’re not talking about earthquakes.
The 7 years thing is misleading. Hurricane Ike (2008) was the 2nd costliest storm to strike the U.S. While it was Cat 2 at landfall, it had an extremely large horizontal extent and large storm surge.
Yes, a wicked surge that started earlier than anyone expected. I remember a lot of people got stranded on the Bolivar Peninsula a good 24 hours before landfall.
Gustav and Irene were also major hurricanes, though not at landfall, but I would call anything causing over a billion dollars in damage and some deaths nasty.
I find the distinction between minor and major hurricanes to be rather arbitrary, and as Irene demonstrated, even a tropical storm can be quite devastating.
Rules are rules. 7 years and counting…
Land falls in the way of the cyclonic whirlwind. Are lessons learned from New Orleans and from the subsequent Houston mess of an evacuation? Are the sireeeens all hooked up to reliable grids or are they powered by solar or wind?
Landfalls, and insurance companies stand under the Falling Rocks signs and poke upward.
It’s not misleading in the context that I phrased it. Were not the storms supposed to become “stronger and more frequent” with the rise in CO2?
That was the point. If you are going to position yourself in a way that says more catastrophes (had to use spell check on that one :)) will happen because of CO2 increases and then they don’t happen, just say “oh, well” and go on your way. In the meantime, thanks for the money you gave me to prevent it from happening. Dr. Curry, you cite one or two storms in the past 7 years. I don’t think the 7 years thing is misleading at all. What I would like to know is why you folks that have clout are not mentioning the things that are not happening that were supposed to happen because of the stunning increases in the awful gas.
The hypothesized link between hurricanes and global warming is ONLY associated with intensity, in particular the percentage of cat 4,5 hurricanes; this is increasing (see my previous post http://judithcurry.com/2010/09/13/hurricanes-and-global-warming-5-years-post-katrina/). Landfalling hurricanes are a small fraction of global hurricanes. Also, ACE (which has been decreasing for the past 5 years) is primarily dominated by the frequency and duration of hurricanes (not intensity); frequency in regions outside the Atlantic has been decreasing, largely associated with the circulation patterns in the Pacific.
There have been various hypothesized links. Perhaps the cat 4,5 intensity hypothesis is the only one left standing, but it is by no means confirmed.
The increase (since 1980) shows up in every data set. There is a new reanalyzed data set coming out soon that also shows the cat 4,5 increase. The increase is there, attributing the increase to AGW vs natural variability is the controversial part.
How does the AGW explanation reconcile with the paleo record from tidal marshes, which shows that higher category hurricanes were more frequent long ago? And at what point is it weather, and at what point is it climate?
And how much of what is called an increase in 4 & 5 today is based on having more data about what goes on in storms when they are far at sea?
How can the folks who project future cyclones know how many there were before satellites? Is that why they use “likely” a lot? No one really knows how many hurricanes have formed if they didn’t strike land, correct? Even then, how big were they? How can you say with confidence that AGW is causing an increase when you can’t tell how many or how strong storms have been in the past? This century could be a lull for all we really know.
Some of the other links I’ve heard discussed, and so far as I know not yet falsified:
1. Starting point of hurricanes will become more varied over time as the ocean warms. The “creche of storms” will expand from its original small range in each tropical basin to cover larger and larger territory.
2. Time of hurricanes will become more varied over time as the ocean warms. The “hurricane season” will expand from June-November to start earlier and end later.
3. Predictability will fall, as seasonal patterns break down due to increasing complexity of hurricane climate systems.
4. Path eccentricity will increase, as hurricanes seek course of least resistance in an increasingly complex climate neighborhood.
5. Mean path length will increase.
Anyone know if there’s anything to any of these?
Kent Draper | May 29, 2012 at 8:52 pm |
How can the folks who project future cyclones know how many there were before satellites? Is that why they use “likely” a lot? No one really knows how many hurricanes have been formed if they didn’t strike land, correct?
I believe it’s called a Ship’s Weather Log, Kent. Though tonyb would be better able to tell us something about this.
Bart R, I can see ships back to the 1500’s noting the weather. I can’t see them being able to determine the size and strength of the storms or even if they were hurricanes. Before that, I don’t believe there was a lot of travel across the Atlantic or Pacific and am not aware of any ancient Indian records of hurricanes. Back then, folks pretty much kept to the shore line. So, how would the old folks know?
Kent Draper | May 30, 2012 at 12:49 am |
I make no claims about pre-1500 hurricane records.
Pre-satellite? I expect if someone paid the various maritime record-keepers to scan through their logs, quite a bit about the frequency and location of hurricanes at sea could be derived.
I don’t pretend to know how good the observations from ships were of weather phenomena, but I expect the ships that survived ’em would be pretty well able to tell a hurricane when they found one. ;)
Though I agree, we wouldn’t get the kind of accuracy possible with satellites. Even though all three satellite records appear frequently at odds with one another in some minor way.
Ike, in September 2008, was a strong cat. 2 and was a very significant storm. Its damage field was basically the entire metro Houston area, with something approaching 100% power loss for the entire region. The storm impacted nearly 4 million people. The storm surge was the most significant in the area since at least Carla in 1961. Winds damaged literally tens of thousands of homes, from the western edges of the area to the farthest east.
I would question the usefulness of a metric that does not have enough perspective to notice an Ike-level storm.
Even if one includes Ike, the long-term storm trend is still flat, however. The only metric that has changed is the amount of insurable structures vulnerable to storms.
deleting two of my comments and leaving the first destroys my point and delivers a completely different message than is conveyed without the deletions. If you object to the original message please delete the remaining comment. As it stands now, this is clearly not the message I meant to convey.
Maybe you should have just repeated it twice in the same post, then added “ad infinitum.”
I just thought you hit “post comment” too many times.
I obviously second your comment, as my reply also makes almost no sense.
Dr Curry, I think you inadvertently failed to spot that jeez’s 3 comments were all different…
Perhaps you should delete the third one too, then my reply, then jeez’s legitimate gripe and finally this comment too.
Then we can start again :)
Thank you – much obliged :)
“The tropical Atlantic has anomalously cooled over the past several months”
what difference does that make?
Global warming alarmists must hope for no hurricanes at all because only that can be an example of ‘climate weirding.’
The market will fluctuate, er, I mean, Maue’s ACE in the hole.
I subscribe to Weatherbell, and subscribed to AccuWeather before that until Bastardi left. They’ve been calling for a below normal season for months now, with an increased chance of in-close development, which is already showing itself to be a good prediction.
I’m still waiting for Bill McKibben to come out with one of his “Beryl has a middle name, and the name is global warming.”
Doctor Curry I noticed that your report (for a RE client) said “Gulf coast: no strong multidecadal signal.” What’s with the Gulf? Is there something special about the Gulf that gives it some greater independence from larger oscillations and such?
Most likely a number of competing factors
Well that clarifies everything.
Speaking of competing factions (er factors), did you know that the big biennial decision, risk and uncertainty conference is in Atlanta this year:
FUR was first established by Allais and Hagen 30 years ago. Might be of interest to you. Maybe. It almost always takes place in Europe, but this is a rare Stateside FUR. In Atlanta.
I do not understand what this NOAA prediction means: “there’s a 70 percent chance of nine to 15 named storms.” How could one falsify such a strange prediction? Probability run wild.
And of what possible use is such a statement? Even assuming it was “correct.”
It’s a pick up line. Say that in a bar, and you’ll have to beat them off with a stick.
Several that have never been exposed to concepts such as the binomial or multinomoal distribution.
What result would make that prediction wrong, dopey? It’s utterly useless.
Haven’t heard tell too much about that second one, WHT.
Multi no mas, gather ye rosestones while May’s.
Lots of probability distributions are set up based on combinatorics, i.e. how many ways can combinations be created and what are the resultant probabilities. The Poisson distribution is another famous example of that approach. Someone can easily come up with probabilities of between N and M numbers of hurricanes in a season based on this kind of statistical bookkeeping.
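As a sketch of that bookkeeping, here is the Poisson version in Python, using the climatological mean of 12 named storms quoted in the post (purely illustrative; NOAA's actual forecast model is more involved):

```python
import math

def poisson_pmf(k, lam):
    """P(exactly k storms) when the long-run mean is lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def interval_prob(lam, lo, hi):
    """P(lo <= count <= hi): the 'between N and M storms' bookkeeping."""
    return sum(poisson_pmf(k, lam) for k in range(lo, hi + 1))

# With a mean of 12, "9 to 15 named storms" covers about 69% of the
# probability mass -- close to NOAA's quoted 70%.
print(round(interval_prob(12, 9, 15), 3))  # 0.689
```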
It’s not my problem that you guys laugh at this sort of thing.
Sorry guys, but Webbie and I are on the same page on this one. If you sneer at “multinomial distribution,” we can fairly surmise that you haven’t made it to square one, stats-wise. But I am happy to explain whatever you’d like explained. At least until you turn ugly. What would you like to know about comparing probabilistic forecasts?
I’m not sure if people are sneering at the idea of a multinomial distribution, or just this particular one. I know I have no issue with the concept itself, but I do sneer at many examples (as is true in anything).
Personally, I wonder why they chose 70% as their cutoff. What would their results have said if they required 80%? What about 50%? I don’t approve of only looking at variations in one value like this. That is strange.
Of course, there may be some reason for that particular value, or other values may have been discussed, but not quoted. If so, this prediction would be much less strange.
Who cares what you call it, whether you can do it, or whether WHT can spell it?
The question was of what possible practical use is such a pronouncement?
No problem with “multinomial” distribution, but do not understand your reference to multinomoal distribution.
Just pointed out your typo, WHT – don’t get all huffy about it.
Can you say “Brier Score?” http://en.wikipedia.org/wiki/Brier_score
This is a way of comparing probabilistic predictions over many predictions, along the two dimensions of calibration and resolution, with the base rate subtracted out (so that genuine skill above predictions of the base rate can be seen).
We use it widely across disciplines that look at stated beliefs, probabilistic judgments and predictions, and so on.
It is a comparative method, that is, not concerned with falsification per se, but rather comparison of the relative success of different models, judges and so on.
Interestingly enough a lot of this literature first appeared in meteorology, which has a long history of probabilistic forecasts of different events. But it has traveled into psych and econ. I have a student finishing a Master’s Thesis in which he uses Brier Scores to compare the forecasting ability of several models that try to predict turning points in asset market bubbles.
Clarification… it is the decompositions of the Brier Score into parts (e.g. reliability, resolution and uncertainty, the first decomposition the Wikipedia page talks about, which is particularly helpful) that allow one to do the things I was talking about. The Brier Score itself, without a decomposition, is not terribly interesting. Also note that there are mistakes in the Wikipedia discussion (e.g. asserting that the Brier Score is only useful for continuous dependent measures, which as you can see is marked “dubious” in the Wikipedia article).
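For the curious, here is a minimal sketch of the Brier score and the reliability/resolution/uncertainty decomposition described above, binning by distinct forecast values so the identity holds exactly (the forecasts and outcomes are made up):

```python
from collections import defaultdict

def brier_decomposition(forecasts, outcomes):
    """Murphy decomposition BS = REL - RES + UNC, binning by distinct
    forecast values (with that binning the identity is exact)."""
    n = len(forecasts)
    bs = sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / n
    base = sum(outcomes) / n                      # base rate of the event
    bins = defaultdict(list)
    for f, o in zip(forecasts, outcomes):
        bins[f].append(o)
    rel = sum(len(v) * (f - sum(v) / len(v)) ** 2 for f, v in bins.items()) / n
    res = sum(len(v) * (sum(v) / len(v) - base) ** 2 for v in bins.values()) / n
    unc = base * (1 - base)
    return bs, rel, res, unc

# Made-up forecasts for six binary events and what actually happened.
bs, rel, res, unc = brier_decomposition([0.9, 0.9, 0.1, 0.1, 0.5, 0.5],
                                        [1, 1, 0, 1, 0, 1])
print(round(bs, 4), round(rel - res + unc, 4))  # the two numbers agree
```

Low reliability and high resolution are what you want; the uncertainty term is fixed by the base rate, which is why skill is judged relative to it.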
Come now, even Voldemort can’t be scum and dregs at the same time.
Too deep for me Kim.
“I do not understand what this NOAA prediction means: “there’s a 70 percent chance of nine to 15 named storms.” How could one falsify such a strange prediction? Probability run wild.”
it’s an utterly normal prediction. The mean is 12, the std deviation is roughly 3. There is a 95% chance that the real value will fall between 6 and 18, and a 99% chance it will fall between 3 and 21. I suppose that if the record low (4 named storms) or the record high (28 named storms) was achieved one might “falsify” the prediction. But what is really at play here is that the notion of falsifiability doesn’t work to describe how we in fact do science. No theory is ever “falsified.”
Falsifying implies that some notion of “truth value” is at stake. But science can never be true, never be settled. That includes ‘settling’ questions about the falsity of a theory. Theories are not falsified. They are abandoned because they don’t work better than an alternative theory, or they are amended, or their failure is ignored as a fluke. Quite simply, if one observed 0 storms or 30 storms, one would go back to the model and ask what we can change in the model to improve the prediction and make that event fall within the range of prediction.
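The arithmetic here is easy to check against a normal distribution with the assumed round numbers of mean 12 and standard deviation 3 (a sketch; the real storm-count distribution is discrete and skewed):

```python
import math

def normal_interval(mean, sd, lo, hi):
    """P(lo < X < hi) for X ~ Normal(mean, sd), via the error function."""
    def cdf(x):
        return 0.5 * (1.0 + math.erf((x - mean) / (sd * math.sqrt(2.0))))
    return cdf(hi) - cdf(lo)

print(round(normal_interval(12, 3, 9, 15), 3))  # 0.683 (one sigma, ~the quoted 70%)
print(round(normal_interval(12, 3, 6, 18), 3))  # 0.954 (two sigma)
print(round(normal_interval(12, 3, 3, 21), 3))  # 0.997 (three sigma)
```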
Steven Mosher, do you actually know this to be true? If their values aren’t evenly distributed (such as having a longer tail on one end), they could have that range of values without it being just a matter of adding standard deviations. There is nothing in that quote which indicates your interpretation is right as opposed to it being something like 11 (-2, +4).
I don’t see why they’d have written it the way they did if they meant it the way you take it. Why would they be unnecessarily vague? Why wouldn’t they just write 12 +/- 3?
When somebody forgoes a simple and explicit option for something more vague, I tend to assume they have a reason. In this case, I assume the reason is their values aren’t evenly distributed. That may not be the case, but it is the implication I get.
As a side note, the same problem could exist even if their data is evenly distributed. There is nothing which says their 70% range is centered. You could get a different range of values for 70% by shifting the window used along the x-axis (and adjusting its size).
Point of modeling. If the models are Poisson regressions, and the mean is 12, then the 70% confidence interval would be about [10,14]… narrower than the [9,15] reported. Could be a Poisson regression with an overdispersion parameter. In any case the Poisson is a model for a count of events per unit time in some space, so it wouldn’t be surprising if that was the statistical model, especially with an overdispersion parameter.
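If the counts are overdispersed relative to a plain Poisson, a negative binomial with the same mean but larger variance is one standard choice. A minimal sketch, assuming an arbitrary overdispersion factor of 1.5 (variance 18 instead of 12), which widens the spread and shrinks the probability inside any fixed interval:

```python
import math

def negbin_pmf(k, r, p):
    """Negative binomial pmf (k failures before the r-th success);
    mean = r*(1-p)/p, variance = r*(1-p)/p**2, so variance > mean."""
    return math.comb(k + r - 1, k) * p ** r * (1.0 - p) ** k

# Mean 12 with variance inflated to 18 (overdispersion factor 1.5,
# an arbitrary choice for illustration): solving gives r = 24, p = 2/3.
prob = sum(negbin_pmf(k, 24, 2.0 / 3.0) for k in range(9, 16))
print(round(prob, 3))  # noticeably less mass on 9..15 than a plain Poisson puts there
```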
Mosher: Statistical issues aside, I think that science is about finding the truth. Apparently you do not. The fact that any belief can be found to be false does not mean that no belief is true. Descartes pointed this out 400 years ago. Apparently some still have not got the message.
Getting back to the statistics, I take it that you think they are predicting a probability distribution. Since it is consistent with almost any outcome, how is it useful? How does their 70% differ from a forecast of 60% or 80%? Or 10 to 16 named storms? The precision is an illusion. Perhaps they have some probabilistic decision algorithm in mind, but they do not say what it is, so this is worse than worthless. Taxpayers pay for it.
It is not straightforward to verify probabilistic forecasts. There are a number of different skill scores used for probabilistic forecasts that don’t make sense for an individual forecast but for a population of forecasts (e.g. a collection of historical forecasts). Probabilistic forecasts are useful in decision making strategies that include hedging options.
“But what is really at play here is that the notion of falsifiability doesn’t work to describe how we in fact do science. No theory is ever “falsified.”
Falsifying implies that some notion of “truth value” is at stake. But science can never be true, never be settled. That includes ‘settling’ questions about the falsity of a theory.”
I am currently testing a drug that I designed. The drug is designed as a pro-drug: blood-brain-barrier permeable, lipophilic, neutral species. It will diffuse into a gliomal cell and there will meet the upregulated monoamine oxidase B. This enzyme will convert it into a cationic species. This conversion means that the mature form becomes concentrated within the mitochondria, generally by a factor of 1,000 with respect to the cytosol. There it will attack the mitochondrial DNA and also the ribosomal RNA, using a nitrogen mustard warhead.
I have nude mice that have an immature immune system and have been given a human primary gliomal tumor. I have 10 mice injected with saline and 9 injected with saline and drug.
It is day 7 today, yesterday the tumor volume in the drug treated animals was 42% that of the controls.
In one week the mice get their second chemo treatment, and 14 days later their last. A week after their last treatment we will sacrifice and examine the cancer load in all tissues. We will also have the death curves of the controls and treated animals.
The tested hypothesis is that the drug treatment regime will have no difference on outcome measured by changes in tumor volume, body mass, animal death rate, infiltration of the organs by metastasizing cells and in analysis of the mitochondrial function of the 19 tumor masses.
Obviously Mosh, my work and that of other biomedical workers is pointless as you point out ‘No theory is ever “falsified”’. So it makes no difference how we design our experiments or perform our statistical analysis of the various outcomes. Development of chemotherapy treatments is obviously crap and a waste of time, as any data we get is meaningless.
I join those who wonder what the problem is. It’s obvious that NOAA is essentially just giving the one-standard-deviation range.
What would be a better way of expressing the result? The real distribution is discrete and asymmetric and may have a fat tail, but nothing in that makes their way worse than any other choice of similar level of detail.
I take it then that this so-called prediction is some sort of statistical projection based solely on the distribution of past events. No physics or meteorology is involved, just a statistical model, right? They should say that, because this is not weather forecasting as normally understood. Any idea what history they are projecting, or how? Where is the statistical model?
The pickup line goes:
There’s a 70% chance that you and I will have breakfast together…
Impossible to falsify.
You speak to Gulf exposure, but your region, the Georgia/East Florida/South Carolina area is receiving the second storm of the season and I get the impression the mountainous parts of N.Carolina/S. Carolina and northern Georgia are getting heaps of rain from this second event.
I would not worry too much about the reinsurance industry. According to Pielke Jr. and others, they have been overcharging for the risk by a factor of something like 500% for a number of years. There is a conundrum about storm losses, however: more property in storm areas means more insurance premium. Why did the insurance industry find itself under-reserved for the reality of storms? Where did the imbalance come from?
I’d read somewhere that hurricane season forecasts, though right about 56% of the time since 2000, when wrong understated the mark four times more often than they overstated it.
Is there a bias in forecasting that accounts for the lack of skill?
Though, really, 4:1 isn’t statistically distinguishable from 1:1 when the total is only 7 right, 4 low, 1 high.
Does anyone have the actual figures overall for hurricane predictors as a group? I imagine at as many as three forecasts a year (December, March, May), there must be someone who’s bothered to look at their skill in depth.
this sounds like it might be basic expected statistics to me. Suppose you have an observable set of predictors X, but hurricanes depend on a broader set of predictors Z, some of which are not observed. That is, observed X is a subset of relevant Z. Then if H is the count of hurricanes, it’s a statistical fact that Var(H|X) > Var(H|Z), that is, the conditional variance of H given only the observed predictors X exceeds the conditional variance of H given the full set of predictors Z.
Another way of saying this is: Whenever we don’t have a complete set of predictors, optimal prediction implies regression to the mean. So predictions are regressed toward the unconditional mean, and so the outcomes are always more extreme (relative to the mean) than are the predictions.
Or am I misunderstanding what you think you read?
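The regression-to-the-mean point can be demonstrated with a toy simulation; the data-generating process H = X + U + noise below is purely hypothetical, with U standing in for the unobserved predictors:

```python
import random

random.seed(1)

# Hypothetical process: the outcome depends on an observed predictor X
# and an unobserved one U (both standard normal), plus noise.
N = 100_000
xs = [random.gauss(0, 1) for _ in range(N)]
us = [random.gauss(0, 1) for _ in range(N)]
outcomes = [x + u + random.gauss(0, 0.5) for x, u in zip(xs, us)]

# The best prediction from X alone is E[H|X] = X: regressed toward the mean.
predictions = xs

def var(v):
    m = sum(v) / len(v)
    return sum((a - m) ** 2 for a in v) / len(v)

print(round(var(predictions), 2))  # about 1.0
print(round(var(outcomes), 2))     # about 2.25: outcomes swing wider than predictions
```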
NW | May 29, 2012 at 4:25 am |
Someone who gets my jokes!
To continue from Brandon’s comment at May 29, 2012 at 1:00 am.
Brandon (and anyone else who really cares about this),
In what follows, I use the word “judge” as a noun, really to describe any person or model or theory that assigns a probability P to an event E.
Trying to question a judge’s assignment of a specific probability to a specific event is pointless. But what we can do is this.
Let’s create a number of boxes to contain the future events that a judge assigned various probabilities P, in intervals containing P.
Box 1 will contain all the future events for which the judge’s assigned probability p falls in the interval [0,0.1]. Call these the probabilities and events P1j and E1j.
Box 2 will contain all the future events for which the judge’s assigned probability p falls in the interval (0.1,0.2]. Call these the probabilities and events P2j and E2j.
Box 3 will contain all the future events for which the judge’s assigned probability p falls in the interval (0.2,0.3]. Call these the probabilities and events P3j and E3j.
And so on, until Box 10, which will contain all the future events for which the judge’s assigned probability p falls in the interval (0.9,1]. Call these the probabilities and events P10j and E10j.
In general, we have events Eij, with assigned probabilities Pij, in box j. It is going to be important to keep in mind that the number of events assigned to each box might differ: Call this number Mj for each box.
Suppose we average the probabilities assigned to the events in box j. This can be denoted Pj.
Now, we wait to see whether the events Eij happen or not. When they do, we assign an indicator Sij=1; when they don’t, we assign an indicator Sij = 0.
The proportion Bj = sum(Sij)/Mj is the actual proportion of times that events in box j occurred.
So we can compare Pj to Bj, for each box j. Suppose it happened that Pj = Bj for all j (a stroke of incredible luck, or of forecasting ability; but stand by): Then the lingo says that the judge is perfectly “calibrated,” or the judge’s calibration is perfect.
Of course calibration isn’t everything. Suppose I say “Every day between June 1 and Aug 31, the probability of rain over some specific square mile of Harris County (Houston) is 0.3.” I am going to be almost perfectly calibrated. But there is no real information in this forecast: It is just the base rate summer probability of rain over a random square mile of that area. The forecast is perfectly calibrated but it has no “resolution:” It does nothing to provide differential information across days.
Here is an opposite example of why calibration alone is nothing. Suppose you know a movie critic who you always disagree with. This movie critic has awful calibration. Yet the critic is a perfect source of information for you. You go to every movie she hates and avoid every movie she likes. We would say (in the lingo of this area) that the critic is perfectly “resolved” or has great “resolution” even though her calibration stinks.
What this points up is that good probabilistic forecasting is at least two-dimensional. A good judge (or model or theory) ought to be both well-calibrated and well-resolved. So if calibration is high but most of the Mj are zero (the judge puts events into very few boxes), the judge is not much more useful than the long-run average proportion of times the event occurs (the base rate). Ideally, we want resolution (fairly equal Mj across the boxes) and calibration (Bj close to Pj most of the time).
Hope this helps.
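The box-and-compare procedure described above can be sketched directly in Python (the probabilities and outcomes below are toy data, not real forecasts):

```python
import math
from collections import defaultdict

def box_index(p):
    """Box 1 is [0, 0.1]; box i is ((i-1)/10, i/10] for i = 2..10."""
    return max(1, math.ceil(p * 10))

def calibration_table(probs, happened):
    """For each box: count M, mean assigned probability P, and the
    observed proportion B of events that actually occurred."""
    boxes = defaultdict(list)
    for p, s in zip(probs, happened):
        boxes[box_index(p)].append((p, s))
    table = {}
    for i, pairs in boxes.items():
        m = len(pairs)
        p_bar = sum(p for p, _ in pairs) / m
        b = sum(s for _, s in pairs) / m
        table[i] = (m, p_bar, b)
    return table

# Toy judge: five forecast probabilities and whether each event happened.
for i, row in sorted(calibration_table([0.05, 0.15, 0.15, 0.95, 0.95],
                                       [0, 0, 1, 1, 1]).items()):
    print(i, row)
```

Calibration is then judged by how close each box's mean probability sits to its observed frequency, and resolution by how spread out the occupied boxes are.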
And, of course, I screwed up the subscripts. Box j, Mj, Bj and Pj should all be Box i, Mi, Bi and Pi. Sorry… too much Jim Beam.
All approaches to seasonal forecasting are probabilistic in reality – even if expressed only as likely or unlikely. The statistics are relatively simple amounting only to intensity and frequency. More critical to prediction is understanding underlying atmospheric and oceanic patterns driving changes in frequency and intensity – and applying this skill in judging probability. Statistics are far from all that is needed to make predictions.
‘But more than just better data and technology were needed before a seasonal hurricane outlook could be made. Fundamental breakthroughs in our understanding of the dominant climate factors influencing seasonal hurricane activity were also needed…
Three main statistical techniques are presently in use. One approach first utilizes statistical regression equations to predict the likely strength of key atmospheric and oceanic anomalies. This is done by either directly predicting their strength, or by first predicting the dominant climate patterns that strongly control their strength. A second set of regression equations is then used to predict the likely seasonal activity associated with the expected anomalies.
A second and complementary statistical approach utilizes a climate-based binning technique, wherein the historical distribution of activity associated with the predicted climate conditions is isolated. This climate-based analog approach allows the forecaster to focus only on those seasons having similar climate conditions, and differs from the pure regression equations that are often derived using all seasons and therefore all sets of climate conditions.
A third statistical approach developed by NOAA for use in their 2008 forecasts utilizes regression equations that relate coupled ocean-atmosphere dynamical climate model forecasts of key atmospheric and oceanic anomalies to the observed seasonal activity. In this way, dynamical predictions can be utilized to forecast the upcoming seasonal activity without a direct count of the exact number of named storms and hurricanes produced by the model. A second dynamical approach is to directly count the number of named storms a given climate model predicts (e.g., Vitart et al. 1997, Camargo et al. 2005).’ http://www.ldeo.columbia.edu/~suzana/papers/Global_Guide_Seasonal_Forecast_Chapter.pdf
I don’t believe the ENSO forecasts BTW. These are not getting any better and the reason has to do with heuristics – the past decade or so since the 1998/2001 climate shift being fundamentally different in nature to earlier decades.
‘Thirty-year hindcasts for the 1981–2010 period yielded average correlation skills of 0.65 at 6-month lead time, but the real-time predictions for 2002–11 produced only 0.42. The fact that the recent predictions were made in real time, in contrast to the partially hindcast design in the earlier studies, introduces another difference with consequences difficult to quantify but more likely to decrease than increase the recent performance measures.’
Good old boy experience (courtesy of Claus Wolter) seems more likely.
‘Stay tuned for the next update by mid-June 2012 (this is in flux, hopefully sooner) to see where the MEI will be heading next. La Niña has gone through a second-winter stage similar to 2008-09, and consistent with expectations formulated right here in late 2010: big La Niña events have a strong tendency to re-emerge after ‘taking time off’ during northern hemispheric summer. Based on the evolution of recent atmosphere-ocean conditions, the MEI has obviously already reached ENSO-neutral conditions, with little chance of dropping back into La Niña conditions any time soon. As stated two months ago, there is a distinct possibility that we could see a switch to El Niño during the next few months. However, all multi-year La Niña events of the last 13 years have shown a tendency to weaken or even disappear during this time of year (as in 2000, 2001, 2008, 2009, and 2011), with only 2009 showing a clear-cut switch to El Niño by the summer of that year.
As noted before, all of the ten two-year La Niña events between 1900 and 2009 either continued as a La Niña event for a third year (four out of ten), or switched to El Niño (six out of ten), with none of them ending up as ENSO-neutral. The year 2012 promises to remain “interesting”.’ http://www.esrl.noaa.gov/psd/enso/mei/
I give an 80% chance of a 3 year La Nina. I give a 100% chance of increased frequency and intensity of La Nina over the next decade or three.
I have never liked the hypothesis that human reasoning is probabilistic. The math of probability is based on a set of conditions, the random variation of an infinite number of identical events, that is as far from the unique events we typically reason about as one can get. Numerous experiments show our reasoning not to be probabilistic, but they are instead regarded as showing poor reasoning, saving the hypothesis. How we actually reason is thus unknown. There are, after all, alternative mathematical possibilities, other than probability theory.
We are ‘reasoning’ about the future and the ideas of probable and improbable are central. It is not after all a concrete event but the chances of the event..
Anecdote alert: in early 1981, we moved a forestry building onto our block inland from Noosa, SE Queensland. No cyclone had crossed the coast in SEQ since the biggie in 1974. The building was perched on railway sleepers on a high, fairly exposed ridge – we could see about 11-12 miles from it – not anchored in any way. We moved straight in, as our caravan had burned out. That first night, a cyclone crossed the coast nearby. It peaked for two hours from 1 – 3 a.m. We sat alert (and probably alarmed (Australian political reference)). The building (a small barracks for forestry workers) didn’t move. We reckoned that if it could survive a cyclone when not tied to the ground, it should be fine thereafter. As it has been.
After seeing reporting on near hurricane Beryl, expect to see daily reporting on Hurricane farts now and how AGW increase the number of hurricane farts that in turn have a positive feedback loop.
Despite all the fancy-pants reports and models created by megaflop Cray computers, NOBODY can tell how the hurricane season will develop. Those who say they are able to forecast are blatantly lying, and those who were right after all were just plain lucky. THIS was my empirical result of the last couple of seasons.
Hi, Judith. Pre-season forecasts are mildly interesting but far more interesting (to me) is the level of skill at forecasting activity seven to ten days ahead. Good medium-term (seven to ten day) forecast skill at formation probability, region of formation and movement offers much greater social and economic benefit than does a seasonal forecast of the number of named storms.
An interesting study would be one of this medium term skill versus the broad seasonal patterns (AMO, PDO, etc). My bet is that certain seasonal patterns provide far greater medium-term forecast skill than other patterns.
Seems like that information would be of greater benefit to responders and insurers than is a pre-season forecast of the number of named storms.
Hi David, a post on this is coming later in the week. I TOTALLY agree that this is the more useful forecast, but the reinsurance sector wants seasonal forecasts, and the public (or at least the media) seems interested in this
The Mann-caused flimflammery of the AGW theory has taken many turns and brought a lot of disparate issues and events — from hurricanes in the Gulf to HIV/AIDS in Africa — under the climate astrologers’ big tent. We should not, however, forget that…
…outgoing long-wave radiation from the Earth is one-seventh of that which the UN’s computer games … predict … [CO2] less than half the rate the UN had imagined … rapid global cooling of the past seven years. Sea surface temperatures have fallen for five years. Sea level has not risen for three years … Worldwide hurricane intensity in October 2008 was at its least for 30 years. Global sea ice shows little trend in 30 years. The ice sheets of Greenland and Antarctica are thickening. The Sahara is greening … The correct policy response to the non-problem of “global warming” is … to have the courage to do nothing.
Bush the Great had the courage to do nothing.
Into each life some rain must fall,
Some days must be dark and dreary. — Longfellow
It is not for you to know the times or the seasons (and)
No one knows when that day or hour will come. — J.C. (the other one)
Dr. Curry: You may remember that my wife and I live on our sailboat from which we do humanitarian and other volunteer work in the northern Caribbean. This year we will be staying in the area, taking refuge at Salinas, PR – a fair hurricane hole. Wish us luck – our boat is the only home we own.
Hi Kip, sounds pretty nice, with the caveat of the hurricanes. send me an email if you ever need a forecast :)
Dr. Curry, Thank you for your kind offer. If I get nervous, I’ll take you up on it!
Over the years, could it be that the climate prediction business based on human-caused CO2 has slowly shifted from worries about polar bears in the Arctic and glaciers the size of Rhode Island calving off the continent of Antarctica, to whether you will find yourself in the eye of a hurricane in Florida?
“…likely to see 7 to 13 tropical storms – with a most likely value of 10.”
“a 70% chance that the [ACE] number will be between 28 and 152. This is partly due to the current uncertainty in the evolution of the El Niño/La Niña cycle over the next few months.”
So we’ve talked about the destructive power of hurricane winds, the bad.
When do we get to the good: http://www.makanipower.com/home/
(Or at least the do-no-evil.)
And the ugly: http://www.dailymail.co.uk/news/article-2141766/Wind-Farm-South-Wales-Green-light-365m-blight-countryside.html .. or is it? http://www.bbc.co.uk/news/uk-england-norfolk-18263811
Contrast wind turbines on the French countryside to the barren landscape left after the first world war. Travel from Calais to Normandy and one goes through the Somme region. I didn’t think of that until now.
We would joke with the kids that it looked like a scene from teletubbies with the turbines on the rolling green. Cripes, that was the Somme for crying out loud.