U.S. weather prediction: falling behind

by Judith Curry

“It’s a national embarrassment. It has resulted in large unnecessary costs for the U.S. economy and needless endangerment of our citizens. And it shouldn’t be occurring.

What am I talking about? The third-rate status of numerical weather prediction in the U.S. It is a huge story, an important story, but one the media has not touched, probably from lack of familiarity with a highly technical subject. And the truth has been buried or unavailable to those not intimately involved in the U.S. weather prediction enterprise.” —  Cliff Mass, UW

Cliff Mass (see also his blog) has a very provocative and insightful post at Storm Watch7 Weather Blog entitled The U.S. Has Fallen Behind in Numerical Weather Prediction: Part I.  Read the whole thing; it has lots of illustrative graphics.  Here are some key excerpts:

Weather forecasting today is dependent on numerical weather prediction, the numerical solution of the equations that describe the atmosphere. The technology of weather prediction has improved dramatically during the past decades as faster computers, better models, and much more data (mainly satellites) have become available.

U.S. numerical weather prediction has fallen to third or fourth place worldwide, with the clear leader in global numerical weather prediction (NWP) being the European Center for Medium Range Weather Forecasting (ECMWF). And we have also fallen behind in ensembles (using many models to give probabilistic prediction) and high-resolution operational forecasting. We used to be the world leader decades ago in numerical weather prediction: NWP began and was perfected here in the U.S. Ironically, we have the largest weather research community in the world and the largest collection of universities doing cutting-edge NWP research (like the University of Washington!). Something is very, very wrong and I will talk about some of the issues here. And our nation needs to fix it.

But to understand the problem, you have to understand the competition and the players. And let me apologize upfront for the acronyms.

In the U.S., numerical weather prediction mainly takes place at the National Weather Service’s Environmental Modeling Center (EMC), a part of NCEP (National Centers for Environmental Prediction). They run a global model (GFS) and regional models (e.g., NAM).

The Europeans banded together decades ago to form the European Center for Medium-Range Weather Forecasting (ECMWF), which runs a very good global model. Several European countries run regional models as well.

The United Kingdom Met Office (UKMET) runs an excellent global model and regional models. So does the Canadian Meteorological Center (CMC).

There are other major global NWP centers such as the Japanese Meteorological Agency (JMA), the U.S. Navy (FNMOC), the Australian center, one in Beijing, among others. All of these centers collect worldwide data and do global NWP.

The problem is that both objective and subjective comparisons indicate that the U.S. global model is number 3 or number 4 in quality, resulting in our forecasts being noticeably inferior to the competition.

You first notice that forecasts are all getting better. That’s good. But you will notice that the most skillful forecast (closest to one) is clearly the red one…the European Center. The second best is the UKMET office. The U.S. (GFS model) is third…roughly tied with the Canadians.

I could show you a hundred of these plots, but the answers are very consistent. ECMWF is the worldwide gold standard in global prediction, with the British (UKMET) second. We are third or fourth (with the Canadians). One way to describe this is that the ECMWF model is not only better at the short range, but has about one day of additional predictability: their 8 day forecast is about as skillful as our 7 day forecast. Another way to look at it is that with the current upward trend in skill they are 5-7 years ahead of the U.S.
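The “skill” in these comparisons is typically the anomaly correlation coefficient (ACC): the correlation between forecast anomalies and verifying-analysis anomalies, both measured relative to climatology, with 1.0 a perfect forecast and roughly 0.6 the conventionally quoted threshold below which a forecast has little practical value. Here is a minimal sketch of the computation, using synthetic data rather than any center’s actual verification code:

```python
import numpy as np

def anomaly_correlation(forecast, analysis, climatology):
    """Uncentered anomaly correlation coefficient (ACC).

    Inputs are arrays of the same field (e.g. 500-hPa height) over
    grid points; anomalies are departures from climatology.
    """
    f_anom = forecast - climatology   # forecast anomaly
    a_anom = analysis - climatology   # verifying (observed) anomaly
    return (f_anom * a_anom).sum() / np.sqrt(
        (f_anom ** 2).sum() * (a_anom ** 2).sum())

# Synthetic example: a forecast equal to "truth" plus noise.
rng = np.random.default_rng(0)
climatology = np.zeros(1000)
truth = rng.normal(size=1000)                  # "analysis" anomalies
forecast = truth + 0.5 * rng.normal(size=1000)
print(anomaly_correlation(forecast, truth, climatology))  # ~0.9
```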

Most forecasters understand the frequent superiority of the ECMWF model. If you read the NWS forecast discussion, which is available online, you will frequently read how they often depend not on the U.S. model, but the ECMWF. And during the January western WA snowstorm, it was the ECMWF model that first indicated the correct solution. Recently, I talked to the CEO of a weather/climate related firm that was moving up to Seattle. I asked him what model they were using: the U.S. GFS? He laughed; of course not…they were using the ECMWF.

A lot of U.S. firms are using the ECMWF and this is very costly, because the Europeans charge a lot to gain access to their gridded forecasts (hundreds of thousands of dollars per year). Can you imagine how many millions of dollars are being spent by U.S. companies to secure ECMWF predictions? But the cost of the inferior NWS forecasts is far greater than that, because many users cannot afford the ECMWF grids and the NWS uses their global predictions to drive the higher-resolution regional models–which are NOT duplicated by the Europeans. All of U.S. NWP is dragged down by these second-rate forecasts, and the costs for the nation have to be huge, since so much of our economy is weather sensitive. Inferior NWP must be costing billions of dollars, perhaps many billions.

The question all of you must be wondering is why this bad situation exists. How did the most technologically advanced country in the world, with the largest atmospheric sciences community, end up with third-rate global weather forecasts? I believe I can tell you…in fact, I have been working on this issue for several decades (with little to show for it). Some reasons:

1. The U.S. has inadequate computer power available for numerical weather prediction. The ECMWF is running models with substantially higher resolution than ours because they have more resources available for NWP. This is simply ridiculous–the U.S. can afford the processors and disk space it would take. We are talking about millions or tens of millions of dollars at most to have the hardware we need. Part of the problem has been NWS procurement, which is not forward-leaning and relies on heavy-metal IBM machines at very high cost.

2. The U.S. has used inferior data assimilation. A key aspect of NWP is to assimilate the observations to create a good description of the atmosphere. The European Center, the UKMET Office, and the Canadians use 4DVAR, an advanced approach that requires lots of computer power. We use an older, inferior approach (3DVAR). [A toy sketch of the variational idea appears after this list.] The Europeans have been using 4DVAR for 20 years! Right now, the U.S. is working on another advanced approach (ensemble-based data assimilation), but it is not operational yet.

3. The NWS numerical weather prediction effort has been isolated and has not taken advantage of the research community. NCEP’s Environmental Modeling Center (EMC) is well known for its isolation and “not invented here” attitude. While the European Center has lots of visitors and workshops, such things are a rarity at EMC. Interactions with the university community have been limited, and EMC has been reluctant to use the models and approaches developed by the U.S. research community. (True story: some of the advances in probabilistic weather prediction at the UW have been adopted by the Canadians, while the NWS had little interest.) The National Weather Service has invested very little in extramural research, and when their budget is under pressure, university research is the first thing they reduce. And the U.S. NWP center has been housed in a decaying building outside of D.C., one too small for their needs as well. (Good news… a new building should be available soon.)

4. The NWS approach to weather-related research has been ineffective and divided. Government weather research is NOT in the NWS, but rather elsewhere in NOAA. Thus, the head of the NWS and his leadership team do not have authority over the folks doing research in support of his mission. This has been an extraordinarily ineffective and wasteful system, with the NOAA research teams doing work that often has marginal benefit for the NWS.

5. Lack of leadership. This is the key issue. The folks in NCEP, NWS, and NOAA leadership have been willing to accept third-class status, providing lots of excuses, but not making the fundamental changes in organization and priority that could deal with the problem. Lack of resources for NWP is another issue…but that is a decision made by NOAA/NWS/Dept of Commerce leadership.
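To make item 2 concrete: a variational assimilation scheme finds the analysis state x that minimizes a cost function J(x) balancing the departure from the background (prior forecast) xb against the departure from the observations y, each weighted by its error covariance (B and R). The toy sketch below does this for an invented two-variable state; operational 3DVAR/4DVAR systems do the same thing with hundreds of millions of variables, and 4DVAR additionally wraps a forecast-model integration inside J so observations are used at their valid times.

```python
import numpy as np
from scipy.optimize import minimize

# Toy 3D-Var: a two-variable state and one observation of the sum.
x_b = np.array([1.0, 2.0])             # background (prior forecast)
B = np.eye(2)                          # background error covariance
H = np.array([[1.0, 1.0]])             # observation operator: y ~ x0 + x1
y = np.array([4.0])                    # the observation
R = np.array([[0.5]])                  # observation error covariance

B_inv, R_inv = np.linalg.inv(B), np.linalg.inv(R)

def cost(x):
    """J(x) = (x - xb)' B^-1 (x - xb) + (y - Hx)' R^-1 (y - Hx)."""
    dxb = x - x_b
    dy = y - H @ x
    return dxb @ B_inv @ dxb + dy @ R_inv @ dy

analysis = minimize(cost, x_b).x
print(analysis)   # ~[1.4, 2.4]: pulled from the background toward the obs
```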

I should stress that I am not alone in saying these things. A blue-ribbon panel did a review of NCEP in 2009 and came to similar conclusions (found here). And these issues are frequently noted at conferences, workshops, and meetings.

JC comments

Cliff Mass is spot on.  I and other members of the meteorological community have lived with this ridiculous situation for decades.  Here are some insights from my personal experiences.

My company CFAN purchases the ECMWF data set, at an annual cost of 168,000 Euros.  That cost is a very big fraction of our annual income, but the product is so much better than NCEP’s forecasts that we don’t waste much effort using the NCEP products.  In fairness, NCEP’s latest version of the Climate Forecast System (CFS) is significantly improved relative to the previous version, and the forthcoming new version of the GFS (15 day) is supposed to be a significant improvement.  But ECMWF and the other centers are also making ongoing improvements that will continue to keep them well ahead of NCEP.

I have worked on many field observation projects and process modeling projects whose objective is to improve the treatment of these processes in weather and climate models.  Even for U.S.-based projects, ECMWF participates actively, sending employees to meetings, etc., whereas no one from NCEP is anywhere to be seen.  Over the past several decades I have visited ECMWF about a dozen times. I have visited NCEP once, to tell them about our revolutionary new hurricane forecasting methods based on ECMWF.  Their response was to tell us about their big plans for the future, which we were already pretty much doing in 2007.

One of the key ingredients of the success of ECMWF is the integration of research with development of the operational forecast model.  The original idea was that the Geophysical Fluid Dynamics Laboratory at Princeton would be the research arm of NCEP (well, it wasn’t called NCEP in the old days).  But this never worked; GFDL evolved more in the direction of ocean modeling and climate modeling, and now has next to nothing to do with NCEP.

Even if NCEP had more funds and a better research arm, I still don’t think there would be much of an improvement, since NCEP has an allergy to anything ‘not invented here.’  And there is a fundamental problem with the priorities at NOAA, IMO.

I have a vague recollection that in the 1990s there were some congressional hearings about NCEP falling behind the Europeans, and the conclusion was that NCEP was ‘good enough’, that we didn’t really need the Cadillac of weather forecasting centers.  The other government agencies that relied on NCEP forecasts (e.g. transportation, agriculture, defense) thought the forecasts were just fine.  But the economic applications of good weather and seasonal climate forecasts are rapidly growing and possibly boundless.

NOAA’s priorities have clearly been on the development of coupled climate models for the greenhouse warming problem, and not on weather and seasonal climate forecast models.  This climate modeling priority for GFDL (which really has outstanding capabilities) has sapped resources that could have been used for the development of better weather and seasonal climate forecast models, which have a far greater socioeconomic value than do century-scale climate prediction models.


84 responses to “U.S. weather prediction: falling behind”

  1. I guess I’m just out of the loop, but how do forecasts eight days ahead vs seven days ahead save all this money (other than purchasing data, which I assume still runs the EU a loss overall)? Not being a troll, just curious what the applications are.

    Also, OT, but I highly recommend reading (but not comment trashing!) the great back-and-forth between John Kennedy and Greg Goodman. That is science at work, being hashed out fairly politely, right here on this blog – in case anyone doubted it could happen : ). Super informative.

    http://judithcurry.com/2012/03/15/on-the-adjustments-to-the-hadsst3-data-set-2/#comment-186032

    • I live in the Pacific Northwest of the US.
      I would like an accurate forecast 8 hours ahead.

      Some quotes from today’s official US Weather Service forecast for the Pacific Northwest.

      Link will change as it is a link to the ‘current’ forecast
      http://forecast.weather.gov/product.php?site=NWS&issuedby=SEW&product=AFD&format=txt&version=1&glossary=1

      Sorry for the caps…that is what the NWS uses.

      THE MODELS WERE HINTING THAT A PUGET SOUND CONVERGENCE ZONE WILL DEVELOP LATE TONIGHT IN RESPONSE TO INCREASED ONSHORE FLOW. THE NAM PLACES THIS FEATURE IN ITS TYPICAL LOCATION NEAR THE KING/SNOHOMISH COUNTY LINE WHEREAS THE GFS WAS FARTHER N.

      THE NEXT…MUCH WETTER SYSTEM…WILL BEGIN TO IMPACT THE CWA ON MON. THE MODELS APPEARED TO BE HAVING SOME TIMING ISSUES WITH THIS SYSTEM

      A BAROCLINIC BAND MAY STALL OVER THE AREA ON TUE…BRINGING HEAVY RAIN TO THE AREA…ESPECIALLY THE COAST…OLYMPICS AND NORTH CASCADES. THERE IS PLENTY OF JUICE WITH THIS SYSTEM. HOWEVER…THERE ARE SOME BIG TIMING ISSUES BETWEEN THE MODELS.

      THE MODELS ARE HINTING TOWARD DRIER WEATHER NEXT WEEKEND AS A RIDGE DEVELOPS OFFSHORE

    • Steve Milesworthy

      The 8-day to 7-day comparison implies that the 6-, 5-, 4-, 3-day etc. forecasts are also better.

      An example of the benefit (in the UK) was given recently to a parliamentary enquiry. The better forecast, combined with the flood forecasting capability, gave the emergency services 24 hours’ extra time to move rescue resources (including inflatables and so forth) into the Cumbrian area prior to a serious flooding event.

  2. Judith
    Somewhere I read that the EU has the major advantage of relying on the very large set of US temperature records to configure the input conditions to their weather models.
    Could you comment on the difference between the US having only scattered Pacific island and ocean stations for input vs. the EU having the US data, assuming the same quality of models and computers.

    • David, all surface weather data are made available to all of the national meteorological agencies; this has been coordinated by the UN WMO for many decades now.

      A bigger issue is the assimilation of satellite data. The Europeans and the U.S. both contribute substantially to the satellite database. ECMWF makes far more extensive use of the satellite data in its data assimilation process.

  3. Tom Choularton

    The UK Met Office have an excellent weather forecasting model and of course are prominent in climate modelling. The difference is that, compared to NOAA and NWS, they are very commercial. They sell their forecasts directly to industry and do most media forecasting in the UK. There is a private forecasting industry in the UK, but it is small and most of the big forecasting contracts are with the Met Office. Similarly, ECMWF provides its results to national forecasting agencies in Europe, including the UK Met Office. In contrast, NOAA seems to give its data away, and forecasting in the US is dominated by commercial forecasting companies. This means a lack of investment in producing the forecast models in the US through revenue from selling forecast products. Well, that’s how I see it.

    • There is an important point hidden here. Many in the Republican Party (including, for example, Rick Santorum) have long wanted to privatize all weather forecasting in the US. They (and the Department of Commerce) have always resisted allowing cost recovery for US government weather forecasts. They have also resisted investing in better computer systems to bring US forecasting up to European standards.

      Coupled with the disaster of forcing NOAA to work with the AF on a new generation of weather satellites (the AF backed out, leaving NOAA holding a very expensive and still-unlaunched set of cans), US weather forecasting has been decimated by a philosophy of letting the market and the military run the show.

      • Eli – let’s drag politics into this. There is only one important point here, and it’s not hidden.

        JC’s recommendation: Get NOAA out of the climate modeling business, and put DOE in charge of GFDL and their climate modeling activities. The U.S. needs to get serious about weather and seasonal climate forecasting. The problems at NOAA/NCEP are so overwhelming I don’t even know where to start.

        Regarding your defense of James Hansen (who is indefensible): “Political tags – such as royalist, communist, democrat, populist, fascist, liberal, conservative, and so forth – are never basic criteria. The human race divides politically into those who want people to be controlled and those who have no such desire.” — Robert A. Heinlein

      • Of course politics is involved; this stuff requires serious money, and there are a bunch of weather forecasting companies out there whose interests do not coincide with NCEP improving. If the NCEP forecast is hugely great, they have nothing to sell.

      • Eli Rabett | April 1, 2012 at 4:28 pm |

        Here you go, letting facts and accuracy get in the way of a good narrative.

        Why can’t you let Kate have her fictional realm from the genius of Heinlein?

        Isn’t it mean to insist on actual truth, when there’s artistic truth?

      • Eli,
        The track record of government forecasting is rather poor. The tornadoes hitting Dallas were not at all predicted until the preceding hour or so. So we have the party of government workers and government unions demanding ever more money to do, at best, a mediocre job, while pretending that Republicans are out to hurt the American people. And it is painfully clear to anyone paying attention that yet another marginal cost of the AGW obsession is the quality of actual meteorological work.

      • hunter,

        It was a last-minute (well, last hour or so) outflow boundary moving into the area from the north (Oklahoma) that aided/provided the ‘forcing’ that kicked off that weather event. How do ‘models’ handle such ‘micro’ features when input from atmospheric ‘soundings’ (rawinsondes) is available from only 3 to 5 sites per state (at what, 12-hour intervals?)? And I doubt that finer-grained ground observations from ASOS (airport) sites are fed back directly into the hourly RAP (used to be RUC, for Rapid Update Cycle) model for updated computation/new hourly forecasts (nowcasts).

        From observing the SPC (Storm Prediction Center) operational output (primarily Mesoscale Discussions), they seem to base their ‘trigger pulls’ on actual ‘field activity’ (e.g. the persistent or sudden appearance of cumulus in an area of interest pursuant to T-storm initiation) rather than on any sort of model as well.

        _Jim


  4. This is news to me. Last I checked, the UK Met Office had stopped issuing long-range forecasts because of their abysmal and embarrassing string of botched predictions of BBQ summers and mild winters.

    • richard verney

      My understanding is that this applies only to forecasts put out into the public domain.

      As regards private commercial arrangements, no doubt the Met Office will supply whatever the client requires. One suspects that the same approach would be taken with respect to government briefings.

  5. Several threads ago I used the term: mediocre regulators who are exceptional infighters. The agenda-driven EPA is headed in the same direction for the very same reason: abandonment of what they knew well in order to become politicized and at the beck and call of eco-lobbyists and political types. NOAA, NASA and apparently NCEP have already arrived at this destination.

    Voila, the answer:
    1) Remove the funding for EPA until it fits into a shoebox, then reconstitute it, with vision, mission and expert staff, as a ground, water, and air pollution regulation agency with no climate funds. It will have plenty to do.

    2) Remove the current upper administration of the Department of Energy and refocus upon energy acquisition, storage, transmission, and utilization.

    3) Place climate change into its proper perspective, as a research wing of the Department of Defense. Working for the DOD doesn’t give one a lot of leeway to go off onto a “rescuing the world from itself” advocacy mission.

    4) Close all back door climate change funding. Transparent budgets.

    5) and this one I see we agree upon, get the weather back into NOAA.

    Now I know that abstinence doesn’t work, and expecting Congressional people to just say No to GreenPeace and other eco-lobbyists is neither persuasive nor effective; however, requiring all candidates for State and Federal office to reveal which lobbyists give how much, and to whom they have access, plus a public, accessible registry, may be a beginning to limit the influence of these narrowly focused groups.

    As far as academia is concerned, funds flowing to academics would come through NOAA, DOD and private entities. Cleaner and more traceable. Now there is too much money sloshing around, and it is hard to keep track of.

    Some spur of the moment thoughts. I haven’t had a chance to sleep on it.

    • The UK Met Office is under the MOD, and that hasn’t stopped them parroting the CO2 line. Even government legal instruction toward impartiality doesn’t stop the BBC being an AGW advocate, with one of its environmental correspondents, Richard Black, an activist working to see global government established via Rio+20 (there’s a word for acting against a democratically elected government, now what is it, oh yes, treason).

  6. Judith

    I have visited the UK Met Office several times in the last few months to carry out research for my articles, as it has a very impressive archive and library. It also has a state-of-the-art building, and inside it much state-of-the-art equipment staffed by some very good people (John Kennedy engages with us sometimes here at Climate Etc concerning SSTs, and also privately with people such as myself).

    The Met Office moved here to Exeter around 5 years ago from near London, which enabled them in effect to ‘start again’ from scratch. It brought many of its existing people along but also took the opportunity of gaining new people. It has a Met Office chair and other connections at fast-growing Exeter University, and good connections with the various European forecasting centres.

    In short they are a vibrant, well funded, well connected organisation who seem to have a sense of purpose and have been given a new lease of life through their move, lock, stock and barrel, to brand new premises a few years ago. (They are being groomed for sale by the UK Govt so no doubt have to meet tough targets, which keeps them on their toes.)

    I don’t know the state of play with US centres, but I suspect that they may have become moribund and complacent and can’t compete with better funded better motivated organisations.

    It must be said, however, that it is a standing joke locally that the Met Office would make better weather forecasts if they looked out of their windows instead of relying on their computers; their forecasts for our area, 15 miles away from their offices, are often wrong, with serious repercussions for local tourism. So if the Met Office is supposed to be so good, I hate to think what the US services are like :)

    tonyb

    • Steve Milesworthy

      My understanding is that the Met Office forecasts fall down for the local area because the weather on the coast is very different from the weather in the broader area inland. I suspect the error is not in the details of the forecast, but is a detail that gets lost because the local and national TV forecasts naturally focus on broad areas and not on individual coastal resorts.

      Torbay (the biggest seaside resort in Devon) can often be in bright sunshine while most of the rest of the county is cloudy. The Met Office data on average solar incidence pick up this stark difference.

      • Steve

        I am in the next bay to Torbay (Lyme Bay) and am, as the seagull flies, around seven miles away from Torquay.

        About five years ago the sunshine totals between our bay and Torbay started to diverge. The reason was that our weather station on the sea front remained the old-fashioned type whilst the Torbay one was upgraded to digital, which included a system that made their sun recorder hyper-sensitive, so our ‘bright’ became their ‘sunny.’

        Having said that, the country consists of numerous microclimates to some extent or other: one side of Dartmoor often has different weather to the other side, and similarly the north and south coasts of Devon can often differ from each other. The key is the wind direction, and in recent years we have had noticeably more easterlies than westerlies.

        When weather stations ‘move’, as they frequently do, they are moving from one microclimate to another, which renders comparisons difficult and makes me suspicious of the global temperature calculations.
        Are you from this part of the world?
        tonyb

  7. NCEP, where you pay more and get less, inefficiency is our middle name :)

  8. peterdavies252

    While I am not a climatologist or a meteorologist I sense that more interface between the two disciplines is urgently needed.

    Short term weather forecasts seem logically feasible for large land masses, since grid observations will shortly apply to adjoining grid areas of interest, and local weather forecasting seems to be much improved as a direct result of more contemporaneous data being available.

    Where large areas of ocean are concerned, observations are much less available and other, satellite-based, techniques are used, but the quality of weather forecasting on the shorelines seems consequently less reliable.

    The medium-term climate predictions for regions would appear to be far less successful, as there seem to be no obvious links between localised weather data and the building of regional trend data.

    Longer term global climate trends seem even less connected and to be purely model based. The present controversy around climate change and the measurement of regional effects and future trends seem unlikely to go away anytime soon.

  9. BYOB ASAP PDQ, FWIW. The alphabet soup flows thick!

    This smells like good old political economy at work in DC. Like every state gets to manufacture a chunk of the Bradley fighting vehicle; and we find it almost impossible to close a handful of DoD sites scattered across the land. These federally funded units bring money and prestige to their locations. I wouldn’t be a bit surprised to learn that, behind the scenes, certain representatives and senators just can’t agree to reorganizations of duties and affiliations, because their local ox would be gored.

    What I don’t know–some of you probably do–is how geographically dispersed the relevant federal units are.

  10. Weather models diverge from reality within days. Climate models diverge from reality before they are even turned on – but that’s another story.

    Longer-term hydrological models – months to potentially decades – are based on standing patterns of sea surface temperature, as in this current forecast.

    http://www.bom.gov.au/climate/ahead/maps/rain.national.hrweb.gif?20120321

    The pattern of more flooding in the mid east coast – already reeling – is the result of warm water in the eastern Indian Ocean.

    There are many relationships derived from these standing patterns. Here’s one for
    North America – http://oceanworld.tamu.edu/resources/oceanography-book/oceananddrought.html – Note in particular the drought potential in a -ve PDO and +ve AMO. You are in for a long term dry spell.

    So the question that engages me is why there are two sciences. There is a dweeb science that attributes everything to carbon dioxide – and excludes vast realms of scientific enquiry. Yeah, dweeb sounds about right.

  11. One little quibble–the US is not “falling behind” as we have been fairly consistently behind (something like 5-8 yrs?) for a long time:
    http://www.emc.ncep.noaa.gov/gmb/STATS/html/aczrnmn4.html

    A major and difficult issue not addressed here (Cliff’s list is a good one) is that the US splits its operational atmospheric modeling across three different centers (NOAA, AF, Navy), leading to much duplication of effort and less bang for the buck overall. The Europeans and UK each have one united group that serves all their common needs. It would take Presidential/Congressional involvement to change this reality, and given the “good enough” status we have, the sub-optimal way we do business in this country will likely continue. I can also attest that resources diverted to long-range climate issues have taken away resources that otherwise would have been devoted to shorter-term forecasting improvements. Bureaucratic hurdles are also quite a hindrance to more rapid progress.

    One piece of good news is that much of the top notch research happens here in the US. The other nations are just better at quickly getting it into their operational modeling systems :-)

    • operational atmospheric modeling across three different centers (NOAA, AF, Navy)
      You left out NASA

    • Evan, good point about the AF and Navy. The AF and Navy have specific needs that are not met by the NOAA forecasts (they possibly share the same frustrations with NOAA that I do). The Navy actually tried basing its forecasts off UKMO, but that turned out to be problematic.

      • Full disclosure–I work in the AF modeling shop and do a fair amount of work cross-coordinating with the other agencies. Absolutely right that all three have different and unique mission needs, but those needs could be served as three branches from one tree (like in the UK) as opposed to three trees, saving a lot of resources.

        We actually licensed and started running the UKMO model (Unified Model or UM) to initialize our operational regional WRF runs, but I personally see little difference in skill versus using GFS. From my perspective (full disclosure #2–this is my area of work), using UM and GFS and others in an ensemble is the right way to go–they are all “pretty good” and there is much information to be gained by using them all instead of in isolation. [See the sketch after this comment.]

        BTW, love the blog, keep up the great work.
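        A minimal sketch of the multi-model idea mentioned above: treat each center’s forecast as one member of a poor-man’s ensemble and read event probabilities off the member distribution. Real ensemble post-processing (bias correction, skill weighting, calibration) is far more involved, and the member values here are invented.

```python
import numpy as np

# Invented 24-h precipitation forecasts (mm) at one point, one value
# per model (e.g. GFS, UM, ECMWF, NAM, ...): a poor-man's ensemble.
members = np.array([12.0, 18.0, 25.0, 9.0, 15.0, 22.0])

threshold = 15.0     # event of interest: more than 15 mm in 24 h
prob = (members > threshold).mean()
print(f"P(precip > {threshold:.0f} mm) = {prob:.0%}")    # 50%
print(f"mean = {members.mean():.1f} mm, "
      f"spread = {members.std(ddof=1):.1f} mm")
```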

  12. Dr. Curry,
    With a great deal of respect I ask if this is really a serious analysis?
    I would say that since the USA is a very big area of land, surrounded by two oceans, exposed to the Arctic in the north and the Gulf to the south, and split by the Rockies, I doubt that a lack of precision is really a sign of failure.

    • I was raised on a farm, and many activities have a start time based on a weather forecast covering all the elements – rain, sun, wind, temperature – for days, weeks, months, even years: different planting times, harvesting times, even choosing different crops. If a forecast is wrong and it could have been better, that is really a sign of failure.

      • Pooh, Dixie

        Your point is well-taken. To the citizen, the important information is what is likely to happen to his patch, and when.
        Further, the important information about water is different from what is important about land. In particular, coastal and lake forecasts differ in content and granularity (timing, wind force and sea state). Finally, this information is safety-critical: lives depend on being right.
        The U.S. broadcasts what are/were known as MAFOR codes (MAYFOR in Canada). It was good to know the sea state, wind speed and direction hour by hour before deciding on a course.

    • Latimer Alder

      @hunter

      Europe is a pretty big place too. The Atlantic to the West, Siberia etc to the East, the Mediterranean and Africa to the South. Arctic to the North. And the Alps as mountains. Seems to me that there is little qualitative difference between the two.

      And from the UK perspective, where our positioning and topography make the weather very fickle (all four seasons in 24 hours is not just an Old Wives’ Tale), it seems that forecasting for the unchanging plains of the mid-west, for example, ought to be a doddle.

    • Latimer,
      Your answer makes my point: The world is very big. Weather is very complex. Predicting is very difficult, especially about the future.
      Frankly I think any degradation in weather forecasting here will be shown to be due to the influence of AGW. AGW distorts everything in its sphere of influence.

    • What I gather from this is that while the Euro model is Europe-oriented, it’s global. Otherwise, there wouldn’t be a reason to pay the hundreds of thousands of dollars for the product.

      Which raises a different question: isn’t it cheaper overall to simply buy this data from them than to make a major investment here?

      Of course, if the US organization(s) are really that dysfunctional, there may be other, stronger reasons to MOAB them than to save a few hundred thousand here and a few hundred thousand there.

  13. Latimer Alder

    Interesting perspective that the US considers it a better use of resources to concentrate on trying to tell what the ‘climate’ will be like for our great*N grandchildren in 100 years’ time rather than on telling Joe and Joanna Sixpack what is actually going to affect them within a week.

    Given that Joe and Joanna are the ones paying for this, did anybody ask them if these were truly their priorities? Or did it all just get hijacked by the practitioners?

    Because if I was a nasty suspicious-minded cynical person (aka a realist), I might think that it is a far easier career move to concentrate on playing with untestable century-level models away from the public gaze than to put my cojones on the line every time I try to say whether it’s going to snow in Minnesota tomorrow.

    And I also get to boast at parties that I am a ‘climate scientist’, and try to look down upon mere ‘weathermen’. Safe in the knowledge that – until recently at least – I had a secure sinecure on the public’s dime.

  14. Worth repeating myself.

    Best in Chrome:

    http://hint.fm/wind/

  15. Why not privatize weather forecasting? All we are saying is give capitalism a chance.

    • Wagathon,

      Erik Craft was a buddy of mine in grad school. At the beginning of his career, he did a lot of economic history of weather forecasting, especially in the US. This is his best-known paper, and very interesting:

      Erik D. Craft, The Value of Weather Information Services for Nineteenth Century Great Lakes Shipping, American Economic Review, 1998, 1059-76.

      Happily, although normally paywalled, NOAA makes it publicly available here:

      http://www.nssl.noaa.gov/users/brooks/public_html/feda/papers/Craft1998GreatLakesWarnings.pdf

      The paper you’d really want to read is this one:

      Erik D. Craft, Private Weather Organizations and the Founding of the United States Weather Bureau, Journal of Economic History, 1999, 1063-71

      It is linked here but unfortunately this is paywalled, and I can’t find a free version of it:

      http://www.jstor.org/discover/10.2307/2566688?uid=2&uid=4&sid=55973312163

      However, much of the material in that JEH paper is summarized in this online entry that Erik wrote, so you can get some sense of the private/public provision dilemma from this:

      http://eh.net/encyclopedia/article/craft.weather.forcasting.history

      Hope this is of interest. I’m not saying Erik has all the answers, but he’s worth reading on the subject.

    • The economic argument for privatization depends on two factors:

      1. Excludability: there is a sound means to assure that only a paying buyer can obtain the benefit of the resource;
      2. Rivalry: no two buyers can make use of the same resource.

      With the data provided through massive measurement collecting at the national level and intensive computation built on top of that data, there’s certainly value to be had.

      Can effectively the same information be kept in one set of hands? No. An ambitious innovator (or fifty) could certainly do for weather what Google did with maps, and more cheaply and effectively gather the same data. It would cost more overall if this were done, and the benefits of dozens of overlapping datasets held in dozens of hands are not as great as the benefit of accumulating the same amount of data in one group.

      Can only one buyer make use of the information at a time? No. You could have a million copies of weather forecasts in the hands of people who would benefit from the information, and not one of them would be diminished for it.

      A minarchist would prefer that such an enterprise be spun off as far from government as possible, without associations with law enforcement, tax collection and law-making, which wouldn’t improve the science of weather and do tempt politicians to politicize observation and analysis.

      Still, it isn’t “Capitalism” to privatize the major data gathering of common information. This said, there’s no reason to prevent private parties from doing their own thing too, as incremental improvement comes from innovative experiments and the pressure the private pool puts on the commons to improve.

      • You could be describing the Old Farmer’s Almanac, and its forecast of global cooling was prescient.

      • A forecast is prescient by definition.

        A prescience that is wrong is excused by the forecaster’s friends, held up to ridicule by his enemies, and forgotten by people with something better to do.

      • e.g., the ‘Hockey Stick’ wasn’t prescient to those outside the cult of global warming, and Western academia should have ridiculed it at the time it was published as nothing more than a feckless hoax and scare tactic, because it is historically inaccurate on its face. It is only later that we learned it was scientific fraud, and that would be an additional category by which a forecast may be measured.

      • No, an accurate forecast is prescient. A failed forecast is a government project.

      • Wagathon | April 1, 2012 at 12:32 pm |

        Sorry, I have something better to do.

      • What you do speaks so loudly that I cannot hear what you say. ~Emerson

  16. Not all US forecasting is falling behind. There is one outfit that is ahead.
    http://www.aer.com/
    Atmospheric and Environmental Research is, I think, a US company.
    AER does long-range forecasts that, I have read, are generally more skilled than forecasts by NOAA, NASA or the Europeans.
    Here is a link to a good many papers by Judah Cohen of AER
    http://web.mit.edu/jlcohen/www/papers.html

    • Herman, the private-sector weather forecasters in the U.S. are where the real action is (including my company CFAN). However, these companies (incl. AER) rely on the global models produced by NCEP, ECMWF, etc. They may run limited-area mesoscale models, but it is a very expensive operation to run global weather prediction models operationally and do all the data assimilation.

      • globalwarmingmaybe

        Question to Dr. Curry
        Has NOAA published the 2011 value for the Atlantic ACE index?
        (I need to update http://www.vukcevic.talktalk.net/AHA.htm)

      • Is this what you were looking for?
        Maue, Ryan. “Global Tropical Cyclone Activity Update.” Scientific. Policlimate, December 6, 2011. http://policlimate.com/tropical/index.html
        “December 1, 2011: The official end of the North Atlantic hurricane season:
        Total number of storms was exceptional (19) with 7 hurricanes and 3 major storms. Not so much outside of the Atlantic…”
        Abstract

        Tropical cyclone accumulated cyclone energy (ACE) has exhibited strikingly large global interannual variability during the past 40 years. In the pentad since 2006, Northern Hemisphere and global tropical cyclone ACE has decreased dramatically to the lowest levels since the late 1970s. Additionally, the frequency of tropical cyclones has reached a historical low. Here evidence is presented demonstrating that considerable variability in tropical cyclone ACE is associated with the evolution of the character of observed large-scale climate mechanisms including the El Nino Southern Oscillation and Pacific Decadal Oscillation. In contrast to record quiet North Pacific tropical cyclone activity in 2010, the North Atlantic basin remained very active by contributing almost one-third of the overall calendar year global ACE.

        Be sure to scroll down the page. Various graphs from 1970 through 2011 (Frequency, ACE, Tropical Storms vs. Hurricanes, Power Dissipation). Identifies 2010 and 2011 tropical cyclones by name and ocean basin. Note that the Southern Hemisphere lists 2 and South Indian & South Pacific lists 1 to date. (Different seasons, I assume.)

      • Sorry. Botched the blockquote. Starting with ‘Be sure to scroll down the page ….’ is my comment, not Maue’s.

      • The ACE per storm on my website is up to date according to the most recent North Atlantic best-tracks. Simply add them up…

        http://policlimate.com/tropical/
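        For anyone wanting to reproduce the addition from the raw best-track data rather than from per-storm values, ACE is conventionally defined as 10^-4 times the sum of the squares of the 6-hourly maximum sustained winds (in knots), counted while the system is at tropical-storm strength or above (35 kt or more). A minimal sketch with an invented wind history:

```python
def ace(vmax_kt_6hourly):
    """Accumulated Cyclone Energy for one storm: 1e-4 * sum of squared
    6-hourly maximum sustained winds (knots), counting only records at
    tropical-storm strength or above (>= 35 kt)."""
    return 1e-4 * sum(v ** 2 for v in vmax_kt_6hourly if v >= 35)

# Invented 6-hourly best-track winds (kt) for a single storm:
storm = [30, 35, 45, 60, 80, 95, 90, 70, 50, 40, 30]
print(round(ace(storm), 1))         # ~3.9 for this one storm

# A season/basin total is just the sum over its storms:
# season_ace = sum(ace(s) for s in storms)
```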

      • Thanks
        2011 North Atlantic ACE = 127, if I added it all correctly
        http://www.vukcevic.talktalk.net/AHA.htm
        (vukcevic)

      • Sounds like the kind of project that Google might be well positioned to take on. They could do this kind of work on their server farms at night when utilization is low. These days, supercomputers are basically just big parallel arrays, like server farms.

  17. When I visit the UK, I make a point of listening to the Shipping Forecast; put out several times per day. It really is a thing of beauty.

  18. Cliff Mass makes some good and very important points. Cliff Mass’ institution, UW, is one of many fine institutions in the United States that have offered advice to NCEP but have not been allowed to make a difference. JC mentioned our visit to NCEP 2 years ago, where we demonstrated our 10-day hurricane forecasts that outperformed NOAA’s and still do. As soon as we mentioned that we based our projections on ECMWF forecasts, the audience was no longer interested. The discussion turned not to the needed improvement of their forecasts but to a cacophony of criticism of ECMWF and their products. The leadership of NCEP was not interested in an improvement that was not invented at NCEP.

    I have come to conclude that NOAA’s dismal record in weather and climate research and prediction comes from poor leadership. Many decades ago, NOAA was the beacon on the hill and something to aspire to for the meteorological services of many countries. But during the last two decades the leadership at NOAA, at so many levels, has been lacking. And this lack of leadership has permeated the organization from top to bottom. What can be said about an organization whose current Undersecretary’s (Jane Lubchenco) first action was to move the home port of NOAA ships from Lake Union in Washington to Newport, Oregon (Oregon is her home state)? NOAA’s climate program was responsible for the creation of the International Research Institute at Columbia University. How the location of the IRI was chosen and how directors were chosen is another sordid story, but after 10 years and $150M of NOAA money, we have an organization whose efforts, forecasts, and influence worldwide are arguably surpassed by smaller, simpler, less expensive organizations both overseas and in the US. I might add that the private sector, more savvy with their money, chooses these smaller entities rather than the IRI to produce their climate predictions. This raises another question: if the IRI is an example of NOAA’s efforts in climate, why does NOAA think it has a case for providing Climate Services?

    Which brings me back to NCEP and Cliff Mass’s comments. What to do with an organization that has chosen to be second-rate, by the simple act of erecting a firewall around itself and not allowing interactions with outside communities that in many ways have surpassed NCEP’s level of enterprise and expertise? If I were the administrator of NOAA I would do a number of things. It is clear to me that a top-to-bottom reorganization is necessary. I would make a strict divide between weather and climate. As JC mentions, GFDL may well fare better in DOE, as their influence on the NWS and NCEP has been minimal for decades. Ditto for NOAA’s Climate Office, if it can justify its continued existence at all. But the largest change I would make is to make NCEP more like ECMWF and UKMO, which are regularly reinvigorated with new blood. In addition, those centers hold regular workshops, employ consultants and welcome interaction. This is in such great contrast to NCEP and is, in my opinion, why the organizations have different trajectories.

    • Tom Choularton

      The way to make them more like UKMO is to allow them to bid for forecasting contracts in competition with the private forecasting industry. That would change things

  19. EPA Agrees to Dismiss Well Contamination Case Against Range

    The U.S. Environmental Protection Agency agreed to end a lawsuit that would’ve forced Range Resources Corp. (RRC) to fix natural-gas wells the government said were contaminating water in Parker County, Texas.

    i.e. some gas is natural, and not due to fracking.

  20. Re: JC’s recommendation: “Get NOAA out of the climate modeling business, and put DOE in charge of GFDL and their climate modeling activities.”

    Permit me to suggest an alternative. Move GFDL / NWS up a notch, directly under Department of Commerce. Short and medium range estimates really do impact commerce. Let GFDL / NWS organizations get closer to the end user.

    I personally think that DOE has reputational issues (at least with me). Carol Browner was a Czar, variously titled Energy Czar or Coordinator of Energy and Climate Policy or Assistant to the President for Energy and Climate Change. I do not know what remnants of agenda she left behind in DOE.

    One would need coordination of data definition, content and integrity between NOAA and NWS / GFDL (Geophysical Fluid Dynamics Laboratory?)

    • If the private sector is doing a better job with this, it seems to me that simply dissolving some of these appendixes would be a better option.

  21. In case anyone doesn’t know, Jeff Masters’ Wunderground site has REALLY good free, interactive, and pretty timely ECMWF model data output for several layers and parameters! Check it out!
    http://www.wunderground.com/wundermap/

    Geo

  22. Quibble:

    Falling behind = fell behind and is now slowly catching up, owing to a lack of competitive computing resources compared to ECMWF.

    NCEP GFS winter 500-hPa anomaly correlation skill scores for 5–8 day Northern Hemisphere heights (Z) are actually quite competitive with the other global centers.

    There is little skill difference past 7.5 days for any deterministic NWP model.

  23. I do find it extraordinary that the USA, which pretty much invented computers and supercomputers and owns most of the world’s supercomputing resources, can be falling behind in numerical weather prediction.

    See supercomputer top 500 site:
    http://www.top500.org/list/2011/11/100

    I thought the accuracy of forecasting was about 70-80% at 3 days, and that beyond 3 days a forecast therefore isn’t worth using for much.

  24. JC’s recommendation: “Get NOAA out of the climate modeling business, and put DOE in charge of GFDL and their climate modeling activities….”
    I second the motion. Better yet, dump climate modeling entirely. They have not produced any meaningful forecasts that I can think of, and are not likely to do so anytime soon, if ever.

  25. Who says numerical forecasting should be the method of choice? Why throw millions down the drain on computers that can calculate junk faster (GIGO) when other techniques can be far better and cost a fraction of the price? The prime example of course is Piers Corbyn at WeatherAction in London. Using his SLAT method, he doesn’t just do better than the UK Met Office, but trounces them.

  26. Get NOAA out of the climate modeling business, and put DOE in charge of GFDL and their climate modeling activities. The U.S. needs to get serious about weather and seasonal climate forecasting.

    I don’t think this is necessarily a weather or climate thing. As I understand it the UK Met Office uses the same fundamental model for weather and climate problems.

  27. As Eli understands it, Judith is complaining that her company has to pay a substantial amount to get access to forecast data necessary for its work, whereas this should be available to it for no cost from her government.

    This is called rent seeking, not a surprise

    • It seems the Rabett has lost his wits.

    • Eli, it looks like you’re trying to claw out rabbit warrens in basaltic lava flows. An emerging standard in quantitative sciences is that authors make their data and algorithms available so that others can replicate their analyses. Many of us view this as a victory for the progress of science and knowledge formation, but in your view it constitutes “rent seeking” when demands for such information are made? Crazy.

      • Bob K. | April 6, 2012 at 1:22 pm |

        I’m not sure I follow whom you’re criticizing.

        Eli Rabett for incidentally identifying past rent-seeking activity as rent-seeking activity (it was, there’s no getting around it, by strict definition), or Eli Rabett for seeking to evenhandedly apply the same emerging standard many of you would (including me) view as a victory if it were real? (Or Eli Rabett for being Eli Rabett?)

        At the point the emerging standard is no longer just emerging, no longer optional, no longer something flouted by some, when it is as evenhandedly applied as standards of weights and measures and the full benefits of common practices flow to the entire marketplace of ideas, then the activity ceases to be rent-seeking, by strict definition, and simply becomes part of the level playing field for exchange of research information.

        No?

      • Actually, I’m not seeking rent; my statement was intended to illustrate how little worth I thought NCEP had relative to ECMWF, and I was putting some pretty big money where my mouth was. Personally, I am a big fan of the private-sector weather companies. In the private sector, if something you are doing doesn’t make sense, you go out of business pretty quickly. My objection is to the U.S. govt spending so much $$ on something that isn’t very good.

      • +5

      • Judith

        But can the private sector afford a lovely shiny new building like the Met Office has here in Exeter?

        http://www.metoffice.gov.uk/corporate/pressoffice/images/hq.jpg

        I am very impressed by their library, but perhaps Richard Betts can tell us why we can’t use the adjacent internal ‘public’ spaces, which include a splendid cafe. We hard-working library users have to make do with a coffee machine :)

        tonyb

      • curryja | April 6, 2012 at 2:12 pm |

        “My objection is to U.S. govt spending so much $$ on something that isn’t very good.”

        Which is what happens when one makes the mistakes the US govt has made: a funding model driven by what amounts to charity, lax controls and inverted incentives.

        If the funding is based on performance improvement, not on process; if the controls control for meaningful outcomes instead of political talking points; and if the incentives align with objectives, then NCEP becomes not a pile of taxpayer-paid-for data files whose clients can only be rent-seekers by the very statist nature of the system, but a worthwhile standards body for information exchange.

        The same logic as applies when the very broken FOI process (so exploited by McIntyre’s circle of hackers under Mosher) is replaced with open data in government (which would have meant Mosher et cie wouldn’t have had any incentive to taint themselves by descending to CRU’s level in a mud-slinging match over a very, very small amount of data and code), applies to fixing NCEP.

        If the government only provides access to rent-seekers, it makes rent-seekers of all who operate under the government. That satisfies no one.

      • Dr. Curry,
        +5

      • Bart,
        The answer to your last question is to replace the question mark with a period. Eli’s calling McIntyre’s or Mosher’s actions “rent seeking” is odd, and if anything it’s the other way around. When a scientist publishes results based on an analysis of data but refuses to release the data or algorithms to others to check results or replicate work, *that* is rent-seeking. Why? Because that scientist wants his or her paper to carry the same level of influence and peer-respect as others who do make such information available. That scientist is seeking rent on the goodwill the scientific community has accrued by maintaining high standards for research and publication. A properly functioning “marketplace” would deprecate the non-cooperating scientist’s work.

      • BobK. | April 6, 2012 at 9:15 pm |

        Your argument holds some water.

        Rent-seeking breeds like bunnies. (No offense, Eli.)

        A properly functioning market of ideas ought deprecate the work of data hoarders; the inversions of incentives in science are manifold and ought be fixed. This isn’t a bad place to start.

        Still doesn’t make FOI a good system, or not rent-seeking in the cases set out.

  28. Cees de Valk

    A note from Europe: if only we could have in place here the conditions for developing a private weather business that exist in the US… And if NCEP improved its freely available tax-funded forecasts to the level of ECMWF’s, the European national weather institutes would not be able to charge those high prices for their tax-funded forecasts.

  29. curious george

    Fortunately we now have very accurate (and very scary) 100-year climate predictions. Are they so much cheaper to produce?

  30. I’ve got to ask, are these models:

    1. ‘FEM’ (Finite Element Method) type models (say, modeling 10 x 10 x 10 km grids/boxes on the earth, linking the effects grid to grid) or

    2. something ‘rule-based’ (AI or ‘expert-system’ based) where troughs, surface winds, surface lows, upper level winds and pressures (high pressure centers, lows etc) etc are ‘tracked’ and forecasts inferred from those existing conditions on some sort of gridded basis?

    I expect it is 1. above, whereas a human eye/mind inspecting the same existing weather charts/upper-level soundings depicting present conditions uses 2. to extrapolate a forecast. [A toy illustration of approach 1 follows below.]

    My experience with FEM modeling is limited to EM tools, e.g. Ansoft HFSS, which solves Maxwell’s equations in 3-space using linked tetrahedra to compute conditions in and adjacent to conductors (conducting elements) and E and H fields into/out of dielectrics (air, plastics, foam, etc.). Success using HFSS is highly dependent on the correct selection of tetrahedron size and minimum geometry; too small and the number of tets exceeds CPU/compute-platform memory resources (as well as extending computational time), too large and the tets are too few, reducing accuracy to the point of perhaps missing important features in the EM structure under study.

    _Jim
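    For what it’s worth, operational global models are option 1 in spirit, though most use finite-difference or spectral discretizations rather than finite elements: the atmosphere is carved into a 3-D grid (or represented spectrally), and the governing equations are stepped forward in time with each cell influenced by its neighbors. A toy 1-D advection step, using a first-order upwind finite-difference scheme, illustrates the grid-to-grid coupling; this is an illustration of the numerical approach only, not a weather model.

```python
import numpy as np

# Toy grid-point dynamics: 1-D advection  dq/dt = -u dq/dx, solved
# with a first-order upwind scheme (valid for u > 0).  Real NWP does
# the 3-D analogue for wind, temperature, moisture and pressure.
nx, dx, dt, u = 100, 1.0, 0.5, 1.0        # CFL number u*dt/dx = 0.5
q = np.exp(-0.05 * (np.arange(nx) - 20.0) ** 2)   # initial "blob"

for _ in range(80):
    # each cell is updated from its upwind neighbor (periodic domain)
    q = q - (u * dt / dx) * (q - np.roll(q, 1))

print(np.argmax(q))   # the blob has advected ~40 cells, to around x=60
```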

  31. Gerald Browning

    It has nothing to do with computer power or satellite data.

    The equations of motion that are used in all global weather prediction models are derived for the atmosphere above the so-called boundary layer. (The equations assume hydrostatic balance, which is not applicable in the turbulent boundary layer.) When running such a model, the wind speeds near the lower boundary increase unrealistically, and the accuracy of the model is destroyed within 24 to 36 hours as the errors at the lower boundary propagate upwards in the model. There is a manuscript by Sylvie Gravel that shows how this error is the dominant one in a forecast with an operational global weather forecast model (manuscript available on request).

    To overcome the unrealistic increase in the speed at the lower boundary, modelers include a parameterization that is nothing more than a dissipative term that increases in magnitude near the lower boundary. Unfortunately, the parameterization is not physically accurate, so the model does not behave as the real atmosphere. The obvious question, then, is how a weather prediction model with such an obvious large error can stay on track. The answer is that it can do so only by inserting new observational wind data every 6 to 12 hours in an attempt to reduce the errors that have been building up very quickly.

    Clearly this problem has nothing to do with the amount of computer power. And the satellite data is useless unless there is in situ data, e.g. radiosonde data, to help with the complicated inversion of the satellite radiance data in clear skies. And if there is radiosonde data, there is no need for the satellite data.

    Finally, we note that these physically inaccurate parameterizations are present in climate models, with no way to bring the model back on track. Judge for yourself what that means for all of the claims that are made based on these models.
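    For reference, the hydrostatic approximation Browning refers to replaces the full vertical momentum equation with a diagnostic balance between the vertical pressure-gradient force and gravity:

```latex
\frac{\partial p}{\partial z} = -\rho g
```

    This balance is a good approximation for large-scale flow in the free atmosphere, but it breaks down where vertical accelerations matter, e.g. in the turbulent boundary layer and in convection, which is the regime the dissipative parameterizations described above are meant to paper over.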

  32. Happened to see this post and think that nothing much has changed since then. An update might be a timely reminder that meteorology should be given greater priority over global climate research, especially for longer-term weather forecasting.