by Judith Curry
A new study finds that human-caused warming in the west tropical Pacific was not responsible for a series of frigid North American winters experienced over the early 2000s.
A number of papers have been published in recent years arguing that global warming — via declining Arctic sea ice — is causing increased snowfall and colder winters in the Northern Hemisphere.
A 2014 paper published in Science by Tim Palmer entitled Record breaking winters and global climate change argued that rising greenhouse gas emissions may have played a role in the severe 2013-2014 winter in the U.S. midwest via a mechanism whereby global warming has caused changes in the location of the jet stream tied to warming of the surface waters in the tropical west Pacific.
A new paper by Sigmond and Fyfe contradicts Tim Palmer’s notion that tropical Pacific changes have increased the probability of the unusually cold winters observed in recent years.
Tropical Pacific impacts on cooling North American winters
Michael Sigmond & John C. Fyfe
Abstract. The North American continent generally experienced a cooling trend in winter over the early 2000s. This cooling trend represented a significant deviation from expected anthropogenic warming and so requires explanation. Previous studies indicate that climate variations in the tropical Pacific contributed to many mid-latitude climate variations over the early 21st Century. Here we show using large ensembles of fully-coupled, partially-coupled and uncoupled model simulations that in northwest North America the winter cooling was primarily a remote response to climate fluctuations in the tropical Pacific. By contrast, in central North America the winter cooling appears to have resulted from a relatively rare fluctuation in mid-latitude circulation that was unrelated to the tropical Pacific. Our results highlight how decadal climate signals – both remote and local in origin – can together offset anthropogenic warming to produce continental scale cooling.
Published in Nature Climate Change [link]. The paper can be read on ReadCube [link].
The methods used in the paper combine global observations with a series of climate model experiments that include fully coupled, partially coupled (constrained by observations) and uncoupled (atmosphere-only) simulations. Excerpts:
Having established that the models used in our study reproduce the observed linkages between the tropical Pacific and North America, we now investigate their simulated trends in the early 21st Century. First we employ the ensemble of fully-coupled CanESM2 simulations to explore potential links between variations in the simulated ΔSAT (surface air temperature) trend and variations in the simulated SAT trend averaged over NWNA (northwest North America). [There is] a positive trend in ΔSAT (corresponding to a weakening zonal temperature gradient) and a relatively small cooling trend over NWNA. This indicates that, had the simulated tropical Pacific variability in the model been aligned in time with that observed, it would likely have simulated the observed cooling over NWNA. This is our first line of evidence that the observed winter cooling over NWNA from 2001-2002 to 2013-2014 was a remote response to decadal changes in the tropical Pacific. Our second line of evidence is obtained from an ensemble of partially-coupled CanESM2 simulations where surface wind stress in the tropics is constrained to follow its observed monthly evolution from January 1979. In all of these so-called “pacemaker” experiments the observed trade wind intensification and associated ΔSAT increase is associated with cooling surface temperatures over NWNA. These results are consistent with relationships that exist between winter-to-winter fluctuations and 14-year trends in ΔSAT and NWNA SAT.
Indeed, it appears that the Aleutian Low weakening observed over the early 21st Century, arguably the most pronounced feature in recent Northern Hemisphere SLP (sea level pressure) trends, can be attributed to tropical Pacific climate variations. Moreover, the associated cold air temperature advection was sufficient to overcome the externally-forced warming to cause cooling over NWNA. By contrast, tropical Pacific related SLP trends over the east coast of North America drove southwesterly winds to produce warming, rather than cooling, over CNA (central North America). Hence, tropical Pacific variability cannot explain the observed winter cooling over CNA.
It is clear that tropical Pacific SAT trends played an important role in early 21st Century mid-latitude climate trends, particularly over NWNA. We ask whether this is due to cooling in the central to east Pacific, or to warming in the west tropical Pacific. To address this question we employ uncoupled simulations using CanESM2. We perform control runs with prescribed SST and sea-ice averaged over the period between 1997 and 2007, and simulations that use the control SST and sea-ice except over the key regions.
In response to warming in only the west tropical Pacific a slight warming of CNA is found, contradicting the notion that west tropical Pacific warming was responsible for the observed CNA winter cooling. Our analysis indicates that tropical Pacific changes during the early 21st Century produced a warming impact over CNA.
Using our uncoupled simulations, we investigate the speculation [of Palmer] that the probability of these unusually cold winters may have increased due to the tropical Pacific changes. To arrive at statistically robust conclusions, we extended the uncoupled control simulation and the simulation with SST changes in the tropical Pacific region, where the SST and sea-ice fields were taken from observations. We find that it is less likely to find a colder winter in the perturbed simulation than in the control simulation. Hence, our results contradict the notion that tropical Pacific changes have increased the probability of the unusually cold winters observed in recent years.
If the recent CNA winter cooling cannot be attributed to tropical variability, then what was its cause? To address this question we return to the large ensemble of coupled CanESM2 simulations and note that in three of 100 ensemble members a stronger than observed CNA winter cooling was simulated. Figure 6 shows the average simulated climate trends over the five ensemble members with the largest CNA winter cooling. As in observations this cooling is the result of northerly winds associated with a ridge of increased SLP over the west coast of North America and CNA. The composite shows that outside the mid-latitudes there are no climate trends that are substantially different from the forced response. This indicates that the observed CNA winter cooling over the early 21st Century was not a response to decadal changes in the tropical Pacific, but was instead the result of a local internally-generated fluctuation in circulation that was unrelated to the tropical Pacific.
Our analysis shows how remotely and locally generated decadal climate variations have offset anthropogenic warming to produce North American winter cooling over the early 21st Century. Similar to the slowdown in the rise of global-mean surface temperature this cooling is very likely to be a temporary feature, as future decadal variations could be opposite in sign and amplify anthropogenic warming of North American winters.
I featured this paper for several reasons. The first reason is that I really like the way the climate model experiments were designed, particularly in the use of simulations that are constrained by observations in key regions. This methodology is similar to that used by Kosaka and Xie discussed in a previous post Pause tied to equatorial Pacific surface cooling.
The second reason is that U.S. winter temperatures drive energy demand, determining the price of natural gas (and profits for energy traders and regional power providers) and also the amount of carbon emitted in keeping the nation warm. My company’s energy sector clients are already asking about next winter’s temperatures, and whether we can expect a typical La Niña pattern. Depending on the situation, we can start to get a read on next winter’s ENSO sometime during the period May-July. For this year, all the signs are for ENSO to be in negative territory for next winter — whether this means La Niña or neutral (with negative shades) can’t be decisively determined at this point.
Global seasonal climate forecast models are getting more sophisticated, with an improving track record, but their skill is not very high at making a forecast six months out (for winter, initialized in early summer). Hence statistical seasonal forecasts of ENSO and long-range weather patterns are also made by forecast providers. Untangling the relevant signals from past observations often provides tantalizing relationships that seem to work for a few seasons, then stop working, which of course implies that the complete climate dynamics in play weren’t adequately accounted for.
The key challenge is to integrate our understandings of the mechanisms and transitions of the modes of multi-decadal, interannual and seasonal climate variability so that we can develop better seasonal climate forecasts. Studies such as this new paper by Sigmond and Fyfe are providing key insights into the predictable versus unpredictable parts of seasonal climate variability.
Thank you, Professor Curry, for your persistence in getting to the bottom of deception disguised as “consensus science.” The BrExit vote suggests the end is near.
No more escapes from the EU:
Judith Curry said:
The impact that the weather has on natural gas prices is so great that, if one could reliably predict the weather a few months out, it would give a trader a great advantage.
But of course, as Nassim Nicholas Taleb explains in The Black Swan, man’s ability to predict is severely limited:
Zbigniew Brzezinski is generally viewed as one of the foremost Middle East experts after serving as Carter’s National Security Adviser. He was a guest on one of the morning shows while protesters were pushing for the ouster of Egyptian President Mubarak. Brzezinski was emphatic that the Egyptian President would never resign. Never he repeated.
Literally, Mubarak was gone the next day. So much for the predictive capabilities of experts.
… And if anthropogenic warming is a mythical creation, are the study’s conclusions any different?
Thus far, everyone has failed to create a global circulation model that doesn’t give warming from increased CO2, despite some 40 years of trying.
If you could find such a model, then it’d be quite interesting to run a similar study with it. But at this point… meh. I don’t think it’ll ever happen. The climate models are getting quite good at helping us understand even interdecadal variations, and as their skill keeps increasing, they aren’t predicting any less warming. Their sensitivities are still quite solidly within the consensus projections based on models, observational data, and paleoclimate data.
Actually, AFAIK, nobody’s really tried. In fact, it’s my guess that during the early model-building exercises, any model that failed to give warming would have been thrown out because they already assumed the “greenhouse effect” would produce warming.
If you know of any real efforts to code models that otherwise worked but didn’t produce warming, why not post links?
Whoa, whoa, whoa, totally incorrect. They are all programmed to include ln forcing for CO2, but from what I remember reading, it wasn’t until they allowed a supersaturation of water vapor that they could match measured warming; then they warmed too much, and modelers started using aerosols to cool them off to match measurements.
Unfortunately, though, the surface hasn’t warmed from CO2 much at all, so now we have broken models that they keep patching with other hacks to cool them off, because they all run warm.
AK is correct. No one has displayed a model that cools when CO2 goes up, even though such a model is clearly possible. It merely has to include one or more of the prevalent natural change hypotheses, which the modelers steadfastly refuse to do. Clouds, ocean, sun, emerging from the LIA, chaos, you name it.
The US NSF says officially that natural century-scale climate variability does not exist. How then can the people they fund say otherwise? At what point does this refusal to consider well-known hypotheses become a paradigm-protection scandal?
To that first point, quite the contrary. I’ve heard of work from the ’60s and ’70s which showed quite a correlation between model skill and GHG-induced forcings; they had trouble producing models that could replicate the real world while also showing no warming from increased CO2.
But, those were the early models, and I’m insufficiently familiar with what’s been done since. The models have changed significantly since the ’70s, and there are considerably fewer free variables for people to tweak. Most of the looser variables have been pushed out by actual physical measurements; this gives the climate modelers considerably fewer options for alternative models.
Probably the best chance there for a low-sensitivity model to emerge is to check that the variables accurately sample the physical uncertainties in time, space, and interactions. From what I understand, there’s a lot of progress being made on this front.
And let me tell you, I’m an “alarmist”, but if I could produce a model with low CO2 sensitivity, that could reproduce the real climate with equal skill as current GCMs? I’d publish that in a split second. It’d be a *fantastic* thing to have on your CV, and it’d be of immense use to the scientific community, as it’d highlight areas of climate modeling that are both uncertain and highly relevant to the Big Question of sensitivity.
No, they’re programmed to calculate the optical depth, given a certain pressure and concentration of gases. Meaning, they calculate the radiative properties of the gas, and that couples to the temperatures via physics. But they never assume that CO2 causes warming. (That’d be obviously stupid: you can’t use a model to show something that you started by assuming).
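For intuition only, and not what a GCM actually does (GCMs solve the radiative transfer directly, as described above): the sketch below uses the logarithmic curve fit for CO2 forcing commonly quoted in the literature (Myhre et al., 1998), combined with an assumed round-number sensitivity parameter of 0.8 K per W/m² (roughly 3 K per doubling). Both numbers are illustrative assumptions, not outputs of any model.

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Simplified CO2 radiative forcing in W/m^2 (Myhre et al. 1998 curve fit)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

def equilibrium_warming(forcing_wm2, sensitivity=0.8):
    """Equilibrium temperature response in K for an ASSUMED sensitivity
    parameter (~0.8 K per W/m^2, i.e. ~3 K per CO2 doubling)."""
    return sensitivity * forcing_wm2

f = co2_forcing(560.0)                   # doubled CO2 relative to 280 ppm
print(round(f, 2))                       # ~3.71 W/m^2
print(round(equilibrium_warming(f), 2))  # ~2.97 K
```

The point of the exercise: nothing here "assumes warming"; a forcing follows from the radiative properties of the gas, and a temperature change follows from the forcing given a sensitivity. In a GCM the sensitivity is emergent rather than prescribed.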
The physics of clouds, the ocean, and solar radiation are all included in the models. Chaos (on ~year or ~decade timespans) is an emergent phenomenon of the math and physics, though it’s tamped down over multi-decadal timespans by natural damping factors.
You can’t “include a hypothesis” in the models in the way that I think you mean. You have to include some piece of physics. The climate in the models emerges from the physics and numerics, rather than being imposed from above. That’s why they’re useful; otherwise it’d just be masturbatory self-confirmation.
This is also where many of the arguments against global warming fall flat – they don’t have any physics to back them up. There’s no new mechanism that would explain how CO2 fails to warm, or how the warmer air fails to hold more water, etc. You need to have the physics, something fundamental that can actually be expressed in terms similar to a physical law.
“Thus far, everyone has failed to create a global circulation model that doesn’t give warming from increased CO2, despite some 40 years of trying.”
Unfortunately, it doesn’t matter what the global circulation does or does not do. The Earth has cooled since its surface was molten (creationists may beg to differ, but that’s my assumption). Cooled, not warmed.
CO2 provides no heat. It’s not even a good insulator. A vacuum flask provides far better insulation than a notional CO2 flask, and a vacuum flask possesses no heating ability either.
Any model that shows CO2 warming merely shows that its creator is somewhat deluded. CO2 is excellent stuff. Food for the staff of life.
The bumbling fumblers promoting the reduction of CO2 can’t even tell you what would happen if the climate miraculously stopped changing! Foolish fatheads, one and all!
We know England was far from basking even before Tambora when the Arctic was clearly described as open by Banks and the RS in 1817. In fact that period was a shocker for snow and cold.
We know of storms and bad winters when the Arctic was clearly described as open in the 1920s. (Knickerbocker Storm was 1922.)
On the other hand, wasn’t there plenty of ice in the Arctic during North America’s legendary winter of 1888? And for England’s big freezes of the early 1960s.
(Of course, we’re supposed to pretend – to keep those jet trails streaking toward climate conferences – that the Arctic ice has been static for centuries and not going up and down like Bill’s trousers. Just thought I’d play Let’s Pretend We Stop Pretending for a moment.)
Great quote: “and not going up and down like Bill’s trousers.”
If you think there is a problem with Climate conferences, you are attending the wrong ones. There will be a climate conference in London in September, with many skeptics. https://geoethic.com/london-conference-2016/
Having written two articles now on Arctic ice variability around the 1820s and 1920s, I’d say it should be obvious that ice variability is considerable.
There is another period of low and fragmented ice cover around the mid-1550s, when the North East Passage was possibly navigated by Russian explorers. The Vikings also sailed round northern seas that were often low on ice, but interspersed with harsh periods.
I have gathered a great deal of information from personal visits to the archives of the Scott Polar Institute in Cambridge and the Met Office library in Exeter. Some day I hope to write Historic Variability in Arctic Ice Part 3, covering 16th-century Arctic ice variability.
In the meantime, however, I plough on with researching a CET-type temperature record for 13th-century Britain, a period of considerable cold but interspersed with a few very hot summers, very mild winters and many extreme weather events.
The knowledge that the historic climate was often very variable, with frequent severe weather events, seems to have been replaced by an unsupportable notion that the climate didn’t change much until modern man came along. Baffling, as the records show otherwise. Perhaps the idea has arisen because the recent climate has often been relatively benign?
Are we seeing a reversion to the considerable variability of the past? I dunno.
Tonyb, if reverting I could certainly do without the 14th century. St. Mary’s Wind and Grote Mandrenke would make 1362 the worst year to be outdoors in England, though being over on the continent in Edward III’s army in 1360 might mean getting clobbered by giant hail or freezing to death. Those soggy cold famine years of 1314-16 would have been the most lethal event of the century, were it not for the Black Death later on.
Of course, the greatest of all climate disasters hit not England but the continent in 1342. For once, when talking about the Magdalene Flood, the word “unprecedented” might apply.
It’s clear that, as you point out, the previous century was no pussycat (Divine Wind!) but the combination of disease, famine, war, cold, flood, storms and even drought after 1300 would do me in. I’m so used to radiant winter days and summers fanned by maritime breezes that I’m a total climate sissy.
Well, one day the Holocene will be over or we’ll get caned by a Bond Event as a rehearsal. Maybe then the 1300s won’t seem so bad.
“wasn’t there plenty of ice in the Arctic during North America’s legendary winter of 1888?”
Extremely unlikely with a warmer AMO anomaly in the late 1880s.
You could be right (and I wrong).
Lamb was of the opinion that waters east of Greenland were more open prior to the big freezes of the 1880s and even 1960s, as well as early 1600s etc. He writes of ice increasing substantially around Greenland/Iceland only well after the brutal winters of the early 1960s, with increases around Newfoundland/Labrador coming a decade later.
But Lamb also writes of a huge ice barrier not far from the Faroes in 1888. (1888 was the year of Nansen’s Greenland expedition, but no mention by him of particularly open conditions. He might have had other things on his mind.)
All too complicated for me. Open water where whalers and navigators went didn’t mean the whole Arctic was like that. Maybe you had to be there.
“He writes of ice increasing substantially around Greenland/Iceland only well after the brutal winters of the early 1960s, with increases around Newfoundland/Labrador coming a decade later.”
Well naturally, as the AMO cooled as a result of a positive NAO/AO regime. A negative NAO/AO regime is the norm during solar minima, including the Gleissberg Minimum of the late 1800s.
Here we show using large ensembles of fully-coupled, partially-coupled and uncoupled model simulations that in northwest North America the winter cooling was primarily a remote response to climate fluctuations in the tropical Pacific. By contrast, in central North America the winter cooling appears to have resulted from a relatively rare fluctuation in mid-latitude circulation that was unrelated to the tropical Pacific. Our results highlight how decadal climate signals – both remote and local in origin – can together offset anthropogenic warming to produce continental scale cooling.
The results highlight how really terrible Climate Models are. They have no clue as to why the Climate Models do not ever produce skillful forecasts.
Why weren’t Michael Sigmond & John C. Fyfe rowed out to sea and drowned, and this paper on natural causes destroyed? Has all order broken down? Oh wait… it still has an alarmist conclusion and floats the fear that natural causes combined with AGW will be the death of all. Thank God!
That’s right. Anybody who questions even one article of the one true faith is a denier and must be burned at the stake.
… the profoundest thought or passion sleeps as in a mine, until it is discovered by an equal mind and heart. ~Emerson
Record coldest day evah: June 26th in Shepparton, Australia, at 8.0 C [in records].
Near Moyhu, Nick.
How’s that, JCH? Waited for this.
Next 2 years will be a constant reminder of dropping temperatures just for you.
Seeing the local factors can be so impressive, Judith; I hope none occur to put your external-factors forecast off.
And you are also ridiculous.
Also coldest ever in Yarrawonga [I’ll linger longer]
The Bureau of Meteorology reported that during the day on Sunday, Shepparton, with 8 degrees reported its coldest June maximum temperature on record, as did Yarrawonga, with 7.7 degrees, and Kilmore with 4.6 degrees.
Note the temperatures are in freefall with El Niño fading. [See Moyhu charts]
Get used to egg on the face; you threw a lot.
“Here we show using large ensembles of fully-coupled, partially-coupled and uncoupled model simulations that in northwest North America the winter cooling was primarily a remote response to climate fluctuations in the tropical Pacific.”
It is hard to understand why so many people believe that their models explain the complex physical world. I guess it must provide some comfort to think that what was unknown and unpredictable and wild, has now been tamed.
If somebody looks through the comments about the weather made in the family bible and decides they can plant corn a week earlier, that is somebody who believes his regional climate model can predict the future based upon past climates. Not betting on the future is a complete impossibility, so we go forth… and we pick a day to plant the corn.
The predictions concerning the weather that were made in Old Testament times were far more modest than the predictions that Warmists/Alarmists are peddling these days.
Buy hey, I understand. The Warmist/Alarmists believe that if they can get their hands on that CO2 control knob, they can control the weather.
It’s all part and parcel of the modern project envisioned by Descartes and Hobbes, “asserting the capacity of the human will and the capacity of man to make himself master and possessor of nature” through the scientific enterprise.
However, there is less difference between the omnipotent God as posited in the Bible and omnipotent Man as posited by the Warmists/Alarmists than one might think. As Michael Allen Gillespie concludes in The Theological Origins of Modernity:
Marxists, neoliberals, neoconservatives and New Atheists (or SAM, the “skeptic and atheist movements” as Massimo Pigliucci calls them) have all drunk lavishly of the Progressive Kool-Aid.
And indeed, the modern project has had its share of success. As Gillespie goes on to explain:
But despite its shortcomings, the modern project is still widely believed in, as Gillespie goes on to explain:
And because Modernism is so flattering to the scientific enterprise, it should come as no surprise that many, if not most, scientists are true believers in the modern project.
Glenn, I don’t believe JCH was referring to the text of the Bible itself, but the way the book was used by rural families to record significant facts like family births, deaths, and weather trends.
My parents were both farmers, my father from German peasant stock. Both were extremely pious Christians. So I was referring to this:
But as my parents well knew, every once in a great while, at least in West Texas where they lived, you get a hard freeze after Good Friday.
So what are you saying, that JCH was implying a bit more empiricism?
I doubt it. Cross-cultural studies reveal that’s not really how human beings operate:
I realize that Modernism teaches that we made a “clean break” from all this ancient history, that we began with a “clean slate” and a “fresh start” upon the advent of the modern project.
But the behavior of the Warmist/Alarmists indicates the old ways of superstition and myth are still very much with us.
Good Friday, whose date shifts by almost a month, is not a good reference point.
My guess was/is that he was referring to the way that old Bibles included a number of blank pages to be used for recording births, deaths, and other interesting events. A farmer might be able to go back a century or two and find references to late hard freezes, or other items that might constitute a risk.
That’s one scholar’s view.
I’ve rambled on about the subject on occasion.
The evolutionary biologist David Sloan Wilson (an advocate of multi-level selection theory, where group cohesion is important, as opposed to individual-level selection theory, where group cohesion is not important) made an interesting claim about myths (religious, national, etc.):
There seems to be some truth to this. Here, for instance, is an historical example:
Yes. And the construction of “fictional” genealogies as a metaphor for tribal alliances among pre-literate “tribal” societies would also fall into that category. IIRC there was a discovery (early 20th century?) that certain Islamic populations under British(?) colonial rule actually used tribal genealogies as a legal record of tribal alliances and obligations.
The genealogies in Genesis may record similar phenomena. There are differences among the various lists, and these may well represent records from different times of “tribal” alliances and obligations during the conversion from purely a “tribal” system to a religious or urban-dominated federation.
Couldn’t find the ref’s I wanted with a quick search, and don’t have time for a longer one, but maybe I’ll dig them up sometime later if there’s interest. IIRC it’s a pretty widely accepted phenomenon, or was during the later 20th century.
The creators of the European Union also attempted to formulate a mythology that would serve as the glue that would bind the union together:
However, the myth created by the founders of the European Union doesn’t seem to have prospered and served its purpose as well as the Batavian myth did for the Dutch Empire:
Lol… just hilarious.
And we here in the United States also have our national mythology, which as James Baldwin pointed out in The Fire Next Time, is largely fictitious. But nevertheless, it has been highly adaptive for some Americans — those who could “get it,” as the Rev. Martin Luther King, Jr. put it. Baldwin writes of our national mythology as follows:
Unlike the separatists — like the Chicano movement or La Raza, which Judge Curiel belongs to — the Rev. King always advocated assimilation and integration:
When does the family Bible tell you to plant for the year 2089?
What is ‘lol, just hilarious?’
JCH: Lol… just hilarious.
What in particular was that a response to?
HFCs, CFCs, and ozone… rinse, repeat.
Of course, if you are serious you don’t use slobby terms like “record-breaking” and “climate change” relying on their popular/emotional loading. You at least qualify carefully, but at best you just don’t use the lax expressions at all.
The slob terminology is handy though. You can dismiss Tim Palmer without having to read his schlock.
I would like to add my voice to others’ here: The climate models they used have no skill, cannot predict anything anywhere, and their findings are of no value whatsoever. Models need to be tested (successfully) before being used.
Agree. See details on CanESM2 below.
” The climate models they used have no skill, cannot predict anything anywhere, and their findings are of no value whatsoever. ”
Argument by assertion: a common anti-science tactic.
Skill can be measured.
Merely asserting a model “has no skill” is not an argument. It’s an assertion.
Calculating the skill will give you better ground to stand on.
Then you have to assess the skill relative to other types of forecasts.
For example. Every GCM beats the skeptical climate models on multi metric tests because there are no skeptical climate models that output multiple metrics.
SM, so I give specific CanESM2 skill metrics below. Pretty bad. Only took a half hour of googlefu. In this case the assertion was correct.
I don’t understand how a scientist can take the Wikipedia “Forecast skill” article seriously. It has a scientific-looking formula, but it does not define any of the symbols defined in the formula. Moreover, it seems to be a simplistic definition of a skill for a one-dimensional case only, totally unsuitable for climate models.
Grrrh .. it does not define any of the symbols USED in the formula.
Be curious, George, and click on linkies:
… or read the references.
“For example. Every GCM beats the skeptical climate models on multi metric tests because there are no skeptical climate models that output multiple metrics.”
What utter nonsense. Even the IPCC says that forecasting future climate states is not possible.
As to GCM skill, if 73 climate models produce differing outputs, then at least 72 are wrong. Even if you claim one is correct, you can’t say which one, or whether it will be correct tomorrow. A pointless waste of time.
An individual model will produce differing outputs, depending on initial parameters and number of iterations. Lorenz did useful work after discovering this. As the IPCC said, climate is chaotic.
All climate models are a complete waste of time, and never produced anything useful. The blind leading the halt, so to speak. You remain clueless as to the nature of chaos. Don’t worry, so are most scientists, more’s the pity.
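Lorenz’s finding is easy to reproduce with a toy model. The sketch below integrates the Lorenz-63 system (a toy, not a GCM) twice with a crude forward-Euler scheme, perturbing one initial condition in the eighth decimal place; the tiny difference grows by many orders of magnitude, which is exactly the sensitive dependence on initial conditions being discussed.

```python
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One crude forward-Euler step of the Lorenz-63 system."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-8, 1.0, 1.0)   # identical except in the 8th decimal place
for _ in range(3000):        # integrate both copies for 30 model time units
    a, b = lorenz_step(a), lorenz_step(b)

gap = max(abs(p - q) for p, q in zip(a, b))
print(gap)  # the 1e-8 perturbation has been amplified enormously
```

Note that both trajectories stay bounded on the attractor even though their paths decorrelate completely; this is the usual distinction drawn between predicting a trajectory (weather) and predicting the statistics of the attractor (climate).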
One does not have to find other models better or worse to determine the skill level of a particular model. My contention is that there is probably no competent model capable of being formulated at the present juncture.
By definition a skill score *is* a comparison to some other model:
A statistical evaluation of the accuracy of forecasts or the effectiveness of detection techniques.
Several simple formulations are commonly used in meteorology. The skill score (SS) is useful for evaluating predictions of temperatures, pressures, or the numerical values of other parameters. It compares a forecaster’s root-mean-squared or mean-absolute prediction errors, Ef, over a period of time, with those of a reference technique, Erefr, such as forecasts based entirely on climatology or persistence, which involve no analysis of synoptic weather conditions: SS = (Erefr − Ef) / Erefr.
If SS > 0, the forecaster or technique is deemed to possess some skill compared to the reference technique.
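The quoted definition is easy to sketch numerically. A minimal illustration (all values hypothetical, using mean absolute error as the error metric; only meant to show how the formula behaves):

```python
# Skill score SS = 1 - Ef / Eref, where Ef is the forecaster's error and
# Eref the error of a reference technique such as climatology.
# All numbers below are made up, purely to illustrate the formula.

def mae(forecasts, observations):
    """Mean absolute error between two equal-length sequences."""
    return sum(abs(f - o) for f, o in zip(forecasts, observations)) / len(forecasts)

obs       = [10.0, 12.0, 9.0, 11.0]   # observed temperatures
forecast  = [11.0, 12.5, 8.5, 11.5]   # the forecaster's predictions
reference = [10.5, 10.5, 10.5, 10.5]  # climatology: a constant long-term mean,
                                      # involving no synoptic analysis

e_f   = mae(forecast, obs)    # 0.625
e_ref = mae(reference, obs)   # 1.0
ss = 1 - e_f / e_ref          # 0.375
print(ss)                     # SS > 0: some skill relative to climatology
```

Had the forecaster's error exceeded the climatology error, SS would have come out negative: skill here is always relative to the chosen reference.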
Your contention about “competent” models is meaningless because there is no absolute definition of competence. When talking about model skill, there is only fidelity to observations relative to some other technique based on an error metric. If your definition of “competent” is zero error, well, you’re right — we can’t obtain that goal and never will. OTOH, reasonable people understand that a model is useful within the limits of its estimated uncertainty and don’t expect the impossible.
Wake me up when “skeptics” produce CMIP5-compatible model that beats this ensemble:
It should also be more “competent” at things other than surface air temperature. Good luck.
Brandon, I understand how to compute a skill of a model to predict a temperature in Berkeley at 2016/06/29,23:55. Substitute Singapore, Paris, Tokyo, or Cairo. Substitute a 1-hour rainfall for temperature. Substitute atmospheric pressure. Substitute a percentage of cloud cover. Substitute another date or time. How do you measure ONE skill for all of the above?
Why is it that Warmists are always demanding that realists produce a climate model? As the IPCC say, future climate states are not predictable.
Don’t you think it’s a bit odd that Warmists keep stridently demanding that others provide the miracle that the Warmists can’t, as well as being something that the High Priests of Warming say won’t work, anyway?
Climate models. Even Warmists admit that they are both pointless and useless. What kind of fool would call this scientific?
A climate fool, obviously!
MF, precisely the point. The “skeptics” only have to produce a model that doesn’t (miraculously to you) warm when you double CO2? How hard can it be? But they haven’t. Maybe it is hard after all, because it defies both physics and observations.
As the skill is easy to calculate: please publish the skill of climate models predicting record breaking American winters. That’s (partly) what this post is about.
Record-breaking winters are weather, not climate. Climate models don’t do year-by-year predictions, and no one claims they do.
Each one of those will be its own skill score.
We might find that one GCM does well on GMST, but is bad in the higher latitudes relative to the tropics whereas a different GCM is overall poorer at GMST but does better than the first model at high latitudes. Perhaps a third GCM does better than both at precipitation. The whole idea behind CMIP was to do those sort of comparisons in a standardized way at the regional and grid level.
Main point of all of this is that saying a model has “no skill” in *absolute* terms is meaningless. Skill is only meaningful for comparing models *relative* to some other reference model — which can be anything you want.
You wrote –
“Main point of all of this is that saying a model has “no skill” in *absolute* terms is meaningless. Skill is only meaningful for comparing models *relative* to some other reference model — which can be anything you want.”
I see. I prefer models which are useful. Comparing one model’s output with another’s to determine usefulness is plainly stupid. By that standard, if two models agree, then they are both perfectly skilful.
Nonsense. A model that does not relate to reality is at best a toy, and not worth a brass razoo, except as an example of what to avoid.
Climatologists keep exhibiting the symptoms of mental retardation by performing the same actions – programming toy computer models using the same Warmist pseudo physics – and expecting a different outcome.
I wish it were otherwise, but, sadly, it’s not.
Speaking strictly for myself, I presume that self-proclaimed expert climate realists ought be able to demonstrate their superior knowledge by putting it into practise. You know, like advancing knowledge of how things work instead of cluttering up the Internet and airwaves ranting about how stoopid and deluded warmunists are.
Being more useful in other words. Speaking of:
Neither is weather, hence various weather services issue probabilistic estimates. In like fashion, the IPCC publishes ranges of possibility. Because everyone knows earthquakes aren’t predictable, seismologists do something similar:
What a bunch of rent-seeking grant-whoring pseudo-scientists, hey? No cookies! Obviously plate tectonics is a dead theory!
Meteorologists at least use normal physics. Even so, they have great difficulty in bettering the naive forecast, which can be performed by a reasonably intelligent 12 year old.
Unfortunately, laws prevent the employment of child labour, generally. Also, children tend to get bored, and move on to something more interesting.
As an example of the lack of forecasting skill by well-funded experts, you need go no further than the British Met Office’s services to the BBC. The Met Office forecasts over many years were inaccurate to the point that the BBC sacked the Met Office, and called for tenders from organisations prepared to be accountable for their forecasts.
The scientists jailed for manslaughter in Italy due to incorrectly foreseeing earthquake activity, were released on appeal, after claiming that it was impossible to predict earthquakes, and that past activity was not an accurate predictor of the future. Not their fault, you see.
Backside covering, by presenting guesses – even best guesses based on science – is meaningless. The assessed likelihood of an occurrence of an earthquake, flood, dam failure, or whatever, is meaningless in real terms.
Chance of rain tomorrow – 70%. How much? Where? Can I pour concrete without rain affecting the job? Can I plant a field without getting too much rain?
Climatology explains absolutely nothing. Climate is the average of weather. Meteorologists or geologists may not be able to predict the future, but they can explain why things have happened. This is called science. It’s driven by innate curiosity.
Keeping in mind that we’re specifically discussing GMST, “reality” is also a model. Frex, one gets different skill scores using HADCRUT4 vs GISTemp as the model of reality.
No, just equally skillful. Perfect skill using this formula …
… is when the model output exactly equals the observational model output because 1 – (Ef / Erefr) = 1 – (0 / x) = 1 for any positive value of x. E can be any number of different error metrics, MSE is popular.
Pretty clearly the reference model used in this method should have some bearing on estimated reality to be useful. Persistence (no change) and climatology (the mean of observations over some suitable time interval) are popular reference models. Only you would think that meteorologists would be so retarded as to not grasp such an obvious concept. Nice try though.
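The two popular reference models named above can be written out explicitly. A sketch with invented daily temperatures, just to show what "persistence" and "climatology" mean as forecasts:

```python
# Two common reference models for skill scoring (illustrative values only):
#   persistence: tomorrow's forecast = today's observation (no change)
#   climatology: forecast = the mean of observations over the period

obs = [8.0, 10.0, 9.0, 12.0, 11.0]  # hypothetical observed temperatures

# Persistence forecasts for days 1..4 are the previous day's observations.
persistence = obs[:-1]
# Climatology forecasts are the constant mean of the record.
climatology = [sum(obs) / len(obs)] * (len(obs) - 1)

def mae(forecasts, observations):
    return sum(abs(f - o) for f, o in zip(forecasts, observations)) / len(forecasts)

print(mae(persistence, obs[1:]))  # 1.75 - error of the "no change" reference
print(mae(climatology, obs[1:]))  # 1.0  - error of the "climate mean" reference
```

On this toy record, climatology happens to beat persistence; a forecaster claims skill only by beating whichever reference is chosen.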
Proper sceptics don’t insinuate vast global conspiracies to use “fake” physics. Fun fact: meteorologists were using global circulation models as early as 1978. The stuff I learn doing your homework for you!
Properly educated, a reasonably intelligent 12 year old knows the difference between weather and climate and understands that predicting the former is an initial conditions problem whilst the latter is typically approached as a boundary value problem. Only the most obstinate of dullards insist that state of the art climate modulz produce long-term weather forecasts …
… OTOH, perhaps all with you is not lost.
Your friendly neighborhood climatologist would be all too happy to explain this to you after the fact …
… if only you hadn’t already decided that (s)he’s using post-normal physics to demonstrate the causality.
Indeed. Perhaps one day you’ll display some actual familiarity with either.
You wrote –
“Pretty clearly the reference model used in this method should have some bearing on estimated reality to be useful.”
I’m curious as to why you insist on comparing one model to another reference model which “. . . should have some bearing on estimated reality . . .”
Why not compare the first model’s output to actual reality, measured to the best resolution of your instrumentation?
I know this might cause consternation amongst model fixated Warmists, but it would at least remove one confounding factor.
Climatological models have no utility, as far as I am aware. I am sure you can respond with at least one documented case where a climate model proved useful where all other scientific advice failed.
Or maybe not. I leave it to you.
That’s already being done, Rufus. Let’s review the formula again:
See the error terms Ef and Erefr? We *could* simply use the raw error term Ef and skip Erefr, but suppose our model has two output variables with different units, say temperature in Celsius and rainfall in millimeters. Taking the ratio of the error terms allows us to more easily say things like, “this model’s temperature forecast is worse than its rainfall forecast”.
Not only do meteorologists use normal physics as you so charmingly put it, they’re also apparently numerate to boot. Who knew?
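The point about mixed units can be made concrete. A sketch with hypothetical temperature (deg C) and rainfall (mm) forecasts: the raw errors are in different units and cannot be compared, but the dimensionless skill scores can:

```python
def skill_score(forecast, reference, obs):
    """SS = 1 - Ef/Eref using mean absolute error. Dimensionless, so scores
    for variables with different units become directly comparable."""
    mae = lambda f: sum(abs(a - b) for a, b in zip(f, obs)) / len(f)
    return 1 - mae(forecast) / mae(reference)

# All values hypothetical, for illustration only.
temp_obs,  temp_fc,  temp_clim = [20.0, 22.0, 19.0], [21.0, 22.0, 20.0], [20.5, 20.5, 20.5]
rain_obs,  rain_fc,  rain_clim = [5.0, 0.0, 12.0],   [4.0, 1.0, 9.0],    [6.0, 6.0, 6.0]

ss_temp = skill_score(temp_fc, temp_clim, temp_obs)
ss_rain = skill_score(rain_fc, rain_clim, rain_obs)
print(ss_temp, ss_rain)  # ss_temp < ss_rain: the temperature forecast
                         # shows less skill than the rainfall forecast
```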
“Our analysis shows how remotely and locally generated decadal climate variations have offset anthropogenic warming to produce North American winter cooling over the early 21st Century.”
My solar-based forecast predicted a long cold hit to the NE U.S. commencing from around 6/7 Jan 2014. After the event, the best analogues I found, where in winter the AO went negative and the NAO went positive with stormy, wet, mild conditions in the UK (presumably caused by similar NE Pacific ‘warm blob’ blocking), were a cluster in the last solar minimum in the late 1800s.
Reblogged this on The Ratliff Notepad.
Reblogged this on I Didn't Ask To Be a Blog.
I’m beginning to appreciate more the difference between monthly means and daily extremes. Certainly for heating fuel use, monthly means matter. But as a matter of human nature, with respect to climate change many of the storylines gravitate toward what happens at the extremes (because monthly averages are too boring).
Here are the average numbers of daily highs lower than various thresholds for CONUS-only GHCN stations (number of occurrences divided by number of valid stations for a given year). There does not appear to be any trend larger than the inter-annual variability. Meaning: extreme cold in the US continues to be normal.
Depending on the verisimilitude we invest in its conclusions, doesn’t this paper represent the summum of academic treachery? If it acknowledges the power of science to discern that the real cause of the relatively cooler winters during the early 2000s was not man-made, then we must also acknowledge that the paper further proves what every honest scientist has already learned: that the AGW hypothesis can never be scientifically validated. But instead, the paper continues to cling to the notion that AGW is real.
For false Gods to exist, there must be real Gods.
Even if we accept as fact that the naturally-caused cooler winters during the early 2000s would have been even cooler without some measure of systemic man-caused global warming, what does this study purport to say was the magnitude of this AGW effect? What could the study ever conclude about man’s warming effect on nature without knowing how much colder, albeit for purely natural reasons, it was in the early part of the 2000s than in the usually-naturally-less-cold winters, when there is no measure for how cold a winter really ought to be, especially when the study is saying that the colder-than-normal winters of the early 2000s must have been unnaturally warm due to AGW?
What determines N. American winters are the climatic oscillations from ENSO to the NAO etc which have no correlation or tie in to the hoax of AGW.
Remember, AGW theory calls for an increasingly positive NAO going forward; they seem to be wrong on that as well as everything else.
I place no value whatsoever on these results. The reason is that all the modelling was done with CanESM2 from CMIP5, and it produces nonsense.
According to AR5 WG1 Chapter 9 Appendix 1, CanESM2 has a grid resolution of ~2.8 degrees. That is the lowest of all the GCMs. It produces among the highest discrepancies to balloon, satellite, and surface temperatures since 2000, and arguably the most significant tropical troposphere hotspot (which does not exist in reality). To see how awful CanESM2 is, read Ken Gregory’s guest post at WUWT, October 24, 2013. According to Gillett et al. 2012, the CanESM2 TCR is 2.3-2.4. (Skeptical Science discusses this paywalled paper in depth.) The observational energy-budget TCR according to Lewis and Curry is ~1.3-1.4. CanESM2 runs hot by about 1.75x.
There is no way such a poorly performing model should be relied on to conclude anything concerning decadal regional downscaling, as here for NW North America. The direction of inquiry is legitimately interesting, agreed. This particular result is not.
ristvan: I place no value whatsoever on these results.
I think that the value comes from the demonstration, yet again, that announced results based on current knowledge and model proficiency (e.g. Palmer, 2014) cannot actually be relied upon. In the hands of experienced practitioners (e.g. Palmer et al.; Sigmond and Fyfe), the models do not even unequivocally produce any consistent outcomes. Or, as is sometimes written on this site, there are so many degrees of freedom (options available to the modelers) that just about any predetermined outcome can be obtained. I don’t know that any of the modelers intentionally made choices in order to get their desired outcomes, but the range of possibilities is vast.
The study results should not be relied upon for policy purposes or personal decisions (I agree with you there), but they are a contribution to the now enormous pile of modeling approaches that can be compared to future data.
For that audience that thinks that the model outputs are really, really valuable, this is another cautionary tale.
Having established that the models used in our study reproduce the observed linkages between the tropical Pacific and North America,
that’s a start. If their models make forecasts for the future, perhaps those out-of-sample data can test them.
Are we headed for a new solar minimum?
Blog post by Judith A. Curry, Climate Etc., Jun 27, 2016.
Abstract: “We can conclude that the evidence provided is sufficient to justify a complete updating and reviewing of present climate models to better consider these detected natural recurrences and lags in solar processes.” – Jorge Sánchez-Sesma
In pondering how the climate of the 21st century will play out, solar variability has generally been dismissed as an important factor by the proponents of AGW. However, I think that it is important that scenarios of future solar variability and their potential impacts on climate should by considered in scenarios of future climate change.
“Solar Cycle Progression.” Accessed May 22, 2016. http://www.swpc.noaa.gov/products/solar-cycle-progression.
makes the case that solar activity is very weak and probably going to become weaker going forward.
Anytime we read “Here we show using large ensembles of fully-coupled, partially-coupled and uncoupled model simulations that…” we need read no further. Models cannot show anything about climate. At most they can suggest a possibility. Models are not observations; they are people playing with equations.
Worse than that, there’s a tacit assumption that these big ensembles are actually sampling the space of all possible samples. That’s (probably) not true. They tend to all be based on the same assumptions. They make similar mistakes: for instance, they all seem to be getting the monsoon effect wrong. In the same way.
The fact that they aren’t scattered all over the map WRT phenomena that should be unrelated (to supposed differences in model programming) strongly suggests that they all have a set of built-in assumptions that are wrong.
Not necessarily. Systematic biases can also result from things like insufficient resolution, arising from interactions between grid size and the physics, or between grid size and the sub-grid parameterization. And in fact, some of the biases do markedly decrease as resolution increases.
But that distinction is a bit hard to tease out, as sub-grid parameterizations depend on the size of the grid. You can’t separate one from the other.
Maybe it’s better to say that as resolution increases, the physics gets easier to do.
BW, quite so. This ensemble is all results from one of the two lowest-resolution models in CMIP5. See my comment above. Consequently, it is extra parameterized, and consequently it performs very poorly; see my comment above for several specifics. So, as much as the post heading may be welcome news to skeptics, it just isn’t a solid or trustworthy paper.
We read that “A new study finds that human-caused warming in the west tropical Pacific was not responsible…”
But we do not know that this warming is human-caused to begin with. Hence the statement is worse than false; it is vacuous, referring to something that is not known to exist.
Agree. The whole thing seems another rather shoddy climate model video game exercise.
A thousand bumbling buffoons given immense resources, may chance upon something scientifically useful.
Unfortunately, they would be incapable of recognising it.
Why do we keep funding the buffoons?
Because we can’t identify the buggers in advance. They’re cunningly disguised as normal people. Their buffoonish nature emerges later. Chaos at work – unpredictability writ large!
> A thousand bumbling buffoons given immense resources, may chance upon something scientifically useful.
Which is why we should never give up on Denizens.
Aw, come on Willard. If the thousand bumbling buffoons have collectively made tens of thousands of comments, there must be a number of scientifically useful nuggets. Perhaps you can unearth them for us?
> Aw, come on […] Perhaps you could […]
Sure, TonyB – would you like a coffee too?
MikeF gave me the puck in front of your open net. This one’s on Denizens. Go team!
Judith hammers home the nail with this threesome: “…mechanisms and transitions of the modes of multi-decadal, interannual and seasonal climate variability…”.
I see three data sets that can combine. Simple math learned in grammar school tells us how many combinations can occur in a three-way combination of data sets. Seasonal forecasting is therefore still outside the realm of actionable decisions to prepare for such events much in advance, even if we knew the full modes in each set.
Therefore, rank-and-file citizens of every flag must fall back on themselves to be prepared for any and all weather (let alone earthquakes), or be left begging for mercy when in the storm. Thinking that government entities, or the science they choose to support, will prevent injury from Mother Earth serves only to enlarge the death toll.
Cool, Pamela … or possibly hot,
with the prospect of late afternoon storm.
Pingback: Study: Record cold and snowy North American winters not due to climate change – within normal variation | Watts Up With That?
“human-caused warming in the west tropical Pacific ”
Care to show me proof of this?
It became embarrassing to say that climate change (warming) should give freezing winters. All are happy now, except that the winters are still freezing cold with lots of snow.
That is when the caramel rolls taste the best at the Four Corners Café.
Don’t let it get around…
Pingback: Weekly Climate and Energy News Roundup #231 | Watts Up With That?
Pingback: Is much of current climate research useless? | Climate Etc.
Reblogged this on Climate Collections.