by Judith Curry
Rather than reducing biases stemming from an inadequate representation of basic processes, additional complexity has multiplied the ways in which these biases introduce uncertainties in climate simulations. – Bjorn Stevens and Sandrine Bony
Science has published an interesting opinion piece by Bjorn Stevens and Sandrine Bony entitled What Are Climate Models Missing? [link] to abstract (paywalled). Here are some excerpts:
General Circulation Models (GCMs) have gradually morphed into Global Climate Models, and, with the more recent incorporation of models of the biosphere and the associated cycles of important chemical nutrients, Earth System Models.
The increase in complexity has greatly expanded the scope of questions to which General Circulation Models (GCMs) can be applied. Yet, it has had relatively little impact on key uncertainties that emerged in early studies with less comprehensive models. These uncertainties include the equilibrium climate sensitivity (that is, the global warming associated with a doubling of atmospheric carbon dioxide), arctic amplification of temperature changes, and regional precipitation responses. Rather than reducing biases stemming from an inadequate representation of basic processes, additional complexity has multiplied the ways in which these biases introduce uncertainties in climate simulations.
For instance, a poor understanding of what controls the distribution of tropical precipitation over land, and hence vegetation dynamics, limits attempts to understand the carbon cycle. Similarly, uncertainties in arctic amplification of warming hinder predictions of permafrost melting and resultant changes in soil biogeochemistry.
Although the drive to complexity has not reduced key uncertainties, it has addressed Smagorinsky’s question as to what level of process detail is necessary to understand the general circulation. There is now ample evidence that an inadequate representation of clouds and moist convection, or more generally the coupling between atmospheric water and circulation, is the main limitation in current representations of the climate system.
That this limitation constitutes a major roadblock to progress in climate science can be illustrated by simple numerical experiments.
In idealized simulations of a waterworld that neglect complex interactions among land surface, cryosphere, biosphere, and aerosol and chemical processes (see the figure), the key uncertainties associated with the response of clouds and precipitation to global warming are as large as they are in comprehensive Earth System Models.
Differences among the simulations in the figure are especially evident in the tropics, where the sign of cloud changes and the spatial structure of the precipitation response differ fundamentally between models. This diversity of responses arises because, at low latitudes, the coupling between water and circulation is disproportionately dependent on the representation of unresolved processes, such as moist convection and cloud formation. The mid-latitudes show more robust responses because much of the energy transport is carried by baroclinic eddies; these, too, are fundamentally coupled to water, but they are much better described and resolved by modern GCMs, as foreseen by Smagorinsky.
The uncertain interplay between water and circulation that underlies differences in the response of the climate system to warming can be expressed in terms of more specific questions. For instance, how do marine boundary-layer clouds depend on their environment? Or how do atmospheric circulations couple to moist convection through surface and radiative fluxes? The first question ends up being key to explaining the intermodel spread in climate sensitivity, the second to the pattern of the regional response to warming. Differences in regional responses also influence ocean circulations, and hence how oceans take up heat, as well as patterns of precipitation, and hence how the land biosphere takes up carbon.
A deeper understanding and better representation of the coupling between water and circulation, rather than a more expansive representation of the Earth System, is thus necessary to reduce the uncertainty in estimates of the climate sensitivity and to guide adaptation to climate change at the regional level.
JC comments: Stevens and Bony make the important point that adding complexity to Earth System Models (e.g. carbon cycle, atmospheric chemistry, more complex land surface processes) doesn’t remedy the fundamental deficiencies of climate models.
Stevens and Bony focus on clouds, which is their area of expertise. Clouds are arguably the greatest reason for disagreement among climate models. However, IMO the more fundamental problems with climate models lie in the coupling of two chaotic fluids – the ocean and the atmosphere. The inability of climate models to simulate the evolution of and connections among the teleconnections and interannual to multidecadal circulation regimes is the biggest source of problems for understanding regional climate variability.
Addressing the interplay between atmospheric water and the atmospheric circulation, and the complex couplings between the atmosphere and ocean, requires going back to basics and looking at a hierarchy of models and a range of model structural forms. Better understanding and simulation of the climate requires that we improve our understanding and treatment of these processes in climate models. It is pointless to worry about aerosols, the carbon cycle, etc. in the context of climate models until these more fundamental issues are addressed.
Judith writes: “However, IMO the more fundamental problems with climate models lie in the coupling of two chaotic fluids – the ocean and the atmosphere. The inability of climate models to simulate the evolution of and connections among the teleconnections and interannual to multidecadal circulation regimes is the biggest source of problems for understanding regional climate variability.”
I assume you’re referring to ENSO with your comment about interannual coupled ocean-atmosphere processes. Modelers have been trying to simulate ENSO for decades and they’re still off in a different world. Guilyardi et al. (2009) is a great overview of the models’ inability to simulate ENSO.
It’s tough for them to find anything the models do correctly. The most telling sentence in the paper is:
“Because ENSO is the dominant mode of climate variability at interannual time scales, the lack of consistency in the model predictions of the response of ENSO to global warming currently limits our confidence in using these predictions to address adaptive societal concerns, such as regional impacts or extremes (Joseph and Nigam 2006; Power et al. 2006).”
As I wrote as the closing to my post about Guilyardi et al:
Do you think that sentence will make it to the IPCC’s 5th Assessment Report? It should.
Gerald North (Texas A&M) saw some of this a decade and more ago. http://www.masterresource.org/2010/04/climate-model-magic-washington-post-today-gerald-north-yesterday/. But he and others did not want to speak up too much given personal relationships–a setback for science in retrospect.
If it was more than zero, I have a hunch Stevens and Bony may have just reduced the amount of money they need to spend on IPCC-related Christmas cards.
They might be forgiven if they are proved wrong, but not if they are proved right.
“…a setback for science in retrospect.”
No. Maybe for climate science, but not for other earth sciences. But CS egos are too fat and too bruised to look around. Incredible.
oops…comment in response to Rob Bradley
a bit of elaboration…
A lot of groundwater (GW) researchers and practitioners have looked at simplicity vs. complexity and model parsimony. These issues are also important in decision contexts and in setting research and engineering priorities: determining the value of information and the value of control.
Certainly GW and CS problems have their differences in both conception and implementation, but both can involve a high level of complexity and significant uncertainty. Both have potential difficulties with V&V. Both have time-scale issues, e.g., the impracticality of conducting model post-audits to gain confidence in a model’s ability in a public arena, and the inability to test the model(s) at the time-scale of practical interest. Estimating key properties can be a challenge, e.g., sample support may be an issue, as can determining ‘effective properties’ in scale-dependent systems.
Still, to me it appears that the GW community has been much more cohesive in its efforts to make progress than CS, although my perception of the latter is much more limited. A big difference to me (again, perception?) is that GW folks have had many more years to make progress and were fortunate not to become involved in a knot of internecine ideology-based warfare. Still, I hope that we can make the effort not to besmirch all of science, or even all of the modeling corners in science.
It almost seems that suffering is a prerequisite for anyone who wants to engage in climate science modeling. And that does not make for a productive atmosphere.
Leaving GW, it is interesting to point out that certainly one industry – the chemical process industry (CPI) – has been able to operate for the better part of a century with ‘simplified’ models, and their efforts have been quite profitable. It is a question of determining what is needed and to what level of detail. However, I would be remiss if I failed to point out that having the ‘luxury’ of constructing and operating pilot plants is no small advantage. [‘Earth’ pilot plants are a bit of a problem.] Still, one has to play the cards one is given, when the hand is dealt.
GCM’s claiming AGW fail to incorporate sound chemical engineering practice.
The chemical industry had the incentive of commercial drivers with large very real failures of hundreds of millions to billions of dollars of immediate losses or penalties if they got it wrong. They also understood the basic limitations of nonlinear chaotic systems.
e.g., see the Exchange of Letters between Dr Pierre Latour
Note Latour’s extensive list of where GCM’s fail, beginning with
“about measurable, observable, controllable, stable and robust characteristics of the dynamic, multivariable nonlinear atmospheric temperature control system ”
See further material at Roger Sowell’s blog
David L. Hagen said:
“The chemical industry had the incentive of commercial drivers with large very real failures of hundreds of millions to billions of dollars of immediate losses or penalties if they got it wrong.”
On target. They focused on what they needed to know, including constraints, and engineered that to their advantage.
“They also understood the basic limitations of nonlinear chaotic systems.”
Maybe it is an over-simplification, but ChE’s learned to constrain their systems to linear and occasionally other low order regions. That is, they designed in the regions in which they could predict with a high degree of confidence. Certainly this circumstance is not guaranteed with the earth’s climate system. ChE’s could test their modeling concepts and processes in the lab and pilot plants–again a big advantage compared to what climate scientists face at present.
That may lead one to conclude that for CS one needs to know more fundamentals, i.e., more research is needed, but that conclusion could be a bad conclusion. There is no guarantee that one will 1) correctly identify and 2) then quantify all of what needs to be known, and do that in a manner that fits the problem’s time constraints. The problem is not one of knowing as much as we can about the climate system (changes), but, recognizing several potentially disastrous outcomes resulting from the full range of actions available, to devise a timely, reasonably optimal path forward.
Put another way: “Something really bad may happen if we do A and something else really bad may happen if we do B, etc. The clock is ticking. What do we do?” may be much more important than “do we understand everything about climate change at a fundamental level? OK now let’s go.”
Judith, thanks for highlighting this paper. It reinforces points I have made at Annan’s, among other places. We actually have a paper on this issue in fluid dynamics accepted in the AIAA Journal that should appear before the end of the year. Basically, adding more “physics” doesn’t increase accuracy if in fact it becomes more accurate to constrain all the additional parameters with data. Well-constrained simple models can be dramatically superior.
Thanks David, pls let us know when your paper is available
Sorry for the typo. Should read: “if in fact it becomes more DIFFICULT to constrain all the additional parameters with data.”
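David Young’s point, that parameters the data cannot constrain degrade predictive skill rather than improve it, can be illustrated with a toy curve-fitting sketch (the linear ‘truth’, the noise level, and the polynomial degrees here are hypothetical illustrations, not taken from his paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "truth": a simple linear process observed with noise.
x_train = np.linspace(0.0, 1.0, 10)
y_train = 2.0 * x_train + 1.0 + rng.normal(0.0, 0.1, x_train.size)

# Held-out points from the same process, used to judge predictive skill.
x_test = np.linspace(0.0, 1.0, 50)
y_test = 2.0 * x_test + 1.0

def out_of_sample_rmse(degree):
    """Fit a polynomial of the given degree to the noisy training data
    and return its root-mean-square error on the held-out points."""
    coeffs = np.polyfit(x_train, y_train, degree)
    pred = np.polyval(coeffs, x_test)
    return float(np.sqrt(np.mean((pred - y_test) ** 2)))

simple_err = out_of_sample_rmse(1)   # 2 parameters, well constrained by 10 points
complex_err = out_of_sample_rmse(9)  # 10 parameters, barely constrained at all

print(f"simple model out-of-sample RMSE:  {simple_err:.3f}")
print(f"complex model out-of-sample RMSE: {complex_err:.3f}")
```

The ten-parameter fit passes through every training point, yet its out-of-sample error is typically far worse than the two-parameter model’s: the extra “physics” is unconstrained by the data.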
David Young | June 16, 2013 at 10:43 pm | Reply
“Basically, adding more “physics” doesn’t increase accuracy if in fact it becomes more accurate to constrain all the additional parameters with data.”
Constraining parameters with data. Gee, that almost sounds like something out of the scientific method. You know, like hypothesis (what you think the parameters should be), prediction (the result of the parameterization), and test (does the hypothesis explain the measured result of experiments). Lather, rinse, and repeat until the hypothesis predicts the experimental results. Then you have a candidate for a theory if the final step, replication by others, is successful.
Constraining the existing parameters with data should be the first step. And not just any data. The data itself is often insufficient in scope and quality, such that it’s pencil-whipped into “better” fitness for purpose. Better in this case too often means a better fit to desired outcomes (cough cough hockey stick cough cough).
Note the correction to the sentence you critique. Start over, rewrite.
Brian, be quiet. Springer is now a fluid dynamics expert. And just yesterday he invented the Li-ion battery for Dell.
Yeah and Mosh is an ” engineer ” with a major in English and Philosophy, eminently qualified and knowledgeable to comment about fluid dynamics.
What a jerk!
heh. 2 smart guys that are almost as smart as they think they are.
I don’t have a problem with the corrected sentence.
English and philosophy, huh. That explains why he can’t engineer his way out of a paper bag. But it doesn’t explain why he sucks at English and philosophy.
Yes, that’s Mosher’s qualifications and he once posted here that he enrolled in Math and Physics, did not find them challenging enough and so switched to English and Philosophy. Pretty challenging subjects!
What are climate models missing? A realistic model of Earth’s heat source – the Sun.
Why? Post-1945 policies were based on J. Robert Oppenheimer’s fears of “Death – The Destroyer of Worlds”
Solar accumulation & amplification
Models appear to be missing how solar variations drive climate and the natural (cloud?) amplification of solar variations. See
David Stockwell’s solar accumulation theory.
Stevens and Bony conclude:
So, if I understand this correctly, the key problem of the GCMs is that adding complexity has only increased uncertainty and they are still unable to simulate the behavior of our climate with relation to clouds, water (in whatever form) in the atmosphere and the oceans.
This seems so basic to me that it effectively means that the GCMs are unable to make any meaningful projections of our planet’s climate, because of the insurmountable uncertainties involved.
Steven Mosher once wrote that models may not have all the answers, but they are the best thing we have.
Stevens and Bony seem to believe that the models are unable to provide us any real information about the future because they do not really understand the past.
So they may be “the best thing we have”, but they aren’t good enough to base any policy decisions on.
The “best we have” is probably an honest recognition, noted in passing from the earliest AR reports and then ignored (for financial reasons!?), that non-linear, chaotic, processes are inherently unpredictable on relevant time scales. Not just in detail, but in overall sign and extent of change.
The policy question then becomes, “How badly can we screw up if we pretend we have a reliable guide to action in the GCMs?” The answer is, “Very badly indeed.” Megadeaths already attest to that.
Personally, I think the predictive value of a complex but wrong model is no better than a straight line regression through all the climate data we have over geological timescales.
When anyone can wiki that there are about 167,300,000,000,000,000,000,000 atoms in a teaspoonful of water — the scale of which I am sure even Darwin would concede is beyond any real appreciation we can expect from mutated monkey brains — wouldn’t ‘ya think academics could at least get the statistics down? And yet, Wegman gets up there before Congress and lets everyone know the kind of idiocy Mann and his gaggle of sycophants were up to, and no one cares.
There was only one figure in this short 2-page article and it is astounding. It is shown in a blog at WUWT on this same article a few days ago.
The figure compares four highly simplified GCMs for how they treat clouds and precipitation. Absolute opposite predictions are achieved for certain regions, particularly the tropics. This suggests to the authors that the fundamental understanding of clouds and precipitation is lacking in the models–the authors are suggesting stepping back from further elaboration of the models until a better fundamental understanding is achieved. This cuts at the heart of the grant-swinging mantra that newer and faster supercomputers are all that is needed to make progress.
A question is whether the simplified process that they follow might also be used for certain other poorly understood processes. For example, aerosols have been identified by Hansen as the greatest unknown. Would it be possible to create additional simplified models that would isolate the effects of aerosols for comparison across existing GCMs?
“What are the climate models missing?”. What is the phlogiston theory missing? The sooner climate modelling is seen for the hokum that it is the better. That is the first step in establishing a real science of climate, one which uses the scientific method and rigorously tests its hypotheses against observations without having to rely on arbitrary fudge factors such as flux corrections and positive feedback.
Insufficient information without seeing the paper. It is specifically looking at the equilibrium cloud feedback assuming a water world with a specified increase of 4 C in sea-surface temperature. This is as though the primary effects of radiation and water-vapor feedback have already happened to give the 4 C; the remaining question is how the clouds modify this in different models. Apparently this varies, but we don’t know by how much or in which direction. The surface temperature can’t change in this model because it is given. Is it equivalent to a 1 degree modification of the 4 C? We don’t know. So they are not questioning the primary effects, just the secondary cloud effect in the tropics. I’d still like to know what the result was, however academic it may be not to consider land and ice effects too.
Finally, Lorenz’s theory of the atmosphere (and ocean) as a chaotic system raises fundamental, but unanswered questions about how much the uncertainties in climate-change projections can be reduced. In 1969, Lorenz  wrote: ‘Perhaps we can visualize the day when all of the relevant physical principles will be perfectly known. It may then still not be possible to express these principles as mathematical equations which can be solved by digital computers. We may believe, for example, that the motion of the unsaturated portion of the atmosphere is governed by the Navier–Stokes equations, but to use these equations properly we should have to describe each turbulent eddy—a task far beyond the capacity of the largest computer. We must therefore express the pertinent statistical properties of turbulent eddies as functions of the larger-scale motions. We do not yet know how to do this, nor have we proven that the desired functions exist’. Thirty years later, this problem remains unsolved, and may possibly be unsolvable. http://rsta.royalsocietypublishing.org/content/369/1956/4751.full
A full representation for all dynamical degrees of freedom in different quantities and scales is uncomputable even with optimistically foreseeable computer technology. No fundamentally reliable reduction of the size of the AOS dynamical system (i.e., a statistical mechanics analogous to the transition between molecular kinetics and fluid dynamics) is yet envisioned. http://www.pnas.org/content/104/21/8709.long
Lacking the math and the computer power.
The concept is to use a single-model framework to systematically perturb poorly constrained model parameters, related to key physical and biogeochemical (carbon cycle) processes, within expert-specified ranges. As in the multi-model approach, there is still the need to test each version of the model against the current climate before allowing it to enter the perturbed parameter ensemble. An obvious disadvantage of this approach is that it does not sample the structural uncertainty in models, such as resolution, grid structures and numerical methods because it relies on using a single-model framework.
As the ensemble sizes in the perturbed ensemble approach run to hundreds or even many thousands of members, the outcome is a probability distribution of climate change rather than an uncertainty range from a limited set of equally possible outcomes, as shown in figure 9. This means that decision-making on adaptation, for example, can now use a risk-based approach based on the probability of a particular outcome. http://rsta.royalsocietypublishing.org/content/369/1956/4751.full
Lacking a suitable perturbed model framework.
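The perturbed-parameter approach quoted above can be caricatured in a few lines: perturb one poorly constrained parameter of a toy zero-dimensional energy balance within an expert-specified range, screen each member before it enters the ensemble, and report the survivors as a probability distribution rather than a bare range. All numbers below are illustrative stand-ins, not values from any real ensemble:

```python
import numpy as np

rng = np.random.default_rng(42)

F2X = 3.7  # nominal forcing for a CO2 doubling, W/m^2 (illustrative)

def equilibrium_warming(lam):
    """Toy zero-dimensional energy balance: dT = F2x / lambda,
    where lambda (W/m^2/K) is the net feedback parameter."""
    return F2X / lam

# Perturb the poorly constrained parameter within an expert-specified range.
lam_samples = rng.uniform(0.8, 2.5, 10_000)

# Screen each member against a crude "current climate" plausibility test
# before admitting it to the ensemble (a stand-in for a real hindcast check).
accepted = lam_samples[(lam_samples > 0.9) & (lam_samples < 2.2)]

dT = equilibrium_warming(accepted)

# The outcome is a probability distribution of warming, not a bare range.
print(f"ensemble members: {accepted.size}")
print(f"median warming:   {np.median(dT):.2f} C")
print(f"5-95% range:      {np.percentile(dT, 5):.2f} to {np.percentile(dT, 95):.2f} C")
```

As the excerpt notes, the price of this single-model framework is that structural uncertainty (resolution, grids, numerics) is never sampled at all.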
OK, I’ll be the one to ask it.
Isn’t the better question:
What aren’t climate models missing?
Lacking data – http://climate.nasa.gov/news/908
In a scientific problem as potentially complicated as climate, there is another modeling practice that is increasingly important: AOS models are open-ended in their scope for including and dynamically coupling different physical, chemical, biological, and even societal processes.
The rationales for coupling are to investigate potentially significant feedbacks (e.g., radiative properties for different airborne crystalline ice structures, changes in air and water inertia due to suspended dust and sediments, and water and other material exchanges with plants and biome evolution) and to achieve ever fuller depictions of Earth’s fluid envelope. Besides adding to the overall complexity of AOS models, coupling increases the number of processes with a nonfundamental representation (i.e., similar to a parameterization), because, for the most part, the governing equations are not well determined for the model components other than fluid dynamics. When adding a new coupling link, there is no a priori guarantee of seeing only modest consequences in the AOS solution behavior. http://www.pnas.org/content/104/21/8709.long
Increased coupling increases the potential for structural instability in the non-linear models.
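That structural-instability point, that a modest change to a nonlinear model can change the whole character of its solutions, is easy to demonstrate on the simplest nonlinear system available. The sketch below is a toy analogy, not a climate model: nudging one parameter of the logistic map turns a regular four-state cycle into an orbit that never repeats.

```python
def logistic_orbit(r, x0=0.5, burn=500, keep=64):
    """Iterate x -> r*x*(1-x), discard a transient of `burn` steps,
    and return `keep` samples from the attractor (rounded for comparison)."""
    x = x0
    for _ in range(burn):
        x = r * x * (1.0 - x)
    samples = []
    for _ in range(keep):
        x = r * x * (1.0 - x)
        samples.append(round(x, 6))
    return samples

# r = 3.5: the orbit settles onto a stable period-4 cycle.
cycle = logistic_orbit(3.5)
print("distinct states at r = 3.5:", len(set(cycle)))  # 4

# r = 3.9: a modest parameter change and the orbit never repeats.
chaos = logistic_orbit(3.9)
print("distinct states at r = 3.9:", len(set(chaos)))
```

The qualitative behavior, a fixed cycle versus chaos, flips with the parameter; nothing in the r = 3.5 run hints at what r = 3.9 will do.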
CH, I think there are the cycles where the surface of the ocean is sometimes warm and sometimes cool. Now for the surface to go from warm to cool, and back to warm, there has to be a shift in the relative transfer of heat from the sea surface upward to the atmosphere and downward to the depths.
When the surface is cool, the heat must either be becoming latent heat or warming the depths.
When the surface is warm, the heat must either be becoming sensible heat, or the transfer to the depths is slowed.
There are any number of processes – but a dominant effect is upwelling.
Which comes first? The Chicken or Egg?
Does CO2 drive clouds or clouds drive CO2?
David Stockwell summarizes where models fail:
Similarly see: STILL Epic Fail: 73 Climate Models vs. Measurements, Running 5-Year Means
These discrepancies suggest a massive systemic warm bias with major errors in physics, e.g. not just in the magnitude but in the very sign of the feedbacks from clouds.
Roy Spencer challenges which comes first the chicken or the egg?
Which is the cause and which the effect?
Does increasing CO2 cause warming which reduces clouds?
Or do reduction in clouds cause warming which increases CO2?
Or do we have both? And to what degree of each?
How can we test each?
Climate science and the GHG models have yet to quantify and distinguish this fundamental cause-and-effect issue. Until those foundational issues are resolved, quantified, verified, and validated, GCMs will likely not get any closer than the impasse seen over the last two decades.
The global land temperatures have been on track for a 3C increase with doubling of CO2. Why haven’t any of the deniers plotted this trend?
That’s a rhetorical question, because if they did, they couldn’t deny it.
This is what a first-order fit to an ECS of 3C looks like laid on top of the BEST land data.
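For reference, the ‘first-order fit’ being described is just the standard logarithmic relation dT = ECS * log2(C/C0). A minimal sketch of how such a curve is generated (the CO2 values and the 280 ppm baseline are illustrative assumptions, not the BEST analysis itself):

```python
import math

def delta_T(co2_ppm, ecs=3.0, co2_baseline=280.0):
    """Temperature anomaly (C) implied by a pure log2 fit:
    warming scales with the logarithm of the CO2 ratio."""
    return ecs * math.log2(co2_ppm / co2_baseline)

# Illustrative CO2 levels (ppm): roughly preindustrial, mid-century, recent.
for ppm in (280.0, 315.0, 395.0):
    print(f"{ppm:6.1f} ppm -> {delta_T(ppm):+.2f} C")
```

By construction the curve gives exactly ECS degrees of warming at a doubling (560 ppm against a 280 ppm baseline).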
And what’s the deal with the focus on GCMs?
Andrew Lacis said here:
Webbie, we’ve been over and over this, including on the thread where Lacis was commenting. It’s all about the dynamics and the feedbacks, and those depend on a lot more than radiative physics. Wake up: the nonlinear world is governed by very complex laws that go way beyond simple conservation of energy, which tells us essentially nothing. There are those nasty laws that say mass and momentum are conserved.
Webbie, Thomas Hobbs summarized your problem:
“It is the scope of the writer and not the bare words by which any text is to be interpreted, and those who insist on single texts, casting atoms of scripture as dust before men’s eyes, can derive nothing from them clearly; an ordinary artifice of those who seek not the truth but their own advantage.”
Davy, Was that Tommy Hobbes or Calvin & Hobbes you were trying to quote?
Deniers like to play CalvinBall and make up their own rules concerning conservation of energy and other laws of nature.
There is little to suggest that any of the recent warming was due to greenhouse gases. The data show cloud cover decreasing between the ’80s and ’90s – a net forcing of 1.8 W/m2 in ISCCP-FD, including 2.4 W/m2 warming in SW and 0.5 W/m2 cooling in IR. Cloud cover then increased in the 1998–2001 climate shift, seen also in Project Earthshine and in ICOADS cloud observations in the Pacific.
Focusing in on the MODIS era – the change in SW reflection is sufficient to account for any warming in ARGO with no trend in IR.
So how potent is CO2?
Needless to say I am totally unimpressed by webby’s simple-minded gross oversimplifications. He is a little like Laplace’s demon.
‘We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.’
—Pierre Simon Laplace, A Philosophical Essay on Probabilities
Perhaps a little more like Laplace’s long-haired leaping gnome.
Your comment at 1.09.
You need to take a couple of steps further back, Webby. 1810 was the coldest decade since 1690. BEST shows this at the start of their record, but only gives a hint of the sharply rising temperature prior to 1800, which led to a number of warm periods between 1690 and 1800. This can be seen in the reasonable fit between BEST Global (essentially Europe with a bit of America) and CET.
Here is the same temperature set with CO2 overlaid on it (of course sceptics do that), which also better illustrates the earlier recovery period mentioned, which BEST does not cover due to their timescales.
The period from 1700 onwards for several decades is well known as an especially warm period, as was noted by Hubert Lamb on pages 12 and 13 of this study:
“The remarkable turn of the climate of Europe towards greater warmth from soon after the beginning of the eighteenth century and affecting all seasons of the year in the 1730′s seems to have produced little comment at the time, though by then the temperatures were being observed with thermometers and entered into regularly maintained observation books in a number of places.”
In the compilation book ‘Climate since AD 1500’, edited by Phil Jones and incorporating work by a number of scientists, the authors also note the widespread warm periods around 1630 and 1550 and the cold interval that separated them (as shown in my reconstruction):
Stepping back enables you to see the natural variability much better and illustrates the problems in the extrapolation of material shown in your graphic without taking this large factor into account.
May I recommend the following books to help you with your history education?
‘The Little Ice Age’ by Professor Brian Fagan
‘History and Climate’ edited by Professor P. D. Jones
‘Little Ice Ages Ancient and Modern’, Vols. 1 and 2, by Jean Grove
‘Climate History Past and Present’ by Hubert Lamb
‘Times of Feast, Times of Famine – A History of Climate since the Year 1000’ by E. Le Roy Ladurie
The global land temperatures have been on track for a 3C increase with doubling of CO2.
It is 1.4 deg C.
Sorry, the large negative excursions in the 1800s have the signature of short term cooling due to volcanic eruptions, Tambora in particular.
This is described by a BEST reconstruction; if you want to see the details of that, go to the BEST home page and see the Summary of Results.
That is remarkable: a severe cooling that started five years before the volcano even erupted. The hardest frost in centuries, in February 1814, 15 months before the eruption, makes it even more remarkable.
Fortunately I wrote of the period, as Dickens was born in 1812, and I followed his life through the medium of CET.
Since this is about what the climate models are missing, again I did not include the negative forcings from volcanic eruptions. That info is given in the BEST evaluation:
The estimates of sulfate production were taken from:
C. Gao, A. Robock, and C. Ammann, “Volcanic forcing of climate over the past 1500 years: An improved ice core‐based index for climate models,” Journal of Geophysical Research: Atmospheres (1984–2012), vol. 113, no. D23, 2008.
Gao did the research as part of his doctoral thesis and it is quite impressive in its comprehensiveness. Check out this graph that Gao constructed to portray the high volcanic activity before 1900.
Gao did not do the graph you reference; it is Briffa’s, and it is incorrect.
Impressive work on reading Dickens. This backs up the fact that ice core records show enormous dumps of sulfates into the atmosphere, estimated at 53.7 TG in 1809 and 109.7 TG in 1815. Then there were dumps of 17 TG in 1831 and 40 TG in 1835, and a small one of 4 TG in 1861.
Compare that against the isolated BEST record in the same time interval
Very noisy temperature records but that’s the problem with quality control back then. It’s not like the Victorian era was yet completely on board with science.
The BEST team did their own reconstruction with a simple scaling against the sulfates.
The BEST team used an ECS of 3.1C per doubling of CO2, which compares against the 3C value that I used. As you can tell, I do this stuff partly to fact-check and audit the work of others.
Tamino did similar work, and Hansen as well, starting in the 1900s, where they concentrated on the volcanic activity in the 1960s.
I can add this volcanic forcing information to my semantic server, so thanks for being so insistent on pointing this out. That’s why this site is useful — the deniers keep poking at the stuff that they want to hide.
Tonyb hit the nail on the head. A recovery from the Little Ice Age that has nothing to do with anthropogenic CO2 was already underway. There have been similar episodes in the historic past i.e. the Roman Warm Period, the Medieval Warm Period, and now the Modern Warm Period. There’s nothing but handwaving that makes the Modern Warm Period different in scope or cause from the others.
Given the disastrous effects of global cooling and its undeniable occurrence as recently as a few hundred years ago, we should hope that CO2 has a lasting warming effect. Whether it does or not remains to be seen.
You remain incorrigible. A cooling that began 5 years before Tambora erupted is swept aside as you transport yourself into further imagined dire effects that volcanoes may have on the climate.
It’s obvious that some of them have SOME effect, but it’s often grossly overstated, as Michael Mann did with his piece ‘Underestimation of volcanic cooling in tree-ring-based reconstructions of hemispheric temperatures’ concerning the apparent 1258 eruption. He couldn’t find the expected 2 degree C cooling in his tree ring reconstruction, but that was not surprising, as the eruptions began years before the ‘small’ dip around 1258.
Have they found the 1809 volcano yet, or is it another Briffa, Jones and D’Arrigo interpretation?
This from Yalcin;
“Together with the similar magnitude and timing of the 1809 volcanic signal in the Arctic and Antarctic, this could suggest a large tropical eruption produced the sulfate and Antarctic tephra and a minor Northern Hemisphere eruption produced the Eclipse tephra. Nonetheless, the possibility that there were coincidental eruptions of similar magnitude in both hemispheres, rather than a single tropical eruption, should not be discounted. Correctly attributing the source of the 1809 volcanic signal has important implications for modeling the magnitude and latitudinal distribution of volcanic radiative forcing.”
I agree with that insofar as we need to differentiate between those volcanoes that do have an effect (however short-lived) and those that have gone missing or have no noticeable effect. Dr Mann, Briffa etc. need to demonstrate their evidence better than they do, and you need to be rather more sceptical of everything you read.
You might be interested in this presentation from Cambridge University
presented by Dr Clive Oppenheimer
As a bonus, and perhaps illustrating that the historic record does have a place in chronicling our climate, here is a short extract from some notes I found in the archives of Exeter Cathedral:
1783 ‘Extra poor relief in extreme cold’
Now, could that be a reference to the Laki volcano?
Tell Gao that he should retract his doctoral thesis. He did place a significant sulfate event at 1809, half the size of Tambora 5 years later.
It is convincing to be able to go from the data for carbon emissions to a warming signal with perhaps three parameters: an adjustment time for CO2 sequestration, an ECS value for CO2 doubling, and a baseline CO2 value.
The second-order effects of CO2 outgassing and volcanic activity color in the details. As a student of scientific reasoning, the summary provided by the BEST team looks solid and I really can’t find much wrong with it.
Where is the competing model?
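For the curious, that three-parameter chain can be sketched in a few lines. Everything here is illustrative: the 2.13 GtC-per-ppm airborne conversion, the 50-year adjustment time, and the constant emissions series are my assumptions, not BEST’s actual inputs.

```python
import math

def warming(emissions_gtc, tau=50.0, ecs=3.0, c0=280.0):
    """Toy emissions-to-warming model with three parameters:
    tau - CO2 adjustment (sequestration) time in years,
    ecs - equilibrium sensitivity per CO2 doubling (deg C),
    c0  - pre-industrial baseline CO2 (ppm).
    emissions_gtc: yearly carbon emissions in GtC (illustrative)."""
    ppm_per_gtc = 1.0 / 2.13          # ~2.13 GtC of airborne carbon per ppm CO2
    excess = 0.0                      # airborne CO2 above baseline, in ppm
    series = []
    for e in emissions_gtc:
        excess += e * ppm_per_gtc     # add this year's emissions
        excess *= math.exp(-1.0 / tau)  # first-order sequestration decay
        series.append(ecs * math.log2((c0 + excess) / c0))
    return series

# A century of constant 5 GtC/yr emissions as a stand-in forcing history
print(warming([5.0] * 100)[-1])
```

With these toy numbers the century-end warming comes out a bit over 1C; the point is only that three parameters already give a smooth, testable emissions-to-temperature mapping.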
“where is the competing model?”
I will say that the argument went on for a while about simple models versus complex models. Folks like me argued that the model was too simple and missed a bunch of other stuff we knew, and the folks arguing for simplicity made the point that our added complexity really didn’t change the answer. So, when Muller asked the question “where’s the competing model?” I was kinda dumbstruck. Ya. At that point one could make a skeptical argument that the existence of a simple working model didn’t preclude the possibility of unicorns causing the effect. But this move, which works in the game of philosophy or blog wars, wouldn’t work with working scientists. The simple reason being that one cannot test a theory that depends on “something else that may be causing this”. Strange: in that instant I realized that science isn’t skepticism. It uses skepticism as long as skepticism is useful. Questioning a working theory when you have no alternative is a different game than science. It’s a game played in frat houses at 4:20.
Mosh, Frat-level filosofy, apropos for a blog borne from an engineering school environment. What was the biggie, kappa tau?
WEbHubTelescope: Where is the competing model?
Stevens and Bony showed (it is well displayed in the figure in their perspective) that the models do not agree on the sign of the cloud cover change. Hence their conclusion: There is now ample evidence that an inadequate representation of clouds and moist convection, or more generally the coupling between atmospheric water and circulation, is the main limitation in current representations of the climate system.
They are not the first to publish that in Science, and I daresay they won’t be the last.
The effects of cloud cover changes depend on whether they happen in the tropics or at the poles or mid-latitudes, whether they happen in summer or winter, and whether they happen at night or in daylight. Very slight differences among these possibilities produce differences in whether the models forecast warming or cooling.
Wind that cloud hypothesis backwards. Slowly start removing the CO2 from the atmosphere. The temperature starts dropping and then the CO2 starts to decrease even more due to less thermal and biotic activation. Eventually the earth averages less than 273K, as the water vapor activation follows suit. The clouds are a shadow of their former selves.
Again, where is the counter theory?
Even at a frat-level, I would like to review it.
Clouds are not theory – they are a fact of life, and it seems that cloud cover varies over time. This should not be surprising in my view.
There is little to suggest that any of the recent warming was due to greenhouse gases. The data shows cloud cover decreased between the 80s and 90s – a net forcing of 1.8W/m2 in ISCCP-FD, including 2.4W/m2 warming in SW and 0.5W/m2 cooling in IR. Cloud cover then increased in the 1998–2001 climate shift, seen also in Project Earthshine and ICOADS cloud observations in the Pacific.
Focusing in on the MODIS era – the change in SW reflection is sufficient to account for any warming in ARGO with no trend in IR.
“Questioning a working theory when you have no alternative is a different game than science.”
That’s why alchemy, astrology and phrenology were popular for so long.
WebHubTelescope: Wind that cloud hypothesis backwards. Slowly start removing the CO2 from the atmosphere.
Why? The point of policy is to change the future. Stevens and Bony make the point that the models can’t reliably predict the future cloud response to future CO2 accumulation.
Surely you are not trying to convince us readers that the Earth system is as symmetric about the present as your models are?
WebHubTelescope: Again, where is the counter theory?
Four were displayed in the article. All the theories are too inaccurate for public use. Among the Cargo Cultists there was no alternative theory, and in general the lack of an alternative theory is not evidence that any current theory is accurate.
We are hard up against the strong Planck response negative feedback. Given that, the first derivative is not zero and any perturbations are linear to first order. Water vapor is definitely a positive feedback and all clouds will do is shift height since the lapse rate is constant to first order.
Ok, I am waiting for your second-order corrections.
Derive what happens when clouds are slightly higher in altitude.
And what happens to the high altitude cirrus clouds?
Cloud height dropped 30 to 40m over the last decade. They will drop further as La Nina intensifies in the next decades.
Dumbass simplifications notwithstanding – cloud responds to secular changes in ocean and atmospheric circulation.
The “competing model” is the real climate. When the toy models grow up and can simulate the aspects of the real climate that can be expected given a system that exhibits spatio-temporal chaos, then we will know the models are good for something other than creating hundreds of thousands of tons of CO2.
Chief, “Cloud height dropped 30 to 40m over the last decade. They will drop further as La Nina intensifies in the next decades.”
There is about a degree C drop per degree of latitude as you move towards the poles from the boundary of the tropics. There is about 50m drop in the tropopause altitude per degree latitude as you move towards the poles from the boundary of the tropics. There is just something about spherical shapes that eludes the Webster. Where the energy is and where the clouds are matters. He really should read that paper on the relative importance of meridional and zonal heat flux and brush up on the meaning of asymmetry.
WebHubTelescope: Water vapor is definitely a positive feedback and all clouds will do is shift height since the lapse rate is constant to first order.
If they shift in height, do they also spread out and occlude more of the sunlight? Is the change greater in daytime or nighttime; in summer or in winter? In temperate zones or the tropics?
What is the effect on the transport of warm vapor to the upper troposphere by thermals with increased absolute humidity?
Surely you do not think that the (equilibrium?) calculated lapse rate can answer those questions.
What part of “Nobody has a demonstrably accurate model” can you not understand?
Re: “GHG forcing is globally uniform”
For evidence to the contrary, see Fred Haynie The Future of Global Climate Change
CO2 variations increase with latitude from south to north pole, and change phase – lagging temperature.
You should go see what StevieMac’s demonstrated about the reliability of much recent tree ring analysis. Briffa took long careful aim at the rat of MWP and blew out a whole side of the hull.
i think the typo obscures your point.
nevermind i guess it’s not a typo.
I’m making great progress toward the goal of making certain that every typo adds layers of meaning.
Presentation of Prof. Murray Salby in Hamburg, Germany, on 18 April 2013 with the title: Relationship between Greenhouse Gases and Global Temperature
Temperature drives CO2, with some help from soil moisture, on all time scales.
That is a piece of red meat served up by Salby. His entire thesis is so hairy that we ought to call him Furry Murry.
He had an approach where he integrated the temperature anomaly. He doesn’t realize that this strengthens the hypothesis that CO2 is the forcing function. Check it: http://imageshack.us/a/img441/104/integralco2snipimage.jpg
Temperature of a carbonated drink, earth’s oceans, drives CO2
That is basic physics.
They have the right sensitivity but they have cause and effect totally backwards.
HAP pulls out the old rubber glove gag. What’s next, a rubber chicken?
A 40% change in the partial pressure of CO2 would have to reflect a very obvious change in ocean temperature. And in that case, the partial pressure of argon gas would also have to increase.
Where is the evidence for that? Are you afraid of science?
Salby was caught fudging and adjusting the chart data last year:
I guess some myths are just immune to correction
In the consideration of the water–moist atmosphere problem, please don’t forget the contribution of cosmic rays in creating water droplets, and their varying inputs as a function of the number of sunspots. I know people would like to ignore their contribution because most climatologists don’t know how to put them in, but you will not solve the problem by ignoring them.

I refer you to an old article by a good physicist, Nir Shaviv, who in 2006 calculated climate sensitivity to CO2 (www.sciencebits.com/OnClimateSensibility) to be 1.3 degrees per doubling, a value which I felt to be conservative at the time (he assumed all warming besides that due to cosmic rays to be due to CO2) and which many recent calculations seem to be approaching. The number is most likely lower than that. As a high energy physicist myself I can appreciate his calculations, but find the global models ill defined and unclear about what is in and what is out. Your comment about not adding any more physics to the model until you solve the water–moist atmosphere problem is disingenuous, because you can’t solve it without better physics.
What are GCMs missing?
Sufficient data to validate against.
Which could be obtained, were efforts made to collect data better.
Also, it wouldn’t hurt if someone ran Holocene-spanning GCMs dithered to the level of resolution of the Marcott infographic, as that would be a pretty impressive validation test.
Oh, and the undithered results of validated Holocene-spanning GCMs? Those would have as much resolution as one could wish, and likely produce reliable statistics about so much more than we know now, that we can’t even speculate what we’d see with ten millennia of simulated weather data that so closely matches the real thing we can’t tell it apart on the paleo record.
‘Figure 12 shows 2000 years of El Nino behaviour simulated by a state-of-the-art climate model forced with present day solar irradiance and greenhouse gas concentrations. The richness of the El Nino behaviour, decade by decade and century by century, testifies to the fundamentally chaotic nature of the system that we are attempting to predict. It challenges the way in which we evaluate models and emphasizes the importance of continuing to focus on observing and understanding processes and phenomena in the climate system. It is also a classic demonstration of the need for ensemble prediction systems on all time scales in order to sample the range of possible outcomes that even the real world could produce. Nothing is certain.’ http://rsta.royalsocietypublishing.org/content/369/1956/4751.full
Hmm, there was a bunch of work done in this vein; I saw some preliminary work presented at AGU (hmm, 2 years back?) on precipitation in Iberia and GCM results, also some North American tree ring work.
As I recall they were using the GCM as a prior of sorts. There was also some work presented on using the GCM to figure out which areas were the most fruitful to look at (from a signal-to-noise perspective). One interesting point was raised by Eugene Wahl, as I recall. The presenter showed one result from a paleo recon and another from a GCM. They differed. Wahl asked “how do we tell which is correct?” Smart question.
There was also some cool stuff done with forward growth models.
For my money, since paleo constraints on ECS are more important than GCM constraints, I’d love to see more GCM/proxy comparison work. AR5 (I recall; I didn’t read much outside my review area) did have a bunch of stuff on these recons.
BartR, “Also, wouldn’t hurt if someone ran Holocene-spanning GCMs dithered to the level of resolution of the Marcott infographic, as that would be a pretty impressive validation test.”
Marcott had nice resolution after sacrificing precision. The problem with paleo is the amplitude uncertainty, frequency uncertainty, and the biasing of different proxies toward optimum ranges. To make sense out of the ocean paleo you have to have a kick-butt ocean model, which is the problem to begin with.
There are three different pictures of the paleo temperature for the same area. Different binning and smoothing i.e. sample thickness. A small change in the local currents can make a huge difference in the event timing. If you want the past flat so you can push your carbon scheme, just average away. If you want the truth, you are going to have to dig a little harder.
Climate models don’t take into account the multidecadal oscillation.
Knight et al.:
“The quasi-periodic nature of the model’s AMO suggests that in the absence of external forcings at least, there is some predictability of the THC, AMO and global and Northern Hemisphere mean temperatures for several decades into the future. We utilise this to forecast decreasing THC strength in the next few decades. This natural reduction would accelerate anticipated anthropogenic THC weakening, and the associated AMO change would partially offset expected Northern Hemisphere warming. This effect needs to be taken into account in producing more realistic predictions of future climate change.”
To help those who find the concept of chaos difficult to visualise, here is a very, very, very, very simple model of the 3-body problem where 2 of the gravitating masses have been fixed and the 3rd mass is constrained to move according to Newton’s law of motion.
Let it run for a couple of minutes in its basic state and note the pattern of the particle’s track. Then make a slight change in the start position of the particle, from 0.4 to 0.39, and again let the model run for a couple of minutes.
The pattern of the track is totally different. This demonstrates how chaotic systems are essentially unpredictable, both in track and in the distribution of the track, given a small perturbation in start position. Imagine the path represented the wiggles of the jetstream: one would stand no chance of predicting what would happen given a further small perturbation to the system.
Also remember that even the computer’s arithmetic is limited by numeric rounding, so each calculation essentially perturbs the “real track” of the particle.
What can be predicted from the system is the furthest the particle can move from the centre of gravity of the system because of the law of conservation of energy.
In the case of the earth’s climate the starting point in modelling needs to be to identify what are the predictable factors equivalent to the maximum distance from the centre of gravity in the system.
Imagine now that the suns were not fixed in the model; all would be moving, and the only certainty is the maximum distance that the particle would move from the centre of gravity.
In a nutshell, climate modelling has to be equivalent to predicting the patterns in the 3-body animation, but much, much, much more complex.
Best of luck and remember the science is settled.
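For anyone who wants to try the experiment without the animation, here is a minimal sketch. The start position, velocity, step size, and the small gravitational softening are all my illustrative choices, not the values in the linked model.

```python
import math

def simulate(x0, steps=40000, dt=5e-4, eps=0.05):
    """Test particle moving under the gravity of two unit masses fixed
    at (-1, 0) and (1, 0). eps is a small gravitational softening, a
    standard trick to keep the numerics from blowing up during close
    approaches. Start values are illustrative."""
    centres = [(-1.0, 0.0), (1.0, 0.0)]
    x, y = x0, 0.3          # start position
    vx, vy = 0.0, 0.8       # start velocity
    for _ in range(steps):
        ax = ay = 0.0
        for cx, cy in centres:
            dx, dy = cx - x, cy - y
            r3 = (dx * dx + dy * dy + eps * eps) ** 1.5
            ax += dx / r3
            ay += dy / r3
        vx += ax * dt; vy += ay * dt   # semi-implicit (symplectic) Euler
        x += vx * dt;  y += vy * dt
    return x, y

# Perturb the start position by 0.01, as in the animation
xa, ya = simulate(0.40)
xb, yb = simulate(0.39)
print(math.hypot(xa - xb, ya - yb))  # distance between the two end points
```

The total energy is negative for these start values, so both orbits stay bounded, yet the end points wander apart: prediction of the track fails long before prediction of its envelope does, which is exactly the point about the maximum distance from the centre of gravity.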
The models are definitely missing cloud.
These are intimately associated with changes in ocean and atmosphere circulation.
ENSO – http://s1114.photobucket.com/user/Chief_Hydrologist/media/Loeb2011-Fig1.png.html?sort=3&o=43
Arctic cloud – http://www.arctic.noaa.gov/detect/detection-images/climate-cloud-mamseries.jpg
North Pacific cloud – http://s1114.photobucket.com/user/Chief_Hydrologist/media/Clementetal2009.png.html?sort=3&o=50
A cloud haiku.
More clock than cloud is
the gaseous mist-eerie-
ous uni-verse… ‘om…’
Wot happened when I typed?
It’s supposed ter read,
More cloud than clock is
the gaseous mist-eerie-
ous uni-verse …’om …’
Mouse ran up the cloud.
Lightning struck and thunder boomed;
Mouse ran out the clock.
clouds reflect more heat
lead to cool earth trend – how long
even kim don’t know
I think I’ve never heard so loud
The quiet message in a cloud.
What are climate models missing?
b. Track Record
d. Predictions agreeing with observation
A: All of the above.
They are missing the boat. It is foundering in high seas of skepticism.
“Taking climate modeling back to basics to address the interplay between atmospheric water and the atmospheric circulation, and the complex couplings between the atmosphere and ocean, require going back to basics and looking at a hierarchy of models and a range of model structural forms”
I’ve been doing just that for over 5 years now and it resulted in my attempt at a ‘New Climate Model’ which integrated observations with basic physics to provide a coherent overview.
The essence is as follows:
i) The global air circulation is set by top down solar and bottom up oceanic effects.
ii) The pattern of the circulation at any given moment represents the current balance between those two forcing elements.
iii) Changes in the pattern of circulation represent changes in the rate of energy flow through the system.
iv) Climate changes are a result of circulation changes.
v) Circulation changes are always a negative system response to any forcing element other than changes in mass, gravity or ToA insolation.
Build new models on the basis of that analysis and they will become far more realistic.
Does your model accurately reproduce the climate changes over the past 800,000 years?
I’m sure it would if we had the relevant data about jet stream and climate zone positioning for that length of time.
What we do have evidence of is equatorward zones and jets in ice ages and poleward zones and jets in interglacials, and lesser similar shifts during the current interglacial whenever the globe cooled or warmed.
Tonyb referred to:
“the sharply rising temperature prior to 1800 which led to a number of warm periods between 1690 and 1800.”
Which I have previously pointed out to Leif Svalgaard as a counter to his contention that the historical record does not match solar activity well enough to draw conclusions.
Leif pointed to high solar activity in the 1700s in order to try and show that there was little or no connection between climate and solar activity but was apparently unaware that the period was much warmer than the LIA or the subsequent period.
He made the common error of assuming that the 1700s were a cool period being closer to the LIA than we are today.
In the 1700s there was both higher solar activity than during the Maunder Minimum and warmer temperatures.
There is a popular misconception – including amongst sceptics – that the period from around 1350 to 1850 was a 500-year-long deep freeze. The term LIA has been hijacked from its proper meaning, which covers the last 3000 years or so of fluctuating glacier movements. The LIA as we think of it was much more variable, with some periods of very cold winters AND blazing summers. The temperature went up and down considerably, with the 17th and 19th centuries being generally cooler than the 18th.
I am sure you know all this, but the popular misconception of a monolithic period of cold is hard to shift, and it damages our ability to see the huge amount of variability that models tend to miss – because they do not have the fine focus needed to show annual variability – not helped by their often vague dating and limited spatial relevance.
I should also mention that my proposal incorporates the effects of cloudiness variations because zonal poleward jets produce shorter lines of air mass mixing and thus less clouds than periods of equatorward meridional jets.
I prefer that explanation to the Svensmark hypothesis.
One thing that the models seem to be missing is CO2. I understand that in its place a change in ‘external forcing’ is presumed to act at the top of the model atmosphere.
One thing that the modeling culture seem to be missing is the encouragement of owners and operators to see the models as something other than a rather complicated means of displaying what they think should happen if CO2 levels continue to rise.
There are about 20 (different?) models sponsored by the IPCC. The models are never published, so we know nothing about them. We don’t know which important climate processes are modelled, or the extent to which the sub-models have been validated separately. I don’t even know whether academia have access to such information, or what issues are being debated. Have they got reliable figures for ocean transport delay?
From what I have seen, no one has satisfactorily modelled the on/off nature of climate change, yet we have been in an off period for 15 years. Why? My own belief is that this is a consequence of the ‘steps and stairs’ of quantum theory. Many thousands of different states of the CO2 molecule are possible. Normally we see temperature change as a continuous process; it will appear discontinuous if a large enough proportion of CO2 molecules arrive at the same step at the same time. Who can say that did not happen in 1940 and 1998? I am not suggesting that all these different states occur in sequence, but like most dynamic systems some will dominate, ‘pulling’ others nearby in the spectra. This dynamic behaviour is extremely difficult to simulate. Because of the isotopic nature of the elements in CO2, we need to know which vibrations dominate, particularly if they absorb or release large photons of energy.
I haven’t noticed a link to the full paper yet so here you go:
Where have all the models gone, long time passing?
Where have all the models gone, long time ago?
Where have all the models gone?
Climate “scientists” misused and abused them everyone.
Oh, when will they ever learn?
Oh, when will they ever learn?
With sincere apologies to Peter, Paul and Mary.
Years ago I said that the modelers were trying to keep their toys on circular tracks on the ceiling. It was raining trains then and it is raining trains now. They’ve got MAGICKal trains at the UK Met Office, courtesy of creative writing at the University of East Aggleyness.
“There is now ample evidence that an inadequate representation of clouds and moist convection, or more generally the coupling between atmospheric water and circulation, is the main limitation in current representations of the climate system.”
They’re finally starting to listen.
“Apart from all other reasons, the parameters of the geoid depend on the distribution of water over the planetary surface.” — Nikolay Sidorenkov
“[…] the more fundamental problems with climate models lie in the coupling of two chaotic fluids – the ocean and the atmosphere.”
The coupling is NOT chaotic in tuned aggregate.
When will climate scientists STOP ignoring the climate information conveyed by earth orientation parameters??
Mainstream progress is unacceptably & intolerably slow. Suggestion: Tactically-targeted funding cuts are immediately due.
IMHO the main thing the climate models are missing is an accurate computation of convection. To calculate the rate of change of the surface temperature or lower atmosphere you need to know the flux of heat: that is, the total rate at which heat crosses a horizontal surface at some height z (units W/m^2). You need to know this heat flux very accurately, because to work out the rate of temperature change you need to differentiate the heat flux with respect to z.

The heat flux is made up of at least two components: (a) heat flux due to radiation, and (b) heat flux due to convection (i.e. hot air rising). Climate debate focusses almost exclusively on (a), with very little consideration of (b). It’s sometimes claimed that (b) is handled by the lapse rate, but that’s wrong; the lapse rate tells you nothing about the convective heat flux. To compute this accurately you need to solve the coupled equations for fluid flow and heat transfer. A state-of-the-art computation would have hundreds of grid points in the vertical direction. Yet GCMs typically only have about 20 vertical layers.
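The resolution point is easy to demonstrate numerically. The flux profile below is entirely made up (it is not real atmospheric data); the exercise only shows how the error of a finite-difference dF/dz shrinks as vertical levels are added.

```python
import math

def flux(z):
    """Made-up heat-flux profile in W/m^2 (illustrative only)."""
    return 100.0 * math.exp(-z / 2000.0) + 40.0 * math.sin(z / 500.0)

def d_flux_exact(z):
    """Analytic derivative of the made-up profile, for comparison."""
    return -0.05 * math.exp(-z / 2000.0) + 0.08 * math.cos(z / 500.0)

def grid(n, top=10000.0):
    """n evenly spaced levels from the surface to `top` metres."""
    return [top * i / (n - 1) for i in range(n)]

def max_err(n):
    """Worst error of a centred finite-difference dF/dz on n levels."""
    zs = grid(n)
    errs = []
    for i in range(1, n - 1):
        dz = zs[i + 1] - zs[i - 1]
        est = (flux(zs[i + 1]) - flux(zs[i - 1])) / dz
        errs.append(abs(est - d_flux_exact(zs[i])))
    return max(errs)

print(max_err(20))    # ~20 levels, typical GCM vertical resolution
print(max_err(200))   # ten times the levels: error drops roughly 100-fold
```

Centred differences have error proportional to the square of the spacing, so on this smooth toy profile tenfold finer levels cut the derivative error by about a hundred; with a 20-layer grid any flux feature narrower than a few hundred metres is simply invisible.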
Paul, “hot air rising” is thermal convection and it isn’t very important.
See the opening illustration here:
Thermals account for just 24W/m2 of surface cooling. The Big Kahuna is labeled “latent”. That’s moist convection. It accounts for 78W/m2 of surface cooling. Radiation plays in the middle accounting for some 40W/m2 of surface cooling. At the surface moist convection is by far the most important mode of cooling and, remarkably, the poorest modeled mode of surface cooling. It’s a travesty.
Heh, the unremarkable ‘Missing Cooling’. Whatya bet it’s not been radiated out into the Galaxy?
Paul, Isaac Held has an old post where he does some detailed modeling of convection in a small region with lots of grid points. He observed that the solution was very sensitive to the size of the physical domain of computation. The problem I think is that this convection is chaotic so it is a difficult computational problem.
What are GCM’s missing? Reliable accuracy.
It is inappropriate to generalize why GCMs are not now reliable with so little information available to reach an informed conclusion. There are many such models and unless a person breaks down the code of each model and compares them, there is no valid reason to believe that they are unreliable for the same reason(s).
Of course, that will not stop some here from rambling on about how they believe they know exactly what needs to be done to improve model reliability.
Reliable model development is a frequently long and difficult process. Deciding what model(s) to use is usually much simpler. You use models that have consistently matched observed conditions within a predictable margin of error.
Can anyone cite another field where people advocate implementing major government policy decisions based upon the outputs of models that are known to produce unreliable results? In this respect, climate science and the implementation of climate policies seem unique. Unfortunate, but true, imo.
What’s missing is modeling the earth as a water world instead of a dry rock. If the atmosphere was 99.97% nitrogen and 0.03% CO2 and the surface was dry rock like the moon, then doubling CO2 would cause a mean temperature rise across the entire sphere of 1.1C. Water changes everything. Modelers invented, out of whole cloth it appears, so-called water vapor amplification, which transforms 1.1C of well modeled sensitivity into 3C of poorly modeled sensitivity.

It appears now that is quite wrong and water vapor amplification is non-existent. My take on what happens is that the cloud deck rises by about 100 meters for every CO2 doubling, but cloud temperature remains unchanged and instead the lapse rate between cloud and ground changes. This is called lapse-rate feedback. It’s a negative feedback and its magnitude is not well known. If the same temperature cloud is at a higher level in the atmosphere then by definition there is less greenhouse gas between the cloud top and space and more greenhouse gas between the cloud bottom and the ground. This gives the heat in the cloud a less restrictive radiative path to space and a more restrictive path back to the ground. A mere 100 meter change in cloud height equates to a 1.0C lapse-rate feedback and thus nullifies the effect of doubling CO2.

Critically, where there is little or no water on the surface to evaporate and form clouds there is nothing to nullify the warming effect. So over land, especially where the land is frozen, we should see a larger warming effect. In fact this is what we do observe.
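The arithmetic behind the 100 m figure is just lapse rate times height change. A quick check (the lapse-rate values here are standard textbook numbers, not from the comment): the quoted 1.0C per 100 m implies a lapse rate near the dry-adiabatic 9.8 K/km, whereas the often-used moist-average value of about 6.5 K/km gives a smaller feedback.

```python
def surface_delta_t(dh_m, lapse_rate_k_per_km):
    """Surface warming if cloud-top temperature is fixed and the cloud
    rises by dh_m metres: roughly lapse rate times height change."""
    return lapse_rate_k_per_km * dh_m / 1000.0

print(surface_delta_t(100, 9.8))  # dry-adiabatic lapse rate: ~0.98 C
print(surface_delta_t(100, 6.5))  # moist-average lapse rate: ~0.65 C
```

So whether a 100 m cloud rise fully nullifies a 1.1C doubling depends entirely on which lapse rate applies between cloud and ground, which is part of why the magnitude is "not well known".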
Your theory may be correct, but I think you’d have to admit that there is a large leap from a general theory to developing a model that reliably performs. Many of us have theories, but they are worth little. Mine involves the interaction with the deep oceans, but until someone can demonstrate a GCM that reliably performs… well, it is all just meaningless talk.
Rob the ocean is over 70% of the surface and over 90% of the heat capacity. It isn’t subject to variation from urban heat islands, land use change, albedo change (except a small % due to seasonal sea ice extent change). The entire surface is even all at almost exactly the same elevation.
Ya figure out what the ocean does in response to CO2 change and everything else is details.
As of now the best empirical data (which isn’t very accurate at such tiny wattages) says the ocean basin is accumulating heat at the rate of 0.5W/m2. This is enough to raise the basin temperature 0.2C per century. Per CENTURY. That isn’t cause for alarm.
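A rough check of the 0.5 W/m2 to ~0.2C per century conversion, using round textbook numbers (ocean mass, seawater specific heat, and the choice of spreading the flux over the whole Earth surface are my assumptions):

```python
SECONDS_PER_CENTURY = 100 * 365.25 * 24 * 3600
OCEAN_MASS_KG = 1.4e21        # total ocean mass, round number
CP_SEAWATER = 4000.0          # J/(kg K), approximate
EARTH_AREA_M2 = 5.1e14        # whole-Earth surface area

def warming_per_century(flux_w_m2):
    """Bulk ocean temperature rise from a sustained heat flux."""
    joules = flux_w_m2 * EARTH_AREA_M2 * SECONDS_PER_CENTURY
    return joules / (OCEAN_MASS_KG * CP_SEAWATER)

print(warming_per_century(0.5))  # about 0.14 C per century with these numbers
```

These round numbers give roughly 0.14C per century, the same order as the 0.2C quoted; the exact figure depends on whether the flux is taken over the whole Earth or the ocean area only, and on how much of the ocean volume is assumed to take up the heat.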
Climate models do not appear to adequately include persistence. See Hurst–Kolmogorov dynamics:
Markonis, Y., and D. Koutsoyiannis, Climatic variability over time scales spanning nine orders of magnitude: Connecting Milankovitch cycles with Hurst–Kolmogorov dynamics, Surveys in Geophysics, 34 (2), 181–207, 2013.
Koutsoyiannis has shown that HK deviations are about 2x the standard deviations of conventional statistics. The late 20th century “rapid warming” attributed to anthropogenic contributions appears to be well within natural variations once these are recognized, etc.
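The HK point reduces to a scaling law: for a Hurst–Kolmogorov process the standard error of an n-year mean scales as n**(H-1) instead of the classical n**(-0.5), so classical statistics understate uncertainty by a factor of n**(H-0.5). The Hurst exponents and the 30-year window below are illustrative choices, not values from the papers cited above.

```python
def hk_inflation(n_years, hurst):
    """How much classical standard errors are understated for an
    HK process with the given Hurst exponent, over n_years."""
    return n_years ** (hurst - 0.5)

# Over a 30-year averaging window:
print(hk_inflation(30, 0.7))   # moderate persistence: roughly the 2x quoted
print(hk_inflation(30, 0.89))  # strong persistence: a larger factor still
```

So the "about 2x" figure corresponds to moderate persistence over a multi-decade window; stronger persistence widens natural-variability bounds even further.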
Climate models (about which I know nothing from direct exposure to them, but a good deal from how they cause believers in them to act in the “debate”) lack a “global temperature” record that is what climate scientists say it is (i.e., global mean surface temperature), and they lack handlers eager and willing to question the “settled science”, based upon the models’ poor performance. There are no “best” models, either.
What are climate models missing?
That is really easy.
It snows more when earth is warm and it snows less when earth is cold.
This is likely the most important thing the theory and models are missing.
This not missing in the actual ice core data. It is plain as day and night.
The result of this is that the Climate Models show no skill.
The climate models say that as earth warms ice disappears.
October 2012 to May 2013 says that as earth warms more ice appears.
Ice retreated and allowed earth to warm since the little ice age.
We are warm now and the snows have started that will take us to cool again.
LOOK AT THE ACTUAL DATA.
When models are fixed, they should do what the data does.
I recommend everyone watch this presentation.
The introduction is not in English, but the Presentation is in English.
He talks about differences in Models and Real Life. Excellent!
Two different competing premises here:
The first is that of trying to predict [model] a complex non-linear chaotic climate system.
The second is that the climate system is in a box that just doesn’t change very much and reproduces itself fairly reliably on yearly, decadal and even millennial time scales.
The first gives rise to weather, which is constantly changing and becomes unpredictable [“Australia, a land of droughts and flooding rains”] for different reasons at daily, weekly, monthly and yearly levels.
The second gives rise to climate change over years, centuries and millennia [ice ages and hot spells] at a very slow rate.
Most people try to confuse climate change and weather. Models are predicting short-term changes [weather] and calling it climate change.
Consequently, as someone put it eloquently earlier, all models are biased to warming, i.e. self-fulfilling models.
The chance of the earth warming up from year to year is 50.05 percent. [When on a roll, “the trend is your friend”.] Why? Because the current long-range view shows that we are still very slowly warming over the last 20,000 years. Climate models should basically be random walk generators reverting to the mean of the last few centuries.
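The "50.05 percent" figure can be reproduced with elementary statistics: for a slow trend buried in independent yearly noise, the chance that one year beats the last is Phi(trend / (sqrt(2)·sigma)). A sketch, where the ~4C-over-20,000-years trend and the 0.1C yearly noise are illustrative guesses, not measurements:

```python
import math

# Probability that next year is warmer than this year, for a linear trend
# plus independent Gaussian yearly noise. The difference of two consecutive
# years has mean = trend and standard deviation sqrt(2)*sigma.
def p_warmer(trend_per_year, yearly_sigma):
    z = trend_per_year / (math.sqrt(2) * yearly_sigma)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))  # normal CDF at z

# ~4 C of warming spread over ~20,000 years against ~0.1 C yearly noise:
print(round(p_warmer(4.0 / 20000.0, 0.1), 4))  # ~0.5006
```

With those guesses the answer lands almost exactly on the comment's 50.05 percent.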
They should have the capacity to put in AOs, ENSOs, Bob Tisdale and Tamino, Judy and WUWT plus the IPCC, and make a guesstimate model yearly to 5-yearly, with the ability to change the inputs yearly to reflect the unknown unknowns that occurred each year. No model should be able to claim to tell the future more than a year out, because the chaotic nature of weather on short time frames excludes it.
Any 5-year model should have a straight line saying this is the average, and we expect it might deviate [up or down slightly] because of XYZ.
The fact that all climate models are absolutely programmed to give positive warming, and thus cannot enter negative territory [which is just under 50% on long-term averages], means that all climate models are not only wrong but mendaciously wrong.
Mendacious means lying for all the global warmists who produce the current model ensembles as gospel.
The last ten thousand years has had periods of a few hundred years of warming and then a few hundred years of cooling, alternating inside the same bounds. The next ten thousand years will most likely do the same.
Judging by the rubbish that comes out of the mouths of Schmidt, Trenberth etc., it isn’t the models that need a shake-up so much as those programming them. In engineering we never place any faith in such grossly unvalidated models. Hindcasting is no real test; as Lindzen put it, hindcasting is like taking a test with the answer sheet in front of you. It is excessively easy with so many parameters to hindcast without ever knowing which were the more important. Prediction is the only metric to judge by, and on that score the models patently do not work, either spatially or temporally. Not really surprising when you consider the enormity of the task!
If modelers want now to claim that the pause is due to natural variability well welcome to the skeptic camp; we’ve been saying since the start that natural variability was not correctly accounted for.
And if the models are all we have then it would just prove we have nothing. Happily we actually have enough data to say that the human fingerprint is just not there.
GCMs fail validation because they cannot even predict the past — that is what backcasting is all about. To demonstrate an ability to actually predict the future we’d have to wait 30, 40, or 50 years.
I have as much disdain as anyone for climate models as an excuse for decarbonizing the economy. But it would seem to me that hindcasting would be even more difficult than forecasting. If, of course, you care about accuracy.
There are two major areas of flaws in climate models in this layman’s eyes. Lack of knowledge of the climate processes: forcings, feedbacks, known unknowns, unknown unknowns, etc. And measurement of climate parameters: sea and land temperatures, rainfall, clouds, levels of GHGs, etc.
It seems to me that even if climate scientists ever got the first area sufficiently right to model global climate (which I doubt is even possible given the nature of the system), the data available in the present is always going to be better than the limited data from the past.
The only thing a GCM that accurately hindcast would tell me would be that the programmer was a really good tuner.
I suppose if a GCM could be shown to hindcast for multiple past historical periods, on a millennial scale, separated by hundreds of thousands of years…
Maybe this belongs on the tilting at windmills thread….
Climate science desperately needs the opposite of what evolutionary science requires: Intelligent Design.
I am a believer in the Big Thud theory of CAGW – that it was plopped into the world, full grown, in an overheated congressional hearing room in 1988, after a brief gestational period in the fevered mind of James Hansen. And it has been expanding at an accelerating rate ever since.
I believe in the Natural Selection theory of AGW. A scam that could survive it’s obvious absudity was selected, naturally. ;)
Sorry, “absurdity”. Damn monkey at the keyboard. ;)
Ape Bad Andrew, you are an Ape, not a monkey.
My 10,000 monkeys all object. They’ve never agreed before.
maybe’s he’s referring to one of those monkey’s that scientists taught how to type out shakespere
If you started a good model ten thousand years ago, it would stay in bounds for ten thousand years until now. It might not match the real cycles, but it would stay in the same bounds and have similar length warming and cooling periods. Put in the model that it snows more when oceans are warm and wet and that it snows less when oceans are cold and frozen the model would stay in bounds and bounce between hot and cold just like the actual data does.
I believe the role of freshwater systems is poorly accounted for by climate science.
Glad to see you back. Missed your trenchant comments…
JC says: “Stevens and Bony make the important point that adding complexity to Earth Systems Models (e.g. carbon cycle, atmospheric chemistry, more complex land surface processes) doesn’t help improve the fundamental deficiencies of climate models.”
I interpret what they say as not so much ‘deficiencies’ as ‘uncertainties’. Adding levels of complexity doesn’t alter the basic result much, nor does it narrow the uncertainty spread much, is what they are saying. This is fundamentally different from a suggestion that the models are deficient, which is a stronger statement than the paper made. Even simple GCMs give a climate sensitivity around the IPCC range, and adding bells and whistles like aerosol chemistry, an interactive ocean, etc., doesn’t alter this result much. If there is a ‘deficiency’ it is that the spread of the uncertainties persists in even the most sophisticated models, and they suggest this is at least partly in the tropical cloud processes, from their idealized study. So, even if the models aren’t doing this properly, do they at least bracket the correct behavior rather than all being biased in the same way? The former situation would gauge the size of the uncertainty better than the latter.
You advocate “bigger computers” and then ask: “what am I missing?”
It’s not the “size” of the computers, lolwot, it’s simply the “GIGO” effect.
The biggest computer imaginable won’t get the right output if you feed garbage in.
– Forget GCMs for predictions (or projections) of future climate; they are unable to do this today (not because of their size, but because of the limited data)
– Forget inputs based on theoretical deliberations or the so-called “principles of physics”
– Forget subjective interpretations (using false logic) of dicey paleo-climate proxy data taken from carefully cherry-picked periods of our planet’s geological past.
– Instead, get more real-time physical observations, especially on the effects and impacts of natural forcing factors and variability, an area where climate science is still in its infancy (and where IPCC has conceded that its “level of scientific understanding is low”).
– Use GCMs to get a better understanding of these natural factors before attempting to attribute past warming to anthropogenic factors.
The models do nothing at all correctly. Water vapour, the main greenhouse gas, does not cause the mean surface temperature to be warmer. At the base of its troposphere the atmosphere of Uranus is hotter than Earth’s surface, but it receives no radiation from the Sun down there, 350 km below the TOA. Nor is the temperature due to any internal energy source. The energy has come from the Sun over the life of the planet, but not by radiation. So the models would fail dismally there.
surely we just need some bigger computers to allow cloud to be simulated in GCMs at high resolution
what am I missing?
Yes, for this particular problem, I think that would be a factor of 1000, which is 15 years away according to Moore’s Law. Even then the clouds are not fully resolved but they would be ten times better resolved.
lolwot, “surely we just need some bigger computers to allow cloud to be simulated in GCMs at high resolution.” Naw, just a bigger HDTV monitor. The clouds are just painted on the little snow globe anyway.
Here’s what you “are missing”, lolwot:
Such studies have already been done, using superparameterization to better simulate the behavior of clouds (Wyant et al. 2006).
These studies have shown that the net overall feedback from clouds at all latitudes is strongly negative, rather than strongly positive, as previously assumed by all the models cited by IPCC.
The average estimate is around the same order of magnitude as the IPCC estimate, but with an opposite sign (-0.88 W m-2 K-1, as opposed to +0.69 W m-2 K-1).
IPCC models had predicted an increase in 2xCO2 ECS from cloud feedbacks of 1.3°C (from 1.9°C to 3.2°C).
So it is apparent that, if corrected for the newer estimates based on superparameterization, the 2xCO2 ECS would be reduced to 1.0°C to 1.5°C, instead of the 3.2°C previously predicted by the models cited by IPCC.
Interestingly, there have been several more recent, independent observation-based studies, also confirming a lower 2xCO2 ECS range (around half the previously estimated range).
And the physical observations by Spencer & Braswell 2007, based on CERES satellite measurements over the tropics, also confirm a net negative feedback from clouds, so it all seems to be falling into place.
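The ECS arithmetic implied by these numbers can be checked in the standard linear-feedback framework, ECS = F_2x / lambda. The 3.7 W/m2 doubling forcing is the usual approximate value (an assumption here, not quoted in the comment); everything else follows from the figures above:

```python
# Rough feedback arithmetic behind the comment's claim, assuming the
# standard linear framework ECS = F_2x / lambda (numbers illustrative).
F2X = 3.7  # W/m^2 forcing for a CO2 doubling (common approximate value)

def ecs(total_feedback_param):
    return F2X / total_feedback_param

lam_with_pos_cloud = F2X / 3.2            # implied by ECS = 3.2 C
lam_no_cloud = lam_with_pos_cloud + 0.69  # strip out the +0.69 cloud term
lam_neg_cloud = lam_no_cloud + 0.88       # substitute the -0.88 estimate

print(round(ecs(lam_no_cloud), 2))   # ~2.0 C, near the quoted 1.9 C
print(round(ecs(lam_neg_cloud), 2))  # ~1.36 C, inside the quoted 1.0-1.5 C
```

Stripping the +0.69 cloud term roughly recovers the quoted 1.9C no-cloud-feedback value, and substituting -0.88 lands inside the quoted 1.0-1.5C range, so the comment's numbers are at least internally consistent.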
Climate models are missing:
Heat from the Sun
A proper net heat flow – and this includes the water cycle. I do not have the impression climate models properly model the net heat flow – especially in the vertical column of air – but rely too much on assumptions.
They should better address the thermodynamics, and not add the complexity of the carbon cycle and the rest until the net heat flow is properly modelled.
Lars P – the AGW Greenhouse Effect doesn’t have the Water Cycle for a number of reasons, the chief of which is that this is the primary cooling cycle for the Earth, and the Greenhouse Effect required it be excised completely for its GHE illusion of the “33°C warming by greenhouse gases from the -18°C it would be without them”.
To this end, they have taken out all the properties and processes of the real gas atmosphere around our Earth and turned it into empty space (created out of the imaginary ideal gas) – so they could misappropriate the -18°C figures to mean their Greenhouse Effect when it properly means absence of all the atmosphere, of real gases which are predominantly nitrogen, oxygen and water.
In the real world, and not in the models’ alien world with a cold Sun and no atmosphere, nitrogen and oxygen as real gases under gravity are the thermal blanket around the Earth of the real greenhouse which both warms and cools, and water with its high heat capacity cools further.
The appropriate comparison is to the Moon without an atmosphere, which goes into extremes of cold and heat.
Earth with all atmosphere in place: 15°C
Earth without any atmosphere: -18°C
[Moon without any atmosphere: -23°C]
Earth with all atmosphere in place, but minus water (think deserts): 67°C
The real-world greenhouse gases nitrogen and oxygen are the thermal blanket around the Earth, not the trace gas carbon dioxide, which is practically 100% hole.
The AGW GHE “33°C warming by greenhouse gases” doesn’t exist, there is no mechanism for their version of greenhouse gases to achieve this.
Attributing the -18°C to absence only of their version of greenhouse gases is a science fraud.
A magician’s trick.
“What are climate models missing?”
Physics, basic comprehension of the problem to be solved, Ockham’s Razor & the possibility of experimental verification. Nothing else.
There is a general theory of non-equilibrium stationary states (ignored by computational climate models).
Roderick Dewar (2003), “Information theory explanation of the fluctuation theorem, maximum entropy production and self-organized criticality in non-equilibrium stationary states”, Journal of Physics A: Mathematical and General, 36(3), 631, doi:10.1088/0305-4470/36/3/303
According to it, in reproducible non-equilibrium stationary systems the Maximum Entropy Production (MEP) principle holds.
A system is reproducible if, for any pair of macrostates (A, B), A either always evolves to B or never does.
On the other hand, a system which is only radiatively coupled to its environment (as is the case with the terrestrial climate system) should be pitch black to produce entropy at the highest possible rate for a given incoming radiation flux of fixed color temperature.
Now, we do know that Earth is not black. Liquid oceans under a clear sky, as seen from above, come pretty close, but the sky is not always clear, the sun is at the zenith only above a small patch, and parts of the surface are solid. Its face is covered by an ever-changing white-on-blue fractal (and much else) with an overall albedo of ~0.3.
Therefore the climate system is not characterized by reproducible evolution.
Now, trying to reproduce the irreproducible, computationally or otherwise, is futile. That is, the perfect job for climate scientists, who have built an entire paradigm (& industry) around this nonsense.
Chaotic systems do not always have reproducible evolution, of course, so it is no wonder the climate system, which is chaotic, is not reproducible. But the thermodynamic argument above shows this chaos is deep in the sense it will not go away with averaging & the like; Earth is not black even as seen from far away with its light averaged over eons.
Still, some parts of this system may be reproducible, for some subsystems are described by MEP pretty well, as Paltridge has shown. Therefore a structural analysis, looking for reproducibility, is in order. What is more, even if most of the system’s behavior is not reproducible, it is full of recurrent phenomena. An even more general theory may capture this property and capitalize on it.
If not MEP, there may be some more general extremum principle at work here, of which MEP is only a special case under certain circumstances. Earth, even if it is not black, has a well regulated albedo, a wonder in itself, considering the highly dynamic processes giving rise to it.
Clouds are fractals, everyone can see that. But the visible surface of clouds is only the surface separating saturated (100% RH) atmospheric regions from unsaturated ones. All the other equal relative humidity surfaces are fractal-like, even if invisible to the naked eye. Water vapor is not a well mixed gas.
And, average optical depth of a fractal absorber has almost nothing to do with its average density (it only provides an upper bound). The actual relation, as a concise geometric description of it, is unknown.
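The upper-bound claim is just Jensen's inequality: transmission averages as the mean of exp(-tau), so the effective optical depth -ln<exp(-tau)> can only be less than or equal to the mean optical depth <tau>. A sketch using a lognormal stand-in for a patchy absorber field (the distribution is purely illustrative, not a model of real clouds):

```python
import math, random

# Why mean optical depth only upper-bounds the effective value: transmission
# through a patchy (e.g. fractal) absorber averages as <exp(-tau)>, and by
# Jensen's inequality -ln<exp(-tau)> <= <tau>.
rng = random.Random(0)
taus = [rng.lognormvariate(0.0, 1.0) for _ in range(100000)]  # patchy field

mean_tau = sum(taus) / len(taus)
effective_tau = -math.log(sum(math.exp(-t) for t in taus) / len(taus))

print(mean_tau > effective_tau)  # True: heterogeneity lowers effective depth
```

The gap between the two grows with the variance of the field, which is why the average density alone says so little.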
Those are the simple questions that beg simple answers, that is, ones with low Kolmogorov complexity (like basic equations in physics). Razor of that Medieval guy should be kept sharp & shiny.
Then comes experimental verification, an indispensable keystone in science. Earth itself, as it was correctly noticed by some climate scientists, would never fit into the lab and it is both time consuming & costly to build material replicas. However, some other members of the class made of irreproducible non-equilibrium stationary systems may fit happily, providing a perfect test bed for an as yet nonexistent theory.
To progress in the climate problem, 3 steps are necessary in my opinion.
Recognize the failure of approaches based on computable determinism.
Too many scientists still treat climate dynamics as a computable deterministic case. Basically they think that there is a unique solution (to some system of equations) and that what prevents us from computing it with epsilon accuracy is material constraints – poorly known subgrid processes and computing power. In other words, these material constraints generate uncertainty, and it will go away with increasing computing power and increasingly complex models.
Yet after 30 years and thousands of papers, the computed numbers are still all over the place. The spatial distribution is wildly different from model to model, and even the time variation of spatially averaged quantities (which are most probably irrelevant for the dynamics anyway) stubbornly refuses to converge.
For scientists trained in non-linear dynamics, these phenomena are, on the contrary, extremely familiar. They are a clear consequence of spatio-temporal chaos, where no computable deterministic solution exists.
The only computable property is the probability of future dynamical states. What we see is not uncertainty due to material constraints but the irreducible scattering of results according to some probability distribution that governs the system’s dynamics.
There is an analogy between observation of a chaotic system and a quantum mechanical system – in both, it is not surprising to get different results all the time, because only probabilities can be predicted. This is a fundamental property, not a contingent uncertainty that will go away with better instruments and/or more complex models.
I must salute here The Chief who stresses this important point over and over even if many posters are clearly unable to understand the argument.
Once done away with the faith in computable determinism, the search must focus on the question of the spatio-temporal probability distribution of the dynamical states. And here I applaud Judith writing : The inability of climate models to simulate the evolution of and connections among the teleconnections and interannual to multidecadal circulation regimes is the biggest source of problems for understanding regional climate variability.
Indeed, if we cannot find the right probability distribution for a “simple” system containing only oceans and atmosphere, then it is hopeless and actually counterproductive to add chemistry, biology and other even less understood processes.
However first things first. If we want to look for a probability distribution, we better make sure that it is unique and invariant by the dynamics.
Coming back to the analogy with QM. Here the problem is simple – the probability distribution is a (unique) solution of the Schrödinger equation. As for the observables it is even easier, they all have a probability distribution computable by using the wave function solving the Schrödinger equation.
We have nothing such in fluid dynamics. While a probability distribution might be hidden inside Navier Stokes for some cases, we have no equivalent of Schrödinger equation and more importantly we have no certainty that such a unique probability distribution exists for ALL cases.
The paramount question therefore is whether a fluid system (oceans & atmosphere only) admits a probability distribution independent of initial and boundary conditions. This question is extremely far from trivial, and there has been no progress in answering it so far.
For those who prefer physical examples to mathematical arguments, the question means:
“If I start with an Earth without polar caps and then with polar caps everything else being equal, will the future states be distributed identically for both cases after a certain time ?”
“If I start with an Earth without a Gulf stream everything else being equal, will I get after a certain time a distribution of states which will look like the real Earth and contain with a very high probability a Gulf Stream ?”
Clearly if the answer on these questions is no, then the problem will be even much harder than we previously thought.
If the answer to the previous question is yes – i.e., there exist invariant probability distributions Fi(x,t) for each climatic variable i, giving both the spatial and the temporal variability – then the work is to find them. This involves both theoretical considerations of Navier–Stokes and fluid systems, like Judith said, and experimental approaches with relatively simple models. This is the optimistic case, where the climate is as simple and well behaved as quantum mechanics.
If the answer to the previous question is no – and we must be aware that it is mostly no for spatio-temporal chaos – then we have what would probably be one of the most complex problems in physics.
Indeed, in this case the distribution of future states depends on the initial and boundary conditions, and the only hope would be to find a partitioning of initial conditions into classes where a “suitable” (define suitable) regularity exists.
As trustworthy theoretical tools don’t exist, one would probably have to rely on computer simulations to explore the space of initial conditions and try to semi-empirically guess the partitioning, without being sure that one didn’t miss some important subclass leading to very different dynamics.
This last case is the domain where one would speak of “Black Swans” and “Sea Dragons”, because a small change which makes the system switch from one subclass to another can manifest itself as a very different probability distribution of states and a very different behavior (e.g., shifts).
Tomas, As I understand what you have written, it seems to boil down to my own interpretation of the situation. At the moment, physics cannot tell us what happens when we add more CO2 to the atmosphere. Is this fair?
He’s babbling, Jim.
At the moment, physics cannot tell us what happens when we add more CO2 to the atmosphere. Is this fair?
Not quite. In a certain qualitative sense, there is a number of things physics, especially radiative and equilibrium physics, can say and some of them are very sure.
1) The mean free path of infrared photons will decrease
2) The energy density (J/m^3) will increase – consequence of 1
3) The purely radiative equilibrium local temperature (aka all other things being equal) will increase – consequence of 2
4) The altitude of the tropopause will probably increase. Etc.
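Item (1) is ordinary kinetic theory: the photon mean free path at an absorbed wavelength is l = 1/(n·sigma). A one-liner with made-up numbers for the density and cross-section (purely illustrative):

```python
# Item (1) above in one line: for an absorbing gas, the photon mean free
# path at an absorbed wavelength is l = 1 / (n * sigma), so doubling the
# absorber number density n halves l (sigma = absorption cross-section).
def mean_free_path(n_per_m3, sigma_m2):
    return 1.0 / (n_per_m3 * sigma_m2)

l1 = mean_free_path(1.0e22, 1.0e-22)  # illustrative numbers only
l2 = mean_free_path(2.0e22, 1.0e-22)
print(l2 / l1)  # 0.5
```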
More generally, as long as you don’t start moving energy and momentum around (that is what fluids do), even 19th century physics can say a lot of things.
Problems start – and this was the purpose of my post – when the system stops being static and in equilibrium and starts to be dynamic and out of equilibrium. It is like starting with a laminar regime of a fluid (easy computable determinism) and transitioning to turbulence (very hard, only statistically predictable in some cases, and chaotic).
The non-linear dynamics lead to probabilistic descriptions only and, of course, if you don’t know the probability distribution functions, then you can say nothing really quantitative and sure.
Tomas is confused by what is possible. This is the kind of full soup-to-nuts analysis one can apply.
A representative phenomena of what stochastic processes are all about is the carbon cycle and the sequestering of CO2. Much work has gone into developing box models with various pathways leading to sequestering of industrial CO2. The response curve according to the BERN model is series of damped exponentials showing the long term sequestering. Yet, by a more direct statistical interpretation of what is involved in the process of CO2 sequestration, we can model the adjustment time of CO2 decay by a dispersive diffusional model derived via maximum entropy principles.
A main tenet of the AGW theory presupposes the evolution of excess CO2. The essential carbon-cycle physics says that the changing CO2 is naturally governed by a base level which changes with ambient temperature, but with an additional impulse response governed by a carbon stimulus, either natural (volcano) or artificial (man-made carbon). The latter process is described by a convolution, one of the bread and butter techniques of climate scientists:
CO2(t,T)=CO2(0,T)+κ ∫ C(τ) I(t−τ) dτ
With the impulse response I(t) convolved against the historical carbon outputs as archived at the CO2 Information Analysis Center, it matches the measured CO2 from Mauna Loa with very good agreement:
The match to the CO2 measurements is excellent after the year 1900. No inexplicable loss of carbon to be seen. This is all explainable by diffusion kinetics into sequestration sites. Diffusional physics is so well understood that it is no longer arguable.
If, on the other hand, you do this analysis incorrectly and use a naive damped exponential response à la Segalstad and Salby [unpublished] and other contrarian climate scientists, you do end up apparently believing that half of the CO2 has gone missing. If that were indeed the case, the response would look completely different, given the skeptics’ view of a short 6.5-year CO2 residence time. This is clearly not observed in the data.
The obvious conclusion is that if you don’t do the statistical physics correctly, you end up with nonsense numbers à la Salby.
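The contrast between the two impulse responses can be sketched with a discrete convolution. Both response shapes below — a fat-tailed 1/sqrt(t) diffusive decay and a 6.5-year exponential — and the synthetic emissions series are illustrative stand-ins for the forms the comment describes, not fitted to any data:

```python
import math

# Atmospheric CO2 anomaly as emissions convolved with an impulse response,
# per the equation in the comment. Everything here is synthetic/illustrative.
years = 150
emissions = [math.exp(0.03 * t) for t in range(years)]           # toy growth
I_diffusive = [1.0 / math.sqrt(t + 1.0) for t in range(years)]   # fat tail
I_exponential = [math.exp(-t / 6.5) for t in range(years)]       # 6.5-yr decay

def convolve(src, resp):
    """Discrete convolution: reservoir response to the source history."""
    return [sum(src[k] * resp[t - k] for k in range(t + 1)) for t in range(len(src))]

fat = convolve(emissions, I_diffusive)
thin = convolve(emissions, I_exponential)
# The fat-tailed response retains far more of the cumulative input,
# which is the comment's point about the apparently "missing" CO2.
print(fat[-1] > thin[-1])
```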
Consider next the sensitivity of temperature to the log of atmospheric CO2. In 1863, Tyndall discovered the properties of CO2 and other gases via an experimental apparatus; he noticed that light-radiation absorption was linear up to a point, but beyond that the absorptive properties of the gas showed a diminishing effect:
The actual model used by Andrew Lacis and other climate scientists is to configure the cross-sections by compositing the atmosphere into layers or “slabs” and then propagating the interception of photons by CO2 by differential numerical calculations. In addition, there is a broadening of the spectral lines as concentration increases, contributing to the first order result of logarithmic sensitivity. In general the more that the infrared parts of the photonic spectrum get trapped by CO2 and H20, the higher the temperature has to be to make up for the missing propagated wavelengths while not violating the earth’s strict steady-state incoming/outgoing energy balance.
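The resulting logarithmic sensitivity is one line: delta_T = S · log2(C/C0). The 3C-per-doubling value is the one the comment maps against BEST; the 280 ppm preindustrial baseline is my assumption, both used purely for illustration:

```python
import math

# First-order logarithmic sensitivity described above.
# sensitivity (C per CO2 doubling) is the contested parameter.
def delta_T(c_ppm, c0_ppm=280.0, sensitivity=3.0):
    return sensitivity * math.log(c_ppm / c0_ppm, 2)

print(round(delta_T(560.0), 2))  # a full doubling gives S itself: 3.0
print(round(delta_T(400.0), 2))  # ~1.54 C at ~400 ppm, before any lags
```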
To get a sense of this logarithmic climate sensitivity, the temperature response is shown in the linked graph below, right side, for a 3°C sensitivity to CO2 doubling, mapped against the BEST land-based temperature record. Observe that trending temperature shifts are already observed in the early 20th century.
FIGURE 1: CO2 model applied to AGW via a log sensitivity and
compared against the fast response of BEST land-based records.
If we choose a 2.8°C sensitivity from Hansen’s 1981 paper and compare the above model to Hansen’s projection, one can see that we are on the right track in duplicating his analysis.
FIGURE 2: Hansen’s original model projection circa 1981  and the 2.8C model used here.
Perhaps the only real departure from the model is a warming around 1940 not accounted for by CO2. But even this is likely due to a stochastic characteristic in the form of noise riding along with the trend. This noise could be volcanic disruptions or ocean upwelling fluctuations of a random nature, which only requires the property that upward excursions match downward.
So if we consider the difference between the green line model and blue line data below:
FIGURE 3: Plot used for regression analysis.
Beyond 1900, the RMS error is less than 0.2 degrees
and take the model/data residuals over time, we get the following plot.
FIGURE 4: Regression difference between log sensitivity model and BEST data
Note that the yearly excursions never exceed about 0.2°C from the underlying trend. The dotted red curve above the solid blue residual curve is an Ornstein–Uhlenbeck model (red noise) of a yearly random walk with a reversion-to-the-mean drag that prevents fluctuations beyond approximately 0.2°C. At this scale, and considering that the Markov process allows short-term fluctuations in the tenths of a degree, the bump of increased temperature in the 1940s is neither a rare nor a completely unlikely occurrence.
A red noise model on this scale cannot accommodate both the short-term fluctuations and the large upward trend of the last 50 years. That is why the CO2-assisted warming is the true culprit, as evidenced by a completely stochastic analysis.
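The residual model described here — a yearly random walk with a reversion-to-the-mean drag — is a discrete Ornstein-Uhlenbeck process. A sketch, with drag and noise parameters chosen only so excursions stay near the ~0.2C scale mentioned, not fitted to BEST or anything else:

```python
import random

# Discrete Ornstein-Uhlenbeck ("red noise") sketch of the residual model
# described above. theta (drag) and sigma (yearly noise) are illustrative.
def ou_path(n, theta=0.3, sigma=0.08, seed=1):
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        x += -theta * x + rng.gauss(0.0, sigma)  # drag toward zero + noise
        out.append(x)
    return out

resid = ou_path(150)
print(round(max(abs(v) for v in resid), 2))  # excursions stay bounded
```

The stationary standard deviation here is sigma/sqrt(1-(1-theta)^2), about 0.11C, so tenths-of-a-degree bumps like the 1940s one are routine while half-degree excursions are not.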
No need to apply any of the chaos analysis. There are simply too many mutually consistent and self-consistent steps in the entire soup-to-nuts analysis to argue for something much more complicated.
I hope this helps your confusion and feeling of helplessness in being unable to solve a physics problem.
Tomas, you write “Not quite. In a certain qualitative sense, there is a number of things physics, especially radiative and equilibrium physics, can say and some of them are very sure.”
Fair enough. Let me put it another way. There is currently nothing QUANTITATIVE that physics can tell us as to what happens to the atmosphere when more CO2 is added.
I disagree with you David.
Tomas Milanovic | June 18, 2013 at 5:46 am
Weather predictions are all stated in probabilities. These people are familiar with the concept. Climate sensitivity is given as probability density functions. It almost seems like you don’t know the material you pretend to critique and just want a podium to expound on trivialities about non-linear dynamical systems to an audience who won’t mail you a rejection letter for having nothing new or interesting in what you wrote.
“For scientists trained in non linear dynamics, these phenomenons are on the contrary extremely familiar.”
So you’re saying you and people with training like yours are extremely familiar with being wrong. There’s a shocker.
Certainly, the mainstream climate science is very clear on this. The issue (if you want to call it that) is that science is about stretching the frontier and of course research scientists will go beyond the mere pedantic derivation. That’s actually my forte, as I like to try to simplify the arguments to find out if there are first-order principles that we can apply to obtain more canonical representations.
Recall what Andrew Lacis said on this site:
How about some references to some of your published work, Tomas. I’d like to see what your peers think of it. For some reason ‘t milanovic’ draws a blank on google scholar so before assuming the obvious I thought I might ask.
Draws a blank, indeed, unlike ‘d springer’.
Would you be so kind as to go away now and spend your time some place else where your wits are appreciated?
So what do you think of the work of Tsonis etc.? [Tsonis et al. (2011), Tsonis and Swanson (2012)] I’ve looked into applying network theory to the computations that go on in the cell, and there seems to be good work going on in neurology.
Personally, I’m a little skeptical regarding the “communities” per se, but identifying such phenomena might be the first step to finding some better way of modeling climate than the “brute force” approach which can’t really be expected to work. I notice, particularly, a unique situation WRT the southern Andean cordillera, which might represent a specific localized phenomenon independent of the more global structures.
Another unique situation, my favorite, is the Himalayan orogeny (including the Hindu Kush), Tibetan plateau, and consequent Tropical Easterly Jet (TEJ). I was confused at first that it didn’t show up in the above mentioned analyses, until I looked closer and discovered that they’re looking at NH winter months, and the TEJ is a NH summer phenomenon.
Interestingly, the TEJ appears to have been evolving back to its 1970’s condition, even while other climate “indicators” (e.g. global average temperature) seem to be plateauing [Ratnam et al. (2013)]. It might be interesting to see what a NH summer version of the Tsonis et al. (2011) analysis looks like. For that matter, I can see varying a whole bunch of their parameters and seeing what changes.
Ratnam et al. (2013): “Is the trend in TEJ reversing over the Indian subcontinent?” by M. Venkat Ratnam, B.V. Krishna Murthy, and A. Jayaraman, Geophysical Research Letters, accepted article, DOI: 10.1002/grl.50519
Tsonis et al. (2011): “Community structure and dynamics in climate networks” by Anastasios A. Tsonis, Geli Wang, Kyle L. Swanson, Francisco A. Rodrigues, and Luciano da Fontoura Costa, Climate Dynamics, September 2011, Volume 37, Issue 5-6, pp. 933-940, DOI: 10.1007/s00382-010-0874-3
Tsonis and Swanson (2012): “On the origins of decadal climate variability: a network perspective” by A. A. Tsonis and K. L. Swanson, Nonlin. Processes Geophys., 19, 559-568, 2012, DOI: 10.5194/npg-19-559-2012
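The climate-network idea discussed above can be sketched on toy data. Everything below is hypothetical: the real Tsonis analyses use correlations between gridded reanalysis fields and proper community-detection (modularity) algorithms, whereas this sketch just builds a correlation network from planted synthetic series and uses connected components as a crude stand-in for “communities”.

```python
import math
import random

random.seed(0)

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Two planted "regions": nodes 0-3 share one driving signal, 4-7 another.
T = 500
drivers = [[random.gauss(0, 1) for _ in range(T)] for _ in range(2)]
series = [[v + random.gauss(0, 0.5) for v in drivers[node // 4]]
          for node in range(8)]

# Link nodes whose series correlate above a threshold; the connected
# components then act as a crude stand-in for network "communities".
THRESH = 0.5
adj = {i: set() for i in range(8)}
for i in range(8):
    for j in range(i + 1, 8):
        if pearson(series[i], series[j]) > THRESH:
            adj[i].add(j)
            adj[j].add(i)

def components(graph):
    """Connected components of an adjacency-set graph, as sorted lists."""
    seen, comps = set(), []
    for start in graph:
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:
            v = stack.pop()
            if v not in comp:
                comp.add(v)
                stack.extend(graph[v] - comp)
        seen |= comp
        comps.append(sorted(comp))
    return comps

print(components(adj))  # the two planted groups should separate
```

This only illustrates the network-construction step; detecting real climate communities additionally requires significance testing of the correlations and a modularity-style partitioning of the graph.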
Thank you . Very interesting post. Which led me to your blog. Very interesting as well.
AK, ” I notice, particularly, a unique situation WRT the southern Andean cordillera, which might represent a specific localized phenomenon independent of the more global structures.”
That region is much more stable or rather consistent due to the Antarctic Circumpolar Current. It should make a good reference for longer term “shifts”.
Glad you liked them.
@captdallas 0.8 or less…
It’s also perfectly placed to transport large amounts of information between the East Pacific and/or the Amazon basin and the SH westerlies.
In determining how to answer your comment, I did come to the realization that in analyzing climate we should be looking at information flow, rather than (or perhaps along with) heat or other energy. As with cellular information processing systems (also neurological systems) the flow of energy is in some respects orthogonal to the flow of information, although essential to it. But it’s the information that matters. The processes that control the flow of energy are essentially informational.
“To progress in the climate problem, 3 steps are necessary in my opinion.”
Which climate problem?
Climate sensitivity is given as probability density functions.
Proof of existence (a link is enough)?
Proof of uniqueness (a link is enough)?
Proof of invariance (a link is enough)?
You have no idea what you are talking about, and besides pitiful attempts at vague ad hominems, your post has no added value. I will change my mind if you give the relevant links, but as I know that you can’t do that, you are just one of the few nuisances on this board that are best left ignored.
We need to thank God for the gravity of our current situation. There is no denying it, either. Weather; you like it or not.
“Tomas Milanovic | June 18, 2013 at 5:46 am | Reply
To progress in the climate problem, 3 steps are necessary in my opinion.”
I agree with what you are saying. Essentially you have defined the issues associated with predicting the behaviour of a chaotic system:
1) It represents non-computable determinism; hence the usual iterative computer tools are of no use, so you need to fall back on the evolution of probability distributions, if possible.
2) Beware: the evolution of probability distributions may well itself be chaotic, in which case no useful computations can be done on them.
3) That leaves an extremely difficult problem to solve using known mathematical and computing tools.
We are left with spatio-temporal chaos, which, as you may gather from other posts I have made, is what I believe we face in climate science. That is, beyond short-term weather forecasting and macro observations like “summer is warmer than winter”, little progress will be made by using more powerful computers.
The barrier is at the very heart of mathematical chaos theory for loosely constrained spatio-temporal systems. Consider the phrase “loosely constrained” meaning not bound like ice in the case of water.
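Points 1) and 2) above can be made concrete with the standard textbook example, the logistic map (a one-dimensional toy, not a climate model): pointwise iterative prediction fails after a few dozen steps, while a statistic of the long-run distribution remains stable and computable.

```python
# Toy illustration: the logistic map x -> 4x(1-x).
# Pointwise prediction fails (sensitive dependence on initial conditions),
# yet a long-run statistic of the trajectory is well behaved.

def logistic(x):
    return 4.0 * x * (1.0 - x)

# Two initial conditions differing by 1e-10 decorrelate completely.
a, b = 0.3, 0.3 + 1e-10
for _ in range(60):
    a, b = logistic(a), logistic(b)
print(abs(a - b))  # the 1e-10 gap has grown by many orders of magnitude

# Yet the fraction of iterates below 0.5 converges to 1/2, because the
# invariant density 1/(pi*sqrt(x(1-x))) is symmetric about 1/2.
x, hits, N = 0.3, 0, 100_000
for _ in range(N):
    # clamp guards against floating-point drift out of [0, 1]
    x = min(max(logistic(x), 0.0), 1.0)
    if x < 0.5:
        hits += 1
print(hits / N)  # close to 0.5
```

This is only an analogy for the weather/climate distinction, not evidence that climate statistics are in fact stable; point 2) is precisely the warning that for spatio-temporal chaos the distributions themselves may wander.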
Son of “We are left with spatio-temporal chaos which you may interpret from other post I have made is what i believe we face with climate science. ie beyond short term weather forecasting and macro observations like summer is warmer then winter, so little progress will be made by using more powerful computers. ”
That is not completely true. Since uncertainty seems to be increasing as we throw more money into computer power, that pretty much lets us know that there is spatio-temporal chaos in our methods. That is kind of the point of the simple models approach to humongous problems. You use them to determine an uncertainty range, that irreducible degree of imprecision.
We can put Girma in charge of that. Just double his confidence intervals and you have +/- about 0.8 C which is a normal range of fluctuation. If a model can consistently reduce that range of uncertainty, can it.
CaptDallas, simple models will help you to better understand chaos and the characteristics of chaotic behaviour, but will do little good in modelling real climate behaviour. Hence they will have no skill in reducing the range of uncertainty in predictions of real climate behaviour.
Does anybody know if the magma/core of the earth is closer to the surface of the crust or water at the North Pole than at the South Pole? I seem to remember that the crust is thicker down south – pear shaped. Newton spent a reasonable amount of time explaining that complication to the gravitational field in Principia Mathematica. It would make things colder down south and warmer north. The asymmetry would make a nice forcing function for currents and represents more energy than anything else in the puzzle.
Better to look at the greater advection of energy via ocean currents to the NH pole. Recent studies have shown the water flowing to the Arctic Ocean is the warmest in at least 2000 years:
Climate models are missing temperature data that hasn’t been tampered with …
Here’s an example from Arno Arrak http://rankexploits.com/musings/2013/hide-the-decline-trenberths-trick/#comment-116266
“… But that is not the only reason for declaring it invalid. There are also other observations and scientific analysis as well that tell us the same thing. When it comes to temperature standstill you probably did not know that in the eighties and nineties there was no warming from 1979 to early 1997, an 18 year stretch. I discovered that by comparing satellite temperature curves with ground-based temperature curves while researching for my book “What Warming?” Satellite data showed a period of ENSO oscillations whose mean temperature stayed constant in the eighties and nineties before 1998. Ground-based data showed a steady warming in this time slot that was called the “late twentieth century warming.” Nobody could find a natural cause for it and that was taken as proof that the warming was anthropogenic. I considered this wrong and said so in the book. But nothing happened when it came out and everybody ignored it. Until last fall that is, when GISTEMP, HadCRUT and NCDC in unison got rid of this fake warming and decided to use the satellite data for the eighties and nineties instead. Nothing was said about it. A nice cross-pond cooperation you might say. Or else somebody got cold feet and decided to cover themselves. Take your pick. I consider this coordinated action as proof that they all knew the warming was fake. But as a result now we have an 18 year uncontested no-warming period beginning in 1979. And we also have the entire twenty-first century as a no-warming period. Between them is a narrow window just wide enough to accommodate the super El Nino of 1998 and its associated step warming. This takes up the entire satellite era and leaves no time for any greenhouse warming at all. It means no greenhouse warming at all for the last 34 years. In view of this fact, what chance is there that any of the warming that preceded the satellite era can be greenhouse warming? Not much, in my opinion.”
34 years of no warming, which they knew very well, and all the while they got more and more shrill, telling us it was getting hotter and hotter and we were all going to fry, after we drowned.
The models are based on nonsense temperature data, besides all the other gigo.
They are missing credibility
What are climate models missing is, HONESTY, INTEGRITY AND RESPECT FOR SCIENCE
Climate models “miss” this …
Radiation does not “carry” any “thermal energy” because it transmits electro-magnetic energy. When will people come to understand the difference?
The Sun’s radiated electromagnetic energy is converted to thermal energy in the far cooler surface. Back radiation’s electromagnetic energy is carried by waves that have far too low a range of frequencies and intensities for their EM energy to be converted to thermal energy in the warmer target.
Microwaves in your MW oven are also at far too low a frequency to warm anything by atomic absorption, so they are pseudo scattered and follow a random path through opaque plastic bowls without warming those bowls. But the Sun’s radiation would warm them. All that microwaves can do (if they are at the right frequency to resonate with water molecules) is to cause those molecules to rotate with each passing wave, generating thermal energy from a friction-like process that is nothing like atomic absorption.
Doug, if electromagnetic energy could NOT be absorbed, your car radio wouldn’t work. Really, you need to stop writing and read more.
Garbage jim2. A radio receiver functions in a totally different way to atomic absorption. But if low frequency radiation (like radio waves and back radiation) were absorbed by the surface, the radio transmissions probably would not even reach you.
I’ve been reading, studying and understanding physics so that I have been able to help many highly successful students over about 50 years. How about you? Can you even quote the Second Law of Thermodynamics without looking it up here? Maybe you need to read my two papers and six articles on climate related matters.
The Sun’s direct radiation cannot raise the Earth’s surface temperature to a mean of 288K. Nor can any radiation from the atmosphere.
The surface transfers more thermal energy to the atmosphere by conduction and evaporative cooling than by radiation. Hence radiation cannot possibly transfer all the energy that it would from a body with the same emissivity at the same temperature in space. Non-radiative processes have taken their slice of the cake. This means that it is not possible to calculate what the surface temperature “should” be for any particular radiative flux. The surface does not act remotely like a true blackbody, and you can’t just fudge the figures by changing emissivity, because non-radiative transfers would then also change.
It is even more obvious on Venus that direct radiation from the Sun could not raise the Venus temperature by 5 degrees each 4-month-long Venus day, from about 730K.
So if the Sun cannot raise the surface temperatures, then the whole concept of a “blanket” effect is irrelevant, because we have to ask (if the “blanket” is slowing surface cooling) from what temperature is it slowing such cooling? A vacuum flask will slow the cooling of your coffee, but it will never warm it. Furthermore, if the coffee wasn’t hot in the first place, the whole “slowing process” is next to being irrelevant.
There never was a need to invent a radiative forcing / greenhouse effect, because many physicists have known for many years that an autonomous thermal gradient evolves spontaneously in a gravitational field. This is quite sufficient to explain the observed temperatures on Earth and all planets, even beneath their surfaces in their crusts, mantles and perhaps cores.
Dear Mr. Cotton,
What is thermal energy to you? Is it a fluid that is contained in solid objects? You had better get back to basics. I suffer for your students’ understanding of physics, coming from such a confused teacher.
What is it to you?
I don’t have any objection to this quote from here – do you?
“Microscopically, the thermal energy may include both the kinetic energy and potential energy of a system’s constituent particles, which may be atoms, molecules, electrons, or particles in plasmas. It originates from the individually random, or disordered, motion of particles in a large ensemble, as consequence of absorbing heat. In ideal monatomic gases, thermal energy is entirely kinetic energy. In other substances, in cases where some of thermal energy is stored in atomic vibration, this vibrational part of the thermal energy is stored equally partitioned between potential energy of atomic vibration, and kinetic energy of atomic vibration. Thermal energy is thus equally partitioned between all available quadratic degrees of freedom of the particles. As noted, these degrees of freedom may include pure translational motion in gases, in rotational states, and as potential and kinetic energy in normal modes of vibrations in intermolecular or crystal lattice vibrations.”
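The equipartition statement in that quote can be written compactly. Each quadratic degree of freedom contributes $k_B T/2$ on average, so for $f$ such degrees of freedom per particle:

$$\langle E \rangle = \frac{f}{2}\, k_B T, \qquad U_{\text{monatomic ideal gas}} = \frac{3}{2}\, N k_B T$$

where the second expression is the purely kinetic case ($f = 3$ translational degrees of freedom) mentioned in the quote.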
Can climate models explain the recent stagnation in global warming?
by Hans von Storch and Eduardo Zorita
In recent years, the increase in near-surface global annual mean temperatures has emerged as considerably smaller than many had expected. We investigate whether this can be explained by contemporary climate change scenarios. In contrast to earlier analyses for a ten-year period that indicated consistency between models and observations at the 5% confidence level, we find that the continued warming stagnation over fifteen years, from 1998–2012, is no longer consistent with model projections even at the 2% confidence level. Of the possible causes of the inconsistency, the underestimation of internal natural climate variability on decadal time scales is a plausible candidate, but the influence of unaccounted external forcing factors or an overestimation of the model sensitivity to elevated greenhouse gas concentrations cannot be ruled out. The first cause would have little impact on the expectations of longer-term anthropogenic climate change, but the second and particularly the third would.
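The kind of model-observation consistency test the abstract describes can be sketched with made-up numbers (the values below are illustrative, not the paper's data): compare one observed 15-year trend against an ensemble of simulated trends and ask what fraction of runs warm as little as observed.

```python
import random

random.seed(1)

# Hypothetical stand-in numbers, NOT the paper's data: an ensemble of
# simulated 15-year global temperature trends (deg C per 15 yr) and one
# observed trend sitting in the ensemble's lower tail.
ensemble = [random.gauss(0.30, 0.10) for _ in range(1000)]
observed = 0.06

# One-sided consistency check in the spirit of the abstract: what
# fraction of model runs warms as little as, or less than, observed?
frac = sum(t <= observed for t in ensemble) / len(ensemble)
print(f"fraction of runs at or below the observed trend: {frac:.3f}")
print("inconsistent at the 2% level" if frac < 0.02 else "consistent")
```

The actual study works with CMIP-style scenario runs and real HadCRUT-type observations; the point of the sketch is only the logic of the percentile comparison, where a fraction below 0.02 corresponds to the 2% confidence statement.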