by Judith Curry
From Roger Pielke Jr.: A fundamental problem with climate science in the public realm, as conventionally practiced by the IPCC, is the essential ink blot nature of its presentation. By “ink blot” I mean that there is literally nothing that could occur in the real world that would allow those who are skeptical of scientific claims to revise their views due to unfolding experience. That is to say, anything that occurs with respect to the climate on planet earth is “consistent with” projections made by the climate science community.
Pielke Jr.’s original inkblot post generated substantial discussion with James Annan (here, here, and here), with additional posts from Pielke Jr (here, here, and here) and a synopsis from John Nielsen-Gammon (here).
There are two ways for the climate science community to move beyond an ink blot (if it wishes to do so). One would be to advance predictions that are in fact conventionally falsifiable (or otherwise able to be evaluated) based on experience. This would mean risking being wrong, like economists do all the time. The second would be to openly admit that uncertainties are so large that such predictions are not in the offing. This would neither diminish the case for action on climate change nor the standing of climate science, in fact it may just have the opposite effect.
This series of posts is quite interesting, especially the comments. This sort of topic is right up my alley, although I am a bit late coming to this particular party. Roger’s second way is a better choice than the first way: “uncertainties are so large that such predictions are not in the offing.” This issue is discussed at length in my uncertainty monster paper, section 2.3. However, the concept of prediction/model falsification is misleading in the manner in which it seems to be used by Pielke Jr.
Skill scores for probabilistic forecasts
An issue that arose in the Pielke-Annan posts is the validation of probabilistic forecasts. In a proposal that I just submitted, I have a section on skill scores for probabilistic and ensemble forecasts (for weather and seasonal climate forecasts):
In the meteorological literature there are several methods for assessing the value of probabilistic and ensemble forecasts, including the Brier Score, Ranked Probability Score, Relative Operating Characteristics (ROC), Bounding Box, and Rank Histograms. Of particular relevance to anomalous, relatively rare events, skill scores are needed that address the sensitivities of the target decision makers to event identification versus false alarms. For example, the Ignorance Score penalizes strongly for a missed event, but not so strongly for a false alarm. [Note: missed event versus false alarm relates to Type I/II errors]
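To make the simplest of these concrete, here is a minimal sketch of the Brier score for a binary event (a toy illustration with made-up numbers, not the scoring code from the proposal):

```python
import numpy as np

def brier_score(forecast_probs, outcomes):
    """Mean squared difference between forecast probabilities and binary
    outcomes (1 = event occurred, 0 = it did not); lower is better."""
    p = np.asarray(forecast_probs, dtype=float)
    o = np.asarray(outcomes, dtype=float)
    return float(np.mean((p - o) ** 2))

# Toy example: forecast probabilities of an anomalously warm month
probs = [0.9, 0.7, 0.2, 0.4, 0.8]
obs = [1, 1, 0, 1, 1]
print(brier_score(probs, obs))  # 0.108
```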
I spotted this presentation by a scientist at ECMWF that gives some examples. In the weather/climate community, these skill scores are used to evaluate a large population of daily or monthly forecasts.
For climate models simulating climate change, we are most often talking about century or decadal time scales, for which we have only a few realizations (with marginal external forcing for all but the most recent few decades). Evaluating previous ensemble climate projections against subsequent observations is probably most sensibly done using a bounding box approach (which is essentially what Lucia Liljegren has been doing).
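A bounding box check is easy to sketch (illustrative only, with synthetic numbers; Lucia’s actual analysis is considerably more careful than this):

```python
import numpy as np

def in_bounding_box(ensemble, observations):
    """True where the observation lies inside the min-max envelope
    ("bounding box") of the ensemble members at that time step.
    ensemble: shape (n_members, n_times); observations: shape (n_times,)"""
    lo = ensemble.min(axis=0)
    hi = ensemble.max(axis=0)
    return (observations >= lo) & (observations <= hi)

# Toy example: 5 members, 4 time steps of temperature anomaly (deg C)
rng = np.random.default_rng(1)
ens = 0.02 * np.arange(4) + rng.normal(0.0, 0.1, size=(5, 4))
obs = np.array([0.00, 0.05, -0.30, 0.10])
hits = in_bounding_box(ens, obs)
print(hits, hits.mean())  # fraction of steps the observation stays inside
```

With only a few realizations of forced climate change available, the fraction of time the observations escape the ensemble envelope is about as much as can sensibly be asked of such a test.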
Model outcome uncertainty
What exactly does falsification of a prediction mean? For an ensemble prediction, the prediction is said to have no skill if the actual realization falls outside of the bounding box of the ensembles (or whatever skill score for whatever variable has been decided in advance). A prediction with no skill does not imply falsification or rejection of a model. Falsification of a climate model is precluded by the complexity of a climate model. Here is some text from an earlier, extended version of the uncertainty monster paper:
Model outcome uncertainty, sometimes referred to as prediction error, arises from all of the aforementioned uncertainties that are propagated through the model simulations and are evidenced by estimates of the model outcomes. Reducing prediction error is a fundamental objective of model calibration. Model prediction error can be evaluated against known analytical solutions, comparisons with other simulations, and/or comparison with observations. Assessing prediction error by comparing with observations is not straightforward. Simulations are generally used to generate representations of systems for which data are sparse. In addition to accounting for the representativity error, it is important to judge empirical adequacy of the model by accounting for observational noise.
This challenge to model improvement arises not only from the nonlinearity of the model; Winsberg (2008) argues that climate models also suffer from a particularly severe form of confirmation holism that makes the models analytically impenetrable. Confirmation holism in the context of a complex model implies that a single element of the model cannot be tested in isolation since each element depends on the other elements, and hence it is impossible to determine if the underlying theories are false by reference to the evidence. Continual ad hoc adjustment of the model (calibration) provides a means for a theory to avoid being falsified. Occam’s razor presupposes that the model least dependent on continual ad hoc modification is to be preferred.
Owing to inadequacies in the observational data and confirmation holism, assessing empirical adequacy should not be the only method for judging a model. Winsberg points out that models should be justified internally, based on their own internal form, and not solely on the basis of what they produce. Each element of the model that is not properly understood and managed represents a potential threat to the simulation results.
Model verification and validation
This leads us back to the issue of model verification and validation. Several previous Climate Etc. threads have been devoted to this topic:
- The culture of building confidence in climate models
- Climate model verification and validation
- Should we assess climate model predictions in light of severe tests?
Heh, Kevin’s got a Rorschach null.
=========
The ink blot analogy is excellent!
I took a personality test once when taking a course in biology, and every ink blot looked like a skeleton, because that was what was on my mind.
With kind regards,
Oliver K. Manuel
JC – have you noticed that there was a special issue about climate models in Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics about a year ago (vol. 41, no.3)?
http://www.sciencedirect.com/science/journal/13552198/41/3
Yes, we discussed this on several previous threads, thanks for reminding us of the link
How about abandoning the “ensemble” idea, that lends itself to the notion that different models are more right than others at different times, but we will only know which ones are after-the-fact?
From the above-mentioned special issue:
W.S. Parker, Predicting weather and climate: Uncertainty, ensembles and probability
“A key conclusion is that, while complicated inductive arguments might be given for the trustworthiness of probabilistic weather forecasts obtained from ensemble studies, analogous arguments are out of reach in the case of long-term climate prediction.”
http://www.sciencedirect.com/science/article/pii/S1355219810000468
Judith, in no particular order, here are my thoughts on the inkblot issue. I’ll split them into individual posts.
First, I do not understand the claim you make above:
This would imply that the worse the results of some individual models are, the more likely the ensemble will be found to have skill.
I don’t get it. How can throwing an obviously wrong model into an ensemble increase the skill level of the group? That just seems backwards to me. What am I missing?
w.
A model would at least have to do an adequate job with the last century before it is qualified to be an ensemble member.
Sez who? There are models that have been used in the past IPCC reports that do a very poor job “with the last century”, but were used nonetheless. The truth is that there is no entrance exam, no minimum standards, for GCMs.
In any case, doing an adequate job with the last century is a trivial test, a three-cycle model can do that quite well … but that means nothing about the future.
Finally, I have shown that the models are simple linear transformations of the input forcings plus a lag … and those bozo simple linear models are what the IPCC uses.
So no, Jim, there’s no qualifications necessary for models.
But none of that answers my question. How does it make sense that an ensemble is improved by adding a bad model?
w.
You think there is no minimum standard for IPCC GCMs? Please explain. I haven’t heard that before.
I thought all IPCC GCMs do the following:
1) Predict warming, warming, warming with 95% of their runs.
2) Throw in a few outliers way higher and lower just so they can say 3.7% of all runs were below or above the actual temperature implying that it took some skill.
GCMs are a joke. The IPCC is a joke.
It’s called “democracy of the models” in the climategate files.
A while back this was a subject of discussion in RC comments (gavin, me and a few others); the problem is picking a metric. Some models do great at temperature and lousy at precipitation. (see Taylor diagrams)
some models do bad at sea surface salt..
There is only one test where models are winnowed:
.
Attribution studies. In attribution studies models are selected if they have a low amount of drift in the control run.. chapter 9.. in the supplementary material.. as I recall,
I may have elements of this wrong, so people please feel free to correct me.
I believe that any nation participating in the IPCC is free to submit one or more climate models for inclusion in analyses. I also have heard that the sophistication of models varies enormously. However, the political benefits of their inclusion may outweigh the difficulties arising from their inclusion.
I’m just writing old memories of other peoples’ comments on various weblogs–this could easily be mistaken or have other very different explanations.
I write this hoping for such explanations, not to make a blanket accusation.
Willis, I agree with your comment/question. So if the ensemble’s skill is improved by adding terrible models, then the “most skillful” would be two models which diverge at time 0 such that one ensures no temp will ever be higher and the other that no temp will ever be lower than the two models. What sense does this make?
Billions of dollars are spent and the models are still very wrong, isn’t it time to defund those studies? Or at least cut half of the funding and put the taxes to better use? Why are the governments so blind?
At the risk of getting a bit repetitious on this subject, modelling is a purposeful activity. Thus the selection of the appropriate model and its evaluation are intrinsically linked to that purpose.
If I wanted to forecast what some key parameters of the global climate might be in 2050 I don’t think I’d use a GCM, whereas I might well do so if I wanted to know next week’s weather, or to better understand the interaction between ocean dynamics and sea surface temperatures.
At different scales (including time) different factors are material to the matter in hand.
The Concise Oxford Dictionary gives eight definitions for the word “model”. Presumably the one pertinent to Climatology is number 2. ” a simplified (often mathematical) description of a system etc., to assist calculations and predictions.” The other definitions would include model airplanes,clay figures, ideal behaviour and fashion models. I think the latter do not apply to the use of the word in Climatology.
If this is so, predictions are an important part of the use of models in Climatology. Therefore, the accuracy of predictions (model skill?) is important in Climatology. Model skill can only be tested by the passage of time, which implies that any analysis must include a clear description of the time period involved. This condition makes conclusions that fail to mention the time period involved, or that make the prediction for some period more than a few years in the future, virtually useless. All models are suspect until they are tested, and testing has been done for almost no models of climate.
Since the terms verification & validation are often confused, there is a very useful way of distinguishing the difference.
Verification is objective substantiation of a model — it gives the confidence that you have computed the solution to the model correctly.
Validation is subjective substantiation of a model — someone decides they will use the model for whatever purpose they have intended, a fitness for purpose.
In other words, verification means that you have the model built correctly, and validation means that you have built the right model.
I personally don’t find this useful unless you wanted to sell product. For example, a gaming model can be validated because everyone likes to use it and willing to buy it, but it can be completely unverified because a game is just a game, you know?
The other set of definitions is that verification is the same and validation is checking a model against real-world data. So if you are developing an engineering system and have a working model of it, there is no way to validate the model until you have actually built the system and tested it in a real-world environment. The problem with this definition is that you can’t validate until the experiment is completed, and with climate science, we are in the middle of the ongoing experiment. Therefore, this definition is useless until we have traversed the ergodic state space one time, or at least to some degree. I think that is why weather models have a better chance of getting validated than a climate model.
BTW, this all comes out of the customer requirements jargon for software and systems engineering. As Dr. Stults has said, there are a bunch of different variations out there (btw, Stults’ blog is my all-time favorite for innovative engineering analyses, so highly recommended.)
Bottom-line is that validation is all that counts, and that is responsible for the political dichotomy. One side says the model is not fit for purpose and the other side says it is fit for purpose. Whoever gets the most votes wins.
YMMV
I disagree that the engineering definition of V+V is not applicable.
The validation of the models CAN occur in an on-going capacity: rather than treating the climate as the experiment, you treat each model run as the experiment. This way you can individually validate each model against real-world observations without the need for knowing how the climate will have changed over 100 years. You’ve just got the application slightly wrong there.
labmonkey,
You actually cannot consider each model as an experiment all by itself… owing to the fact that none of them can/will have (nor are they designed to have) predictive skill in the wide range of climate measurement characteristics (precipitation, temperature, winter/summer wet/dry relationships, etc.).
So, as a set (15 or 23 or what have you) of models, they claim that they have predictive skill.
From what I had seen (even in hindcast), they don’t look too good.
So… validation is beyond possibility… unless you redefine the word ‘validation’ — they do not even look like they will get the direction of change right, let alone the magnitude within some reasonable accuracy.
It’s not an ACTUAL experiment, but you can use them in an experimental way, as in the evaluation of set criteria against a pre-determined framework (e.g. observational results); but yes, it’s not an experiment proper.
WHT,
The definitions you have for V&V don’t match up with any I’ve seen. Verification is really nothing more than testing; it’s ensuring that the implementation matches the requirements. Validation is, as you say, ensuring that the implementation is fit for purpose. However, validation need not be strictly subjective. For example, let’s say Enron’s accounting system required 2+2 to equal 6. The system could pass verification (the implementation matches the requirement), but validation (conforming to GAAP) would fail. You should also note that the system would fail validation without even being built (because of the dodgy requirement).
I don’t really care, it just goes to show you how much people will define it on their own terms. My terms are that every decision ever made comes with an objective evaluation and a subjective evaluation. Look at the root of the terms, to verify is to check someone’s work and to validate is to put a stamp of approval.
I don’t really care, it just goes to show you how much people will define it on their own terms.
No, it goes to show that the terms have specific meanings related to software development. Dr. Curry (in the posts she lists above), Steve Easterbrook, and others who have expertise in this area use those terms in an identical manner. Trotting out the dictionary (“Look at the root of the terms…”) is as fallacious here as it would be in a discussion of the meaning of a legal term.
You should read Seuss’s story on The Bee Watcher.
http://www.drseussart.com/details/illustration/beewatcher.html
These levels of V’s are like watching the watchers.
Run along now.
Web
To make it very basic, if a climate model is able to reasonably accurately predict conditions 50 years from now, shouldn’t it be able to be at least as accurate for conditions 5 years from now?
Or looked at another way- Is it reasonable to incur expenses that change the basic framework of your economy based on any model that can not be demonstrated to repeatedly, accurately predict the future???
Web is still trying to find a way to skip over his being completely wrong about peak oil.
As if peak oil needed more debunking;
http://online.wsj.com/article/SB10001424053111904060604576572552998674340.html?KEYWORDS=peak+oil
CWON, I am way ahead of you and you are so behind the curve in quoting the Yergin article debunking Peak Oil:
http://judithcurry.com/2011/09/16/week-in-review-91711/#comment-113106
WebHubTelescope: “validation means that you have built the right model”.
In my experience, validation should occur before one builds the system. Validation is an inspection of the problem / requirements / functionality statements relative to the objectives and requirements of the customers (manager, sponsors, end users). It occurs before and during specification (how the requirements are to be met).
I agree that verification occurs during and after the system is built: unit testing, system testing, acceptance testing and use. (As an aside, one should also test the conversion subsystem, analogous to acquiring initial data and reanalysis for parameters.)
If one waits until after something is built, one risks making Edsels.
Verification of a climate model occurs when you get an output from the model that you believe represents the future.
Validation of a climate model occurs when you get a big fat grant as a result of the output of the model.
If only that were true.
In recent years there has been no need to have a validated model to obtain a grant.
Sorry..my bad…missed the intended humor
I have followed parts of this, and the JohnN-G RPS discussion was interesting. Eli Rabett made an interesting double-sided remark:
Eli Rabett says:
August 29, 2011 at 4:37 pm
What the heck
——————-
In terms of the robustness of ocean heat content anomalies, since about 2003, it is considered by Josh Willis as quite accurate in the upper 700m as a result of the density of Argo network and the use of satellite measures of sea level.
——————
Spencer and Christy and even the RSS guys were saying the same thing about the MSU measurements after 10 years or more, and they still had to make major corrections within the last five years. These are complicated systems.
Trying to measure a fraction of a Watt/M^2 and hundredths of a degree change in the deeper ocean is definitely complicated. Trying to sort out a tenth of a degree in surface temperature change globally ain’t all that easy either. The global surface temperature record is accurate to about +/- .17 degrees since 1900. The accuracy of the Paleo reconstructions are not likely to be better than the instrumental record. The only thing that has high confidence is the Physics of CO2 radiative forcing and that is complicated by cloud feedback.
So I think option B, “The second would be to openly admit that uncertainties are so large that such predictions are not in the offing.” is the only real option.
The physics of CO2 radiative forcing is near perfectly known because we have accurate measurements of CO2 concentration and accurate measurements of the effect of CO2 concentration on the spectrum of the Earth’s thermal radiation.
The problem is that the 14.77 micron band of the Earth’s thermal radiation is already well over 80% saturated so the actual effect from increased CO2 concentration is not what is being input into the climate models which attribute at least six times more forcing to increased CO2 concentration than is physically possible. (In reality this is likely over 20 times what is physically possible).
To complicate matters over 90% of the greenhouse effect is attributable to water vapour and clouds and it is impossible to determine if additional atmospheric CO2 merely takes over part of the greenhouse effect already attributable to water vapour and clouds or whether it actually adds to the greenhouse effect.
If the proper physics of CO2 radiative forcing was input into the climate models instead of the contrived, fabricated CO2 forcing parameter currently in use, the climate models would have at least some hope of being correct, but most importantly we would not be killing the economy with initiatives to stop global warming nine years after the world had already started cooling.
When a model fails to match observation, true scientists look to the model to find out what is wrong with the input. When people find that a model does not match the data and fix the data to match the model, these people can no longer be considered scientists.
Without coupling energy flux with some time dimension it is not possible for GCMs to predict temperature, because temperature is a function of energy and not energy flux.
All climate models using a CO2 forcing parameter to predict temperature will depict warming from 2002 to present day because CO2 is increasing by 2ppmv/year.
All five global temperature datasets show none of this warming demonstrating some failure of the models.
All five global temperature datasets show the low temperature in 2008 due to the la Nina conditions but this is not predicted by the models.
All five global temperature datasets show the temperature spike from the 2010 el Nino but this was not predicted by the models.
This failure is not the fault of the models but the fault of those who input the parameters into the models in such a way as to get what they want to see out of the models. GIGO is the acronym for Garbage In Garbage Out, and the computer models are just the vehicles taking the garbage from the dump and bringing it back to the people instead of taking garbage from the people and putting it in the dump where it belongs.
I am intrigued by the way that J N-G shifts the discussion to Tyndall gases. So he is saying that CO2 is not the only factor, which appears to contradict the orthodox view.
The orthodox view is that there are other factors than CO2.
Wanna know what they are?
Look at the damn inputs for the AR5 simulations. The data is out there.
Read more, comment less.
If you can’t find the link, just whine.
The orthodox view is that other gases play a role but CO2 is the only gas that needs to be regulated. Only the hare-brained among us want to limit the number of cows to regulate methane.
Since H2O is the major greenhouse gas, we should be covering the oceans in oil to limit evaporation and thus cool the planet.
diogenese,
Dr. N-G moved to naming the GHGs ‘Tyndall gases’ in an attempt to avoid the endless parsing and discussion of ‘greenhouse gas’. I personally like the term, since it is not as easily confused as greenhouse has become.
Even Hansen has agreed that CO2 is not the only player in the atmosphere. Hansen has believed, and many agree with him, that CO2 is the most important GHG/Tyndall gas.
the real question is, “are we facing a climate crisis caused by CO2?”
‘AOS models are members of the broader class of deterministic chaotic dynamical systems, which provides several expectations about their properties. In the context of weather prediction, the generic property of sensitive dependence is well understood. For a particular model, small differences in initial state (indistinguishable within the sampling uncertainty for atmospheric measurements) amplify with time at an exponential rate until saturating at a magnitude comparable to the range of intrinsic variability. Model differences are another source of sensitive dependence. Thus, a deterministic weather forecast cannot be accurate after a period of a few weeks, and the time interval for skillful modern forecasts is only somewhat shorter than the estimate for this theoretical limit. In the context of equilibrium climate dynamics, there is another generic property that is also relevant for AOS, namely structural instability. Small changes in model formulation, either its equation set or parameter values, induce significant differences in the long-time distribution functions for the dependent variables (i.e., the phase-space attractor). The character of the changes can be either metrical (e.g., different means or variances) or topological (different attractor shapes).’
‘AOS models are therefore to be judged by their degree of plausibility, not whether they are correct or best. This perspective extends to the component discrete algorithms, parameterizations, and coupling breadth: There are better or worse choices (some seemingly satisfactory for their purpose or others needing repair) but not correct or best ones. The bases for judging are a priori formulation, representing the relevant natural processes and choosing the discrete algorithms, and a posteriori solution behavior.’ http://www.pnas.org/content/104/21/8709.full
There is not a problem with the former – but there may be a problem with the latter. There is an irreducible imprecision in climate models that arises from sensitive dependence and structural instability given plausible values for initial or boundary conditions. The imprecision is not known as the models have not each been systematically evaluated within the range of the uncertainty and variability of climate parameters. Any one solution – selected for a posteriori solution behaviour – is only one solution amongst many and possibly significantly different solutions. The difference in these solutions – irreducible imprecision – is not known as the topology of the solution phase space is not known for any specific model.
In the metaphor of the thread – there are many potentially significantly different ink blots theoretically emerging from the same model, equally well formulated. One is chosen subjectively on the basis of a posteriori solution behaviour and sent off to the IPCC where it is graphed in an ensemble of similarly derived ‘solutions’.
Models have problems with plausible formulation for many real-world phenomena – ENSO, PDO, SAM, NAM, THC, etc. AOS have the additional burden of irreducible imprecision.
‘Sensitive dependence and structural instability are humbling twin properties for chaotic dynamical systems, indicating limits about which kinds of questions are theoretically answerable.’ op. cit.
Robert I Ellison
Chief Hydrologist
With all this uncertainty in models, which say we should be warming, and the absolute certainty of measured global temperatures that say we are cooling, why are we taking action on climate change based on models and ignoring the actual physical data?
We are not taking any action with any payoff at all. There are some things we should be doing – http://thebreakthrough.org/blog/2011/07/climate_pragmatism_innovation.shtml
So when the world refuses to warm for another decade or three at least – it is still no reason not to do this. And please – no rants about kleptocratic foreign governments. Do you want to continue wasting the $33 billion America spends on aid?
The Proper Precautionary Principle militates against doing anything whatsoever which inhibits economic growth, as that is the sole relevant determinant of likely outcomes for the world’s population under any and all circumstances.
Are you lecturing me on this?
I suggest you check out the link provided. There is nothing that suggests – or indeed says in anything I have written – that 3%/year growth in food and energy supplies is not needed this century.
“The difference in these solutions – irreducible imprecision – is not known as the topology of the solution phase space is not known for any specific model. ”
That sentence makes no grammatical sense to me
‘There is a strong relation between the fixed points of a dynamical system, the topological structure of its phase space, and the symmetry of the dynamical system.’ Ah – sorry
The solution space is simply the evolution of the solution of a dynamically complex system through time – the phase space. It can be seen directly in 2D here – http://www.youtube.com/watch?v=Qe5Enm96MFQ
But it also occurs in 3D (indeed also in 4D in Earth systems), as in the bi-fold symmetry of the Lorenz strange attractors – the 2 ‘wings’ of the butterfly.
http://en.wikipedia.org/wiki/File:Lorenz_attractor_yb.svg
The topology is simply the structure of the phase space – i.e. both wings (or strange attractors) of the butterfly. Models have many strange attractors. So what is being said is that we don’t know the limits (the topology) of the phase space for these complex and dynamic models. The solution may change significantly as a result of small changes in parameters or formulation.
Earth systems are chaotic in both space and time and the phase space – according to Tomas – is almost infinite.
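For anyone who prefers to see sensitive dependence rather than take it on faith – a toy forward-Euler integration of the classic Lorenz ’63 system (purely illustrative, nothing to do with any actual climate model):

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz '63 system -- crude, but enough
    to show nearby trajectories flying apart."""
    x, y, z = state
    deriv = np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    return state + dt * deriv

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-8, 0.0, 0.0])   # indistinguishably different start
for step in range(1, 3001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 1000 == 0:
        print(step, np.linalg.norm(a - b))
# The separation grows exponentially until it saturates at the scale of
# the attractor itself -- the system is deterministic but not predictable.
```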
Try a comma after the first “known”.
I don’t think a comma will help. There is nothing structurally wrong with the sentence.
If there is one solution – and then the model is changed within some feasible range of parameters – we may get another solution that is ‘topologically’ different.
I’m afraid that the addition of the comma did indeed allow the sentence to make sense
Like Ian, the comma proved illuminating to me. And grammatically essential.
All,
1. If climate is deterministically chaotic, as I believe it may be, how does this change our ability to predict future outcomes based on CO2 forcing alone? Is it possible to envelop the uncertainty of outcomes in a chaotic system?
2. Accepting that certain long term proxies accurately represent global temperatures during previous interglacials, can we know anything about the limits of the attractors given maximum and minimum temperatures during past interglacials?
Well, since Tomas hasn’t returned.
1. If the two albedo points are strong attractors, a perturbation towards the one furthest from the nearest will produce a larger oscillation. So as CO2 forcing increases from our current point, its impact will decrease with increased temperature. Looking at ice core data for the glacial interglacial periods, the glacial attractor is stronger, but CO2 will shift the warm attractor to a higher temperature, possibly making it somewhat stronger.
2. It is hard to tell how good the long term proxies are, but the relative slopes of rising versus falling temperature should tell us where we are relative to the attractor. During the instrumental era, the increasing temperature slopes are greater than the decreasing slopes. It is chaotic, and there is no law that says the attractor has to stay put, but we should see a change in the slope and frequency of internal oscillations, if all things stay the same. With a solar minimum synchronized with the negative phase of an internal oscillation I would think that the period of the negative oscillation would increase. How much…?
Then I am just a fisherman, what would I know?
Think about the treatment of Bipolar Disorder. BPD is manageable in almost all cases. However, one cannot a priori know which is the best treatment regime and how long a particular treatment regime will last. For mania, lithium is the first drug of choice; but we have no idea how it works. It does work, in a narrow therapeutic window, for the majority of people. However, it often stops working, after 5 years, a decade, 2 decades or even 4 decades, for no known reason.
Omega-3 fatty acids work in some people, and in some studies, and not in others.
Valproate works for BPD and for many epileptics. It works on major depression, major migraines and schizophrenia. There are about six different mechanisms that some people believe to be the way it works. In truth, no one has any idea.
Electroconvulsive therapy, electroshock therapy, works in the majority of cases, no idea why. Animal models give some insights into how it works in animals, but we already know that humans (and higher primates) are very different from rodents.
So, you are 17, very weird, up-n-down, are sent to a shrink and are diagnosed with BPD. The shrink will tell you how he is going to treat you and, within a few weeks, you will be back to ‘normal’, only having to go off your meds if you require the use of your womb for its intended purpose.
We have the opposite of climate models, but the same uncertainty. We have therapies that work, but no real reason why. A good medic can, by using a Mk I supercomputer, predict the best course of action and will be right more often than not. They will get a good treatment regime worked out over 6-9 months and the nation will have a fully functional citizen/taxpayer.
Medics don’t ‘pretend’ they know how stuff works, neurochemists and neuro-pharmacologists don’t ‘pretend’ we actually know how neurons work, nor do shrinks ‘pretend’ to understand how an individuals mind is dysfunctional.
Now no one actually thinks of the medical profession as stereotypically humble, nor are medics characterized for their humility, but compared with climate scientists, with their absolute confidence in their analysis, models and statistical abilities, medics are timorous.
The big difference between medics and climate scientists, is that in spite of uncertainty, medics deliver, but climate scientists don’t.
Doc
I would suggest that there is one very significant difference between experimenting in medicine and the climate. (btw-I agree that much of drug treatment is little more than trial and error. I write this after living with my GF who is a doctor and develops and “tries out” new drug combination treatments with patients all the time)
In medicine the potential harm is only done to the individual patient.
In the case of the climate, the potential economic harm for a “wrong treatment” impacts an entire population.
The biggest difference between medicine and climate is that in medicine it is universally accepted that the injection of untested concentrations of a drug without proper testing is reckless. Tests must be done to prove the drug safe first.
Whereas in climate many argue that injection of 30 billion tons of CO2 per year should be assumed safe unless it can be proven that it’s dangerous.
In medicine when models and theory suggest a drug has dangerous effects this is taken very seriously.
In climate when models and theory suggest the injection of vast amounts of CO2 into the atmosphere is dangerous this is dismissed because it’s merely a “suggestion”. “likely” doesn’t mean “100% proof”. models and theory “can be made to say anything”. etc etc.
Funnily enough the attitude of safety first in medicine is widespread in society. From everything to aircraft design to space missions, any suggestion by models or theory of danger is taken very seriously.
So I wonder why climate is treated so differently. I wonder if the common “safety first” understanding is being dropped by one group simply for political reasons.
And when I see that same group getting alarmed about potential dangers of geo-engineering I realize that yes, they are treating CO2 “prove it’s dangerous first” as an exception.
Red herring. Injection of 30 billion tons of CO2 per year is a given. We can’t really change that. We are injecting! Only a big global economic crisis can reduce that. Or a new source of large scale energy. It doesn’t exist yet. So it’s not an exception. We burn carbon because it’s exothermic. Any geo-engineering would be “endothermic” and would cost a lot of money.
“Injection of 30 billion tons of CO2 per year is a given. We can’t really change that.”
Why?
Why do you believe it is impossible to release, say, 29 billion tons of CO2 per year?
Do you believe that if we burn less fossil fuels, that they will somehow burn themselves? Do you believe some sort of god or other higher power would intervene and force us to release more CO2 if, say, we increase energy efficiency?
I would appreciate your attention to these very clear, and specific questions.
Lolwot
You write imo a very flawed argument.
There have been no tests that have or can demonstrate that atmospheric CO2 levels can even possibly reach dangerous levels for human health. In the IPCC AR4 a number of potential harms were outlined for the environment, but virtually none of these potential harms were actually provable through any repeatable experiments. The harms were little more than supposition linked to CO2 theory.
Medicines are not taken off the market unless testing has been done to demonstrate that the medicines are harmful to humans. These tests must be repeatable and verifiable.
Not the same at all- and the climate folks really lack science to back their positions for policy change.
“Medicines are not taken off the market unless testing has been done to demonstrate that the medicines are harmful to humans”
Wrong way round. Medicines are not even allowed on the market until they are tested to be safe.
You are arguing that the CO2 injection is fine because it isn’t proven dangerous. That would never fly in medicine. Imagine it, “hey FDA, I know you want us to test this drug but there’s no evidence it’s dangerous to human health (because we haven’t tested it) so why bother testing it? Sure, models and theory suggest it is likely dangerous… but hey, those are just models after all”
“In the IPCC AR4 a number of potential harms were outlined for the environment, but virtually none of these potential harms were actually provable through any repeatable experiments”
None of them were disprovable through repeatable experiments. Ie no repeatable experiments exist to show to any degree of satisfaction that the “drug” is safe.
You are basically arguing that a lack of testing makes something safe because a lack of testing = no danger proven.
Until the “potential harms” of a drug have been tested you betcha the drug would not be released.
It’s one rule for geo-engineering and medicine and plane design, another rule for CO2.
Whether in healthcare or in climate science, there’s little anybody can do for you if you’re too shortsighted to change your lifestyle and too pigheaded to take your medicine.
The second would be to openly admit that uncertainties are so large that such predictions are not in the offing. This would neither diminish the case for action on climate change nor the standing of climate science, in fact it may just have the opposite effect.
I agree that for scientists of any kind to admit they do not know and cannot predict a result takes some bravery and can earn respect. At least until other scientists make a track record of useful predictions.
“Not diminish the case for action on climate change…” Huh? That is a non-sequitur if I ever saw one. What “action on climate change” do you propose if you cannot predict what the change will be nor how big? “We know what we are doing and we predict A, so we should do X in advance. (Smirk) Yeah, we were just foolin’. We don’t know if it will be A or not-A, but we should still do X. By the way, ‘X’ will cost a bloody fortune, but we ought to do it anyway because we predict either A or not-A.” Is that what climate science is coming to?
Suppose your surgeon said, “I don’t know what we will find, but let’s open you up and see what’s in there…. Can’t be too careful, can we?” I would be looking for another doctor immediately.
I’ve never understood Roger Jr’s argument that admitting no one has a clue does nothing to diminish and may even strengthen the case for political intervention in the market and the reduction of people’s rights. It seems to say “We know we’re right even though we can’t prove it. Trying to prove it is only a distraction from focusing on what we must do. Let’s just do it!”
Even if bane, there is a not-so-small matter of cost / benefit analysis — which requires multiple predictions to be made.
Flip a coin. Let Koutsoyiannis call it.
========
Tomas describes the path of the coin on its temperospatial voyage.
=================
God does not play dice. Bohr does. Must we choose? At the level of climate, causality trumps probability. Hilbert holds no hand. We can only think.
“Science has not progressed by calculations and models, but by repeatable observations.” ~David Evans: “I Was On the Global Warming Gravy Train,” May 28, 2007
As oversimplifications go this one is very bad. Models and math are central to science. Observations are seldom repeatable. May 23, 1933 will never come again. Equations are forever.
You can’t be serious. You can observe over and over even if there will never be another Jesus.
And incorrect equations and their incorrect interpretations are also forever if someone does not die and allow them to be repaired or replaced.
Models are also very good at reminding people how little they truly know about the system they are trying to model. Unfortunately, when one is trying to push a belief, those minor issues are quickly covered up with excuses and “ensembles.”
Equations are only good IF ALL the parameters are included.
Today’s equations are chicken scratches that distract rather than give knowledge.
Equations are forever.
You mean like the equation 1/2 mv² for the kinetic energy of a particle of mass m moving with velocity v?
That only lasted until the beginning of the 20th century when Einstein pointed out that it was only correct when v = 0.
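To spell out the step: the classical formula is just the leading term of an expansion of the relativistic kinetic energy in powers of v/c,

$$T = (\gamma - 1)\,m c^{2} = \tfrac{1}{2} m v^{2} + \tfrac{3}{8}\,\frac{m v^{4}}{c^{2}} + \cdots, \qquad \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}},$$

so strictly it is exact only at v = 0, though admittedly a superb approximation at everyday speeds.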
Observations are just data. Without theory and models you have no explanatory power at all.
Then again this isn’t the only mistake David Evans makes.
Here’s another one:
“They place their thermometers in warm localities, and call the results “global” warming. Anyone can understand that this is cheating. They say that 2010 is the warmest recent year, but it was only the warmest at various airports, selected air conditioners, and certain car parks. Global temperature is also measured by satellites, which measure nearly the whole planet 24/7 without bias. The satellites say the hottest recent year was 1998, and that since 2001 the global temperature has leveled off.”
Yet 2010 is statistically tied for warmest year with 1998 in UAH and the 3rd warmest year in HadCRUT. Not only are David Evans’ facts here wrong, they also completely undermine his argument.
There’s an amusing bio attached to one of David Evans’ speeches that reads: “Dr David Evans consulted full-time for the Australian Greenhouse Office (now the Department of Climate Change) from 1999 to 2005, and part-time 2008 to 2010…”
So after describing it as a gravy train in 2007 he has hopped right back on it. What’s he saying about himself?
David Evans is also one of those banking on global cooling:
“We have just finished a warming phase, so expect mild global cooling for the next two decades.”
And confusing models for experiments, and favoring models over data, yields……..
AGW
Sorry, lolwot, no matter what you write, it has cooled slightly since 2001 according to HadCRUT3, the surface temperature anomaly record preferred by IPCC.
It warmed at a higher rate during the period 1991-2000 than the rate of cooling from 2001-2010, so obviously the average temperature of the 1990s is lower than that of the 2000s.
But the 2001-2010 trend is one of slight cooling, while the IPCC models told us it should be warming at 0.2C per decade.
So much for model validation…
Max
actually a small percentage of the models showed cooling, but the vast majority showed warming.
Essentially it’s a combination of (pick your favorites):
1. Most models have too much sensitivity.
2. Actual forcings differed from projected forcings (especially with solar,
which was input as FLAT, and it went down).
3. Short term natural variability, which the models will never fully resolve
unless they are initialized with the right state.
4. Models missing key forcings.
But the 2001-2010 trend is one of slight cooling, while the IPCC models told us it should be warming at 0.2C per decade.
According to your logic, Max, global cooling began with the period 1978-1987, which showed a trend of slight cooling even though the IPCC models told us it should be warming at 0.2 °C/decade. Delete 1978 (the period 1979-1987) and you get an even stronger cooling trend, 0.1 °C/decade!
So much for model validation…
So much for the logic of climate skeptics, which loves to turn a blind eye to the statistically obvious fact that the shorter the interval the easier it is to find trends that can prove any claim whatsoever. Climate skeptics preach to the moron tabernacle choir.
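A toy calculation makes the point (synthetic data, all numbers hypothetical): impose a steady 0.17 °C/decade warming plus weather noise and ask what ten-year windows report.

```python
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1970, 2011)
# Steady 0.017 deg C/yr warming plus 0.1 deg C of "weather" noise
temps = 0.017 * (years - years[0]) + rng.normal(0.0, 0.1, size=years.size)

def decadal_trend(start, length=10):
    """Least-squares trend, in deg C per decade, over a short window."""
    sel = (years >= start) & (years < start + length)
    return 10.0 * np.polyfit(years[sel], temps[sel], 1)[0]

for start in (1978, 1987, 1996, 2001):
    print(start, round(decadal_trend(start), 2))
# The ten-year trends scatter widely around the true 0.17 deg C/decade;
# shorten the window further and almost any trend can be found.
```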
And Vaughan, also longer. Remember the Climatic Optimum, when temps warmed 4 degrees C and early civilizations began to flourish.
Sufficiently slow climate change is no problem as long as it’s slow enough for life on Earth to adapt to the change. I don’t know how long it’s been since the temperature last climbed a whole degree in less than a century, but that’s the sort of thermal event that results in mass extinctions. Even the PETM didn’t change that fast, yet was responsible for mass extinctions.
“I don’t know how long it’s been since the temperature last climbed a whole degree in less than a century…”
It happens all the time. Temperature in the 20th century was very stable, compared to the earlier centuries. Much more climate change before the 20th century. Here’s the CET (I know, not global):
http://en.wikipedia.org/wiki/File:CET_Full_Temperature_Yearly.png
Kim
I hope you had a good time at the Burning Man festival – prophecy and sex is always a popular mix for would-be cult leaders. Although I think you need to cultivate a bit more of the apocalyptic. For example – CO2 was sent by God (insert image of Kim in flowing robes from the desert years) to save us from an icy doom (think 6th planet of the Hoth system). But only the true believers (product placement – insert image of smiling Kim driving Humvee or similar) will be saved. We may have something here to replace the spaceship-is-coming AGW cult. Just don’t take it too seriously and stay away from the Kool-Aid.
Wrong side of a 2 headed coin – however – when looking at minimum cost and maximum benefit from multiple approaches that include – but are by no means limited to – health care for women and education for girls.
Cheers
Robert I Ellison
Chief Hydrologist
Mr Ellison,
Most commentators don’t seem to think it necessary to give themselves a job title with their postings, but you often seem to feel the need to do it twice. You’ve just signed off with the words “Chief Hydrologist” even though you’ve already told us that in your online moniker!
Is this indicative of some psychological problem on your part?
Probably. My ‘job title ‘ derives from Cecil (he spent four years in clown school – I’ll thank you not to refer to Princeton like that) Terwilliger. Cecil was Springfield’s Chief Hydrological and Hydraulical Engineer. It is both a reminder to myself not to be a pompous twit – and a subtle mockery of those who – like yourself – have no compunction about descending to pompous twittery. This includes use of any perceived weakness – a suggestion of psychological problems in this case – to distract from the essential weakness of your position.
This isn’t about personal growth and change for you – about understanding – but about an inflexible conceptual framework to which you are psychologically committed. You are about rejecting anomalies. This is cognitive dissonance – you are seriously beyond the limits of science and insist on a truth that is murky at best. In my personal lexicon – you are an AGW space cadet.
SR @ 10:08 PM. I responded with a bane/boon jibe. The wind has shifted and it’s blowing colder.
===========
Intriguing essay.
One problem I see is that if this is what AGW is based on, it really is a social movement. Social movements like religions or fanatical political movements are the ones that use sufficiently vague tools, like scriptures or a manifesto, and then interpret them through a lens of faith.
Asserting that the inkblot problem does not diminish the case for expensive mitigation policies seems very odd.
Think of a doctor insisting that a patient’s interpretation of an inkblot was the evidence needed for a pre-frontal lobotomy, and that ambiguous results were even stronger evidence.
Even worse, think of having a third party interpret the inkblot on behalf of the patient and that interpretation being used to operate on the patient. And the person doing the interpretation turns out to have a financial interest in the surgery.
Hi Judy-
Your definition of “skill” is at odds with convention in meteorology (but not some areas of climatology) — “For an ensemble prediction, the prediction is said to have no skill if the actual realization falls outside of the bounding box of the ensembles”
Have a look at this figure that I ginned up a while ago:
http://cstpr.colorado.edu/prometheus/archives/ipcc+55.png
In this example, I “improved” the fit between models and observations by adding a bunch of nonsense. So “skill” in my lexicon (borrowing from meteorology) refers to the improvement that a prediction shows over a naive baseline.
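In code, that notion of skill is nothing more than the fractional reduction in error relative to a naive baseline (a sketch, not the calculation behind the figure above):

```python
import numpy as np

def skill_score(forecast, observed, baseline):
    """Skill as fractional improvement in mean squared error over a naive
    baseline (persistence, climatology, ...): 1 is perfect, 0 is no better
    than the baseline, negative is worse than the baseline."""
    f, o, b = (np.asarray(x, dtype=float) for x in (forecast, observed, baseline))
    return 1.0 - np.mean((f - o) ** 2) / np.mean((b - o) ** 2)

# Toy numbers only
obs = np.array([0.10, 0.15, 0.05, 0.20])
model = np.array([0.12, 0.10, 0.08, 0.25])
naive = np.full_like(obs, obs.mean())   # "no change from the long-run mean"
print(skill_score(model, obs, naive))
```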
I don’t know if you’ve discussed Oreskes et al. 1994 here, but it is directly relevant:
http://www.likbez.com/AV/CS/Pre01-oreskes.pdf
They write that verification and validation of open system models is impossible. I agree.
yes,
We have exactly the same situation with war gaming models. That still makes them useful as heuristic tools.
Steve–your description is interesting, but I do not see how you can classify GCMs as heuristic tools.
War gaming models are used quite differently. Those models allow planners to have insight into a wide variety of potential situations and thereby react very quickly in the event that a similar situation develops.
Roger,
I am not sure I understand the point of your diagram.
The observation is more or less below the point of intersection of the two curves. Move the observation a little to the right and the blue curve has the greater likelihood, move it to the left and the red curve has the greater likelihood.
Adding variance to an overly confident prediction is an improvement.
Alex
The reason Oreskes et al. can write that verification and validation (V&V) of open system models is impossible is because of the way they define their terms. For example, they require “verification” use only logical deduction (as in formal symbolic logic). Therefore, abduction, which is merely logically consistent, would not be allowed.
Yet the scientific method itself relies on abduction for verification (confirmation/falsification). Thus, under their definitions, “complete confirmation” of science is impossible.
IMHO, their insistence on logical certainty is impractical. Science/modeling has been able to get by without it. And engineering and technology have flourished as a result.
Re: “abduction”. Guess?
I think Feynman would agree. But then you have to include the rest of the lecture: If the guess disagrees with nature, the guess is wrong.
Start over.
Hi Roger, there are many different skill scores that can be used; the choice depends on the type of model and the application for which the skill score is being used. Earlier in my post I discussed and provided links for the kinds of skill scores used for ensemble predictions. My point about the bounding box is that it is the test that people seem to be using (e.g. Lucia).
I suggest you read my earlier posts on V&V and their relevance and application to climate models. Oreskes is incorrect IMO, and I have discussed this on previous threads. Verification is mainly about documenting the model and its assumptions, surely that is not impossible. Validation is a more complex issue, but my previous posts describe the thinking out there on how this can be done for climate models, and these ideas are beginning to get some slight traction in a few of the U.S. govt agencies that fund climate models. FYI, a “validated” model does not ensure a “correct” prediction.
Hi Judy-
The “bounding box” that you refer to is not a metric of skill but rather a conventional hypothesis test under a metric of statistical significance.
A model is a tool used to generate that hypothesis. An observation which falls within a 95% CI of an ensemble prediction simply means that the hypothesis cannot be rejected. (And a broad enough CI means that you have a hypothesis that is not falsifiable — e.g., I predict that global temperature will increase 0.2 degrees/decade until 2050 with a 95% CI of +/- 10 degrees — such a prediction is not very useful practically or scientifically). One cannot apply such a hypothesis test to a prediction of the distant future as we have no observations of the future.
It is also possible that such a hypothesis on more reasonable time scales may not be falsified and yet also not show skill (as a naive method may do better).
Climate models are inkblots because the scope of events that are predicted exceeds the scope of potentially observable phenomena, and the time scale for the prediction is always beyond experience, and on such a long-time scale that verification according to experience is not possible.
Climate models are just tools.
Climate models can be considered tools, but stating that they are just tools doesn’t necessarily do them full justice, as the models may also be the best way of summarizing the results of climate science. That models may be the best way of summarizing knowledge on a complex system is not at all unique to climate science, but it’s true in many areas, where the total amount of data and other knowledge is very large, but still incomplete and fragmentary to an essential degree.
The GCM type models are actually models of the Earth System that produce results on temporal and spatial scales relevant to climate. They are largely based on knowledge at a more detailed level, and their validation is done to a very significant degree on level of detail not directly essential for climate projections. One of the problems of the best models is that their performance is known better for unnecessarily detailed data over much shorter periods than what’s of most interest in climate science.
It’s known that the models are capable of much, but that doesn’t solve the difficulty in judging, whether they are valid for the purpose, where most is expected from them. Some climate modelers appear to believe that the indirect validation provides strong support for their fitness to purpose, while not everybody appears to agree on that even among the active developers of those models.
It’s, indeed, difficult to describe to outsiders of the climate modeling community, why and how far they should trust the results of the models. Comparison with ink blots captures something of this difficulty, but has also clear shortcomings as there isn’t anything more behind the ink blots, while the large models are certainly something more, which may just have one projection that looks like an ink blot.
Nope. Not all the system responses of interest exceed the scope of observable phenomena, nor are they always beyond the time scale of experience, and the time scales are not so long as to make verification impossible. To view the problem otherwise is to focus on certain system responses that have been selected as a bumper-sticker PR approach to educating the general public.
The calculated responses are always displayed as a function of time. And the scale runs from the present to far into the future. Many of the calculated response functions between now and then are certainly open for observations within the physical system.
The word verification in the sentence quoted above should instead be validation.
Climate models are just tools.
What do you mean by “just tools”? Hammers and drills, and magnetic resonance imaging devices and the Hubble telescope are tools. Tools are tested for their usefulness and accuracy, and are only accepted for use if they are shown to be useful and accurate.
If the models cannot be shown to be accurate enough for the purposes at hand, then they will not be, and should not be, used as “tools”.
Dr P.
I think this bears some more development. Let’s use an example. In your car you probably have a trip computer that calculates your DTE: distance to empty. A plane has a similar model that calculates a “bingo” fuel.
Your DTE calculation is a very simple model. It says “if you keep consuming fuel at the rate you are, you will run out in xx miles.”
This calculation is very crude. It knows nothing about the road ahead, changes in wind, traffic, etc. But it’s a good tool for the purpose of answering the question “Should I stop for gas?”
Tools are built to do things. In the case of climate models, what is their purpose? What task are they supposed to do? With my DTE model, I’m pretty happy with a model that gets it within a few miles (with a reserve). The model has never run me out of gas. That’s how I measure its usefulness: I follow its instructions and I don’t run out of gas. It was built NOT to represent the reality of future fuel consumption in my car. It was built to keep me from running out of gas. For that purpose it does a good job. Of course some idiotic skeptic might say: oh look, it predicted 10 miles to empty when in point of fact the road ahead was downhill and you actually had 20 miles to empty. In the validation business this kind of objection wouldn’t carry any weight. The tool served its purpose, provided an answer to the real question, and yes, succeeded despite being “wrong”.
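To make the analogy concrete, here is a minimal sketch of such a DTE "tool" (Python; the numbers and the reserve are made up). The point is that a deliberately crude model can still answer the one question it was built for:

```python
def distance_to_empty(fuel_litres, recent_l_per_km, reserve_litres=4.0):
    """Naive DTE model: assume consumption stays at the recent rate.
    It knows nothing about hills, wind or traffic; it only has to answer
    'should I stop for gas?' conservatively, hence the reserve."""
    usable = max(fuel_litres - reserve_litres, 0.0)
    return usable / recent_l_per_km

# 30 litres left, recently burning 0.08 l/km -> roughly 325 km of 'range'
print(round(distance_to_empty(30.0, 0.08)))
```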
It would seem to me that one thing we would want to do is decide which metrics are most important to get “correct.” For example, if we calculate that the biggest damage of climate change is sea level rise, then we start by asking the question:
1. How good do sea level estimates have to be to inform policy?
That specification has to be written before you ever build a model. To be useful a model must (for example)
A. predict a regional sea level mean within X cm
B. Have a variance about that mean of X cm.
The determination of these values starts from understanding the costs first. For example, you start the process by calculating a cost due to sea level rise. That cost curve will tell you what amount of sea level rise is worth looking at. If the cost curve is nonlinear, the requirements for predicting sea level rise will change. On the other hand, imagine that there were no cost from sea level rise: that would tell me as a modeller that I did not have to focus effort on getting that piece perfect.
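A toy illustration of that point (Python; the damage-cost curve and the cost tolerance are entirely hypothetical): the steeper the cost curve is where you expect to be, the tighter the accuracy the model has to deliver.

```python
def required_accuracy(cost_fn, expected_rise_m, cost_tolerance, step=0.001):
    """Largest sea-level error (in metres) whose cost impact stays within the
    stated tolerance, for an increasing cost function. Illustrative only."""
    err = 0.0
    while abs(cost_fn(expected_rise_m + err) - cost_fn(expected_rise_m)) < cost_tolerance:
        err += step
    return round(err, 3)

convex_cost = lambda rise_m: 1.0e9 * rise_m ** 3   # damages accelerate with rise

print(required_accuracy(convex_cost, 0.3, 5.0e7))  # gentle part of the curve: ~0.13 m tolerable
print(required_accuracy(convex_cost, 1.0, 5.0e7))  # steep part: only ~0.02 m of error allowed
```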
The problem is they didn’t start the design of the GCMs as if they were a tool to answer a specific problem. They built them to represent “reality”. It’s exactly backwards from good tool development.
“The problem is they didn’t start the design of the GCMs as if they were a tool to answer a specific problem. They built them to represent “reality”. It’s exactly backwards from good tool development.”
It is kinda catch-22: they need to reasonably represent reality to be a useful tool for what they are intended to predict.
So for what purpose would you bet your house or job on the outcome of a climate model? For prediction of temperature x years from now? Precipitation? Ice melt? Sea level rise?
Dallas,
I’m not convinced that you have to “represent reality” to have a useful tool.
It’s part and parcel of the unexamined belief that science works by representing “reality”.
It works because it works. We like to explain that by saying that the utility of science is derived from its isomorphism with the “real” world.
but then I’m a pragmatist or instrumentalist; take your pick.
Steven Mosher said, “It’s part and parcel of the unexamined belief that science works by representing “reality”.
It works because it works. We like to explain that by saying that the utility of science is derived from its isomorphism with the “real” world.”
With most tools I would agree: if they work, they work. Climate models are attempting to predict a century in the future with not much skill in the short term, so they have set their own bar of representing reality. As a tool you could just cut and paste the last hundred years and splice on a CO2 trend; that would be a visual tool. As it is, the tool is too complicated to be useful, and simple tools get used more often.
Climate models are just tools.
Thinking about this some more, I would say that current climate models are “prototypes”, not “tools.” I thought of calling them “toys”, but “prototypes” is better. Eventually this line of endeavor will produce something that is useful and demonstrably useful – a tool. But not yet.
They are maps. Not the territory. But maps can be useful no doubt.
“Climate models are just tools.”
Tools that did not produce wrong results might as well be discarded altogether. Why linger on them?
Double negative, my mistake, so: might as well be discarded!
Dr. Pielke,
Steve Easterbrook has stated that the V&V approach discussed in Oreskes et al. is not feasible, but has discussed other techniques that he believes can be used. Do you disagree?
Gene- Easterbrook’s nice post seems perfectly compatible with my comments here. Thx.
Dr. Pielke,
So, to clarify, it’s the approach in Oreskes et al. that’s impossible, not all verification and validation.
Gene- I suggest that you re-read Easterbrook, he does not take issue with Oreskes et al.
Dr. Pielke,
Agreed, Easterbrook does not contradict Oreskes et al.
However, he also seems to indicate that other techniques can be used to provide V&V. I was trying to reconcile that with your agreement with the statement “…that verification and validation of open system models is impossible.”.
The paper by Easterbrook and Johns on the practices of the Hadley Center describes his approach through a concrete example.
http://www.cs.toronto.edu/~sme/papers/2008/Easterbrook-Johns-2008.pdf
Reading it, one can see what kind of validation is applicable to climate models in his view. I think the arguments are largely valid, although there are certainly problems not given proper emphasis in the paper.
I’ve dropped a comment over at Steve’s blog in an attempt to reconcile my industry experience of V+V with the V+V sued for climate models.
I just can’t (currently) get past the main assumption and the lack of real-world validation.
used not sued :-)
LM,
I have some definite reservations about some of what he says re: validation. It’s a thorny issue in that a model is really a collection of models (atmosphere, ocean, biosphere, etc.). I can accept his contention that the sub-systems should be validated against theory (assuming the theory is validated against observations), but once those sub-systems are coupled, the whole should be validated as a unit if the goal is to present the model as representative of the entire system.
Another issue is the prediction vs hindcast issue. I’ve argued that predictions are inherently problematic in that unpredictable events (such as volcanic eruptions) introduce factors that have to be adjusted for. Those adjustments just provide more opportunity for dispute. Hindcasting, however, gives the opportunity to validate against an agreed upon set of conditions. I understand the concern re: potential to “cheat”, but, in my opinion, this could be controlled for.
It should be noted that the groups involved with modeling are making strides toward more mature processes. One big obstacle seems to be the split nature of their goals. They’re trying to do both investigative work and “production” work (such as CMIP runs) at the same time.
Will you identify the specific properties of the mathematical models and numerical solution methods used in GCMs that are impossible to verify?
And what are the specific aspects of the response functions of interest in the physical system that are impossible to validate?
Thanks
If you read Oreskes you will see what she is getting at.
Let’s take the inputs for AR5. They are online. You will see files of forcings for volcanoes, solar, CO2, methane, etc.
You run the models with those inputs and now you come to validate.
Basically, you can’t, unless the future inputs turn out the way your projected inputs did.
Well, you might be able to, but you’d have to specify a bunch of different emission scenarios and explore the parameter space fully.
You might be able to get a handle on it if you emulate the GCM and explore the parameter space that way.
In any case our notion ( yours and mine) of Validation would have to be expanded somewhat.
Hey Steven, how’s it going?
I’ve read Oreskes more than one time.
Your characterization is a little light on the impossibility aspect with respect to Validation.
Even controlled experiments, when pre-test predictions are the objective, fail to achieve the originally specified boundary and/or initial conditions. The accepted approach is, I think, to change the BCs and/or ICs and redo the calculations while changing absolutely nothing else.
You did not mention Verification.
In this thread, and many others scattered all over the Web, it is flatly stated that Oreskes proved the impossibility of verification and validation.
Oh, I forgot to note that not all perturbations in BCs and ICs significantly impact all response functions of interest. Your interpretation of Oreskes seems to imply that the paper was written to address a particular response function for a particular open system.
That’s again far from my understanding of the concept of impossibility.
Hmm.
I will have to go back and reread her. As always I respect your position on these things. Are we arguing about the exact meaning of the word “impossible”? Reading your comments, however, I think I’m seeing your point.
Verification? Crap, where do you start? I suspect that some parts of the model are deemed verified because they have been in use for decades.
Oh, it’s going great. Having fun with R.
Are we arguing about the exact meaning of the word “impossible”..
The Oreskes paper is mainly quibbling over dictionary definitions of two words which are synonyms in common use (but which have been given specific meaning by technicians that you won’t find in Webster’s), so welcome to the STS club ; – ) All models are “open” to some degree (it is not binary, “open” or “closed”); some feedbacks are cut, unknown unknowns lurk, our imagination is feeble, and our time is short. Imperfectly known ICs and BCs present no special problem to validation (that’s what inference is for).
I disagree (in different particulars) with Oreskes, Easterbrook and Pielke Jr. on this one, and agree with Roache (and in a round-about way, Jaynes). For decision support (VV&UQ), we should be interested in useful answers rather than effete philosophizing.
Even if the Left didn’t obviously want the money to continue the funding of an ever expanding secular, socialist government, and even if we did not know that the Left is willing to use any means–even a hoax and scare tactics–to achieve their Marxist Utopia, and even if the Left admitted that no matter what happens in America the rest of the world, and most especially the energy-deprived Third World and developing countries, will continue to use oil and coal and nuclear energy, it wouldn’t change the fact that the null hypothesis of AGW theory, that all climate change is natural, has yet to be rejected.
Natural variability is the null hypothesis, but very little funding goes toward the study of how it works. The contention that planetary orbital dynamics have much to do with it has been shown to be valid for the long-term cycles in the Earth’s orbital changes, with the idea that ice ages occur over 90% of the time except when the solar system is passing through one of the spiral arms of the galaxy; then, as now, we are in an interglacial period.
With an interest in how the patterns in the solar system might influence the dynamics of the weather processes that end up being the climate, I undertook the problem of looking at patterns in the weather that might make for a process that would allow mid-range forecasting in the 5 to 20 year range, as well as the short-term periods of the solar/lunar interactions.
I found that every 6558 days the inner planets have a harmonic repeating pattern, 27.32 days short of the Saros cycle of the repeating pattern of the solar and lunar eclipses, due to the synchronicity of the tidal and gravitational interactions between the inner planets.
To investigate whether that 6558-day-long modulated pattern contained any usable signals that would assist weather forecasting, I set up the raw data into tables, one file for each date of record, then pulled the data from the tables for three repeats of the 6558-day pattern and averaged them together for each of the 6558 days of the cycle.
The results are presented from the past three cycles as a resultant combined data set used to produce the maps found on my web site.
http://www.aerology.com/national.aspx
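For what it is worth, the compositing step described above (averaging records over successive repeats of a fixed-length cycle) can be sketched generically as follows; the data here are random placeholders, and this says nothing about whether the 6558-day period itself carries any predictive skill.

```python
import numpy as np

def cycle_composite(series, period_days, n_repeats):
    """Average a daily series over successive repeats of a fixed-length cycle
    (a superposed-epoch style composite), giving one value per day of the cycle."""
    needed = period_days * n_repeats
    chunks = np.asarray(series[-needed:]).reshape(n_repeats, period_days)
    return chunks.mean(axis=0)

rng = np.random.default_rng(1)
daily = rng.normal(size=6558 * 3)             # placeholder 'daily data'
print(cycle_composite(daily, 6558, 3).shape)  # (6558,)
```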
Past evaluations of the forecasts produced have suggested that they are as accurate as the NWS 3 to 5 day forecasts. With further investigation I have added a fourth 6558 day long cycle to the mix and fixed some of the data problems that I have found.
I am going to add Alaska, Canada, and Australia to the maps presented, and at the same time I am going to use an independent forecast evaluation service to check the accuracy of the new expanded data base and forecasts generated from them.
http://research.aerology.com/project-progress/map-detail/
where the images and links I cannot place here can be seen. I have no qualms about presenting my ideas on how the null hypothesis works, or about evaluating how the results of these up-to-18-year-long forecasts are performing. The only assumption I have made is that if there is something there, it should be visible enough to stand on its own.
Richard Holle
I admire your guts on suggesting that you have discovered a mean-value limit cycle in the state space. I presume that you are just presenting mean values for the estimates of the forecast parameters. Pretty cool if it works out.
This is true, in so far as no physicist will ever win a Nobel Prize for showing that a power-law is due to plain garden-variety disorder instead of some critical phenomena. Entropy rules but it is pretty dull stuff.
The process is based on the phase lock between the N/S lunar declination and the magnetic rotation of the sun, where the electromagnetic variation in the solar wind polarity shifts are driving the declinational movements of the moon, hence the declinational tides of the moon upon the atmosphere develops the patterns in the Rossby waves due to the resultant meridional flows off of the equator. The jet streams are just where the polar air masses meet the equatorial air masses, with a more neutral ion depleted swath in between.
Being able to predict the strength and timing of the repeats in these effects gives reliable predictions on the global circulation patterns that are supposedly the drivers of the weather, but in fact are being driven by solar/lunar tidal effects.
http://research.aerology.com/supporting-research/leroux-marcel-lunar-declinational-tides/
http://research.aerology.com/natural-processes/solar-system-dynamics/
… and, add to that the role of the big planets (Saturn and Jupiter) in the changing North Pole due to variations in the magnetosphere, along with changes in gamma radiation, the Earth’s ‘molten outer core’ and the mathematics of chaos, to explain the fact that the ‘weather,’ to a degree certain and yet unfathomable but all part of a holistic process, makes its own ‘weather.’ Looking at the ‘Earth’s rotation/sea temperature’ as a single unit, changes in ‘atmospheric circulation’ act ‘like a torque’ that can in and of themselves cause ‘the Earth’s rotation to decelerate, which in turn causes a decrease in sea temperature.’ (see Adriano Mazzarella, 2008)
Can’t resist: Ink Blog.
==========
Hmm. Do you see what I see?
(Or as the old joke goes, two East German soldiers guarding the Berlin Wall look longingly across to the Western zone. “Are you thinking what I’m thinking, comrade?” asks one. “Of course,” says the other. “In that case you’re under arrest.”)
Lucia at the Blackboard is testing the IPCC model predictions against subsequent temperature data. See GISTemp: Up during August!
Referring to: GISTemp Land and Sea Anomaly Trend
Similarly for: May T Anomalies: Cooler than April.
Referring to: NOAA/NCDC Land and Sea Temperature Anomaly Trend
Is such objective quantitative evaluation of model predictions to be considered in “Climate Science” or not? If not why not?
Pielke Jr. gives 10 key conditions in “Simple math and logic underpinning climate pragmatism”:
http://rogerpielkejr.blogspot.com/2011/07/simple-math-and-logic-underpinning.html
I agree with Pielke Jr. that unless each of those conditions is met, climate models are irrelevant.
Simple math and logic Pielke Jr. 28 July 2011
Note especially:
The real motivation for Climate Scientists will be the opposite way around from what is being suggested in this thread.
Since Judith has started to curry (pardon the pun) favour with climate sceptics, her profile will have increased at least tenfold and I’d say she’ll now be able to charge slightly more for her lectures, TV appearances and consultancy than previously. Would I be right there?
Furthermore, if she, or any other climate scientist, did manage to actually crack the current consensus in any meaningful way showing that CO2 emissions were indeed as benign as many commentators on this blog seem to imagine they are, then the sky would surely be the limit.
It’s all about Judith, Judith.
============
It’s all about the truth, is it not??
I would say that no climate scientist is actually “in it” for financial reasons.
The poor salaries on offer just make me wonder how anyone can seriously think that money is any sort of motivation to fiddle the results. A quick Google turned up this example:
http://www.earthworks-jobs.com/geoscience/south11091.html
For what? Just so they can get an extension on their contract at pretty much the same sort of salary?
Well, I mostly get $0 for my lectures; I don’t ask anything for them, but occasionally one is associated with an honorarium. I have a lecture in a few weeks where I will actually get $1000. My TV appearances are very rare and unpaid. My consultancies have nought to do with skeptics; they come from companies with a specific problem to address, and they view my skills/analysis as being able to make a contribution.
The dynamic for other climate scientists may be different, I speak only for myself.
Al Gore’s speaking fee is $145,000.
http://www.studlife.com/news/assembly-series/2011/09/12/su-to-vote-on-funding-gore-others-to-speak-2/
Maybe if I am ever elected vice president of the U.S., I can charge those kinds of fees also when I step down
Sorry, feet not big enough to step into those footprints.
================
Judith
We need our decision to visit this blog and ‘buy’ your brand to be validated. Charging $0 does not help the credibility of Denizens. Please put your rate up to $100,000 immediately :)
Ps Let us know the outcome.
Tonyb
Dr Curry
Please don’t ever try to justify yourself. It’s not necessary. This blog speaks volumes about the way you approach climate science and it sure ain’t about feathering your nest!
Chief Hydrologist | September 18, 2011 at 9:23 pm |
Chief, as you may know, the “Rule of Seventy” relates compound interest and doubling time. It says that something growing at three percent per year will double in 70 / 3 ≈ 23 years, that is to say, by 2034.
You claim that we need to double our food supply by 2034? I’ve never heard anyone say that was needed. Our population will likely go up, but it is slated to increase by only about 15% between now and then … so why would we need to double the food supply in 23 years?
Cite?
w.
If you continue the calculations, 3% annual growth does mean that GDP (not just food, but energy and all manufactured products, services, etc.) worldwide would increase by a factor of about 15 by the end of the century.
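A quick check of the arithmetic (Python): the exact doubling time at 3% is close to the Rule-of-Seventy estimate, and 3% per year compounded from 2011 gives roughly a 14-15x factor by 2100.

```python
import math

def doubling_time_years(growth_rate_pct):
    # Exact doubling time; the 'Rule of Seventy' (70 / rate) is an approximation.
    return math.log(2) / math.log(1 + growth_rate_pct / 100)

print(round(doubling_time_years(3), 1))  # ~23.4 years, i.e. roughly 2034 from 2011
print(round(1.03 ** (2100 - 2011), 1))   # ~13.9, in the '~15x by 2100' ballpark
```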
‘By 2050 the world’s population will reach 9.1 billion, 34 percent higher than today. Nearly all of this population increase will occur in developing countries. Urbanization will continue at an accelerated pace, and about 70 percent of the world’s population will be urban (compared to 49 percent today). Income levels will be many multiples of what they are now. In order to feed this larger, more urban and richer population, food production (net of food used for biofuels) must increase by 70 percent. Annual cereal production will need to rise to about 3 billion tonnes from 2.1 billion today and annual meat production will need to rise by over 200 million tonnes to reach 470 million tonnes.
In conclusion, under the assumptions made for the baseline modelling of the outlook towards 2050, food security for all could be within reach. The conditions under which this can be achieved are strong economic growth, global expansion of food supplies by about 70 percent, relatively high production growth in many developing countries achievable through growing capital stock, higher productivity and global trade helping the low income food deficit countries to close their import gaps for cereals and other food products at affordable prices.
It should be underlined, however, that these projections do not yet take into consideration the possibility of a more intensive competition between food and energy commodities for the limited land and water resources. As the recent crisis has demonstrated, under certain conditions (high oil prices, first generation biofuel technologies, government support in several countries), the production of biofuels can expand rapidly and contribute significantly to price increases and scarcities in the food and feed markets.
It is obvious that the positive vision presented here contrasts strongly with the reality of recent trends. The number of chronically undernourished and malnourished people in the world has been rising, not falling. FAO estimates that the number of chronically undernourished people has risen from 842 million at the beginning of the 1990s to over one billion in 2009. The recent increase was mainly the consequence of the recent financial crisis and the drastic food price increases and occurred although harvests had reached record levels.’ http://www.fao.org/fileadmin/templates/wsfs/docs/expert_paper/How_to_Feed_the_World_in_2050.pdf
3% is the ballpark given a little more ambition for global economic growth.
15 times GDP growth over the rest of the century must happen for the human race to transition to a bright and limitless future.
That report is a very “glass half empty” one.
Yes, it is bad news that the number of undernourished people has gone up by 150 million or so; however, it has to be put in the context of the increase in total population. This has gone from 5.8 billion in 1990 to 6.8 billion in 2010. So out of an extra 1 billion mouths to feed, we are managing to feed 850 million of them adequately.
This means that the percentage of undernourished people has decreased. I for one think that this is a cause for rejoicing, we are adequately feeding more of the human population than ever before.
GDP is artificial and will not be used in 30 years time
Willis
‘Magadoff and Tokar (2009) concluded that 12% of the global population –approximately 36 million people- suffer from hunger and live without secure access to food. Decreased food production in less developed countries, increases in the price of food, and growing production of bio-fuels are responsible for current rates of food scarcity. Global warming, crop diversity loss and urban sprawl also affect agriculture production. Kendall and Pimentel(1994) note that current per capita grain production seems to be decreasing worldwide. The situation is particularly distressing in Africa, where grain production is down 12% since 1980. Africa only produces 80% of what it consumes (Kendall and Pimentel, 1994:199)
For most countries, population growth rate is approximately 2-3% a year, which should translate to an annual increase of 3-5% in agriculture production levels. (Kendall and Pimentel, 1994: 202) Kendall and Pimentel designed three models to predict crop levels by 2050. They concluded that if production continues at its current rate, per capita crop production will decline by 2050. The possibility of tripling today’s current crop production is unrealistic (Kendall and Pimentel, 1994).’
Given the uncertainties in the projections – you still want to turn this into a you’re wrong and I’m right p…ing competition?
The FAO report is optimistic that food demand can be met in 2050 – but it will not be easy.
Robert I Ellison
Chief Hydrologist
Chief, I can find nothing in your citations or your claims that says we need to double the food production by 2034. Not one word.
You have re-asserted it, but that doesn’t do anything. I read the FAO file. It says we need to increase food production by 70% by 2050.
It doesn’t say we need to double food production by 2034. Anywhere.
Admit you were wrong and move on. Nobody thinks we have to double food by 2034 except you.
w.
Chief and tempterrain
Interesting extrapolations.
The Chief’s world population estimate of ~9 billion by 2050, leveling off at ~10.5 billion by 2100, seems to match UN mid-range projections.
It is clear that the global population growth rate has already started to decline.
One study paints a dismal future of global population die-off starting in only a few decades with world population dropping to 1 billion by 2100 as energy sources decline, but IMO this is too pessimistic.
http://www.paulchefurka.ca/WEAP/WEAP.html
From 1960 to 2010 population grew from ~3 to ~7 billion; this equals a compounded annual growth rate (CAGR) of 1.71% per year.
Over this same period atmospheric CO2 increased from 316 to 390 ppmv, or at a CAGR of 0.42% per year.
GDP increased by 4.5% per year CAGR over this same period, or 2.6 times the population growth rate.
From 2010 to 2050 the UN expects population to grow from ~7 to ~9 billion; this equals a compounded annual growth rate (CAGR) of 0.63% per year.
And from 2050 to 2100 the UN expects population to level off at around 10.5 billion; this equals a compounded annual growth rate (CAGR) of 0.31% per year.
If we assume that atmospheric CO2 levels will continue to increase at the past 0.42% CAGR, we would arrive at 570 ppmv by year 2100 (this is close to IPCC’s “scenario and storyline” B1).
The carbon efficiency of an economy can be defined as the GDP generated by that economy in $ divided by the CO2 emitted by that economy in tons (see table).
http://farm6.static.flickr.com/5011/5500972088_54742f12be_b.jpg
The economically developed nations (EU, Japan, USA, Australia, Canada, etc.) have a “carbon efficiency” of $2,000 to 3,500 per ton of CO2, while the developing nations are at around $700 per ton.
I would expect that the developing and underdeveloped nations will continue to develop their growing carbon-based economies with the industrially developed nations continuing at a slower rate of growth.
At the same time, I would expect that the industrially developed nations will continue to improve their carbon efficiencies as they have in the past, as fossil fuels become more expensive and alternate technologies become more competitive, and that the developing nations will improve their carbon efficiencies to around the same levels as those of the developed nations today.
This means that by 2100 we will have 50% more people on Earth to feed than today.
And we will in all likelihood feed them better than today, as well, if we don’t get sidetracked into dead-end strategies like killing our carbon-based economies before we can supplement them with proven economically and politically viable alternates.
Whether this all translates into a 15-fold growth of GDP to year 2100 is another question. This figure seems high if population is only growing by 50% (we’re not all going to be 10x as affluent in constant dollars).
A more reasonable estimate IMO would be that GDP continues to grow at a rate which is 2.6 times the population growth rate, as it has in the past. This would mean a CAGR of 2.6 * 0.45% = 1.2% (rather than 3%) and a 2.9-fold growth to 2100 (rather than 15-fold).
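The compounding arithmetic above is easy to reproduce (Python; the inputs are just the round figures quoted in the comment, not independent data):

```python
def cagr(start, end, years):
    """Compounded annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

print(f"{cagr(3e9, 7e9, 50):.2%}")     # population 1960-2010: ~1.71%/yr
print(f"{cagr(316, 390, 50):.2%}")     # CO2 1960-2010: ~0.42%/yr
print(f"{cagr(7e9, 10.5e9, 90):.2%}")  # population 2010-2100: ~0.45%/yr
print(round(390 * 1.0042 ** 90))       # ~570 ppmv by 2100 at the past CO2 rate
print(round((1 + 2.6 * cagr(7e9, 10.5e9, 90)) ** 90, 1))  # ~2.9-fold GDP growth by 2100
```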
But hey, guys, this is all just crystal-ball gazing.
Max
G’day Max,
I agree with most of what you said. I expect, however, that 10%/year growth is possible and desirable in many areas of the world given appropriate trade and internal governance provisions. Democracy, the rule of law and free trade and free markets. An increase in income over the century for many people from $1.00/day to $2.90/day (2.9 fold growth) is nowhere near sufficient.
Will the global economy grow at 3% for the rest of the century? I think it likely that that is the low end of expectations – and the only way to stop it is to dismantle democracy and free markets. That would be resisted.
Cheers
The question of the 3% isn’t really about “need”. There is enough food produced to feed everyone now.
However, the system we are living under seems to only function efficiently if there is indeed growth, not just in food production but in everything else too, of around a few percent per annum, as CH mentions.
For instance, in many western countries production has fallen instead of risen in the last few years. It’s not been a dramatic fall; in the year 2010, production may have fallen back to something like 2005 levels. However, the people in those countries are certainly feeling much poorer than they were in 2005, with unemployment etc. being far higher.
So what’s the solution?
tempterrain and Chief
The 3% projected compounded annual growth rate for GDP does not appear to make sense if population growth is going to decrease from the 1.7% CAGR of the past to the UN projected CAGR of 0.45% (from 7 to 10.5 billion by 2100).
The issue is that the large developing economies (BRIC plus the Asian “tigers”) will see a large percentage of future GDP growth, the advanced industrial economies (Europe, N. America, Japan, AUS, NZ) will see a smaller percentage growth, and the rest of the world (including all the underdeveloped economies) will see the highest percentage growth, as the poorest nations move out of abject poverty through industrial development.
So, instead of a global GDP that is 15X that of today in constant $, we would see one that is around 3X that of today.
3X the global “affluence” (in constant $) with 1.5X the global “population” translates into a better life for everyone (especially the poorest of the world, who would see the largest change from today).
Today the “advanced” nations have 14% of world population but 50% of GDP, while the “large developing” nations have 44% of population and 35% of GDP and the ROW has 42% of population but only 15% of GDP.
I believe it is in everyone’s interest that these percentages shift so that the poorest of the world can earn a higher percentage of the total through industrial development, as the wealthier nations have done in the past.
But this will require that we do not get side-tracked into senseless top-down carbon-cutting schemes and programs, which will only be counterproductive.
Max
Perhaps the following succinctly characterizes IPCC climate science:
Weather or Not?
Whether weather is climate or whether it’s not,
depends on whether it’s cold or it’s hot.
If it’s cold it’s just weather, whether or not
it’s cold all the time and never gets hot.
If it’s hot it’s the climate, whether or not
it was cold yesterday and just now it got hot.
So, weather is climate whenever it’s hot,
but climate is weather whenever it’s not.
As the French would say, “That may apply in Practice, but does it work in Theory?”
From Dr. Curry’s conclusions: “Improvements to [climate] models are ongoing at a rapid pace, but such improvements do not always decrease model outcome errors when compared with observations.”
I am reminded of Fred Brooks’ famous book on software engineering, “The Mythical Man-Month”, in which the author observes “in a suitably complex system there is a certain irreducible number of errors. Any attempt to fix observed errors tends to result in the introduction of other errors.” [This phrasing of Brooks is from http://en.wikipedia.org/wiki/The_Mythical_Man-Month]
I do recall anecdotal evidence that IBM could not lower the number of bugs in their operating system, due to the complexity of the system (and also to human-related reasons, discussed by Brooks as well).
Some time ago, curious about the climate model software, I downloaded some code from one of the publicly available models. I was both impressed and appalled. Impressed at some of the physics, well above my pay grade, that was involved, and at the size and complexity of the code. Appalled because the quality of coding and documentation did not seem better than industry standards circa 1970. (My opinion upon brief inspection; others may differ.)
Referring to my first-paragraph quote from Dr. Curry, it is not surprising that rapid changes to climate model code do not cause the models to better track observations. It is a really tough problem.
For what it’s worth, I think that the issues here are not the definitions of V+V but (as ever) the application. For the purpose of climate model screening, V+V is next to useless. V+V is designed as a process for evaluating known systems, i.e. you need a decent understanding of the system or model you are attempting to V+V.
The verification, in this context, is checking that the model is operating according to the assumptions/parameters programmed into it, and given the level of knowledge of the climatic system this is entirely subjective.
Validation is different, and I think the incorrect application is being used. To validate the model you have to demonstrate, reproducibly, that the model accomplishes its intended requirements. Herein lies the rub: are the intended requirements to model the climate accurately, or to model the warming accurately? Or a poorly defined subset of both?
Let’s assume we want to model the climate proper. For a full validation, evaluation against observational results is a must. If the model does not accurately reflect the approximate trends observed, reproducibly, then the model can be said to have failed validation. This is not something you can ‘bend’ to suit climate science modeling; if you do not do this, then the process you are using to ‘validate’ your models is NOT validation.
On a wider note the ensemble model method is an intriguing one, and if used appropriately I think could have some real benefit, however I think it is being used more in the ‘stick them all in and one will show us what we want’ meme. A strict, iterative application of the ensemble method could provide some interesting results and further, actually advance climatic modeling. As it stands, I’m not sure we’ll get there any time soon.
Finally, I think the chap writing the PhD thesis missed a trick. He could have added another dimension to his piece by comparing those V+V procedures to engineering/industry V+V. That would have allowed him to draw more conclusions and make suggestions on improving the systems that are used.
Labmunkey
What you write about validation makes sense. Let me simplify.
A model must be validated against actually observed data (rather than simply against other models).
– Model projects 0.2C warming per decade.
– Observations show slight cooling over decade.
– Model has failed validation.
“My projection was correct, except for…” doesn’t count (see Taleb).
Max
That’s it effectively.
I get the impression that the climate ‘modellers’ see V+V as a hurdle to get over, rather than the incredibly useful tool it actually is.
Using V+V correctly you could get a really useful iterative design process going, which would almost certainly (by nature of design) work by increasing the accuracy of the models over time. This would clearly be classed as more of an in-process V+V, but it’d work. Heck, I use this process daily at work.
Rather than use the simplest (and arguably most effective) method for model selection and improvement, more and more complex and contrived methods seem to be implemented. It’s odd; perhaps I’m missing some nuance, but the whole approach just doesn’t make sense to my addled mind.
“Model projects 0.2C warming per decade.”
OK, the ’00s were 0.17 deg warmer than the ’90s, with similar figures for the ’90s over the ’80s, etc.
I suppose you could argue that they didn’t get it quite right!
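For illustration, here is how such a decadal-mean comparison looks in a few lines (Python; the anomalies are purely synthetic, with a 0.017 C/yr drift chosen only to mimic the quoted 0.17 C/decade figure):

```python
import numpy as np

def decadal_mean_difference(anoms_by_year, first_decade_start, second_decade_start):
    """Difference between two decadal-mean anomalies, from a {year: anomaly} dict."""
    mean = lambda start: np.mean([anoms_by_year[y] for y in range(start, start + 10)])
    return mean(second_decade_start) - mean(first_decade_start)

rng = np.random.default_rng(2)
synthetic = {y: 0.017 * (y - 1990) + rng.normal(scale=0.08) for y in range(1990, 2010)}
print(round(decadal_mean_difference(synthetic, 1990, 2000), 2))  # near 0.17, plus noise
```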
The problem is THE model versus the model ENSEMBLE. The ensemble can verify the model, but observations can only validate THE model. Think of the wonderful UKMET hurricane model. As part of the ensemble, the MET can ignore how poor their model’s performance is; after all, it is close to at least one of the other poor models. So we need to establish the bureau of model reality. The worst performing model gets no more funding.
As Steve Mosher said above, when looking at the various models they each have strong points and weak points. Some model the surface temps well but don’t do other areas properly. This appears to be true for all of them.
This is why ensemble means worry me. It strikes me as akin to having 6 car production lines that each make 100 cars. One makes cars without motors, another no doors, another no gearbox, another no windows, the next no interior and the last have no wheels. But apparently, on average, they are said to make good cars.
No, I don’t have 600 good cars “on average”, I have 600 piles of useless junk.
Could somebody please explain to me why I should view ensemble means in climate models any differently?
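A toy numerical version of that complaint (Python; the "models" and their biases are invented): members that are individually far off can still average out to an ensemble mean that looks skilful.

```python
import numpy as np

observed = 0.6
biases = np.array([+1.0, -1.1, +0.9, -0.8, +0.05])  # five badly biased 'models'
models = observed + biases

print(abs(models.mean() - observed))  # 0.01: the ensemble mean looks good
print(np.abs(models - observed))      # but four of the five members are far off
```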
To say nothing of aircraft modeling.
Steve Mosher, all I’m hearing in reply is crickets ??
Funding twenty models can only be justified on the basis of winnowing and catching the kernels. It’s incorrect to depend upon ensembles, effectively mashing chaff and seed together and trying to make it edible.
It is a political bind. Bound to the absurdity of ensembles, no criticism of the models can be tolerated.
I suspect the models will burst these temporary bonds, pitifully inadequate digital simulacra though they are.
==================
Oh what issue?
On the issue of including bad models in the ensemble?
I’ve been on the record against that since 2007, when I first started learning about this stuff. Go read every comment at CA, RC, WUWT and Lucia’s. You’ll see I am pretty consistently in agreement with Willis’ position on this. We disagree about a few things (some important), but the notion that the models should be winnowed has a long history in my argument. In fact, I argued that we should actually have a downselection of models: a standard model, standard hardware, standard input datasets, especially for input into a policy document.
And so it shall be done.
===========
Steve,
All models will be bad models to the so-called skeptics. Unless, of course, they start to predict what they’d like them to predict. Models will of course continue to improve, the available processing power to run them will become greater, but will those improvements ever be enough to convince anyone who is sufficiently determined to remain ‘skeptical’ ?
Hopefully, you know that is not true. Skepticism has grown with the number of excuses for the observations falling below the estimates. Now that ARGO data is being used, it has to be wrong, just like UAH MSU was wrong and now RSS MSU is wrong. The model estimates are high. Get over it.
Judith wrote:
I like Pielke’s reference to “inkblots”. This “inkblot” or “Rorschach” test methodology was developed by Swiss psychoanalyst Hermann Rorschach in 1917 and is still being used in psychiatry to arrive at a holistic analysis of an individual patient’s personality.
One patient will interpret the blot as a beautiful butterfly, while the other will see a threatening flying dragon. A third will have a hard time seeing anything but an ink blot.
If an individual is psychologically programmed (or motivated by fear) to see the threatening dragon, it will be there. Repeatedly.
Like Judith wrote, it’s all about interpretation.
And then, even more importantly, not taking anyone’s interpretation too seriously.
Max
Joshua Stults, and others, have clearly delineated that Verification and Validation, as these technical terms are used in present-day computational science and engineering, are two distinctly different activities. As such, one cannot simply say that V&V are not possible in the sense that these are almost always lumped together whenever the GCMs are the subject. Lumping the terms together in a single sweeping declaration of impossibility is an extreme generalization that prevents further useful discussion of how to proceed relative to accomplishing V&V for the GCMs.
Instead, in order to begin an orderly attack on solving the V&V problems of GCMs, each term must be considered separately and the proper tools for each applied to the appropriate parts.
In the present-day sense of V&V, at the zeroth-order cut, Verification is basically a mathematical procedure (see Josh’s post for a more complete characterization). From this point of view, then, the important step which would allow progress toward application to GCMs is to identify those specific aspects of GCMs that might be unique relative to the mathematical procedures that have been used for Verification of other models and codes. These have never been set down anywhere, so far as I am aware; instead Verification is simply declared to be impossible.
Right off, I can think of one or two. The first is very important and is as follows. I think it is a correct characterization that no calculation with any GCM has ever been done in the asymptotic range of the various numerical solution methods that are used in the codes. The method of manufactured solutions (MMS), the gold standard for Verification, requires that calculations be done in the asymptotic range. A second aspect is the very large number of algebraic parameterizations that are the backbone of GCMs. Some of these require that physically sensible dependent variables be used: positive pressure, mass and energy, and transport and thermo-physical properties, for example. The equation of state for real water, for example, cannot be evaluated with negative pressure and temperature. Some parameterizations additionally include the spatial and temporal increments as independent input. Such parameterizations will always present problems relative to mathematical Verification.
In addition to the impact of these relative to the MMS procedure, until the spatial and temporal increments are such that the calculation is in the asymptotic range, refinement of the increments, and changes to the parameterizations and other model improvements, will always present the possibility that the calculated numbers will deteriorate in quality when discrete-increment refinement is carried out.
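For readers unfamiliar with the asymptotic-range requirement, a minimal sketch of the standard verification check is shown below: the observed order of accuracy computed from errors on successively refined grids should approach the formal order of the scheme (the error values here are made up, not taken from any GCM).

```python
import math

def observed_order(error_coarse, error_fine, refinement_ratio=2.0):
    """Observed order of accuracy from errors on two grids; in the asymptotic
    range it approaches the formal order of the discretization."""
    return math.log(error_coarse / error_fine) / math.log(refinement_ratio)

errors = [4.0e-2, 1.1e-2, 2.6e-3]  # manufactured-solution errors on grids h, h/2, h/4
print(round(observed_order(errors[0], errors[1]), 2))  # ~1.86
print(round(observed_order(errors[1], errors[2]), 2))  # ~2.08, consistent with 2nd order
```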
Finally, complexity and non-linearity are also not sufficiently unique descriptors of some of the difficulties associated with application of V&V procedures and processes to the GCMs. We live in a non-linear universe, all significant, real-world problems are non-linear problems. Simply throwing these terms out onto the table, again lumped together, is another way of avoiding the fact that real progress in V&V has been successfully accomplished for a wide range of wicked, real-world, non-linear, highly complex problems in engineering and scientific mathematical modeling and computation.
A mathematical model of a modern fossil-fuel electric-generation plant (e.g. a coal-fired, wet-wall boiler plant), for example, has all the physical aspects that are a part of the GCM problem: fluid flow, conduction, radiative energy transport, phase change, i.e. multi-phase, multi-scale thermal sciences in general. Additionally, many rather wicked nitty-gritty aspects are more important in this application than in the GCMs and thus must be more carefully handled, e.g. combustion and radiative energy transport in an extremely messy interactive medium and the distributions of the phases of water within the confines of complex engineered equipment.
Real progress on applying present-day Verification and Validation to the GCMs will be possible only when the many specific aspects of the target models, methods, and software have been sufficiently delineated to allow intense focus on the real problem areas. Progress will not be made so long as we can simply throw apple-pie and motherhood boiler-plate terms onto the table, wave our arms and hands and say, It’s impossible.
We live in a non-linear universe, all significant, real-world problems are non-linear problems. Simply throwing these terms out onto the table, again lumped together, is another way of avoiding the fact that real progress in V&V has been successfully accomplished for a wide range of wicked, real-world, non-linear, highly complex problems in engineering and scientific mathematical modeling and computation.
You win my vote for best post of the thread so far.
Dan,
Thanks, I believe this is the best answer to my questions above.
Judith
What strikes me in this issue is that people prefer to stumble in the dark trying to reinvent the wheel by using relatively clumsy analogies (“ink blot”) instead of simply asking those who have invented the wheel already a long time ago.
This is really only the problem of the phase space. And both the questions and the answers become suddenly luminously clear when one uses the correct language and evokes the correct and known results.
Indeed what is at issue here? How can we compare predictions and realisations?
What we are really asking is what are THE STATES of the system.
As every state of a physical system since Hamilton is given by its coordinates in the phase space, we can immediately reformulate rigorously and quantitatively the rather foggy above questions.
1) What defines the states of the system?
The answer is simple. Basically 4 fields. The velocity, the temperature, the density and the pressure. 1 vectorial and 3 scalar fields. That makes 6 functions Fi(x,y,z,t). Give me the value of each field in every point and I give you ONE uniquely defined state of the system.
2) When are 2 states identical?
When the 4 fields are equal.
3) What is the metric? E.g if I have 2 different states, how can I say if they are “far” or “near” to each other?
Well, as the states are defined by fields, what we want is a metric on a space where functions f(x,y,z,t) live. This space will also be, by definition, a phase space because it will contain all the states of the system.
Now while this seems to be a complex question, it has been answered a century ago.
Such a space is called a Hilbert space and using it is a routine in quantum mechanics.
There exists a scalar product defining a metric, which is {f,g} = ∫ f·g dx
The space of integration is in our case the volume of the system e.g the atmosphere, hydrosphere and the cryosphere.
As a bonus we can immediately reuse a result we already know – the dimension of this Hilbert space is infinite. Therefore the dimension of the phase space of our system is infinite too. So honorable Chief allow me a correction – it is not “almost” infinite but infinite.
At this stage we have practically answered in a rigorous way what it means to make a prediction and compare it to a result.
You first predict the 4 fields at a certain time t0, Pi(x,y,z,t0), then measure the same 4 fields at the same time, Ri(x,y,z,t0), and last you compute ∫ (P−R)² dx.
If the result is small, then the prediction is good, if it is large then the prediction is garbage.
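A discrete version of this metric is easy to write down; the sketch below (Python, with toy 3-D temperature fields) is only meant to make the ∫ (P−R)² dx idea concrete on a grid.

```python
import numpy as np

def field_distance(predicted, realized, cell_volume=1.0):
    """Discrete analogue of the integral metric: sqrt( sum (P - R)^2 dV )."""
    return np.sqrt(np.sum((predicted - realized) ** 2) * cell_volume)

rng = np.random.default_rng(3)
P = 288.0 + rng.normal(scale=1.0, size=(10, 10, 5))  # predicted temperature field
R = 288.0 + rng.normal(scale=1.0, size=(10, 10, 5))  # realized temperature field
print(field_distance(P, R))         # 'far' or 'near' in the L2 sense
print(field_distance(P, P + 0.01))  # a nearby state
```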
Several remarks are in order.
– It does no good to “predict” averages or any other functional of the fields.
Actually it is useless in a Hilbert space: from an equality of field averages nothing can be deduced about the fields themselves.
– For a particular state where some field will have a large bump somewhere (aka an extreme event) there will be very “near” an infinity of different states having a similar bump. The word “near” meaning here in the sense of the metrics defined above.
And just to be sure that everybody got it, let’s repeat it: there is no other metric for the system.
As a byproduct we have explained the “ink blot” en passant, because the “ink blot” language translated into phase space language just means that whatever the state of the system is or will be, there will always be an infinity of different other states which will “look like it”.
Now in order not to blow a blog post format we will only briefly analyse the question of probabilistic predictions and their validation.
We know the exponential divergence of orbits. So we know that the evolution of the system or its orbit in the infinite dimensional phase space cannot be deterministically predicted.
This of course invalidates nothing of what we said above. We have still the Hilbert space, we have still the metrics, we have still the field equations and the initial/boundary conditions.
Let us also notice that taking the original fields or their transforms (f.ex gliding time averages) doesn’t change anything either.
What we then would like is at least to say : “The probability that the system will be “near” a point P in the phase space is 20 %”.
What we would like is to find a FIFTH field Ψ (X1,X2,…) which gives the probability of presence at any point of the phase space P(X1,X2,…) .
However with this wish we will hit 3 huge walls:
1) Ψ lives in the phase space. As the latter is infinite dimensional, you need a Hilbert base and an infinity of coordinates to characterise it. It is not impossible to do that, after all the QFT (Quantum Field Theory) succeeded, but it is definitely not easy. In our case even if we suspect that the orbits will live in some subspace (attractor) and everything else will be forbidden, we have no idea what this subspace could be.
To avoid misunderstandings: when we talk about an attractor here, it has no geometrical interpretation (e.g. no nice pictures à la Lorenz) because we are in a Hilbert space where “surfaces” are combinations of functions (fields).
An attractor then means that some combinations of fields are authorised while some others are forbidden. In QM we have the clean and simple result that only Eigenvalues of operators may happen. The “attractor” of QM is constituted of Eigenvalues and Eigenfunctions of operators – one cannot make a picture with that :)
2) The experimental validation of Ψ needs many identical experiments – ideally an infinity. Unfortunately in our system we are allowed only 1 experiment. So despite the fact that I could COMPUTE a large number of states by repeating computer runs and thus approximately and crudely establish a THEORETICAL Ψ, once I obtain a value for a state of, say, 60% and another state happens “far” away (in the sense of the above defined metric), then I can conclude exactly nothing. I could redo another experiment, e.g. look at the system later and again compute the probability that it happens. But this would NOT be an identical experiment, because the system would have moved in the meantime, so I could again conclude nothing.
3) I left the most important for the end, namely the existence of Ψ. Clearly, in order for this strategy to make sense I need Ψ not only to exist but to be invariant. I vitally need that Ψ be independent of the initial/boundary conditions. If it is not, then it doesn’t even make sense to talk about probabilities in the phase space – Ψ would in that case become an explicit function of time and the whole strategy would be doomed.
Now this particular invariance property is known and is called ergodicity.
However it is never a given and it can’t be a postulate. Natural systems may or may not be ergodic. We have so far absolutely no clue whether the system we are talking about is or is not ergodic.
With two much more probable states, glacial and interglacial, why would the system be considered ergodic? If you neglect one highly probable state, whether the system is ergodic becomes a larger issue. With two preferred states, the system space would not be infinite, right?
The phase space is necessarily infinite dimensional because its “points” are fields (functions).
And a function is not a number, it is an infinity of numbers.
For example if I deal with the pressure field, I cannot say that the only thing that matters about this field is the pressure at the top of the Eiffel tower (which would indeed be only 1 number).
If I want to describe the dynamics, I need the pressure on plenty other places, I actually need the whole field with an infinity of numbers.
Or if I want to treat it numerically with a model then I need a very large number of points (billions).
That’s why “interglacial” is not a “state” of the system. Clearly all the fields change all the time at all points during the “interglacial”. You might have the ambition to find some kind of Eigenvectors and Eigenvalues in the phase space (like in QM) which would characterise the “interglacial”.
But even if it was possible, be very sure that their number would be infinite too.
Infinity is relative. A bounded space may have infinite possibilities, but compared to an unbounded space?
I didn’t finish my thought. With a bounded space and two states with higher probability, the system would not likely be ergodic. If the space were unbounded, the greater probability of the two states could become negligibly small.
There is a confusion here.
The phase space of our system (technically a space of square summable functions) is not “bounded” in any usual meaning of the word.
This of course means nothing for the dynamical orbits of the system happening in this space.
The fields themselves, being physical fields are of course bounded everywhere (a pressure or a temperature cannot be infinite).
I am afraid that you still imagine the phase space as R^n, the usual Cartesian space with Euclidean metric (R^3 is our normal space with its 3 dimensions).
A Hilbert space where climate or quantum mechanics happen is very different. Its “points” are functions, it is infinite dimensional and the “distance” between the points is an integral (distance between point f and g is ∫ (f-g)².dx).
You can’t handle that reality efficiently without these insights.
I have no idea what it could mean that “infinity is relative”. Is infinite what is not finite. Seems rather absolute to me.
I may be completely wrong. For a system to be ergodic there is equi-probability of revisiting any of the microstates. Therefore, if there is a greater probability of certain states, the system would not be ergodic. That is my understanding. The integral from a to b would contain an infinite number of points. The integral from a to c, where c is much greater than b, would also contain an infinite number of points.
Since temperature is our main variable of concern, the bounds of temperature are limited by the degree that albedo can change. The integral from a to b. Without albedo being limited, the system could change from 0% albedo to 100% or a temperature range of ~3 K to ~280K, the integral from a to c. The point of minimum albedo, ~25% to maximum ~50% (40% is more realistic) limits temperature to ~300K to ~ 260K.
In both cases there are an infinite number of possible states. With the albedo limits, the probability of the 25% and 50% state are much greater than the probability of states in between and much much greater than the probability of 0 to 100% albedo. That is what I mean by infinity is relative.
Since there are other factors than just albedo, the actual albedo limits would be a function of all the impacts. The glacial and interglacial periods provide some estimates of what they may be. CO2 increase can shift those points, but it is highly unlikely that CO2 is sufficient to cause a thermal over run much greater than the limit of 25% albedo.
So IMHO, unless something groundbreaking is discovered, CO2 forcing and albedo change are the two primary variables. Albedo provides a limit in both directions. CO2 is limited to 1.2 degrees per doubling plus water vapor feedback that decreases with increased CO2 forcing (clouds).
I hope that makes some sense.
I may be completely wrong. For a system to be ergodic there is equi-probability of revisiting any of the microstates.
Yes you are.
Ergodic means that there is an invariant probability distribution of the states where invariant means independent of initial conditions. There is no constraint on the form of the PDF.
A die throw is ergodic and each of the 6 numbers has equal probability (1/6).
The logistic equation (X(n+1) = a·Xn·(1−Xn)) is chaotic and ergodic for a = 4, but here the PDF is given by 1/(π·sqrt[x(1−x)]). Clearly the probabilities are different for every number in [0,1]. (A numerical sketch of this example follows at the end of this comment.)
The Lorenz system is also chaotic and ergodic. Yet, obviously the points in the phase space have very different probabilities – some will be visited very often (high probability) and some not at all (probability 0).
Etc.
The above examples are simple because we deal with a finite, low-dimensional phase space.
For the climate, which is infinite dimensional, if it is ergodic, the PDF would be very complex indeed.
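To make the logistic-map example above concrete, here is a minimal numerical sketch (my addition, in Python). It iterates the map at a = 4 and compares the orbit’s visit frequencies with the stated invariant density, illustrating a system that is ergodic without its states being equiprobable.

```python
# Minimal sketch of the logistic-map example above: at a = 4 the map is chaotic
# and ergodic, yet its invariant density 1/(pi*sqrt(x*(1-x))) is far from uniform.
import math, random

def logistic_histogram(n_steps=200_000, n_bins=10, x0=None):
    """Visit frequencies of the a=4 logistic map orbit, binned on [0, 1]."""
    x = random.random() if x0 is None else x0
    counts = [0] * n_bins
    for _ in range(n_steps):
        x = 4.0 * x * (1.0 - x)
        counts[min(int(x * n_bins), n_bins - 1)] += 1
    return [c / n_steps for c in counts]

def theoretical_mass(lo, hi):
    # Integral of 1/(pi*sqrt(x(1-x))) over [lo, hi] = (2/pi)*(asin(sqrt(hi)) - asin(sqrt(lo)))
    return (2.0 / math.pi) * (math.asin(math.sqrt(hi)) - math.asin(math.sqrt(lo)))

empirical = logistic_histogram()
for i, p in enumerate(empirical):
    lo, hi = i / 10, (i + 1) / 10
    print(f"[{lo:.1f},{hi:.1f}): orbit {p:.3f}  theory {theoretical_mass(lo, hi):.3f}")
# The bins near 0 and 1 are visited far more often than the middle ones, and the
# frequencies are (statistically) independent of the starting point x0.
```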
Since temperature is our main variable of concern, the bounds of temperature are limited by the degree that albedo can change.
This is a very confusing statement.
It’s like saying that in an electromagnetic field the electric field is a “main variable of concern” so that one has to look only at electrical charges.
Well the field equations are the Maxwell equations and there is no way that one can arbitrarily choose only 1 field in a system where all fields interact. To get the variable “of interest” one has to solve for everything and also for variables of “no interest”.
If one is interested in the temperature field for the climate then one has ALSO to find the pressure, density and velocity fields because all these fields are coupled. You can’t simply forget things because they don’t interest you.
Tomas,
Main variable of interest does not mean no interest in other variables. As a limit to the possible range of global temperatures on Earth, albedo is most significant, assuming the usual range of solar variation, average volcanic activity, ability to emit aerosols and CO2, water vapor, etc. There are quite a few variables that impact temperature and albedo. You can argue that albedo is not the most significant variable, but I don’t see why, since climate change due to Milankovitch cycles requires a change in albedo (due to axial tilt) to start the change, then further albedo change to prolong it.
So how would albedo impact the four basic fields, velocity, temperature, density and pressure?
Tomas:
Thanks once again, Tomas, for taking the time for the ‘luminous clarity’ to be sprinkled, like pixie dust, on the conversation here. Beautiful – but hardly very soluble. That has to be what Penrose calls the road to reality.
Richard Drake
I do not have the hubris to pretend to have solved the problem, and to say that it is difficult is an understatement.
However, when one uses the correct language (no rocket science here, Hilbert spaces have been known, umm, since Hilbert) the problems take on a sharp, quantitative and well defined meaning, which is an advantage.
Just imagine how confused would have been Dirac if he tried to tackle the problem of light-matter interaction with clumsy field averages and ink blot analogies.
We would have been waiting for the QED still today :)
If what you have are fields then it is obviously advantageous to take them seriously and proceed with the language and tools which are adapted to treat fields. And if there are simplifications along the way, of course you take advantage of them, and as a bonus you can justify them.
Totally agree. I’m not saying that Mother Nature won’t give up her secrets. Just that the road to reality is hard and many are those who fall by the wayside.
As for Dirac, who was notable in his search for precision even in everyday conversation, leading to pauses of great magnitude, I was amused the other day by his friends defining ‘a Dirac’ as the smallest number of words it was possible to utter in an hour and still be considered part of a conversation.
Oh that many of us had that gift on the ‘ink blog’. But you truly explain. Thanks again.
Tomas Milanovic
I find myself humbled by the great clarity of thought you apply, and facing the irony that I ham-handedly repeat the ink blot error of using newly made-up, imprecise names by calling what you have done here by the cowboy terms “Monster wrangling” (in that you sidestep the ink blot ‘monster’ en passant) and “Monster corralling” (in that you put the problem into the same category as the far more familiar QM, allowing reuse of past successful approaches).
Bravo sir.
Thank you.
I take it that “Natural systems can be but must not be ergodic.” should be “… need not be ergodic.”? “must not be” means it is forbidden.
??
Sorry, J; there’s an old saw about a gold mine being a hole in the ground with a liar at the top. Steve’s probably heard it, but would have deleted my comment, too.
Always, my best stuff. The critic’s laser eye.
===========
“There are two ways for the climate science community to move beyond an ink blot (if it wishes to do so)… that would allow those who are skeptical of scientific claims to revise their views due to unfolding experience.”
Wrong! There are three options (in the US), the third being the LMAD plan (http://letsmakeadeal-thebook.com) that advocates a carbon tax, NOT to save the planet but rather to save the country. A meaningful carbon tax is just two simple rhetorical questions away:
1) If the solution to too much CO2 in the air is to use less fossil fuels, why is NOT the solution to too much federal debt to use less government?
2) If the optimal amount of CO2 in the atmosphere is 350 ppm (current = 389 ppm) because that is the maximum concentration at which life as we know it can continue, why is 18% of GDP (current = 25% of GDP) NOT the optimal size of government, because that is the size that most likely yields maximum economic growth (of 4.1% historically)?
Think about it. Liberals (including Obama) and Conservatives are actually making the same apocalyptic argument albeit on different issues. They both make good arguments for action. But the public is yawningly uninterested in AGW and unwilling to make the hard choices on America’s fiscal problems. Buying off the opposition is the American way so why not use the system we have to get the outcome you want. And that’s what Let’s Make A Deal—The Plan is all about: getting the outcome you want.
It’s time for progressives (and scientists) concerned about rising temperatures and conservatives concerned about rising federal debt to realize the obvious: they need to BUY each other off in order to effectively address their pet ideological concerns-there is no other way. This means trading, among other things, a carbon tax for a balanced budget amendment and a more limited government. This plan is outlined at http://letsmakeadeal-thebook.com
LMAD is more than just a carbon tax: Healthcare-for-All? It’s in there. Balanced budget? It’s in there. Carbon tax? It’s in there. Rational taxation? Amnesty? Border Security? Limited government? Social Security and Medicare solvency? It’s all in there; it’s all paid for and it’s all scalable and optimized for economic growth.
Blog: letsmakeadeal-thebook.com/
Facebook: facebook.com/pages/Lets-Make-A-Deal-The-Book/143298165732386
Twitter: twitter.com/#!/lmadster
Or just Google “LMADster” for more info.
Unfortunately the deal that gets made is less taxes and more government.
If conservatives want less taxes and less government, and the progressives want more government and more taxes, the obvious compromise is more taxes and less government to balance the budget, but we get the above.
Begs the question. (= presupposes a preferred answer.)
Here’s the real underlying issue:
What’s “too much CO2 in the air”? I suggest it’s about 3,000 ppm.
To avoid it, we need only be cautious about not breaking down the bulk of the crust’s store of limestone.
No carbon tax required.
When I first wondered what an infinity of infinities was, I estimated it between 12 and a large number. On further consideration I’ve raised the lower estimate by two and a half dozen.
=================
Or you could google Georg Cantor
:)
Cranky Constructive Kronecker.
========================
kim
To see the world in a grain of sand, and to see heaven in a wild flower, hold infinity in the palm of your hands, and eternity in an hour. (Not original, but Blake was mad, too, so kinship is there.)
Just choose the 12 carefully.
If you reduce your lower estimate by four, you need choose with slightly less care.
The universe in a single atom.
H/t Ocean Teacher.
================
“Climate models are imperfect and always will be, but they are useful for some purposes.”
In climate science models are abused. It’s like the Drake equation. From wikipedia:
“Criticism of the Drake equation follows mostly from the observation that several terms in the equation are largely or entirely based on conjecture. Thus the equation cannot be used to draw firm conclusions of any kind…”
“…in a 2003 lecture at Caltech, Michael Crichton, a science fiction author, stated:
The problem, of course, is that none of the terms can be known, and most cannot even be estimated. The only way to work the equation is to fill in with guesses. […] As a result, the Drake equation can have any value from “billions and billions” to zero. An expression that can mean anything means nothing. Speaking precisely, the Drake equation is literally meaningless…”
http://en.wikipedia.org/wiki/Drake_equation
Many factors influencing global climate (GTA, the way it’s calculated) are unknown. Earth’s climate system is obviously an oscillatory system. The CO2 warming fixation should be loosened and other factors and processes should be looked into. It’s a travesty.
“The problem, of course, is that none of the terms can be known, and most cannot even be estimated. The only way to work the equation is to fill in with guesses. […] As a result, the Drake equation can have any value from “billions and billions” to zero.”
The difference is that all the terms used to model the climate are either known or can be estimated and so as a result the output (climate sensitivity) is very much more constrained than the output of the Drake equation.
Can a climate model even be made to show just 1C warming per doubling of CO2 if one honestly sticks to variable values that are within the constraints of physics and observations?
If everyone had a supercomputer on their desktop a climate model with a layman interface and tweakable parameters would make a neat game.
That’s exactly the problem with the Drake equation and the climate models. Some simpleton wil always show up and take it seriously!
I want to highlight something Tomas Milanovic said.
“What defines the states of the system? The answer is simple. Basically 4 fields. The velocity, the temperature, the density and the pressure.”
You propose to validate GCM output by comparing model values for these four fields to empirical observation. That sounds perfectly fine, except you will only have measurements for three of the four. There are no density observations against which to validate the model outputs.
This strongly supports the notion of a GCM being an “inkblot” rather than something falsifiable. Every GCM makes some very questionable assumptions about how density values will vary and these are NEVER checked against reality.
I think Erl Happ & Robert Ellison & al have insights into assumptions about variation of density values.
==============
Well, since Robert Ellison is here, we can ask him the question I’m most interested in. I’d be interested in as many opinions as possible on this:
What does a falling temp lapse rate throughout the atmosphere, or over a very wide area like the tropics, imply for mean sea level air density values?
a) rising sea level air density
b) falling sea level air density
c) no change in sea level air density
d) cannot be determined without reference to N other factors
Again, at this point there are no empirical data. The answer according to the IPCC is (b). My money is on (a).
Given that the ideal gas law is incredibly good at atmospheric pressures and below, this list is a bit silly. For practical purposes you only need two variables, pressure and temperature, to give you density to a percent or so, and the one percent is only because of water vapor, the atmosphere being well mixed in the non-condensible gases up to the mesosphere. If you want to quibble, density is a poor substitute for humidity.
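As a quick illustration of the ideal-gas point just made (my sketch, using standard constants rather than anything from the thread): surface pressure and temperature pin down dry-air density directly, and a roughly 1% change in temperature at fixed pressure shifts density by roughly 1%.

```python
# Sketch of the ideal-gas point above: given pressure and temperature, dry-air
# density follows directly, and a ~1% (3 K) change in T at fixed pressure gives
# a ~1% change in density. Constants are standard values, not from the thread.
R_DRY = 287.05   # specific gas constant for dry air, J kg^-1 K^-1

def dry_air_density(p_pa, t_k):
    """Ideal gas law: rho = p / (R_d * T)."""
    return p_pa / (R_DRY * t_k)

p = 101325.0                 # Pa, standard sea-level pressure
for t in (288.0, 291.0):     # ~1% (3 K) temperature difference
    print(f"T = {t:.0f} K -> rho = {dry_air_density(p, t):.4f} kg/m^3")
# ~1.226 kg/m^3 at 288 K vs ~1.213 kg/m^3 at 291 K: about a 1% density change,
# which is the size of signal the comments below are arguing about.
```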
Hello Mr. Rabett,
First let me say that I think very highly of your blog, and I’ve benefited from your explanations a number of times.
Second, though, let me pose this to you. Do you really feel satisfied with accuracy in estimating density to “a percent or so”? That’s a casual remark, I won’t hold you to it. The reason I ask is because as I understand it, this entire fracas is about a future temp rise of 3 degrees on a base of 288 K, give or take a bit. That’s a one percent change.
So given that density has equal weight in the ideal gas law with temperature and pressure . . . wouldn’t an error of one percent in density have the potential to nullify the entire forecast?
Sorry, I meant to add a third point. I agree the ideal gas law is incredibly good at atmospheric pressures, but global circulation models don’t actually use the ideal gas law to estimate density. They use a hydrostatic approximation. Do you have any concerns about the accuracy of the approximation? Because I do.
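For what the hydrostatic question above amounts to in practice, here is a toy integration (my own illustration, not any GCM’s actual numerics) of hydrostatic balance dp/dz = −ρg, closed with the ideal gas law and an assumed constant lapse rate; it shows how a density profile follows from surface values under those assumptions.

```python
# Toy illustration (not any GCM's actual scheme) of how hydrostatic balance
# dp/dz = -rho*g, closed with the ideal gas law and an assumed constant lapse
# rate, yields a density profile from surface values alone.
G = 9.80665      # m s^-2
R_DRY = 287.05   # J kg^-1 K^-1
LAPSE = 0.0065   # K/m, assumed constant tropospheric lapse rate

def hydrostatic_profile(p0=101325.0, t0=288.15, z_top=10000.0, dz=100.0):
    z, p = 0.0, p0
    profile = []
    while z <= z_top:
        t = t0 - LAPSE * z
        rho = p / (R_DRY * t)          # ideal-gas closure
        profile.append((z, p, rho))
        p -= rho * G * dz              # forward-Euler step of dp/dz = -rho*g
        z += dz
    return profile

for z, p, rho in hydrostatic_profile()[::20]:   # print every 2 km
    print(f"z = {z:6.0f} m   p = {p:9.1f} Pa   rho = {rho:.4f} kg/m^3")
```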
You can spend an unfathomable amount of time banging your head on a brick wall when you are trying to use the wrong tool for the job. I’ve experienced this in my own field of science (biotech). Progress comes not from further tinkering with that tool but from moving on to something different, which I can confirm is a very rewarding experience. Is there any possibility that models are the wrong tool for this job?
It seems to me that Dr. Pielke is confused. If you want to falsify the theory of AGW, nothing could be simpler. Options include:
1. Create a CO2-rich atmosphere and test it in a lab. Show that it does not, in fact, absorb and re-radiate longwave radiation.
2. Take a series of CO2 measurements, and demonstrate that CO2 levels in the atmosphere have not, as scientists think, increased over the last hundred years.
3. Find some convincing evidence that the rise in CO2 has nothing to do with the millions of tons of fossil fuel carbon we are releasing into the atmosphere (warning: you will be expected to show your slides).
4. Challenge the evidence that the earth has warmed over the last century. Prove that temperatures are, in fact, no warmer today, on average, than they were in 1900.
You could use any of these methods to falsify the theory of anthropogenic global warming — if it were false. It’s not false, so that makes it hard to falsify. Understandably.
Models are obviously going to take decades to decisively prove themselves, or not, because of the degree of short-term variability in the climate. Hence we should not rely overmuch on models, but rather use them to generate plausible scenarios, and direct prevention and preparation towards the worst case.
Robert,
The AGW theory fails because the climate is doing very little.
Your tests are simple diversions you hope will distract people from noticing the failure.
Baghdad Bob is proud of your steadfast ability to avoid the issue.
I call your problem Premature Rejectulation
So you claim to have falsified the theory of AGW? That’s staggering. And I was here; I was part of it. What a great day for blog science this is.
Just submit your data, explain how it falsifies AGW, and then we’ll alert the editors at Nature.
Baghdad Bob, AGW is not climate science, and you are no scientist.
Just submit your data, explain how it falsifies AGW.
Is that so hard?
You seem to be ducking a little bit.
What data do you have that falsifies the theory of AGW?
How does it do that?
Simple questions.
1. Create a CO2-rich atmosphere and test it in a lab. Show that it does not, in fact, absorb and re-radiate longwave radiation.
Earth’s atmosphere is not CO2-rich. Absorb and re-radiate does not mean warm. Even if it does, and the CO2 increment reduces the heat loss to space (all things being equal), that only represents the radiative balance and not other forms of energy (heat) transfer. So this point is kind of a red herring.
2. Take a series of CO2 measurements, and demonstrate that CO2 levels in the atmosphere have not, as scientists think, increased over the last hundred years.
The last hundred years? How do we measure that? We only have (background) measurements since the ~1960s. All the results before the start at Mauna Loa are a no-no. It’s not accepted. Pre-industrial CO2? Give me a break!
3. Find some convincing evidence that the rise in CO2 has nothing to do with the millions of tons of fossil fuel carbon we are releasing into the atmosphere (warning: you will be expected to show your slides).
I am convinced that the rise has very little to do with the anthropogenic input. Sorry, no slides (no time), but I have been waiting for someone who’s paid for it to do the job. I hope Salby’s work will be accepted and recognised. I also think that the CO2 growth rate will decrease very soon. Maybe that will make scientists do their job.
4. Challenge the evidence that the earth has warmed over the last century. Prove that temperatures are, in fact, no warmer today, on average, than they were in 1900.
Red herring. It’s very likely warmer today than in 1900. It’s also possible (IMO likely) that it will be no significantly warmer in 2020/30 (or later) than in 1900.
Besides, the falsification of CO2GW hypothesis is not my job. Scientists are paid for it! Where’s their work? I only see CONFIRMATION! That’s pseudoscience.
“Earth’s atmosphere is not CO2-rich.”
Do you not understand what a scientific experiment is? How you construct one, what you test?
“Besides, the falsification of CO2GW hypothesis is not my job.”
Good thing, since your basic science literacy is so shaky. Well, since it’s not your area, I’m happy to tell you AGW has not been falsified, although it easily could be, in a number of ways, if it were false.
Since you admit you don’t have the professional expertise to evaluate the hypothesis yourself, I’m sure that it will be useful information to you that the scientists who do, have, and the theory is solid.
“Do you not understand what a scientific experiment is? How you construct one, what you test?”
I understand and I know. I don’t understand why academic scientists don’t do it. Well, I do. Kuhn explains it very well.
The “theory” is so solid that a little cooling and a little transparency shakes it very dangerously. Your “theory” is pseudoscience.
That’s hilarious. You claim to know (although your understanding is clearly shaky, at best) and then you denounce those damn ivory-tower elitist “academic scientists.” By any chance are you related to this guy:
http://www.theonion.com/articles/im-not-one-of-those-fancy-collegeeducated-doctors,11237/
Robert, stop trolling! Be specific. What do you want to know? Just ask.
Sure:
1. You claimed “Earth’s atmosphere is not CO2-rich.” What did you mean by that? How is it relevant to falsifying the theory of AGW?
2. You claim to understand what a scientific experiment is, how you construct one, and what you test, but your other comments make me doubt that. Could you please briefly describe what a scientific experiment is, how you construct one, and what you test in a scientific experiment.
3. You said “the falsification of CO2GW hypothesis is not my job” because you are not a scientist. To clarify, do you think you have the expertise to do that if you wanted to, and chose not to? And if you have this natural, untaught ability, what is keeping you from setting the “academic scientists” to rights?
The theory involves much more than that. In fact, you could say that a GCM is a fair representation of the theory.
1. Create a CO2-rich atmosphere and test it in a lab. Show that it does not, in fact, absorb and re-radiate longwave radiation.
This is radiative physics. AGW relies on radiation physics, so yes, if radiation physics were wrong AGW would be wrong. Lukewarmers, you should note, accept radiation physics.
2. Take a series of CO2 measurements, and demonstrate that CO2 levels in the atmosphere have not, as scientists think, increased over the last hundred years.
This is not AGW theory. This is simply observation and historical analysis.
3. Find some convincing evidence that the rise in CO2 has nothing to do with the millions of tons of fossil fuel carbon we are releasing into the atmosphere (warning: you will be expected to show your slides).
Again. There is almost no AGW theory here.
4. Challenge the evidence that the earth has warmed over the last century. Prove that temperatures are, in fact, no warmer today, on average, than they were in 1900.
Again. there is no theory here. There is observation and the statistical analysis of numbers. Almost zero scientific content.
What you have done, Robert, is take the aspects of AGW science that are farthest from the core debate and the core theoretical issues and hold them up as key tests of the theory.
What you avoid in all of this is what Roger is really interested in.
Sensitivity. That’s the key issue.
Steven Mosher writes “Sensitivity. That’s the key issue.”
Precisely. And there is no physics that allows anyone to translate changes in the radiative balance of the atmosphere into changes in surface temperature.
Agree Jim!
dU = Q – W.
Q is overall heat transfer and W is actually the overall net amount of energy (all forms but heat) transferred. All means all; some forms are of course insignificant.
The law is not
dU = Qe
where Qe is radiant energy.
And dU is not dT!
That’s your claim. What is your evidence that “the theory involves much more than that”? The applications of a theory to a particular problem — whether it be the theory of evolution to microbiology, the theory of gravity to ballistics, or the theory of AGW to climate sensitivity — are different from the theory itself.
The theory is very simple:
1. Greenhouse gases trap heat.
2. Trapping heat warms the planet.
3. Human actions are increasing the levels of certain greenhouse gases in the atmosphere, thus:
4. Human activities are warming the planet.
All falsifying the theory requires is that you successfully find compelling evidence against #1, #2, or #3.
If you mistake the whole complex body of applications of the theory for the theory itself, of course it will appear non-falsifiable. It’s a category error on your and Pielke’s part.
Robert, you write “4. Human activities are warming the planet.”
The question, to me, is not whether this statement is true, but HOW MUCH warming is caused by adding CO2 to the atmosphere from current levels. If the effect is negligible, and cannot be detected against the background of natural variations, then AGW can be correct, but CAGW is wrong. That is the issue so far as I am concerned.
I’m glad you agree. That’s AGW. That’s it. How much warming for how much forcing is another conversation.
It is not another conversation. That’s entirely illiterate. It’s the whole conversation.
Robert
5. How much is CO2 warming the world?
Anastasios Tsonis, of the Atmospheric Sciences Group at University of Wisconsin, Milwaukee, and colleagues used a mathematical network approach to analyse abrupt climate change on decadal timescales. Ocean and atmospheric indices – in this case the El Niño Southern Oscillation, the Pacific Decadal Oscillation, the North Atlantic Oscillation and the North Pacific Oscillation – can be thought of as chaotic oscillators that capture the major modes of climate variability. Tsonis and colleagues calculated the ‘distance’ between the indices. It was found that they would synchronise at certain times and then shift into a new state.
It is no coincidence that shifts in ocean and atmospheric indices occur at the same time as changes in the trajectory of global surface temperature. ‘Our interest is to understand first the natural variability of climate and then take it from there. So we were very excited when we realized a lot of changes in the past century from warmer to cooler and then back to warmer were all natural,’ Tsonis said.
Most ‘recent warming’ occurred in 1976/77 and 1997/98 in ENSO ‘dragon-kings’ (Sornette 2009) – extreme events associated with climate shifts. This is quite a simple observation – subtract the global surface temperature in 1976 from that in 1977, and 1997 from 1998 – and compare to the increase from 1976 to 1998. Take the increase from 1978 to 1996 – and calculate the trends (a short sketch of this calculation appears after this comment). This is essentially what Kyle Swanson did here – http://www.realclimate.org/index.php/archives/2009/07/warminginterrupted-much-ado-about-natural-variability/ – as well as suggesting no warming for another decade. Although I think 20 years of no warming – and a residual warming rate of 0.1 °C/decade – is a big deal despite attempts to minimise the significance. As Swanson has no clue as to the control variables in play for these systems – he is clueless as to which way they may ‘shift’ next. As always – the studies have more credence than a blog.
Most of the rest seems associated with cloud change – http://i1114.photobucket.com/albums/k538/Chief_Hydrologist/Wong2006figure7.gif – which itself seems linked to decadal changes in ENSO.
There was relative warming in the SW and relative cooling in the IR.
We are told 2 things. That climate is so complex it can only be understood with supercomputers and that it is so simple that it can be understood by a moron.
Your attitude is typical of AGW space cadets in conflating a serious deficit of understanding – an inability or reluctance to countenance complexity – with ‘unctuousness, mendacity, mock-reasonableness, petulance, bullying, hypocrisy, overweening arrogance, brazen aggression and bogus moral preening.’
Robert I Ellison
Chief Hydrologist
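The step-change arithmetic described in the comment above can be written down in a few lines. The sketch below is hypothetical: the data loader is a placeholder and no real anomaly values are included.

```python
# Sketch of the simple decomposition described above: compare the 1976->1977 and
# 1997->1998 step changes with the overall 1976->1998 rise and the 1978-1996 trend.
# `annual_anomaly` would be a {year: anomaly degC} mapping from a real global
# surface series; the loader below is a hypothetical placeholder, not real data.

def trend_per_decade(series, y0, y1):
    """Ordinary least-squares slope over [y0, y1], in degC per decade."""
    years = [y for y in range(y0, y1 + 1) if y in series]
    xs = [y - y0 for y in years]
    ys = [series[y] for y in years]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return 10.0 * slope

def shift_decomposition(series):
    step_7677 = series[1977] - series[1976]
    step_9798 = series[1998] - series[1997]
    return {
        "1976->1977 step": step_7677,
        "1997->1998 step": step_9798,
        "sum of steps": step_7677 + step_9798,
        "total 1976->1998 rise": series[1998] - series[1976],
        "1978-1996 trend (degC/decade)": trend_per_decade(series, 1978, 1996),
    }

# annual_anomaly = load_hadcrut_annual_means()   # hypothetical placeholder loader
# print(shift_decomposition(annual_anomaly))
```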
“How much is CO2 warming the world?”
To understand that, you’d have to learn some basic science and read some climate change papers. Start with review articles, nothing hard.
Of course, if you want to persist in ignorance of the science, and cover your lack of understanding with whining and hysterical name-calling intended to bring others down to your level, you can play it that way, too.
:)
So it is not enough to reference scientists who suggest that there is ‘natural variability’ in climate – or a realclimate post on one of these studies that suggests less warming attributable to CO2 and no warming for another decade. And I don’t understand science?
Here is another study that suggests a ‘slowing down’ of warming for another decade. ‘A negative tendency of the predicted PDO phase in the coming decade will enhance the rising trend in surface air-temperature (SAT) over east Asia and over the KOE region, and suppress it along the west coasts of North and South America and over the equatorial Pacific. This suppression will contribute to a slowing down of the global-mean SAT rise.’ http://www.pnas.org/content/107/5/1833.full
The ‘hysterical name calling’ was a description of the new left that I thought was quite apt – so I used it. If you are a ‘space cadet’ and respond in all those ways to cognitive anomalies – it is what it is and no real communication is possible between us.
Chief,
We probably disagree about a few things. But I found it absolutely hilarious that Robert would send you to read science. He can’t get through a paragraph of Tsonis. Typical.
If anything, his argument that sensitivity is unimportant is the funniest, most anti-science position I have seen in 4+ years of discussing this on the net. I walked the floors of AGU and sat through lectures and scanned all the papers. Not one takes the odd, almost perverse stance that Robert does.
The theory is not so simple.
1, 2, 3 and 4 are all trivially true. The issue is how much warming?
How much warming depends upon:
A. Emission scenarios. If you look at the spread of values for future warming you will find that they are driven and dominated by emission scenarios. And if you look at emission scenarios you will find that they are driven by population growth and economic growth.
B. Climate sensitivity. If sensitivity is 10 we have huge problems. If sensitivity is 0.6, à la Lindzen, then we have fewer problems.
You will find people who disagree with 1-4. That is not where the debate is. That is not where the science debate is. The science debate, the physical science debate is over sensitivity.
Finally, You miss my point entirely. The theory is clearly falsifiable. As you note the core beliefs are falsifiable in principle. Sensitivity estimates are also falsifiable.
You might do well to study the history of the term falsifiable. Start with the early logical positivists. When you finish come back and you’ll get your next reading assignment.
William Carlos Williams & Peter Pereira.
===========
dont rain on my wheelbarrow
Robert
Yes.
But the theory does not get us very far.
The claim is being made that the observed warming over the last half of the 20th century can be principally attributed to AGW from human GHG emissions (primarily CO2) and that this, therefore, represents a serious potential threat to humanity and our environment.
Yet there is no empirical evidence to support this claim.
Therefore it is an uncorroborated hypothesis.
That’s the problem, Robert – not the “theory”.
Max
I have yet to find or be shown any lab experimental evidence of quantified heating of CO2 by IR absorption. Absorption and re-radiation I have no problem with – that would result in scattering of an IR beam. Heating has not been demonstrated. Tyndall did not measure heating.
Some people think that a lab experiment has no relevance to atmospheric physics, hence justification that no experiment needs to be done.
It is you that is confused. Pielke didn’t talk about falsifying theory, just testing projections. The theory may well be sound as a bell but the net effect on climate may still be negligible. Clear? It is only the model projections that produce the alarm. If they are unverifiable then they may be useful tools but they are not fit for the purpose they are being used for. So we should look only at the real world data that we have. There has been substantial increase apparently in CO2 so what effect did it have on the real world numbers?
Answer: A lack of stratospheric cooling since 1995, a lack of any ocean warming since accurate data records began (leading to the “missing heat” problem), a lack of any tropospheric hotspot, and the total lack of any credible data to back up the theory of strong positive feedback, which is what the models use to extrapolate 3 degrees of warming from the 0.6 degrees of the last century. Of course, note that the lower end of the IPCC range, i.e. 1.1 K, is the no-feedback scenario, and that moderate warming, which is the scenario the real-world data favours, is considered to be beneficial even by the IPCC.
Now in any other endeavour this would be the end of it. But there are just too many people financially dependent on the scare. This is the only positive feedback in the whole farrago.
James G,
You outline the problem well.
It is indeed the inkblot nature of the models, and not anything in the real world, that is causing the social mania of AGW.
There is a curious assertion that Pielke Jr makes repeatedly, which is that the truth or falsity of the models does not affect the decision to decarbonize. I have been unable to figure out where he gets this unless it is for pollution abatement reasons. However, for pollution abatement you do different things than for carbon abatement. For pollution, you put scrubbers on the stacks and prefer clean (in the sense of low-sulfur or low-mercury) coal or natural gas.
Craig — eGads! That’s one of the FEW things I do understand in all of this. :-o You’ll have to go pretty far back on Roger Jr’s blog to find his full explanations — but, yes, pollution, trade deficit, dirty coal, basic energy costs, etc kind of sum it up. Don’t take my word for it though, I may not understand as much as I think….
If you stop burning fossil fuels, you get abatement of both. That’s why it is important to factor in all of the benefits of leaving the stuff in the ground — it’s cumulative.
Robert
Isn’t that like throwing the baby out with the bathwater?
Max
Robert,
If we reduce the human population by denying them the proven benefits of using fossil fuels, that will simply put the true misanthropy behind much of AGW in an even harsher light.
But we will achieve your obsession: reduce the CO2.
Yet the world will still have storm, drought, flood, extreme cold and extreme heat.
So, beyond destroying something that works pretty well, we will achieve just what?
The policy challenge is to avoid doing more harm than good. You should think about that. Less use of fossil fuels means expensive food, heating and electricity, and a net effect on the environment that is worse than that from the posited warming. It’s all very well to believe that alternative fuels take up the slack, but they just don’t and they won’t for a very long time.
Jr. took Pascal’s Wager a long time ago. And yes, it’s illogical, because there are more possibilities out there than Catholicism and atheism. It’s also illogical because all the side benefits Pascal talked about are pure speculation when talking about carbon. If there’s no carbon-free heaven, there may very well be a carbon-free hell.
Some insightful comments by Pielke Jr.:
The Green Blind Spot
Decelerating decarbonization of global economy
To get there, the cost of renewable energy needs to be brought way below that of fossil fuels so that penetration rates are strongly commercially driven.
Or bring the cost of fossil fuels up; the effect is the same. Use the market to drive innovation, conservation, and adoption.
The price of oil has more than doubled since 2005. How about giving innovation, conservation and adaption a chance to catch up?
Except that if rates go UP, the cost of doing ANYTHING goes up, competitiveness drops, exports drop, expansion moves to other countries, and we end up cleaning each others’ toilets, with a portion of each payment for that going to gov’t for health, education and welfare. I’d much rather wait for the AGW claimants to release their data, for the “science” to go a little beyond their biased models, and for the politics to be removed from the equation. Save the economy for my grandkids, ok?
The effect is not the same. Artificially increasing energy costs further simply exacerbates poverty.
http://www.paulchefurka.ca/Oil_Food.html
Yes, but it’s poverty that the Roberts want, although they deliberately mis-label it as a “change of lifestyle” – much like changing one’s brand of shoes, I guess
Speaking of Procrustes’ shoes, worry when it’s about energy footprint rather than carbon footprint.
==========
Uh.. what?
a.) How do you get the claim you make from the graph you cite?
b.) Artificially lowering costs of any kind unevenly distorts the Market, creating inflationary drag that exacerbates poverty. Energy costs are costs. Energy costs in America (where it matters) are subsidized. Subsidies are artificial attempts to lower costs. Energy subsidies unevenly benefit the few free riders.
c.) The carbon cycle — a common shared resource — includes a dynamic carbon budget ceiling, therefore is a limited resource, and must be priced or it is a subsidy to free riders.
You want to reduce poverty in America?
Stop with the tax holidays for energy free riders, support the natural market for the carbon budget instead of giving it away free, employ Americans and lower taxes in the USA instead of making free riding foreign fossil fuel corporations rich on overtaxed American backs.
Oh, and just because Paul Chefurka doesn’t know the secret of fuel/food correlations doesn’t mean it hasn’t been studied before in greater depth and for greater durations.
Surely, Chief, you can find the actual connection?
Here’s a hint. Think like a hydrologist.
Oh heck.
Here, just read something not based on astronomy. http://fletcher.tufts.edu/CEME/publications/~/media/Fletcher/Microsites/CEME/newpdfs/Food-fuel_price_dynamics.ashx
Waiting for you to think like a hydrologist again would take too long.
Late to the party again Bart. The biofuel aspects have been at least flagged in the FAO report – and by Hunter elsewhere I believe.
Your reference adds nothing to the discussion at all – but I guess you’re used to that.
1. The recent commodity price spike signaled a new era of significantly increased prices.
2. There is a strong correlation among commodity group prices, particularly meat and grain prices such as beef and corn
3. Increased use of biofuels will change the relationship between commodity groups with a significant impact on prices and demand
But I guess as well you haven’t even read it – simply googled it and imagined that it supported your thesis of biofuel/food price increases.
‘Biofuels have yet to demonstrate a significant and lasting impact on prices or demand – but that potential is clearly real.’
It is certainly a real risk for the future, and such risks should clearly be beyond politics and personality.
Chief
Alas, all these hints and you can’t get a clue.
2006 saw (by casual chance?) a combination of floods and droughts that brought (despite heavy investment – you saw the fuel-price bump in your graph – in energy-intensive fertilizers and farm machinery in the face of anticipated food demand increase) a series of drops in crop yields globally across the board, among other curtailing influences on Supply.
With food shifting budgets and increasing Demand pressure, the cycle repeated in 2007-2008 (blamed ridiculously prematurely by a decade at least on biofuels by the media) under cost-intensive and generally negative conditions and prices spiked.
How does a competent hydrologist who pretends to understand Hayek miss this?
Yet your reference is all about biofuels.
The graph simply shows a link between fuel and food prices from 2000. It reverses the long term decline in food prices seen since the 1960’s and is clearly related to the cost of energy.
You scramble to post-rationalise something else after failing to read your own reference.
Chief
You attribute to me sufficient regard for your opinion to imagine I’d care enough to put in the effort of duplicity.
Which in the past was true, and if you behave better, you may deserve it again in the future.
However, at the moment you’re in my ill graces for performing so far below your intellectual gifts.
Artificially lowering energy costs exacerbates poverty.
That’s what your link really says, when looked at through the lens of the more detailed analysis of the link I offered.
See where the paper makes explicit that it was most emphatically not biofuels, and specifically not ethanol, that led to the food price spike (as the food price preceded the fuel spike).
Sure, someday there will be a causal relationship. Maybe. But not today, and not in 2007.
That spike came about because of global hydrological failures.
So it can be blamed on failed hydrologists who didn’t use their intellectual gifts.
It’s another example of spinelessness in AGW dissent; think Lindsey Graham pandering with “energy policy”. We shouldn’t facilitate CO2 fraud, because the higher goal is central planning, not the environment, to begin with.
How about Tom Fuller’s bet with Joe Romm, as to whether 2010-2019 will be at least 0.15 warmer than 2000-2009? How will who wins the bet, and by how much, affect your conclusions as to climate sensitivity?
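Scoring that bet is a one-number calculation; a minimal sketch follows, with the annual anomaly series left as a hypothetical placeholder.

```python
# Minimal sketch of how the Fuller/Romm bet above would be scored: compare the
# 2010-2019 decadal mean with the 2000-2009 one and check whether the difference
# reaches +0.15 degC. `annual` is a hypothetical {year: anomaly} mapping; no real
# data values are included here.

def decadal_mean(annual, start_year):
    vals = [annual[y] for y in range(start_year, start_year + 10) if y in annual]
    return sum(vals) / len(vals)

def score_bet(annual, threshold=0.15):
    diff = decadal_mean(annual, 2010) - decadal_mean(annual, 2000)
    return diff, diff >= threshold   # True if the +0.15 degC threshold is met

# diff, threshold_met = score_bet(load_annual_anomalies())   # hypothetical loader
```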
Judith,
Thank you for bringing the inkblot issue to my attention — while it consumed the best part of a late afternoon, I feel better informed for it.
I agree with ‘JC conclusions’ — we are well into practical pragmatist territory.
Chief Hydrologist | September 19, 2011 at 5:09 pm
Given the uncertainties in the projections – you still want to turn this into a you’re wrong and I’m right p…ing competition?
Uncertainties in the projections? You have claimed that we need to double the world’s food supply by 2034, and we need to quadruple it by 2057.
Your citation, rather than saying we need a 400% increase, says we need a 70% increase by that time.
That has nothing to do with “uncertainties in the predictions”, it has to do with your ridiculous claim. So yes, your claim is wrong, and even your own citation says so.
Second, you claim that things are getting worse simply because (in your words) “The number of chronically undernourished and malnourished people in the world has been rising, not falling.” That’s just another alarmist slant on the facts.
Surely you must realize that in a growing population, the crucial figure is the percentage of people who are chronically malnourished, not the absolute number. In 1990, the chronically malnourished were 16% of the population. By 2010, that number had dropped to 14% of the population.
You can see the error in your claim by looking at the other side of the equation. From 1990 to 2010 the number of people who were not chronically malnourished also went up … so wouldn’t that increase in properly nourished people indicate (by your incorrect measure) that things are getting better, rather than worse?
In fact, neither the absolute number of nourished or malnourished are meaningful. Only the percentage is meaningful, and it has been decreasing, which means things are getting better.
So I strongly object to your implied claim that people are eating worse or that chronic malnourishment is on the rise. Neither one is true in the slightest. People on average are better fed now than at any time in our history, both the rich and the poor, and chronic malnourishment is on the wane, down 2% in only twenty years. Check out the FAO database if you don’t believe me.
See my discussion of these issues at “I am so tired of Malthus“, “Farmers vs. Famine“, and “The Long View of Feeding the Planet“.
w.
Sorry, the first paragraph in my post above is from Chief Hydrologist and should be quoted, viz:
Chief Hydrologist said:
w.
Willis,
‘FAO estimates that the number of chronically undernourished people has risen from 842 million at the beginning of the 1990s to over one billion in 2009.’
So we are doing better as a percentage? Better fed than ever before?
We need agricultural growth that is at least 3% – and preferably higher right now. Not least to build strategic stores of food. I don’t really see what your problem is – some sort of anti neo-Malthusian rant?
I am being pro-development as always. The solution to tt’s ‘problem’ is to ramp up food, energy and GDP as fast as possible.
Cheers
BTW – I quoted the FAO on things getting worse in terms of absolute numbers of hungry people. They were not my words – although a couple of hundred million more hungry people since 1990 seems a bit of a problem.
Clone me a few Borlaugs from the cornucopia, s’il te plait.
====================
Remember, @100W/human, the earth receives from the sun the energy necessary to sustain approximately a million times our present population. That is a number in the low quadrillions, theoretically possible; not practical.
A tiny increase in efficiency of our use of that energy would sustain, in a style to which we would like to become accustomed, many multiples of our present population of ~7 Billion. Advances such as Norman Borlaug’s point the way for increased, climatically probably neutral, use of the sun’s energy.
So much for the ‘sustainable’ meme, when it means nasty, brutish, and short.
================
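A rough back-of-envelope check of those numbers, using assumed standard inputs (solar constant, Earth radius, 100 W per person, 7 billion people), lands in the same ballpark:

```python
# Back-of-envelope check of the figures above, using assumed standard inputs
# (solar constant ~1361 W/m^2, Earth radius ~6.371e6 m, 100 W per person).
import math

S0 = 1361.0            # W/m^2 at top of atmosphere
R_EARTH = 6.371e6      # m
POWER_PER_PERSON = 100.0
POPULATION = 7e9

intercepted = S0 * math.pi * R_EARTH**2          # solar power hitting the Earth's disc
supportable = intercepted / POWER_PER_PERSON     # people supportable at 100 W each
print(f"intercepted solar power ~ {intercepted:.2e} W")
print(f"supportable at 100 W/person ~ {supportable:.2e} people")
print(f"multiple of present population ~ {supportable / POPULATION:,.0f}x")
# This gives roughly 1.7 quadrillion people, a few hundred thousand times the
# present 7 billion, consistent with the "low quadrillions" figure quoted above.
```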
“nasty, brutish and short”
That’s either an apt description of the CAGW movement’s lifecycle, or a Chicago law firm. Can’t decide which.
Note the rapid increase in corn going into US Corn Utilization
The price of corn about doubled in 2 years and is projected to stay high. From USDA Feed Grain Baseline, 2009-18
This corresponds with the fivefold increase in fuel prices this last decade, and the efforts to “control climate”.
I did not know that abundance ever caused prices to double or increase 500%.
If there’s any doubt about the world’s capacity to produce more food, just look at the agricultural productivity of Israel compared to its neighbors. Egypt, for example, should be exporting food rather than importing it, and would be if they weren’t too bullheaded to emulate Israel.
The developing world isn’t food poor, they’re technology poor.
Nile river people
Starve and strive for sharia.
Oh Allah, baksheesh.
===========
“Imagine the following scenario. Carbon emissions cause some warming, maybe 0.05C/decade. But the current warming rate of 0.20C/decade is mainly due to some natural cause, which in 15 years has run its course and reverses. So by 2025 global temperatures start dropping. In the meantime, on the basis of models from a small group of climate scientists but with no observational evidence (because the small warming due to carbon emissions is masked by the larger natural warming), the world has dutifully paid an enormous cost to curb carbon emissions.” (Ibid.)
Interesting hypothesis. Now all you need is a plausible mechanism, some data to back it up, write it up, and you’re ready for peer review.
Drop me a link when you’re in press.
Dr. Curry:
You do not have a tips and notes page like WUWT, so I will post this here. This may be of note to you as it seems to focus on and recognize uncertainty:
http://www.livescience.com/16124-dead-nasa-satellite-falling-earth-sept-23.html
Roy Weiler
Roy thanks for the tip. apparently there is no copy of this paper online, so i am somewhat reluctant to do a whole thread on it. if anyone spots a copy online, pls let me know.
Thank you for gracefully noting that those of us who refuse to afford $$$ paywalled articles are nevertheless still able to read. I much appreciate your efforts here – I’ve noted this from you for quite a while now and I’m very grateful
That’s to Wojick and was to read, you can demonstrate over and over again…
Let’s see if this works:
To Wojick–
You can’t be serious. You can demonstrate over and over how babies are created even if there will never be another Jesus.
Jesus the sorting on this thread is spastic tonight. Is someone at Command Central enjoying a wee bit of Irish Potain?
This is an example of time-dependent chaos theory:
“Howsoever, the making of Poteen (also spelled Poitin, Potcheen, and Potain) went underground in 1661, when King Charles II introduced an outrageous levy on the production of distilled spirits throughout the United Kingdom. Not surprisingly, this tax was largely ignored by the Irish.”
Now, if there be any so disposed to wonder about time-INdependent chaos theory, just let me know, as I have a connection to a master of the discipline who uses Hollerith cards and an old Friden calculator to foretell the future.
” Falsification of a climate model is precluded by the complexity of a climate model.”
Sounds suspiciously like “Too Big To Fail!!”
If it gets falsified we just add an epicycle.
There are two ways for the climate science community to move beyond an ink blot (if it wishes to do so). One would be to advance predictions that are in fact conventionally falsifiable (or otherwise able to be evaluated) based on experience.
Here is my best attempt at “conventionally” falsification of AGW
http://bit.ly/oI8dws
If there is continued global warming until 2030, as claimed by the IPCC, AGW will be confirmed
On the other hand, if there is a cooling of about 0.3 deg C by 2030 from about 0.45 deg C for the 2000s, AGW will be invalidated.
There is evidence for global cooling of 0.03 deg C per decade since 2002 as shown below.
http://bit.ly/f42LBO
AGW is currently on a very shaky ground.
Correction
There is evidence for global cooling of about 0.1 deg C per decade since 2002 as shown below.
http://bit.ly/f42LBO
And when we do a simple sensitivity test to see if the picture Girma draws is sensitive to his choice of start year… a basic due diligence test of an analyst’s reliability.. we find that Girma is hiding something..
http://www.woodfortrees.org/plot/hadcrut3vgl/from:2000/to:2011/plot/hadcrut3vgl/from:2000/to:2011/trend
Right up there with Mann, Jones and Briffa. hide that incline Girma
Steven, an alternative to fighting water with fire is to fight water with water. This 1979-1987 example is the same duration (eight years) and the same decline (0.1 °C/decade) as Girma’s example. Hence according to her logic we had evidence for global cooling as far back as 1987.
We should take Girma’s example every bit as seriously as any rational climate scientist would have taken the 1979-1987 example.
Global cooling is not inconsistent with Global Warming
Global cooling is not inconsistent with Global Warming
Quite right. You could support this by pointing out that there are over 30 global cooling events in the 1979-1987 example, alternating with an equal number of global warming events.
Vaughan,
And is global warming inconsistent with Global Cooling?
Yes, it’s symmetric.
Most discussions of global warming are about what’s likely to happen 20 years or more from now. If you look back at the last 20 years you won’t find any global cooling over that long a period, nor over any 20-year period since the 1960s.
Sorry, by “yes, it’s symmetric” I meant that global warming is not inconsistent with global cooling (I lost count of the negatives).
If the results depend so heavily on the starting date, does that not imply a lack of significance?
Hunter, you’re absolutely right. Those results that depend heavily on the starting date are not significant, those that don’t depend on it are significant. You will find that averages over 5 years or 8 years depend heavily on the starting date while those over 20 years don’t depend on it at all, at least not in the last 40 years.
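The start-date sensitivity test Mosher and Vaughan describe is easy to automate. The sketch below (data loader hypothetical, no real values included) recomputes an ordinary least-squares trend for every start year at a chosen window length; with a real global series, short windows swing with the start year while 20-year windows are comparatively stable.

```python
# Sketch of the start-date sensitivity test discussed above: recompute an OLS
# trend for every possible start year at a given window length. The loader is a
# hypothetical placeholder for a real annual global anomaly series.

def ols_slope_per_decade(years, values):
    """Ordinary least-squares slope of values against years, in degC per decade."""
    n = len(years)
    mx, my = sum(years) / n, sum(values) / n
    num = sum((x - mx) * (y - my) for x, y in zip(years, values))
    den = sum((x - mx) ** 2 for x in years)
    return 10.0 * num / den

def trend_sweep(annual, window):
    """Trend (degC/decade) for every start year that admits a full window."""
    years = sorted(annual)
    out = {}
    for start in years:
        span = [y for y in years if start <= y < start + window]
        if len(span) == window:
            out[start] = ols_slope_per_decade(span, [annual[y] for y in span])
    return out

# annual = load_annual_anomalies()          # hypothetical placeholder
# print(trend_sweep(annual, window=8))      # sign and size vary with start year
# print(trend_sweep(annual, window=20))     # much more stable across start years
```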
Models and math are central to science. Observations are seldom repeatable. May 23, 1933 will never come again. Equations are forever.
This is a deep truth.
Frank Wilczek (QCD father and Nobel) often quoted “It works in practice but does it work in theory?”
Lesser people often quote the same sentence with a derogatory intent, implying that a conceptual effort to properly formalise things in a consistent mathematical way is an inferior if not ridiculous task.
By doing that they show that they understood nothing about science.
Actually it is precisely when something works in practice but doesn’t work in theory that science is about to reach some of its deepest insights.
This quote of course doesn’t mean that data and observations should be ignored, but exactly the opposite. Something that doesn’t work in theory should COMPEL every scientist to rethink the very foundations of the theory. There is no room for improvising artificial adaptations, gluing on cumbersome appendices and making ad-hoc assumptions.
When the planetary orbits work in practice (e.g. can be accurately observed) but stubbornly don’t work in theory, then it stops making sense to add yet another epicycle, which would only give a few years’ respite, and it is time to rethink the foundations of the theory itself.
I submit that the current climate theory, as founded on climate models and statistical torture of time series of data, is in this state.
The climate works in practice but doesn’t work in theory.
The early warning signal that something doesn’t work in theory is a persistent inability to make predictions, an accumulation of observations that don’t fit accurately with the theory, and an inflation of ad-hoc statistics.
I would then say that it is time to rethink the foundations.
In my opinion the following explicitly or implicitly admitted axioms are problematic and likely incorrect.
– The system is in equilibrium or near to it. It can be treated perturbationally, like a pendulum that has been deviated by an infinitesimally small angle dθ.
In reality the system is perpetually far from equilibrium and doesn’t allow linear approximations justified by infinitesimal deviations to equilibrium.
– The numerical models produce a converged solution of the dynamical equations of the system.
In reality the resolution is much too coarse to produce a solution let alone converged.
– A weaker form of the above axiom: the models do not produce a converged solution but the statistics of many runs of the model(s) reproduce the statistics of the system. I have commented in detail about this issue in my above post. In summary, here we enter the kingdom of magical thinking – there is no rational justification why, by just multiplying the number of non-converged runs, we should magically reproduce the real-world statistics.
Not to mention the fact that I do not believe that the system is ergodic – I would however immediately change my mind if somebody could prove it (a toy illustration of what such a check looks like appears at the end of this comment).
A certainty – it is not with the current theory that it is possible to prove ergodicity.
– Space doesn’t matter. Spatial averages of the real physical fields are meaningful and can be predicted better than the fields themselves. This is trivially wrong for all established field theories. Also, observation shows that the dynamics of the system are governed by local, spatially sharply defined structures (oceanic oscillations). It is specifically this belief that works the least in theory. This assumption should have been abandoned long ago because it prevents any progress.
– It is easier to predict the dynamics at larger time scales than at shorter time scales. This is just a variation on the previous theme and likely misguided for similar reasons. On the contrary, and R. Pielke Senior has published on this theme, with increasing time scales new and less understood degrees of freedom kick in and can no longer be postulated constant, as they can for short time scales. As the number of non-linear couplings between fields increases, the system becomes more complex, not less.
To finish, I would emphasize that for me it is not interesting to ask whether temperatures will increase or ice will melt. As we are in an interglacial, both will trivially happen until … they stop happening.
What is interesting are the dynamics – how, how much, when and where will the fields vary.
For instance the most interesting question for me is how, where and when in detail will the next glaciation happen. Admittedly this “prediction” will take a long time to be validated by observation. But when this problem works in a sharply formulated mathematical theory and by that I don’t mean “creative” statistics or computer simulations, I will become convinced that we have at last a reasonable theory.
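As a footnote to the ergodicity point above, here is a toy of what an empirical check (not a proof) looks like on the Lorenz system mentioned earlier in the thread; it is an analogy only and says nothing about the climate system itself.

```python
# Toy illustration of an empirical ergodicity check on the Lorenz system (an
# analogy only; it proves nothing about climate). If long-time averages of an
# observable agree across different initial conditions, that is consistent with,
# though not a proof of, an invariant distribution independent of the start point.

def lorenz_time_mean_z(x0, y0, z0, dt=0.001, n_steps=200_000, discard=20_000):
    sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
    x, y, z = x0, y0, z0
    total, count = 0.0, 0
    for i in range(n_steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz   # small-step Euler
        if i >= discard:                                   # skip the transient
            total += z
            count += 1
    return total / count

for ic in [(1.0, 1.0, 1.0), (-5.0, 7.0, 20.0), (10.0, -4.0, 30.0)]:
    print(ic, "->", round(lorenz_time_mean_z(*ic), 2))
# The three time averages come out close to one another (roughly 23 to 24 for
# this toy), which is what ergodicity on the attractor would predict here.
```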
Tomas,
The equations currently used are confined to a single point in time at a single point on the planet and fail to consider many small or large influences.
The 1, 2, 3, 4 discussed above fail to consider distances, shape, varied speeds, different energies, bending energies to the atmosphere, sudden changes (like impacts or volcanic disturbances), different gases and materials, magnetics, circulation, land heights, energy storage, compression, etc., etc., etc.
The climate is far more complex than what today’s scientists can grasp, and pretending otherwise is a disservice to anyone wanting to understand this planet.
Judith,
Ever thought of grading relevance to factors that give us temperatures?
As is usual in each thread, the topic descends to the usual AGW “sciency” talking points that are unresolved and will remain so. Then there is the material avoidance of the broader social and political themes that are drivers of the IPCC “consensus”, for which there are many examples and much evidence;
http://www.un.org/esa/dsd/agenda21/res_agenda21_00.shtml
Try chapter 7; this is the evolution of the eco-central planning of which AGW was a device. The term “eco”, as with AGW itself, is just a tool. The real driver is “central planning”, and the key players in the science community are largely tools of this or similar agendas. The roundabout side topics of data and, here, “inkblots” only prolong the essential debate; in fact they obfuscate it.
Dr. Curry,
Have you noticed the real faux pas the recent Times Atlas made regarding Greenland?
And, apparently, the publishers are defending the blatant lie.
This raises the question about when and if the general climate science community, beyond yourself and John N-G and a very few others, is going to stand up and speak out against what is apparently widespread corruption by pro-AGW groups and companies.
Less than an hour before you posted, we hear that the Times Atlas has “apologise[d sic] for ‘incorrect’ Greenland ice statement“.
http://www.guardian.co.uk/environment/2011/sep/20/times-atlas-incorrect-greenland
Sort of weasel-worded, still, though.
AK,
It still leaves the same question that has never been clearly answered by the IPCC: How do such boners get through, and why do these obvious errors seem to always promote the idea of climate crisis?
It is fascinating that AGW news releases carry the general pov that ‘things are worse than predicted’, or ‘this is consistent with AGW predictions’ no matter how seldom it is actually an accurate claim.
I forget the technical term for it, but people writing both scientific articles and news like to tie their products to the current paradigm, especially if it is under an attack they consider unjustified.
As for news, these people are primarily interested in sensationalism, so wild alarm according to an alarmist paradigm is to be expected, any time they can justify it.
But note the position of the scientists who object: saying Greenland’s lost 15% of its permanent ice cover when it hasn’t will open the whole AGW movement to denialist ridicule. (Which it has, see above.)
AK,
Why would someone pointing out an obvious mistake, one that is completely fabricated and not supported by data at all, be called a “denialist”?
In any reasonable setting such a person would be called things like ‘honest’, ‘insightful’ and ‘ethical’, and be praised for clarifying a deceptive attempt to mislead people.
That the AGW movement would call such people ‘denialists’ speaks volumes about AGW and its connection to honesty and reality.
“If it bleeds it leads”.
@hunter…
Oh, perhaps because you blame the IPCC for the sloppy carelessness of a news-release copywriter for the Times Atlas? Or perhaps because you forgot to mention how quickly the more scientific “AGW” community jumped on this error. I had to find the link myself; did you not post it because anybody would then have realized the “AGW” community was actually policing itself (or trying to)?
Don’t get me wrong, I’m pretty sure there’s a Marxist conspiracy to take advantage of Climate Change science for their own agenda. This doesn’t mean the science itself is wrong, although it’s certainly not as certain as the IPCC claims. It also doesn’t mean there aren’t a lot of people pursuing a contrary political agenda (e.g. claiming to “Falsify the Greenhouse Effect”) in ways that are totally unscientific.
AK,
You are filibustering rather well, but you should check your supply of red herrings and strawmen, as you are using them at a great rate.
I don’t follow Marxist conspiracies, but I hope you find a great deal of fulfillment in doing so.
I hope you will actually address my question instead of talking around it, but then perhaps you are otherwise occupied with worrying about conspiracies.
I haven’t been following this one closely; too much going on this week!
hunter –
Hilarious.
Joshua,
Please do continue.
Your entertainment value is very special.
Where the Wild Beasts Are Atumble In the Night Kitchen.
========================
I just want to note that Joshua’s use of bold and exclamation points makes his comment VERY persuasive. Good job Joshua. ;)
Andrew
…and italics.
Andrew
OT. Dr. Curry, you have given us 3 threads on Spencer and Braswell. How about a thread on Allan:
http://www.met.reading.ac.uk/~sgs02rpa/PAPERS/Allan11MA.pdf
“Combining satellite data and models to estimate cloud radiative effect at the surface and in the atmosphere”
as reported on WUWT at
http://wattsupwiththat.com/2011/09/20/new-peer-reviewed-paper-clouds-have-large-negative-feedback-cooling-effect-on-earths-radiation-budget/#more-47761
Coming soon! I have a lot in the queue right now, and I have a broader topic in mind for this. Probably this weekend.
Jim, you might want to read Roy Spencer’s comments on the misunderstanding many people have about that article.
Steven Mosher and Jim Cripwell
Yes. There apparently have been some misunderstandings about what the Allan study tells us.
It does not tell us directly whether cloud feedbacks with warmer surface or tropospheric temperatures will be positive or negative, simply that the net cooling effect from clouds is a bit larger than previously estimated (21 Wm-2 versus the 18 Wm-2 estimated by Ramanathan and Inamdar several years ago).
On WUWT, Roy Spencer cleared this up, adding:
The Allan finding is significant in itself, however, when one considers that the total anthropogenic forcing from 1750 to 2005 is estimated by the IPCC to be 1.6 Wm-2, i.e. roughly half the 3 Wm-2 difference between the Allan estimate and the earlier estimate of net cloud forcing.
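To put those figures side by side, here is a minimal arithmetic sketch; the numbers are simply those quoted in this thread, and the comparison is illustrative only, not an attribution calculation:

# Illustrative arithmetic only: values (in W/m^2) are those quoted above.
allan_net_cloud_cooling = 21.0     # Allan (2011) estimate of the net cloud cooling effect
earlier_net_cloud_cooling = 18.0   # earlier Ramanathan/Inamdar estimate
ipcc_anthropogenic_forcing = 1.6   # IPCC estimate of total anthropogenic forcing, 1750-2005

revision = allan_net_cloud_cooling - earlier_net_cloud_cooling   # 3.0 W/m^2
fraction = ipcc_anthropogenic_forcing / revision                 # ~0.53, i.e. roughly half

print(f"Revision in net cloud cooling estimate: {revision:.1f} W/m^2")
print(f"Total anthropogenic forcing as a fraction of that revision: {fraction:.2f}")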
But interestingly, a poster named Paul M has commented (also on WUWT) regarding inferences in the paper of a negative cloud feedback. You can check the WUWT site directly for these comments, but I have copied them here:
So are Roy and Paul M reading something between the lines that is not there?
Or is the Allan paper really telling us something about net negative cloud feedback with warmer temperatures?
Max
I’d say if you want a real answer on CRF, wait 10-15 years until you have more data and better data (like TRUTHS).
Question: If a NASA satellite in orbit for 20 years finally plunges to earth and, defying all odds, takes out New Orleans, is it Bush’s fault?
A joint investigation by NOAA, NCDC and NASA established that the satellite fell out of orbit as a result of increased TOA LW radiation attributed to increased GH warming. Dr. James E. Hansen of NASA stated publicly that he had warned the US Congress and President Bush in 2007 that such a disaster could happen if a carbon tax was not immediately enacted and all US coal mines and “coal death trains” shut down, but that this fell on deaf ears with the President.
“This would mean risking being wrong, like economists do all the time.”
I wish they’d just shut up too. The minority who predicted the financial crash were ignored, which should have reminded us all that it is very easy for the consensus to be wrong.
There are too many theorists around and not enough theory testers. Probably this is because you still get huge adulation, prizes and grants even after being proven to be largely wrong. People believe what they want to believe and most economists and climate scientists are no different.
It was Hadley who insisted they could model natural variation and match up the 20th century only if anthropogenic forcings were included in the model. This was the entire basis for the IPCC statement that most of the warming in the last 50 years was likely manmade.
Fast-forward, and that entire analysis has been shown to be wrong. Under the CO2-dominant scenario they didn’t expect a pause in warming. Then Hadley explained that the natural variation that was supposed to be in decline magically reappeared and caused the warming pause. The Argo measurements were supposed to demonstrate the warming without the natural variation “noise”, but they showed no warming at all.
So, in summary: the IPCC statement that man is responsible for most of the warming in the last 50 years is based on model-based attributions that are known and admitted to be wrong, while the observations of no ocean warming and no stratospheric cooling (supposedly the fingerprint of AGW) prove there is no discernible AGW effect at all. That we are still debating the existence of this modern-day aether is only proof of too many people reliant on climate funding and too much pessimism about manmade emissions.
Yes; that’s a top contender among the basic assertions in the IPCC CAGW position that need to be called out.
Seems like a form of trying to have it both ways:
Anthropogenic influences are the only available factor which can explain their failure to make natural forcings work in their models;
and
Natural variation is the explanation for the failure of anthropogenic forcings to match observed deviations from the models.
The bind moggles.
Possibly this conundrum is why there has been a move to blame aerosols lately: with enough uncertainty to hide an elephant. Though any and all tall tales are welcome, it seems, including heat passing into the deep ocean without first heating the top 700 metres. This after having said the whole theory of AGW is sound because of the basic physics of heat transfer. If they don’t really understand or accept those basics, then how can they pretend to lecture us about them?
Exactly the same thing was done when they turned a warming ocean into a CO2 sink: when they need to explain things away, those basic physics somehow morph into more complex and as yet unknown physics that seemingly defy common sense. Nature makes fools of them.
Aerosol hazy,
Thermoclines lazy and deep.
Biosphere buffers.
=============
More on V&V from Steve Easterbrook.