by Judith Curry
Overall, climate modeling has made enormous progress in the past several decades, but meeting the information needs of users will require further advances in the coming decades. – NRC
The U.S. National Research Council (NRC) has just released a new study under the auspices of BASC that takes on the challenge of A National Strategy for Advancing Climate Modeling. Here is a summary of the recommendations:
The Committee recommends a national strategy for advancing the climate modeling enterprise in the next two decades, consisting of four main new components and five supporting elements that, while less novel, are equally important. The Nation should:
1. Evolve to a common national software infrastructure that supports a diverse hierarchy of different models for different purposes, and which supports a vigorous research program aimed at improving the performance of climate models on extreme-scale computing architectures;
2. Convene an annual climate modeling forum that promotes tighter coordination and more consistent evaluation of U.S. regional and global models, and helps knit together model development and user communities;
3. Nurture a unified weather-climate modeling effort that better exploits the synergies between weather forecasting, data assimilation, and climate modeling;
4. Develop training, accreditation, and continuing education for “climate interpreters” who will act as a two-way interface between modeling advances and diverse user needs.
At the same time, the Nation should nurture and enhance ongoing efforts to:
5. Sustain the availability of state-of-the-art computing systems for climate modeling;
6. Continue to contribute to a strong international climate observing system capable of comprehensively characterizing long-term climate trends and climate variability;
7. Develop a training and reward system that entices the most talented computer and climate scientists into climate model development;
8. Enhance the national and international IT infrastructure that supports climate modeling data sharing and distribution; and
9. Pursue advances in climate science and uncertainty research.
Well, almost all of these individual recommendations are good (I take issue with #4). And a few of them are very good (I especially like #1, #2, #9). But will they do the job, in terms of dealing with the fundamental problems with climate models?
The large investment in climate modeling is justified by its perceived importance in supporting decisions. Until recently, such decision making has focused on CO2 stabilization and climate sensitivity. In this report, the target users and policy decisions are farmers, hydropower systems managers, insurance companies, the national security sector, and the building community. What do these users need? High-resolution regional accuracy, with a focus on extreme events. This is exactly where climate models have the least skill. How will this be addressed?
As climate models become more comprehensive and their gridscale becomes finer, they can provide meaningful projections of more parts of the climate response and their possible feedbacks on the overall climate system, but this does not necessarily reduce projection uncertainty about some aspects of climate change. Indeed, global climate sensitivity, defined as the global warming simulated by a climate model in response to a sustained doubling of atmospheric CO2 concentrations, still shows a similar 30 percent spread across leading models as it did 20 years ago.
The authors recognize that increasing resolution comes with its own problems:
However, increasing spatial resolution is not a panacea. Climate models rely on parameterizations of physical, chemical, and biological processes to represent the effects of unresolved or subgrid-scale processes on the governing equations. Increasing spatial resolution does not automatically lead to improved accuracy of simulations. Often, the assumptions in the parameterizations are scale-dependent, although so-called “scale-aware” parameterization development has been pursued recently. As model resolution is increased, the assumptions may break down, leading to a degradation of the simulation fidelity.
Even if the assumptions remain valid over a range of model resolutions, there is still a need to recalibrate the parameters in the parameterizations as resolution is refined (sometimes called model tuning), and the tuning may only be valid for the time period for which observations used to constrain the model parameters are available. The lack of understanding and formulation of the interactions between parameterizations and spatial resolution makes it hard to quantify the influence of spatial resolution on model skill. Furthermore, structural differences among parameterizations may have comparable, if not larger, effects on the simulations than spatial resolution.
The authors recognize the problems associated with attempting to address the regional issues using downscaling:
Climate projections at finer scales (such as resolving climatic features for a small state, single watershed, county, or city) are typically produced using one of two approaches: either dynamical downscaling using higher resolution (50 km or finer) regional climate models nested in the global models or empirical statistical downscaling of projections developed from global climate model output and observational data sets. Neither downscaling approach can reduce the large uncertainties in climate projections, which derive in large part from global-scale feedbacks and circulation changes, and it is important to base such downscaling on model output from a representative set of global climate models to propagate some of these uncertainties into the downscaled predictions. The modeling assumptions inherent in the downscaling step add further uncertainty to the process. There has been inadequate work done to date to systematically evaluate and compare the value added by various downscaling techniques for different user needs in different types of geographic regions. However, as the grid spacing of the global climate model becomes finer, simple statistical downscaling approaches become more justifiable and attractive because the climate model is already simulating more of the weather and surface features that drive local climate variations.
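The empirical statistical downscaling described above is, at its simplest, a regression from coarse-model output to local observations. Here is a minimal sketch with synthetic data (the numbers and the linear transfer function are invented for illustration; operational methods are considerably more elaborate):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: a coarse grid-cell temperature and a co-located station.
# The (invented) station runs warmer and more variable than the grid average.
coarse = rng.normal(15.0, 3.0, size=200)                      # GCM grid-cell temps (deg C)
station = 1.2 * coarse + 2.0 + rng.normal(0, 0.5, size=200)   # "observed" local temps

# Calibrate the transfer function on an observed calibration period.
slope, intercept = np.polyfit(coarse[:150], station[:150], 1)

# Apply it to coarse-model output outside the calibration period.
downscaled = slope * coarse[150:] + intercept

rmse = np.sqrt(np.mean((downscaled - station[150:]) ** 2))
print(f"slope={slope:.2f} intercept={intercept:.2f} holdout RMSE={rmse:.2f}")
```

The transfer function is calibrated where observations exist and then applied elsewhere; as the quoted passage notes, this step inherits, rather than reduces, the large-scale uncertainties of the driving global model.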
What are some other sources of improvements?
As discussed in Chapter 3, model improvements to address these research frontiers will be achieved through three main mechanisms: (i) development of Earth system models (increasing model complexity), (ii) improvements to the existing generation of atmosphere-ocean models, through improved physics, parameterizations, and computational strategies, increased model resolution, and better observational constraints, and (iii) improved co-ordination and coupling of models at global and regional scales, including shared insights and capabilities of modeling efforts in the climate, reanalysis, and operational forecast communities.
The authors recognize an important point:
There is no one-size-fits-all answer, but instead the approach should be problem-driven. Some problems that are of great societal relevance, such as sea level rise and climate change impacts on water resources, require increased model complexity, and progress is likely through the addition of new model capabilities (ice sheet dynamics and land surface hydrology, in these examples). In other cases, such as improved model skill in regional precipitation and extreme weather forecasts, increased resolution and “scalable” physical parameterizations are the highest priorities for extending model capabilities. Other problems, such as water resource management, require both increased resolution and complexity.
They make the following summary recommendation:
Recommendation 4.1: As a general guideline, priority should be given to climate modeling activities that have a strong focus on problems which intersect the space where:
(i) addressing societal needs requires guidance from climate models and
(ii) progress is likely, given adequate resources. This does not preclude climate modeling activity focused on basic research questions or “hard problems,” where progress may be difficult (e.g., decadal forecasts), but is intended to allocate efforts strategically.
Here is what they have to say on natural internal variability:
Climate predictions and projections are subject to uncertainty resulting from the internal variability of the climate system. The relative role of this type of uncertainty, compared to other sources of uncertainty, is a function of the future time horizon being considered and the spatial scale of analysis (Hawkins and Sutton, 2009, 2011). Hawkins and Sutton note that internal variability dominates on decadal or shorter time scales, and is more important at smaller (e.g., regional) space scales. Natural variability is usually explored by running ensembles of climate model simulations using different initial conditions for each simulation. Traditionally the number of ensemble members has not been large (e.g., around three in the CMIP3 data set), nor has it been based on rigorous statistical considerations. In addition, estimation of natural variability using models is limited by inherent uncertainty in the models because of parametric and structural uncertainty.
In this regard, the role of internal variability has been under-investigated in the exploration of future climate change, although recent research on larger ensembles has developed improved measures of natural variability and underscored how substantial it can be, particularly on regional scales.
The above statement is excellent, I couldn’t have said it better myself. This is a FAR GREATER impediment to providing information for regional decision makers than is model resolution. I see no strategy here for actually addressing this issue.
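For readers unfamiliar with the initial-condition ensemble technique the report describes, here is a minimal sketch using the chaotic Lorenz-63 system as a stand-in for a climate model (an illustrative toy, not a claim about any actual GCM; all parameters are the textbook Lorenz values):

```python
import numpy as np

def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz-63 system by one forward-Euler step."""
    x, y, z = state
    return np.array([x + dt * sigma * (y - x),
                     y + dt * (x * (rho - z) - y),
                     z + dt * (x * y - beta * z)])

rng = np.random.default_rng(42)
base = np.array([1.0, 1.0, 1.0])

# Same model, 20 members, each with a tiny initial-condition perturbation.
members = [base + 1e-6 * rng.standard_normal(3) for _ in range(20)]

for _ in range(4000):  # integrate all members forward in time
    members = [lorenz_step(m) for m in members]

# Chaos amplifies the perturbations: the members decorrelate, and the
# ensemble spread samples the system's internal variability.
spread = float(np.std([m[0] for m in members]))
print(f"ensemble spread in x after 4000 steps: {spread:.2f}")
```

Because the system is chaotic, microscopically different initial states diverge into macroscopically different trajectories; the ensemble spread is then a model-dependent estimate of internal variability, which is why small ensembles and structural model errors limit such estimates.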
While there is a section on uncertainty (a good thing), their ‘uncertainty’ strategy is woefully inadequate, focusing on
- Model weighting in multi-model ensembles
- Parameter optimization
- Arguments that they are reducing uncertainty
- Communicating uncertainty (which received most of the emphasis)
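For concreteness, one simple flavor of the first item, weighting models in a multi-model ensemble by historical performance, looks like this (a hedged sketch; the error and projection numbers are invented, and real schemes such as reliability-ensemble averaging are more involved and themselves contested):

```python
import numpy as np

# Hypothetical historical-period errors (e.g., RMSE vs. observations, deg C)
# for four models; illustrative numbers, not from any real ensemble.
model_errors = np.array([0.8, 1.1, 2.5, 0.9])

# Each model's projection of some future quantity (again, made-up numbers).
projections = np.array([2.1, 2.6, 4.0, 2.3])

# Weight each model by exp(-(error/scale)^2), then normalize so weights sum to 1.
scale = 1.0
weights = np.exp(-(model_errors / scale) ** 2)
weights /= weights.sum()

weighted_mean = np.dot(weights, projections)
unweighted_mean = projections.mean()
print(f"weighted: {weighted_mean:.2f}  unweighted: {unweighted_mean:.2f}")
```

The poorly performing outlier (error 2.5) is down-weighted almost to zero, pulling the weighted mean below the simple average; the contested part is whether historical skill is a reliable guide to skill in projecting future change.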
There is a brief section on decision making under uncertainty, and one particular statement caught my eye:
The promise of uncertainty reduction, when not realized, stands as a metric of poor management, poor scientific method, or outright scientific failure.
Their overall recommendation on uncertainty has the right words, but from the preceding text it is not clear what, if anything, of use would actually emerge from it. The whole issue of verification and validation was ignored (apart from a general call to maintain the observing system).
Recommendation 6.1: Uncertainty is a significant aspect of climate modeling and should be properly addressed by the climate modeling community. To facilitate this, the United States should more vigorously support research on uncertainty, including:
- understanding and quantifying uncertainty in the projection of future climate change, including how best to use the current observational record across all time scales;
- incorporating uncertainty characterization and quantification more fully in the climate modeling process;
- communicating uncertainty to both users of climate model output and decision makers; and
- developing deeper understanding on the relationship between uncertainty and decision making so that climate modeling efforts and characterization of uncertainty are better brought in line with the true needs for decision making.
And finally, the NRC BASC Newsletter announces a new website:
Along with its new report about advancing climate modeling, the Board on Atmospheric Sciences and Climate has just released Climate Modeling 101, a website designed to help the public learn more about the basics of climate modeling — how models work and why they are important. The site features short videos and animations that explain everything from the difference between climate and weather to how climate models are built and verified.
I applaud their making this kind of effort. Do you think it works for the intended audience?
JC summary: If the objective of this report is to argue for increased resources for the U.S. climate modeling enterprise, it might be effective. If the objective is to improve climate projections for decision making, it is not clear how much of an impact this will have. My prediction is that #3, unified weather-climate prediction, will have the greatest impact; it will force a focus on subseasonal and seasonal timescales, where there is some hope of progress that would have great societal impact. Improving simulations of the MJO, ENSO, AO, etc. will require getting the coupling between the atmosphere and ocean correct, which is needed for climate models to get the circulations (and hence regional climates) correct.
And finally, if you missed it the first time around, check out my presentation on climate modeling strategy issues that I made to DOE BERAC about a year ago, if you want to know what I think needs to be done.
On recommendation #2, I found "and more consistent evaluation of U.S. regional and global models." Notice the weasel word "evaluation". I cannot say I read all that our hostess posted (it was not really worthwhile to do so, and I have better things to do with my time), but I venture to state, explicitly, that the word "validate" NEVER appears anywhere in the writings. If I am wrong, I would be grateful for anyone to point out where it was.
Unless and until climate modellers face up to the fact that their models are useless in making predictions until they have been fully validated, then there will be absolutely no progress whatsoever in using climate models for anything that is remotely useful in predicting the future. And the ONLY way to validate a model is to have it predict the future consistently.
How do you do post hoc validation of a model’s long-range forecast (e.g., a forecast for 2100) now?
Max_OK, you write “How do you do post hoc validation of a model’s long-range forecast (e.g., a forecast for 2100) now?”
You cannot. That is the whole point of what I have written. You cannot use climate models, or any models for that matter, to predict the future, unless and until they have been validated. The ONLY way you can validate any model for predictions is to have it consistently predict the future. It therefore follows, as night follows day, that climate models cannot be used to predict the future on ANY time scale that is so long that validation is impractical. Hence my case that what our hostess and the NRC have written is, basically, useless garbage.
In a post that was removed, I pointed out that Climate Models are blatantly unscientific attempts by mortals to be our “Higher Power”.
No thanks, we’re not buying !
-Oliver K. Manuel
Former NASA Principal
Investigator for Apollo
Jim, in planning I don’t see how we can avoid addressing the future. A “we don’t know for sure” statement about the future is true but useless.
For policy purposes, not predicting is in itself a prediction of no change. If you can’t validate a prediction of change you can’t validate a prediction of no change.
What good is “we don’t know for sure.”
Delete the last sentence in my previous post.
Max_OK, you write “Jim, in planning I don’t see how we can avoid addressing the future. A “we don’t know for sure” statement about the future is true but useless.”
I agree with you 100%. But we seem to draw different conclusions from this observation. What I think it shows is that people with names like Houghton and Watson, who are nobody’s fools, must have known this when they discussed CAGW many decades ago. What they should have concluded is that the methods used by classical physics can NEVER answer the question “What happens to global temperatures when we add more CO2 to the atmosphere?”. That is what the people who initiated the IPCC should have stated in words of one syllable. That they did not do so is a travesty, and a blot on the science of physics. The fact that the Royal Society and the American Physical Society went along with the nonsense only makes it worse.
Physics cannot tell us what happens to global temperatures when more CO2 is added to the atmosphere; period. The only way we are going to find out what happens is when we get the empirical data; when we add more and more CO2 to the atmosphere, and measure how much global temperatures vary.
I started my career in forecasting (not climate nor weather). We would “validate” forecasting models by using available data to time X, then use the model to forecast to X+n (where n represents various observational points in the past not included in developing the model). If the model was able to accurately predict the observed results of the past, then we would rebuild the model coefficients based upon all observed data. I don’t see why the same approach couldn’t be used to validate climate models, except that I suspect no climate model would ever pass such validation.
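AllenC’s holdout procedure can be sketched in a few lines (a toy with a synthetic trend series and a linear “model”; all numbers are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "observations": a linear trend plus noise over 100 time steps.
t = np.arange(100)
obs = 0.05 * t + rng.normal(0, 0.5, size=100)

split = 80  # "time X": fit only on data available up to here

# Fit the forecasting model on the training period only.
slope, intercept = np.polyfit(t[:split], obs[:split], 1)

# Forecast the held-out period X..X+n and score it against what was observed.
forecast = slope * t[split:] + intercept
rmse = np.sqrt(np.mean((forecast - obs[split:]) ** 2))
print(f"holdout RMSE: {rmse:.2f}")

# Only after the holdout test passes would the model be refit on all data.
slope_all, intercept_all = np.polyfit(t, obs, 1)
```

The essential discipline is that the holdout data never inform the fit; the open question raised below is how to guarantee that in practice when the "future" data are already known to the modelers.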
AllenC, you write “We would “validate” forecasting models by using available data to time X then use the model to forecast to X+n (where n represents various observational points in the past but not included in developing the model). ”
Maybe you can answer a question for me. How do you ensure that there was no bias in producing the model, given that the results you used to validate it were already known? When medical tests are done, enormous precautions must be taken, in what is termed a “double blind” protocol. It is essential to ensure that the people involved in the experiment have no knowledge whatsoever of what is happening. The results must come with a guarantee that there was no “cheating”.
When you carried out your process, how did you guarantee “double blind” conditions?
Jim, your comment brings to mind Catch-22. You say we can’t use a model to forecast the distant future unless the model has been validated, but we can’t validate the model until we know the distant future; therefore there’s no point in trying to forecast the distant future, because we can’t validate the model.
If we don’t forecast because we can’t validate the model, how do we validate a non-forecast?
“climate models cannot be used to predict the future”
“Correlation does not prove causation.”
“When you produce a measured, empirical, value for the total climate sensitivity of CO2 [which of course isn’t possible without having several Earths and a time machine] , I will listen to you.”
In other words you, are putting your fingers in your ears and are quite determined to live in your own fool’s paradise.
Max_OK and TT. You obviously have not bothered to read what I wrote at Jim Cripwell | September 14, 2012 at 1:59 pm. What you both refuse to realize is that physics cannot tell us what happens to global temperatures when we add CO2 to the atmosphere. It is that truth that you refuse to accept. The myth that somehow scientists can make a silk purse out of a sow’s ear is simply unacceptable to you.
Yes, it is a sort of Catch-22, but a very real one. There is no replacement for empirical data in physics. The only hope the proponents of CAGW have is to convince a gullible public, and equally gullible politicians, that somehow the output of models can replace empirical data. The output of models can NEVER replace empirical data. So, yes, the truth is that climate models, in practice, can never be validated, and are, therefore, useless for predicting the future.
Oliver wrote: “In a post that was removed, I pointed out that Climate Models are blatantly unscientific attempts by mortals to be our “Higher Power”. […] No thanks, we’re not buying !”
For fun, compare with the following…
Curry, J. (2012). What can we learn from climate models?
Dr. Curry provocatively asserts that a “Main advantage” of GCMs is:
“perception that complexity = scientific credibility; sheer complexity and impenetrability, so not easily challenged by critics”
More seriously: I disagree. The models are easy to crush (not just challenge) using aggregate properties of well-constrained geophysical variables.
Shorter Jim Cripwell:
Q: How do we know whether the models are any f…..g good at all?
A: At this time the climate modelling community does not regard investigating that aspect of their work to be a short or medium term priority. Nor do they believe that an unhealthy denier-driven focus on ‘results’, ‘achievements’ and ‘observations’ is a fair way to evaluate these great intellectual edifices.
One thing I can say about the future with certainty is that things will stay the same or things will change. So one way of evaluating a prediction or forecast is determining if its more accurate than an assumption of no change.
The word validation was used once and this is how it was used.
The whole issue of verification and validation was ignored
Indeed. “Overall, climate modeling has made enormous progress in the past several decades” The evidence for which is what, exactly?
If decadal forecasts (a stupid concept in itself) are ‘hard problems’, then what is the IPCC about!!!!
Would it be better to forecast no change than attempt decadal forecast?
I should have said would it be better to assume no change than attempt decadal forecasts. However, an assumption of no change is a forecast.
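For what it’s worth, standard forecast verification scores a model against exactly this no-change (persistence) baseline: a forecast has skill only if it beats it. A toy sketch with invented numbers:

```python
import numpy as np

# Invented verification data: observed values at forecast start and valid time,
# and a set of model forecasts for the valid time.
start_obs = np.array([10.0, 12.0,  9.0, 11.0, 13.0])   # observed at t0
valid_obs = np.array([11.0, 11.5, 10.0, 12.5, 12.0])   # observed at t0 + lead
model_fc  = np.array([10.8, 11.8,  9.8, 12.0, 12.5])   # model forecasts

mse_model = np.mean((model_fc - valid_obs) ** 2)
mse_persistence = np.mean((start_obs - valid_obs) ** 2)  # "no change" forecast

# Skill score: 1 = perfect, 0 = no better than persistence, < 0 = worse.
skill = 1.0 - mse_model / mse_persistence
print(f"skill vs. persistence: {skill:.2f}")
```

With these made-up numbers the model beats persistence; the same score computed against a climatology baseline is the usual companion test at longer leads.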
omnologos | September 14, 2012 at 9:42 am:
“If decadal forecasts (a stupid concept in itself) are ‘hard problems’, then what is the IPCC about!!!!”
Money, power, ideology. Not necessarily in that order.
“climate modeling has made enormous progress”
I’ve never seen any evidence of this.
“Progress” is measured against a goal. If the goals are money, power, notoriety, and ideology, then I’d say there’s been enormous progress. If the goal is skill in climate prediction, not so much.
“I’ve never seen any evidence of this.”
We don’t need better eyesight to see that solar & climate scientists tend to ignore (or at least deflect priority from) natural variability because they find it intractably enigmatic &/or threateningly inconvenient.
I suggest we show a whole lot more respect for nature’s power & beauty.
An easy step that can be taken immediately towards that end: Embrace the cross-disciplinary lead of NASA JPL’s Jean Dickey. (Please search her name in this page for links & notes I’ve shared today.)
Without more widespread awareness here of what Dr. Dickey teaches about TTG, the discussion at this blog may remain permanently divorced from realistic conceptualization of natural variability.
“I’ve never seen any evidence of this.”
I suppose you’d have to look for it first. But, even if evidence had been stuck under your nose, you still wouldn’t have the slightest idea you’d actually seen it anyway. So your point is?
tempterrain: Please try to be sensible. (Thanks sincerely for the effort if you can deliver.)
Please read my detailed comments elsewhere in this thread – e.g.:
In the interest of efficiency, I’ll start with a provocative comment and then switch to diplomacy afterward:
It’s unacceptable of the climate modeling community to think they should still be receiving funding 2 years from now if their models cannot reproduce the following OBSERVATIONS on EVERY model run:
Now I will attempt to assuage the fears of sensible modelers who realize that this is fundamentally important but lack the background to intuitively understand:
These are observed constraints on cross-ENSO AGGREGATES. Unless other exploratory efforts crystallize other constraining universal statistical properties of the Earth system, the modelers can carry on using ensembles, but my very strong advice is that without further delay they have to start learning from the exercise of mimicking OBSERVED AGGREGATE BOUNDARIES on spatiotemporal field components.
The news here is that the observed complex bounding envelopes can be decomposed in a manner that is demonstrably physically meaningful. This key base insight liberates and streamlines the pursuit of other fundamental revelations about zonal vs. meridional components of interannual to multidecadal climate variations.
Incorporating these insights into climate models is a very, very hard problem.
Sensible modelers who place strong value on integrity will agree that deeply fundamental new insights should not be ignored for years. One serious problem they will have is administrative. If we want to play a constructive role, we can help give them administratively defensible reasons for immediately devoting deeply serious attention to this fundamental matter.
Without help from the earth orientation community, I suspect climate modelers won’t really know where to start with this problem. I counsel them to reach out to good people like NASA JPL’s Jean Dickey.
Nature is beautiful. Let’s show her some respect. Let’s not misrepresent her!
I would like to see more interface between meteorology and climatology in an effort to extend the useful working range of weather forecasts from the current 4-7 days to, hopefully, several weeks.
The proposed interface should also benefit climatology by the provision of a greater empirical basis for the longer term climate models than those that are currently in use.
At 1st glance it does appear that this is more of an attempt to secure additional funding than it is to actually improve the climate modeling process.
It seems that point #4 should be made simpler: model developers should work to develop models that accurately forecast the criteria that are of importance to government policy makers. This means that the models need to be able to forecast over the appropriate time periods, and within the margins of error necessary for policy makers to formulate sensible policies.
“If the objective of this report is to argue for increased resources for the U.S. climate modeling enterprise, it might be effective. If the objective is to improve climate projections for decision making, it is not clear how much of an impact this will have.”
Appendix A has a task statement, presumably the committee’s charge, which is not the same as a goal of the document. A cursory scan of the document did not reveal a concrete discussion of its goal; rather, it appears to jump into specific topics ultimately relating to the task statement. It is unfortunate that a National Research Council (NRC) document on an important, highly visible topic does not succinctly convey what its purpose is. One can note that this does not disqualify the discussions in the report, but it may limit its use and effectiveness.
Judith’s comment above indicates a preliminary uncertainty on the part of a knowledgeable reader (admittedly with little time for assimilation) as to what the report is trying to accomplish. This sort of uncertainty (in the goal) is an uncertainty that the debate can do without. Is something better than nothing? I do not know; things like NRC reports can provide a mantle of authority and must be carefully crafted.
This is a prepublication copy, so I guess that is well beyond a draft, and I wouldn’t expect much in the way of structural changes in the final. To those that have not downloaded it yet: go for the individual chapters. The internal links and ‘go to’ functions are not working for the full document (OS X 10.6/Preview). Life may be easier.
My first reaction was the same as Rob Starkey | September 14, 2012 at 9:58 am. However, I tried to push that aside (not dismiss it) to better focus on the idea that the document does not seem to have a stated goal. The fact that the committee was unable to state what the document is supposed to accomplish does contribute to that initial reaction.
“contribute to a strong international climate observing system”
Poetry. Make it stop.
This looks good. It will be very informative to watch how members of the various climate “communities” react.
Will “realists” reject this initiative as wasting time that could more valuably be used acting on the information we already have?
Will “skeptics” ignore this focus on uncertainty, and reject this as more efforts by the “cabal” to control the debate?
Try “reject this initiative as wasting time that could more valuably be used adding to our detailed empirical knowledge of how the climate and feedbacks actually work”.
??? I don’t understand your point.
There are papers which say that what we thought we knew ain’t necessarily so (check out some sites which you probably disparage). My point is that instead of trying to get the models to fit all the info together, we should concentrate more on making sure the info is right.
That’s a given.
cui bono –
First – those two goals are hardly mutually exclusive. Second, I think that you are building on that false dichotomy to mischaracterize this initiative.
This exemplifies the problem I have with a lot of input from “skeptics”: They (rightly) call for a more explicit quantification of uncertainty even as they mischaracterize the extent to which “realists” do work to quantify uncertainty.
This is a good initiative. That doesn’t mean that it is above criticism, but my prediction is that “skeptics” will not merely offer criticism; they will pan it.
My perspective is that skeptics (i.e., people who don’t reflexively conceptualize the world with a binary mentality) would applaud this initiative as positive; even if an initiative – like anything in the real world – is not without flaws.
I think that Judith deserves credit for her balanced reaction to this initiative.
I think more focus on regional, decadal modeling is a good thing. Our climate in the northern tropics could depend on what happened in the southern 44-64 latitudes 30 years ago. By having better regional models that consider thermal mass and internal lags, we could learn whether a shift in, say, the PDO precedes a shift in another internal oscillation; then we could back-track and locate likely century-scale oscillations.
It wouldn’t do much for your annual forecasts, but it would help predict general longer term trends.
Of course, it helps knowing where to focus your attention :)
An unfortunate, or at least unusual, circumstance where I find myself mainly in agreement with our host’s thinking on a topic. I’ll have to put more time into evaluating this issue to see where I’ve gone so wrong. ;)
http://www.nature.com/nature/journal/vaop/ncurrent/full/nature11377.html highlights the need for advancing models and hypotheses through more and better observations, and more and better inference based on data. Without question, the separation of weather and climate study needs rebalancing, and the USA is in a particularly absurd situation as regards attitude toward better information. Let’s look at the current hurricane season: individuals, local government agencies, and states reliant on the official hurricane forecasts are served only a fraction as well as those who pay for private forecasting. While I admire this from the point of view of free enterprise and limits on government at a superficial level, deeper logic tells me that leaving emergency workers, police, firefighters, transportation planners, etc. in the relative dark for no good reason during the height of a crisis is idiotic.
We know mathematically that both our weather and our climate science may produce significantly better outcomes with better observations, on the order of six hundred times more observation than is done now. We know a hundred times the granularity of observation will tell us much about whether forecasting can be extended, or is at its limit of certainty. We know we only collect about one sixth of the interesting information on influences on weather and climate. It’s not rocket science to know we need more and better satellite observatories and ground-based stations to feed into both our weather and climate models.
And we know from long experience that if you persistently communicate well about a subject like meteorology, with all its nuance, the general public and policymakers can handle that information, and you don’t need to sugar coat, dumb down, or overly simplify the reports you deliver to us. If it’s true for meteorology, it’s true for climatology; only I hope the mistakes of meteorology aren’t repeated in climate policy communication.
And I’m not talking about http://www.youtube.com/watch?v=ccx2hGt8M9I
Re: “our weather and our climate science may produce significantly better outcomes with better observations”
With the TRUTHS project, Nigel Fox of NPL is seeking to improve satellite measurements of Earth by an order of magnitude – and exposes the very poor state of current instrumentation. If national climate models are ever actually verified and validated, their accuracy will be foundationally limited by the current poor instrumentation – and by insufficient runs to obtain significance from chaotic variations.
This, I agree, is the ‘missed elephant in the room’ –> yours “The whole issue of verification and validation was ignored (apart from a general call to maintain the observing system).” Total disregard for model verification and validation makes one wonder if they are at all serious about modeling the weather and climate.
Some of this touches on issues of Measurement Theory: ‘scale-awareness’, uncertainty about whether the measurements (numbers) we have actually represent the climate attributes we think they represent, transformations performed on data in the process of down-scaling causing loss of ‘meaningfulness’, etc.
In the long run, without a sharp focus on verification and validation, are we wasting our efforts? Unverified, unvalidated models (IMO) are an interesting curiosity rather than a tool.
Speaking of the amazing Higgs-boson, what’s up?
Re: “scale awareness”
Do the modeled climate statistics from the national climate models accurately mimic nature across varying time scales? Are such climate statistics being evaluated to compare accuracy? Stochastic models may actually reduce the uncertainties in climate evaluation compared to deterministic models. Markonis and Koutsoyiannis find:
Markonis, Y., and D. Koutsoyiannis, Hurst-Kolmogorov dynamics in long climatic proxy records, European Geosciences Union General Assembly 2011, Geophysical Research Abstracts, Vol. 13, Vienna, EGU2011-13700, European Geosciences Union, 2011.
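For readers who want to see what Hurst-Kolmogorov persistence looks like in practice, here is a minimal rescaled-range (R/S) sketch in Python. It is an illustrative implementation of the classical R/S estimator applied to synthetic data, not Markonis and Koutsoyiannis’s code; the function name, window scheme, and test series are my own assumptions.

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Estimate the Hurst exponent of a series by rescaled-range (R/S) analysis.

    H ~ 0.5 for white noise; H > 0.5 signals the long-range persistence
    characteristic of Hurst-Kolmogorov dynamics.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes, rs_means = [], []
    size = min_chunk
    while size <= n // 2:
        rs = []
        for start in range(0, n - size + 1, size):
            chunk = x[start:start + size]
            dev = np.cumsum(chunk - chunk.mean())
            r = dev.max() - dev.min()   # range of cumulative deviations
            s = chunk.std()
            if s > 0:
                rs.append(r / s)
        sizes.append(size)
        rs_means.append(np.mean(rs))
        size *= 2
    # Slope of log(mean R/S) against log(window size) estimates H.
    h, _ = np.polyfit(np.log(sizes), np.log(rs_means), 1)
    return h

rng = np.random.default_rng(0)
white = rng.standard_normal(4096)            # no memory: H near 0.5
walk = np.cumsum(rng.standard_normal(4096))  # strongly persistent: H near 1
print(hurst_rs(white), hurst_rs(walk))
```

The contrast between the two estimates is the whole point of the proxy-record analyses: climate series that behave like the second case carry far wider uncertainty than classical statistics assume.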
Re: “are an interesting curiosity rather than a tool”
Unvalidated models presented as truth by alarmists can cause great harm to the public when misguided politicians fail to realize that “the emperor has no clothes.”
Hagen –> if you are interested, see
Measurement and Meaning in Biology
Author(s): By David Houle, Christophe Pélabon, Günter P. Wagner, and Thomas F. Hansen
The Quarterly Review of Biology, Vol. 86, No. 1 (March 2011)
Thanks Kip for that article.
Such scale issues are also explored by Adrian Bejan in Shape and Structure, from Engineering to Nature (2000) Cambridge. e.g. heartbeat rate from shrew to whale. (Aside: That raises the interesting challenge of the origin and preservation of quantitative genomic algorithms that control shape during growth, which I don’t think either doc addresses.)
“Scale” raises the interesting question as to whether scale issues of deep convection and associated cloud variations are properly incorporated into climate models. e.g.,
Do current climate models properly reproduce Willis Eschenbach’s “Thermostat hypothesis” diurnal variation of clouds – albedo?
Getting precipitation right
One foundational challenge is first to get the physics right in every area!
David Stockwell (2008) Tests of Regional Climate Model Validity in the Drought Exceptional Circumstances Report found that the CSIRO’s drought predictions were contrary to the historical evidence using hindcast/forecast evaluations.
Furthermore: Afternoon rain more likely over drier soils, Christopher M. Taylor, Richard A. M. de Jeu, Françoise Guichard, Phil P. Harris & Wouter A. Dorigo, Nature Letter, DOI: 10.1038/nature11377, online 12 September 2012
WJR Alexander et al. found that precipitation and runoff vary strongly with the 21 year Hale cycle but NOT with surface evaporation. Linkages between solar activity, climate predictability and water resource development* J. So. African Inst. Civil Engineering Vol 49 No 2, June 2007, Pages 32–44, Paper 659
Will the National Strategy incorporate state of the art professional management methods for verification and validation, to systematically apply the scientific method, discover such missing physics, and correct the models to match the evidence?
WRT Taylor et al. (2012), good catch. It’s behind a paywall, and when I searched for it with Google Scholar, I found one article referencing it: Evaluating global trends (1988-2010) in harmonized multi-satellite surface soil moisture. When I clicked on that, I got an “AGU’s Online Services Login” screen that told me “Purchase option not available for this content”. I’m not that much into conspiracy theories (heh) but I’m wondering what they’re hiding when an outsider can’t even buy content.
If “Skeptics” with money want to use it well, they might consider paying the fee to make content like this open-access. AFAIK most reputable journals offer this option. If they don’t, this becomes a cause célèbre in and of itself.
Based on the abstract, they’re evaluating effects on a 50-100 km scale, where differential convection from heating of drier soils could tend to collect cooler air from nearby areas where evaporation has delivered the water vapor to the air. This is certainly different from the continental scale (200-1000 km), where increases in soil moisture could easily lead to more precipitation generally. Without seeing the text I can’t even determine which scale the models they’re denigrating are using. Either way, though, I’d like to see an evaluation by a qualified and objective climate scientist. Prof. Curry?
Oops that link should be Evaluating global trends (1988-2010) in harmonized multi-satellite surface soil moisture
let us know when scafetta’s “model” does precipitation
steven mosher | September 14, 2012 at 12:58 pm
Steven, I don’t understand your post at all. You are the only person in the thread to mention either Scafetta or “scafetta’s ‘model’ “, and as is your lamentable habit, you provide no link to the model and no clue what it is you are mumbling about … you may find this style of cryptic posting cute or endearing, but from this side of the screen, it’s a pain in the fundamental orifice.
Not only that, but your posting style is damaging to your reputation, and it detracts from the weight that your words should have. You are a smart man, indeed a brilliant man, and your words should carry weight … but this kind of bogus post of yours reduces you to looking like one among thousands of jumped-up, clueless internet fools. I doubt that is the persona that you think you are presenting, but I assure you, it is definitely what you look like when you post like that.
Willis Eschenbach | September 14, 2012 at 1:35 pm | Reply
“you provide no link to the model and no clue what it is you are mumbling about … ”
“this kind of bogus post of yours reduces you to looking like one among thousands of jumped-up, clueless internet fools”
I reduced your comment for brevity. It should have a moral at the end:
“If it ducks like a quack…”
p.s. Scafetta is associated with Sky Dragons. You are associated with Scafetta. Sky Dragons is crank science. Guilt by association. Mosher wasn’t being very cryptic. It was a thinly veiled insult. The false flattery you provided in return might be a bit more subtle. Ya think it’ll work? Mebbe get a little mutual back patting in motion?
I threw up in my mouth a little when I wrote that last sentence…
it’s an ongoing question I have for david who seems to think that scafetta has a model of the climate. it’s none of your business.
Scafetta’s temperature model currently appears more accurate than the IPCC’s. Consequently, anyone combining WJR Alexander’s discoveries connecting precipitation/runoff and the Hale cycle with Scafetta’s model would have a more accurate decadal precipitation model for the temperate regions.
(Global response in the tropical region may be different. See the flow records of the Nile.)
The impact of solar cycles on precipitation has been tracked from medieval through modern times via the price of wheat or grain. e.g., SPACE CLIMATE MANIFESTATION IN EARTH PRICES – FROM MEDIEVAL ENGLAND UP TO MODERN USA, L.A. PUSTILNIK, G. YOM DIN, Solar Physics, Volume 224, Numbers 1-2 (2004), 473-481, DOI: 10.1007/s11207-005-5198-9
Perhaps you could enlighten us as to the performance of IPCC’s models from medieval through modern times.
David, you refer to scafetta’s climate model. it’s not a climate model. When called on that you change your tune and call it a temperature model.
it’s not even that.
Last I looked, Scafetta’s thing was a curve-fitting, pattern-matching exercise against solar system variations.
In this case WebHubTelescope is correct which once again proves that even a blind squirrel occasionally finds an acorn.
Maybe Scafetta discovered the dawning of the Age of Aquarius all over again.
Space weather appears to have regional effects, with Northern Europe differing from Southern Europe and Central Europe. See:
Possible Space Weather Influence on Earth Wheat Markets 2009 L.A. PUSTILNIK, G. YOM DIN
Is there any hope that the National Climate Strategy might aspire to such regional performance and develop models providing similar regional and long range accuracy for Northern America?
I understand Scafetta’s model to be a model of global temperature, which is one measure of climate. It obviously is not a 3d global climate model a la IPCC. Models like Scafetta’s and trend comparisons like Lucia’s can point out that the 3d GCMs are missing physics.
Does that answer your concerns?
“The promise of uncertainty reduction, when not realized, stands as a metric of poor management, poor scientific method, or outright scientific failure.”
With absolute certainty, it’s (massively expensive) outright failure.
“This is a FAR GREATER impediment to providing information for regional decision makers than is model resolution. I see no strategy here for actually addressing this issue.”
Far darker (malicious deception &/or deep ignorance – intolerably dark either way) than that. They’re outright recommending that funding NOT be prioritized for natural variability exploration.
This is an intolerably severe error in judgement.
They shamefully attempt to characterize natural variability exploration as a hopelessly impossible pursuit, highlighting only their own personal lack of ability &/or integrity.
The only possible conclusion:
They lack expertise and can’t be trusted.
The only sensible option is to redirect behemoth “climate modeling enterprise” funding towards natural variability exploration, the base prerequisite – at the level of absolute fundamentals – to the assembly of meaningful climate models.
Until natural variability is FULLY understood, models are art supporting fiction-based culture.
I wonder if somewhere lurking in the wings, is a realization that Smith et al August 2007 is going to turn out to wrongly predict what global temperatures would be in 2014. When this paper was written, it seemed to answer the need for an explanation as to why global temperatures were not rising as much as predicted in the 21st century. This it did. Whether it was the correct explanation is, however, doubtful, IMHO.
However, as JCH is emphasizing, decadal predictions are now supposed to be too difficult to do accurately, and so we can disregard what Smith et al wrote in 2007. I wonder if there is not some message in the NRC proposal that underlines JCH’s point: that in 2007, decadal predictions were simply not possible.
They were considered difficult to do long before Smith et al. Trying to say that this is an invention post-Smith et al, as some sort of excuse, is dishonest.
It is also dishonest to say either the success or failure of Smith et al says much about climate models. It’s likely that early successes in decadal models will be nothing but luck.
What they can do with Smith et al, hopefully, is learn – to advance the ball.
Smith et al is already correct on one thing. Unforced natural variation suppressed the AGW signal in the first years of their forecast.
There is now the problem that HadCrut3 is rendered a regional temperature series, and is simply an incorrect way to judge Smith et al, which is a global forecast. According to Gistemp and Hadcrut4, Smith et al is not in as much trouble as you think.
Primary versus Derivative Climate-Change Models
Regarding climate models, simple energy-balance entropy-gradient models have of course for many decades provided our primary understanding of climate change.
Derivative Climate-Change Models
The following classes of derivative models are good and useful, of course …
• purely historical models, and/or
• purely statistical models, and/or
• computational fluid dynamical models, and/or
• purely economic models.
But none of these are primary, eh? :!: :!: :!:
The Primacy of Thermodynamics
There’s a reason why the basic thermodynamical principles associated with energy and entropy are called “The First Law” and “The Second Law” — it is because no natural system, and no artificial system, has ever been observed to violate the Two Laws.
This astounding perfect record of infallibility cannot be asserted of historical, statistical, computational, or economic models, eh? :smile: :grin: :lol: :!:
Hansen’s Foresighted Modeling Strategy
So perhaps James Hansen and his colleagues are wise to rest their climate-change predictions so largely upon the Two Laws! :smile: :grin: :lol: :!:
And Hansen’s thermodynamic focus is not surprising … ever since Hansen’s pioneering studies of energy-balance in the Venusian atmosphere, he has *always* founded his conclusions and predictions upon thermodynamical principles.
How to Disprove Hansen’s Climate-Change Worldview
Now, if the thermodynamically predicted “acceleration of sea-level rise this decade” is not seen … and neither are the concomitant ice-melts and sea-warmings … then the climate-change community will conclude that some crucial piece(s) is/are missing from our present scientific understanding of earth’s energy balance.
And climate-change scientists will embrace this paradigm-shifting conclusion happily, as in “Hurrah! Here’s a chance for new, terrific, path-breaking science!”.
Realizing Climate-Change Paradigm-Shifts
But to realize that paradigm-shift, climate-change scientists will not resort to more complicated dynamical models … neither will they resort to more complicated statistical models … rather they will first amend our simplest thermodynamical models of planetary energy balance.
Conclusions
When climate-change scientists press for larger and more sophisticated computational models, and for more comprehensive and more accurate observational surveys, this amounts to a scientific vote of confidence that Hansen’s climate-change worldview is essentially correct … such that “all” that remains to be done is to accurately tune its (rather simple) thermodynamical parameters, via more extensive observation and modeling.
This model-parameter tuning is of course important, eh? And we can be quite certain that it will be accompanied by the usual scientific disputation, and even tendentious squabbling! :) :) :)
But it does not alter the simple main thrust: most scientists think Hansen’s energy-balance worldview is scientifically correct.
Uhhh … unless sea-level rise doesn’t accelerate, that is! :) :) :)
‘And climate-change scientists will embrace this paradigm-shifting conclusion happily, as in “Hurrah! Here’s a chance for new, terrific, path-breaking science!”.’
Goddammit, you owe me a keyboard now I’ve spluttered coffee all over it. :-)
LOL … the plain fact is, that scientists in general — and younger scientists in particular — absolutely *LOVE* to prove that the senior scientists have got it all wrong!
That is why, in predicting “acceleration of sea-level rise this decade,” Hansen and his colleagues have pushed out their scientific chins a pretty long way! ;) ;) ;)
Every young climate-scientist in the world would *LOVE* to someday give a lecture and/or publish an article explaining exactly why Hansen and his colleagues got it all wrong. Conversely, if sea-level rise *DOES* accelerate, then Hansen and his colleagues will sustain their reputations as top-dogs of climate-change science.
So it’s a tough-but-fair game, eh? :) :grin: :lol: :!:
Very good AF OMD. I would like to add that the second law in particular arises from deep statistical mechanical considerations, so that in a way, large-scale statistics is a primary view of our understanding. The statistics always show that mean-value evaluation of energy balance and entropy accounting is what ultimately allows for long-range predictions.
yeah that was actually pretty good. usually the Hansen adulations bore me, and Web seems to think that because I disagree with him on some specifics, I disagree with the sorts of things he says here.
BillC, when fighting the stupid there is often collateral damage. When agenda politics also plays a role, feelings can get trampled. And then there are the old white guys that are prone to yelling at the clouds. Encountering crackpots and cranks at every turn doesn’t help matters.
Isn’t this the parable of the Incredible Hulk? Hulk get angry, and must smash all in path.
Is that science in action? More like education in the school of hard reality.
I certainly agree that entropy is increasing locally and a lot of heat is being dissipated.
The other NRC – the Nuclear Regulatory Commission – has excellent guidance documents for both model verification and validation, and use of existing data. These common sense guidelines are rigorously enforced in the nuclear world. It is a shame that climate modeling and data are not routinely subjected to the same standards. The societal impacts of Fukushima are piddling compared to the projected impacts of CO2 (if the CAGW crowd is correct) or to the socio-economic impacts of ill-considered actions (if they are wrong).
John Plodinec, the reactor melt-downs at Fukushima did not arise from a paucity of scientific and engineering understanding … they came from TEPCO executives who, in the interest of maximizing short-term profits, chose to willfully ignore the “inconvenient truths” that were asserted by Japan’s scientists and engineers
Duh. :shock: :oops: :shock: :oops: :shock:
So what? I was comparing consequences, not causes.
@A Fan of John Seidel
That the executives in Japan chose to act in a reckless manner does not, to my mind, give carte blanche to climateers to avoid showing their work is any good for anything.
Any more than rampant overuse of emoticons gives your remarks immunity from critical examination.
Re: “Traditionally the number of ensemble members has not been large (e.g., around three in the CMIP3 data set), nor has it been based on rigorous statistical considerations.”
Fred Singer exposes the consequences of this foundational neglect of the chaotic nature of climate:
NIPCC vs. IPCC: Addressing the Disparity between Climate Models and Observations: Testing the Hypothesis of Anthropogenic Global Warming (AGW), Interim Science Update, Presented at Majorana Conference in Erice, Sicily August 2011
The IPCC’s 0.2 C/decade continues to be 2 sigma hotter (> 95%) than the historical 32-year temperature trend.
Will the National Strategy for Advancing Climate Modeling ever require sufficient runs for models to be statistically significant, and then verify and validate the models against objective evidence?
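For what a “2 sigma” comparison of a projection against an observed trend involves, here is a minimal sketch of an OLS trend test. The series is synthetic with a purely illustrative trend, not HadCRUT data, and the plain OLS standard error below ignores autocorrelation, which would widen the uncertainty for a real temperature record.

```python
import numpy as np

def trend_per_decade(years, temps):
    """OLS trend in deg C per decade, with the standard error of the slope."""
    x = np.asarray(years, dtype=float)
    y = np.asarray(temps, dtype=float)
    n = len(x)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    sxx = ((x - x.mean()) ** 2).sum()
    se = np.sqrt(resid @ resid / (n - 2) / sxx)  # plain OLS slope SE
    return 10 * slope, 10 * se

# Synthetic 32-year record: an assumed true trend of 0.10 C/decade plus noise.
rng = np.random.default_rng(1)
years = np.arange(1980, 2012)
temps = 0.010 * (years - years[0]) + rng.normal(0.0, 0.1, len(years))

trend, se = trend_per_decade(years, temps)
projected = 0.2  # C/decade, the projection being compared against
z = (projected - trend) / se
print(f"observed {trend:.3f} +/- {se:.3f} C/decade; projection is {z:.1f} sigma above")
```

The same arithmetic applied to the real series is what produces the “2 sigma hotter” statement; the chaotic-variability point is that too few ensemble runs makes the model side of that comparison statistically meaningless.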
[Posted this on the wrong (earlier) thread, so am re-posting here]
Thanks for re-posting the link to your DOE BERAC slides “What Can We Learn from Climate Models?”.
After going through these, this is what I have “taken home”:
GCMs are better than they were: more computer power, more physical processes included, more comparisons with physical observations, good results for regional applications over short time scales
But there are too many uncertainties in assumptions, too many “science gaps”, poor prediction capability, overconfidence in results, poor performance for global applications or over longer time scales, and a lack of formal model verification and validation (circularity).
For what purposes are models fit?
Yes – explore scientific understanding
Maybe – attribution of past climate changes
– prediction/attribution of extreme weather events
– prediction of global climate changes, due to non-predictable factors and abrupt climate changes (“black swans” and “dragon kings”)
Some conclusions (without going into all the detail):
– GCMs should be used more for understanding natural (solar) forcing and natural climate variability
– need a plurality of different GCMs for different purposes
– GCM-centric approach is not the best for “decision support”
This presentation clears up a lot of the confusion surrounding the validity of GCMs as they are today in attributing past climate change or predicting (or projecting) future climate or extreme weather trends.
The slides do not state specifically whether or not GCMs are suitable for estimating climate sensitivity (but it hints that this is not so).
After reading this presentation, I remain rationally skeptical that GCMs are much better than simple educated guesses in telling us why our climate behaves as it does over longer time periods, or in making projections for the future that have any real significance for “policy makers”.
PS If I have misinterpreted or missed out on any critical points, please straighten me out. Thanks
Sounds like a plea to throw good money after bad. The US is tapped out. Find the money somewhere else.
Check out the “validating climate models” page for the new website that accompanies this strategy document.
They show two examples of model outputs for run lengths of 12 hours for one and maybe 48 hours for the other. That’s not climate forecasting, it’s weather forecasting, and it isn’t even the usual 10-day forecast.
That’s just pathetic. Who exactly do they believe this is going to fool?
My favorite part was this, which is mentioned but doesn’t seem to be particularly understood:
What they are saying is that despite huge increases in all of the areas that they discuss (computer speed, model size, model complexity, ocean-atmosphere coupling, etc), in thirty years (not twenty as they state) we have made absolutely no progress in calculating what is claimed to be a fundamental metric, the “climate sensitivity”.
Now, after thirty years and millions of man-hours have led to no progress at all, anyone with half a brain would say “Huh … maybe we’re doing something fundamentally wrong”.
But not the NRC, oh, no, they are out to definitively prove they don’t have a brain. They cheerfully advocate more of the same …
Now, as I’m sure folks here have heard, madness has been defined as doing the same thing over and over and expecting a different result.
By this definition, the NRC advice, which consists of lets do what we have been doing only harder and with finer resolution and greater speed, is assuredly mad.
We don’t need new faster computers or bigger, more complex programs. We need to throw out the existing programs, and start over with the correct paradigm. The current paradigm is that the surface temperature slavishly follows the TOA forcing … except it doesn’t do that. The climate responds and reacts to changes in forcing, it does not follow them wiggle for wiggle as the current paradigm claims.
As an example of a different kind of conceptual/computer model of the climate that includes the response of a living climate, see e.g. Hsien-Wang Ou, Possible Bounds on the Earth’s Surface Temperature: From the Perspective of a Conceptual Global-Mean Model. Unfortunately, this kind of model is completely ignored by the childish let’s-keep-doing-it-wrong-and-ignore-continuous-failure recommendations of the NRC …
“Now, as I’m sure folks here have heard, madness has been defined as doing the same thing over and over and expecting a different result.”
You mean like pointing out the flaws in climate pseudo-science over and over in blog postings and expecting this will cause a sea-change in belief?
Uhhh … or what they’re saying *could* amount to this: James Hansen and his colleagues basically got climate-change science right, way back in 1981.
Now, ain’t that a more rational explanation, Willis? :smile: :grin: :lol: :!:
It’s not complicated, Willis Eschenbach! :smile: :grin: :lol: :!:
Hansen’s 1988 predictions turned out to be wrong, as has been demonstrated elsewhere on this site and many other places.
His projections for increases in anthropogenic GHGs (primarily CO2) were pretty close (CO2 actually increased slightly faster than he projected for his “Case A”)
His projection for the rate of temperature increase, however, turned out to be exaggerated by a factor of 2:1.
In fact, it came out quite close to his “Case C”, for which he assumed no further GHG increase after 2000.
It appears to me that Hansen’s assumed climate sensitivity was exaggerated by a factor of 2:1
Now, ain’t that a rational explanation, fan?
(I’ll pass on the smileys)
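The “factor of 2” argument above is simple linear scaling: if the realized forcing roughly matched the projection, then in a simple energy-balance view the implied sensitivity scales with the ratio of observed to projected warming. A sketch with purely illustrative numbers (not Hansen’s actual figures):

```python
# All numbers are illustrative assumptions, not Hansen's actual figures.
projected_trend = 0.30   # C/decade, hypothetical model projection
observed_trend = 0.15    # C/decade, hypothetical observed trend
model_sensitivity = 4.2  # C per CO2 doubling, hypothetical model value

# Warming scales linearly with sensitivity in this simple view,
# so halving the warming implies roughly halving the sensitivity.
implied_sensitivity = model_sensitivity * (observed_trend / projected_trend)
print(implied_sensitivity)  # 2.1
```

The caveat is the premise: this only holds if forcing, ocean heat uptake, and unforced variability all came in roughly as projected, which is itself disputed in this thread.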
Verifying Hansen’s scientific consistency spanning the three-plus decades of 1981 through 2012 sure is a lot more informative than parsing juvenile neodenialist quibbles and squabbles, eh manacker? :smile: :grin: :lol: :!:
The only bit of ‘scientific consistency’ that I can see about Hansen is that he is consistently wrong about everything,
If your last clause was supposed to have any meaning, your ridiculous way of expressing it meant that it passed me by.
Hansen was more right than wrong. He projected global temperature would rise during the 1988-2020 period. So far he’s been right. It has risen. He got the amount wrong, but anyone who expects spot-on accuracy in a long-term projection is not very wise, and might be an easy mark for a con.
If Hansen’s projections were for the stock market, I would be pleased had I invested based on them.
Max_OK, your comment is entirely correct! :smile: :grin: :lol: :!:
Climate Etc folks can verify for themselves that Hansen’s 1981 through 2012 articles present a consistent climate-change worldview that has been very largely validated. :smile: :grin: :lol: :!:
Folks who read for themselves … and verify for themselves … and think for themselves … can appreciate for themselves that neodenialist climate-change quibbles and squabbles are nugatory, eh Latimer Alder? :smile: :grin: :lol: :!:
Hansen ‘consistently wrong about everything’? Please provide the long list of things that Hansen has been wrong about over the last thirty-odd years.
‘He projected global temperature would rise during the 1988-2020 period. So far he’s been right’
Well big f…g deal! Arrhenius predicted that temperatures would rise back in 1907. You don’t get a lot of brownie points for that.
Tell you what, I’ll make a couple of Hansenesque predictions too.
1. In 50 years average global temperatures will not be less than they are today
2. In 50 years time, average global sea level will not be lower than it is today.
There. That’s climatology done. we can all pack up and go home.
For my next trick I’ll predict that Aldershot will beat Morecambe in the football tomorrow. Or it’ll be a draw.
At least you have the grace to admit
‘He got the amount wrong’ which is a pretty major failing for a forecaster.
Here’s Hansen’s publications page.
Over to you.
Or did our fan of discourse have a point?
Sorry, fan, your blah-blah has not changed the fact that Hansen’s 1988 projections of temperature rise due to rising GHG concentrations (primarily from CO2) turned out to be exaggerated by a factor of 2, as I showed you.
I suspect that this was because Hansen’s model used a climate sensitivity that was 2x too high.
What is your rational explanation, fan, eh?
You bet “he got the amount wrong”
By a factor of 2X as a matter of fact.
This is because his models used a climate sensitivity which was also “wrong” (i.e. exaggerated) by a factor of 2X.
Quite simple, actually.
And that was my point to fanny.
Max (not from OK)
Manacker, have you noticed in your reading of the climate-change literature, that James Hansen and colleagues have tuned their predictions to focus upon Earth’s Energy Imbalance and Implications?
That’s for the common-sense reason that global measures are less sensitive to local fluctuations, such that the resulting climate-change predictions are less vulnerable to neodenialist agnotology, cherry-picking, and demagoguery.
That’s a mighty smart adjustment, eh manacker? :smile: :grin: :lol: :!:
Your way of attempting to communicate is very annoying and off-putting. It makes one think you are batty and therefore that it would not be worth one’s while to click on your links.
Latimer Alder boasted on September 14, 2012 at 3:58 pm
“Tell you what, I’ll make a couple of Hansenesque predictions too.
1. In 50 years average global temperatures will not be less than they are today
2. In 50 years time, average global sea level will not be lower than it is today.”
But you pulled those forecasts out of your butt, which means what you did can’t be replicated, and therefore your butt model is not scientific, although it might be big.
‘That’s for the common-sense reason that global measures are less sensitive to local fluctuations, such that the resulting climate-change predictions are less vulnerable to neodenialist agnotology, cherry-picking, and demagoguery’
Fantastic. Wonderful. Gosh I’m overwhelmed.
But has it made Hansen’s predictions any better? Reduced the error in temperature from a 100% overestimate to something more manageable – 85% perhaps?
Or is it the same old same old…?
Back in the nursery
Nanny: ‘Look children, Uncle Jimmy’s got a big new ‘puter and he says that the temperature will go up. And it does. Isn’t Uncle Jimmy and his ‘puter clever!’
A Fan of Himself (Age 5): Wow – he’s nearly as clever as I am :-) :-) :-). I will love Uncle Jimmy forever no matter what stupid stuff he does
Mad Max: He’s right, he’s right, he’s right. I love Uncle Jimmy.
Sensible Max: But he said it would go up twice as much as it did. I don’t think Uncle Jimmy’s anywhere near as clever as you all think he is.
Nanny: Stop asking difficult questions Sensible Max. We all know Uncle Jimmy is a very very very clever man because he keeps telling us so! And that’s all you need to know.
A Fan (sotto voce) Neat idea…I must do that if I ever grow up.
Nanny: It’s jelly and cake time … no more questions children. Next week we’ll have nice Uncle Bernie Madoff to tell us all about putting our money in his piggy bank.
A Fan and Mad Max together : We love Uncle Jimmy…he’s our top man
:-) :-) :-)
Sensible Max ; I think having this ‘Uncle’ gig every week will only lead to tears before bedtime…
Climate realists have their one endearing, wigged-out personality, and the climate skeptics go nuts. Meanwhile, the skeptics’ GoTeam consists of some 30+ crackpots who comment here regularly (see http://tinyurl.com/ClimateClowns).
Ain’t it something to behold? One Fan from the sideline, benignly heckling the opposition, and they get lathered into a froth? Can’t make the free-throws, eh team?
A big part of the ‘problem’ here is that our fan of discourse is running very large rings around the crypto-denialists, laughing all the while. Keen eyed readers will note that Latimer Alder’s responses contain no content other than bile. LA still hasn’t provided a certain list he needs to provide. LA needs to stop talking and start saying something.
Oh yeah, LA is essentially practicing the fine art of English aristocratic rhetoric. Say it with some articulation, and it goes down more smoothly, no matter the empty content.
The bile part is kind of funny, because Latie actually threatened me with some British libel law a while back. These bullies can dish it out but can’t take it. LA evidently comes here because this is not sovereign English land.
You know, BBD, one of the freak-show pleasures of this blog is eavesdropping on those complete creep-out, heavy-petting, buddy-buddy, delicate, oh-so-intimate schmooze-boogers you hive-bozo weirdos are forever dramatically launching some good-comrade sweetheart’s way–with appropriate deep sighs and wet eye-lashes special effects, of course. And your last is pretty much as good as it gets in that particular genre.
And, I’m thinkin’, BBD, that when you carbon-piggie enviro-hypocrites attend your never-ending, boondoggle, CO2-spew eco-confabs (all such conferences easily held in a low-cost, low-carbon, video-conference format, we all understand, but, as us hapless taxpayers stuck with your tab also understand, low-cost, low-carbon options are not the style of you gravy-train-addicted dorks most convinced of the pestiferous, carbon-peril crapola, and, mouth-off wise, most committed to reduced-carbon life-styles), your last comment pretty much typifies the character of the all-nighter, boozy, grab-ass, towel-snapping hob-nobbing and palsy-walsy networking indulged in by you and your trough-addicted, fellow parasites at your little gatherings. Am I right, BBD? Or am I right?
And, oh by the way, just in case there are any especially clueless, complete-water-melon-brain geek-balls who are unfamiliar with Latimer’s magisterial work–let me clarify that Latimer is on the side of history. ‘Nuff said, right comrades!
Another splendidly bilious but absolutely content-free comment. Bravo. Do I detect a bat-squeak of pent-up homoeroticism behind all the alliteration?
Yr: “Do I detect a bat-squeak of pent-up homoeroticism…”
Looks like moderation ate my last so I’ll more artfully re-make my previous reply and hope for the best.
Dear, dear, BBD, what are we to make of your last? Earlier we got from you the “zinger” neologism “crypto-denialists” and now we get “bat-squeak”. I mean, like, where does the hive come up with lame-brain, gag-reflex-inducing wordsmith-losers like you, anyway? Jeez.
And then there’s the “other part” of your comment. You know, the part, BBD. Well, BBD, despite your best efforts to call me out as a “HXXO!”, I regret to inform you that the only thing you “detected” in my prior comment was that raw nerve of yours, which I touched big-time, and the lefty’s love of ageist, sexist, racist (e.g. old, white, men, that last word said with a dramatic lowering of the voice and with an asp-like rasp in the throat), and, yes, even sexual-preference put-downs, when it serves the “cause.”
You know, BBD, I don’t know which of you two, right-out-of-the-box, cracker-jack prizes I prefer, you or fan (I assume even you hive-bozos disavow that Michael creep-out and he’s just one of your can’t-take-a-hint, pesky tag-alongs).
Yr: “Thought so.” cum smiley-face
Thought so? Thought so WHAT, BBD? Maybe, just maybe, your thought was that yours truly is an “old, white, guy”–that’s it, isn’t it, BBD? Well, if you and your moon-faced chum with the idiot grin thought that, BBD, you just might have a point, I suppose.
This Thread Starts With: Overall, climate modeling has made enormous progress in the past several decades,
During the past several decades, more and more warming has been forecast and Earth is no longer warming. That is not any progress.
Check out the “Users of climate models” page for the new website that accompanies this strategy document.
This is a wish list. They give examples of users of climatology and conflate climatology with climate model predictions. Climatology has been around since the Pharaohs. They built their civilization around the Nile River because climatology indicated that’s where the water was in the past and therefore where the water was likely to be in the future.
Again, who are they trying to fool by conflating climatology with climate modeling?
The best model for the future can be obtained by curve-fitting the past ten thousand years and extending that forward. The actual history forecasts the future. That means that the average global temperature will stay within plus and minus one degree C most of the time and within plus and minus two degrees C all the time, just like the ice cores show has happened.
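As an illustration of the curve-fit-and-extrapolate idea (not an endorsement of it), here is a minimal sketch on synthetic data; the series, fit degree, and horizon are all invented for demonstration:

```python
import numpy as np

# Synthetic stand-in for a long temperature reconstruction:
# a bounded millennial oscillation plus noise (purely illustrative).
rng = np.random.default_rng(42)
years = np.arange(-10000, 1)        # 10,001 annual values up to "the present"
temps = np.sin(2 * np.pi * years / 1000.0) + 0.1 * rng.standard_normal(years.size)

# Fit a low-order polynomial to the past and extend it forward.
# (Scaling years to millennia keeps the fit well conditioned.)
coeffs = np.polyfit(years / 1000.0, temps, deg=3)
future = np.arange(1, 101)          # the next 100 years
projection = np.polyval(coeffs, future / 1000.0)

# Note: polynomial extrapolation is notoriously unstable; this sketch
# only shows the mechanics, not that the method has any forecast skill.
print(projection.min(), projection.max())
```

Just past the fit window the projection stays near the historical envelope, but nothing in the mechanics guarantees that further out, which is the standard objection to naive extrapolation.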
A lot said, little understood.
One simple graph is worth many erudite pronouncements:
There are more in the store.
Jean Dickey says the sequence is:
1. poleward SST gradient.
2. GPH gradient.
3. circulatory morphology.
She gives an outline of the thermal wind relation in Appendix A.
Dickey, J.O.; Marcus, S.L.; & Chin, T.M. (2007). Thermal wind forcing & atmospheric angular momentum: Origin of the Earth’s delayed response to ENSO. Geophysical Research Letters 34, 7.
 Numerous papers have pointed out the robust correlation between the El Nino/Southern Oscillation (ENSO) and interannual variations in length-of-day […] this correlation arises because stronger (weaker) sub-tropical jet streams, concentrated in the upper troposphere (~200 hPa) of both hemispheres, increase (decrease) the axial component of AAM during the mature phase of El Nino (La Nina) events, thereby causing LOD to increase (decrease) in order to conserve the total angular momentum of the Earth system. The increase in jet stream strength during El Nino arises from a general warming of the tropical troposphere associated with the transfer of latent and sensible heat from the equatorial Pacific ocean to the overlying atmosphere, causing geopotential heights and in particular the tropopause to become elevated at low latitudes, thereby increasing the poleward height gradient and geostrophic wind speeds […].
 Since zonal wind anomalies are linked to the meridional temperature gradient through the thermal wind equation (see Appendix A), we also examined the difference between average equatorial (15N – 15S) and subtropical (30N – 15N, 30S – 15S) TT as an approximate index of the tropical temperature gradient (TTG). The resulting correlations (Figure 1, dark blue curves) maximize at the same one-month lag as the LOD and AAM, indicating that the poleward gradient of the temperature response, rather than the temperature anomaly itself, controls the timing of the Earth rotation response to ENSO forcing. Interestingly the TTG has a higher correlation with the ENSO indices than either AAM or LOD, consistent with its role as an intermediary in transmitting the ENSO signal to the atmosphere and solid Earth through angular momentum forcing. It is also interesting to note that the magnitude of the TTG correlation is larger than the maximum TT correlation, confirming the robust signature of the ENSO cycle on the atmospheric temperature gradient as well as on the temperature itself.
Condensed notes from paragraphs 2 & 7:
The increase in jet stream strength during El Nino arises from a general warming of the tropical troposphere associated with the transfer of latent and sensible heat from the equatorial Pacific ocean to the overlying atmosphere, causing geopotential heights and in particular the tropopause to become elevated at low latitudes, thereby increasing the poleward height gradient and geostrophic wind speeds […].
[…] zonal wind anomalies are linked to the meridional temperature gradient through the thermal wind equation (see Appendix A) […]
[…] the poleward gradient of the temperature response, rather than the temperature anomaly itself, controls the timing of the Earth rotation response to ENSO forcing.
[…] magnitude of the TTG correlation is larger than the maximum TT correlation, confirming the robust signature of the ENSO cycle on the atmospheric temperature gradient as well as on the temperature itself.
TTG = tropical (poleward meridional) temperature gradient
TT = tropical temperature
zonal = east-west oriented
meridional = north-south oriented
AAM = atmospheric angular momentum (a summary of global wind)
LOD = length of day (inversely related to earth rotation rate)
ENSO = El Nino / Southern Oscillation
SST = sea surface temperature
GPH = geopotential height (relates to mass distribution & pressure)
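For readers without access to the paper’s Appendix A, a standard textbook form of the thermal wind relation (my notation, not necessarily Dickey’s) ties vertical shear of the zonal wind to the meridional temperature gradient:

```latex
\frac{\partial u_g}{\partial z} = -\frac{g}{f\,T}\,\frac{\partial T}{\partial y}
```

Here $u_g$ is the geostrophic zonal wind, $z$ height, $g$ gravity, $f$ the Coriolis parameter, $T$ temperature, and $y$ the northward coordinate. In the Northern Hemisphere a stronger poleward decrease of temperature (more negative $\partial T/\partial y$) implies stronger westerly shear with height, which is exactly the jet-strengthening mechanism described in the quoted paragraphs.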
You recommend reading Jean Dickey and I am quite happy to do so, but can you provide a suitable link to the paper that isn’t hiding behind a paywall? Thanks
The detailed quotes I’ve shared above are from the following non-paywalled link:
Dickey, J.O.; Marcus, S.L.; & Chin, T.M. (2007). Thermal wind forcing & atmospheric angular momentum: Origin of the Earth’s delayed response to ENSO. Geophysical Research Letters 34, 7.
(Apologies: WordPress garbled a link I gave elsewhere in this thread.)
Your link at 5.45 pm goes directly to the subscription paywall of AGU. To proceed I must either log in with my password or buy the article. I want to read this author but am not willing to pay. Have you got another link?
I cannot explain why I have free access and you do not.
Thanks for your efforts. I simply cannot proceed any further without putting in a password or buying the article, yet you can. Very strange. I will try another browser but that seems an unlikely explanation.
Oh check this out. A list of six climate modeling agencies in the United States.
Six is also the number of tits on a tomcat. Coincidence? I think not!
Pielke Sr insists that regional climate models have demonstrated no skill whatsoever. If he is right – and I have no way to judge – then this effort seems like a waste of resources. Of course someone somewhere should be working on the problem, but putting big money into an effort that has had no success to date seems like the very definition of waste. This sounds like a workfare program for climate scientists.
Mark, I agree wholeheartedly that incrementalism critically fails & massively wastes, but your ignorance-by-design-regime goes nowhere and stays there forever. That’s neither a sensible nor acceptable option. How to efficiently & effectively fund simple, brilliant revelation? Can this even be done in an administratively defensible manner? These are the questions for sensible leaders. Best Regards.
“Increasing spatial resolution does not automatically lead to improved accuracy of simulations.” True but almost irrelevant. Sticking with very coarse grids does not help, but at least we can handle them with today’s computing power. Recommendation 6.1 is right on: today we have NO estimate of the accuracy of the models. Indeed, climate scientists defend the use of a constant heat of vaporization of water (I’ll call the medium so modeled “dry water”), but they can’t provide any estimate of how it impacts the accuracy of results. These extreme simplifications at least allow us (us? them?) to run the models today, but is it still science?
We are trying to eat an apple before it is ripe.
Curious George suggested:
“We are trying to eat an apple before it is ripe.”
Very accurate analogy
(…if you change “we” to “they”).
“My prediction is that #3 unified weather-climate prediction will have the greatest impact; it will force a focus on subseasonal and seasonal timescales […] Improving simulations of the MJO, ENSO, AO etc will require getting the coupling between the atmosphere and ocean correct, which is needed for climate models to get the circulations (and hence regional climates) correct.”
Development of DEEP understanding of natural variability is PREREQUISITE to meaningful pursuit of “Recommendation 6.1” (uncertainty characterization, quantification, communication, etc.).
Conventional mainstream assumptions about the physical & statistical nature of often severely mischaracterized so-called “uncertainty” are ruled out absolutely (on the basis of pure logic) by the following observations:
terrestrial polar motion (wobble) signal in decadal-extent cross-ENSO annual-grain LOD (interhemispheric angular momentum imbalance) anomalies
solar signal in decadal-extent semi-annual-grain LOD (hemispherically alternating mid-latitude westerly wind jet) anomalies
There are dozens of ways to isolate these patterns. Pattern stability is robust across methods.
Every sensible reader should be prepared to both realize and acknowledge that while many of the things we discuss are grey, there ARE things that are black & white. A line of social intractability has to be drawn in the clearest terms when communications run up against an unwillingness to acknowledge something analogous to 1+1=2.
There’s no basis for trust (whatsoever) if 1+1=2 can’t or won’t be acknowledged.
At first, silence might be interpreted as ignorance. With persistence it looks more like deep ignorance. When 1+1=2 has been pointed out countless times over the course of years and ignored, we might as well reclassify the darkness from deep ignorance to malicious deception, as that is the practical impact.
The envelopes bounding well-constrained (LCAM, CLT, TWR) measures of annual & semi-annual circulation can be decomposed cross-ENSO. This reduces by orders of magnitude the size of the set of permissible climate model states. Ignoring this finding indefinitely is sure to be a nail in the coffin of climate science integrity.
I refer responsible readers to the writings of earth orientation & climate explorer Jean Dickey of NASA JPL [ http://technology.jpl.nasa.gov/people/j_dickey/ ] for essential cross-disciplinary background:
1. Dickey, J.O.; Marcus, S.L.; & Chin, T.M. (2007). Thermal wind forcing & atmospheric angular momentum: Origin of the Earth’s delayed response to ENSO. Geophysical Research Letters 34, 7.
2. Dickey, J.O.; & Keppenne, C.L. (1997). Interannual length-of-day variations and the ENSO phenomenon: insights via singular spectral analysis.
3. Dickey, J.O.; Marcus, S.L.; & de Viron, O. (2003). Coherent interannual & decadal variations in the atmosphere-ocean system. Geophysical Research Letters 30(11), 1573.
LOD = length of day
LCAM = Law of Conservation of Angular Momentum
TWR = thermal wind relation
CLT = Central Limit Theorem
ENSO = El Nino / Southern Oscillation
MJO = Madden-Julian Oscillation
AO = Arctic Oscillation
1. Dickey, J.O.; Marcus, S.L.; & Chin, T.M. (2007). Thermal wind forcing & atmospheric angular momentum: Origin of the Earth’s delayed response to ENSO. Geophysical Research Letters 34, 7.
FYI the anomalies depicted are from sliding decadal Gaussian climatologies.
One more loose end to tidy up here…
On MJO ….
forgot to mention:
Blanter, E.; Le Mouël, J.-L.; Shnirman, M.; & Courtillot, V. (2012). A correlation of mean period of MJO indices and 11-yr solar variation. Journal of Atmospheric and Solar-Terrestrial Physics 80, 195-207. doi:10.1016/j.jastp.2012.01.016.
It’s a bit of a mystery why this wasn’t published years ago. (I suspect hostile peer review. I’ll inquire when opportunity permits. I’ll audit if/when time/resources permit.)
A fan of *MORE* discourse | September 14, 2012 at 12:44 pm
So your brilliant idea is that we have made absolutely no progress in calculating “climate sensitivity” over the last 30 years because Hansen understood climate science so well and got it so perfectly right 30 years ago?
LOL … in-depth climate-modeling serves multiple purposes Willis Eschenbach! :) :) :)
• Verifying Hansen’s 1981 global-scale energy-balance models,
• Assessing decadal-scale fluctuations (associated with drought),
• Better risk assessment for more efficient insurance markets,
• Improving hurricane and tornado forecasting, and
• Informing our near-term versus long-term moral/economic balance.
How may we further augment your appreciation and enjoyment of the multiple merits of climate-change science, Willis Eschenbach? :) :) :)
This is an interesting example of one of the problems I see in how some people approach the scientific process: Specifically, how they conceptualize “progress” in understanding complex issues.
It reminds me of a few occasions when I have asked international graduate students I was working with about the progress of their research — and had them tell me that the research was going poorly. When I pressed further, they said that their advising faculty were not happy, because to date their experiments hadn’t confirmed the working hypothesis that motivated the study. They said that despite their dedicated and hard work exploring their theoretical construct, they were not making “progress.”
Of course, I suggested to them that maybe getting results that did not confirm the hypothesis might have educational value.
“Of course, I suggested to them that maybe getting results that did not confirm the hypothesis might have educational value.”
Joshua, obviously the overall sentiment here at CE is that models offer absolutely no educational value at all. They are all completely and utterly a useless waste of time, money and effort. The value of their output is nil. We are not able to learn anything…. not anything at all from modeling our climate. sarc off/
It’s clearly not the overall sentiment, John.
Still, such a binary perspective on climate models, and scientific investigation more generally, is not that uncommon. (it’s one of the major problems with the process of scientific research more generally – only strongly positive results are considered worth publishing – which leads to the kinds of problems with the publication process that are often discussed here at Climate Etc.)
I wouldn’t be surprised if there are a number of comments in this thread that echo Willis’ sentiment – that “no progress” has been made – and as a corollary, that GCMs are worthless.
“It’s clearly not the overall sentiment, John”
True, Joshua, not the overall sentiment… but from what I read, a good portion of it.
Steven Mosher | September 13, 2012 at 5:49 pm | Reply
“The young specialist in English Lit, having quoted me, went on to lecture me severely on the fact that in every century people have thought they understood the universe at last, and in every century they were proved to be wrong. It follows that the one thing we can say about our modern “knowledge” is that it is wrong. The young man then quoted with approval what Socrates had said on learning that the Delphic oracle had proclaimed him the wisest man in Greece. “If I am the wisest man,” said Socrates, “it is because I alone know that I know nothing.” The implication was that I was very foolish because I was under the impression I knew a great deal.
My answer to him was, “John, when people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together.”
The basic trouble, you see, is that people think that “right” and “wrong” are absolute; that everything that isn’t perfectly and completely right is totally and equally wrong.
However, I don’t think that’s so. It seems to me that right and wrong are fuzzy concepts, and I will devote this essay to an explanation of why I think so.
When my friend the English literature expert tells me that in every century scientists think they have worked out the universe and are always wrong, what I want to know is how wrong are they? Are they always wrong to the same degree? Let’s take an example.
In the early days of civilization, the general feeling was that the earth was flat. This was not because people were stupid, or because they were intent on believing silly things. They felt it was flat on the basis of sound evidence. It was not just a matter of “That’s how it looks,” because the earth does not look flat. It looks chaotically bumpy, with hills, valleys, ravines, cliffs, and so on.
Of course there are plains where, over limited areas, the earth’s surface does look fairly flat. One of those plains is in the Tigris-Euphrates area, where the first historical civilization (one with writing) developed, that of the Sumerians.
Perhaps it was the appearance of the plain that persuaded the clever Sumerians to accept the generalization that the earth was flat; that if you somehow evened out all the elevations and depressions, you would be left with flatness. Contributing to the notion may have been the fact that stretches of water (ponds and lakes) looked pretty flat on quiet days.
Another way of looking at it is to ask what is the “curvature” of the earth’s surface. Over a considerable length, how much does the surface deviate (on the average) from perfect flatness? The flat-earth theory would make it seem that the surface doesn’t deviate from flatness at all, that its curvature is 0 to the mile.
Nowadays, of course, we are taught that the flat-earth theory is wrong; that it is all wrong, terribly wrong, absolutely. But it isn’t. The curvature of the earth is nearly 0 per mile, so that although the flat-earth theory is wrong, it happens to be nearly right. That’s why the theory lasted so long.
There were reasons, to be sure, to find the flat-earth theory unsatisfactory and, about 350 B.C., the Greek philosopher Aristotle summarized them. First, certain stars disappeared beyond the Southern Hemisphere as one traveled north, and beyond the Northern Hemisphere as one traveled south. Second, the earth’s shadow on the moon during a lunar eclipse was always the arc of a circle. Third, here on the earth itself, ships disappeared beyond the horizon hull-first in whatever direction they were traveling.
All three observations could not be reasonably explained if the earth’s surface were flat, but could be explained by assuming the earth to be a sphere.”
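Asimov’s “nearly 0 per mile” is easy to check; assuming a mean Earth radius of about 3,959 miles (a standard figure, not from the essay), the surface drops below a tangent line by roughly eight inches over one mile:

```python
import math

R = 3959.0                           # mean Earth radius in miles (assumed)
d = 1.0                              # horizontal distance in miles

# Drop of the spherical surface below the tangent plane after distance d,
# well approximated by d**2 / (2 * R) for small d.
drop_miles = R - math.sqrt(R * R - d * d)
drop_inches = drop_miles * 63360.0   # 63,360 inches per mile

print(round(drop_inches, 1))         # about 8 inches per mile
```

Eight inches in 63,360 is why the flat-earth approximation held up for so long: the deviation per mile really is close to zero.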
Are they good enough yet that they can make the elephant’s ears wiggle?
Good line for “The Big Bang Theory”.
Feynman, right? Give me four variables and I can model an elephant. Give me five and I can make his ears wiggle.
I sure blew that one. It was the elephant’s trunk, not his ears. And it was John von Neumann, attributed by Fermi, not Feynman, and quoted by Freeman Dyson. Too many two-syllable physicists’ names where the first syllable begins with an F and the second with an M for my old brain to keep sorted out, I guess.
Maybe these thoughts are worth writing down. This is merely saying the sort of things others have written, but from the point of view of signal-to-noise-ratio physics. The proponents of CAGW claim that forecasts decades into the future are valid, because weather averages out to zero over these periods of time. To some extent this is true, if by weather one means variations on a day-to-day, week-to-week, month-to-month, or even year-to-year basis.
However, what does not necessarily average out is variations in climate; that is to say, natural occurrences which vary over a period of decades or more. The rule for noise to average out is that its time constant (roughly the period over which the average value of the noise goes to zero) is short compared with the period over which the signal is being integrated. If the time constant of the noise is comparable with, or longer than, the integration period, then there will always be a residual value for the noise, which will be confused with any signal.
Now we know that occurrences like the PDO have time constants measured in decades; the PDO cycle is some 60 years. So these natural variations with time constants measured in decades, centuries, or even millennia will always have residuals that will be confused with the signal. Unless and until we know the amplitudes and time constants of all the natural climate variables, the predictions of non-validated climate models can be guaranteed to be wrong.
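The time-constant argument can be illustrated with a toy simulation; the 60-year period, unit amplitudes, and 30-year averaging window below are invented for demonstration and fitted to nothing:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(3000)                      # long synthetic annual record

white = rng.standard_normal(years.size)      # "weather": short time constant
slow = np.sin(2 * np.pi * years / 60.0)      # "PDO-like": ~60-year cycle

def residual(x, start=100, window=30):
    """Absolute mean over a window, i.e. what survives the averaging."""
    return abs(x[start:start + window].mean())

# White noise largely averages away over 30 years, but a 60-year
# oscillation can leave a residual of order its own amplitude.
print(residual(white), residual(slow))
```

With these made-up numbers the 30-year mean of the slow cycle sits a few tenths of a unit away from zero, exactly the kind of residual the comment says gets confused with a signal.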
Jim Cripwell, you are mistakenly conflating the statistical principle that “fluctuations average to zero” with the physics principle that “energy-balance is exactly zero”.
The latter principle is by *far* the stronger of the two! :) :) :)
That is the common-sense reason why James Hansen and his colleagues strongly advocate high-precision global-coverage energy-balance observation.
It’s not complicated, Jim Cripwell! :smile: :grin: :lol: :!:
Hansen’s time period for determining the energy imbalance was interesting.
As I pointed out to the Skeptical Gates before, the satellite data is remarkably useful. Using the detrended mean sea level data and the UAH southern hemisphere oceans, there is a remarkable correlation between sea level and lower-troposphere temperature. Being detrended, that is only the thermosteric portion of sea level.
You can, though, use the satellite atmospheric data with the sea surface temperature to build a nifty global wattmeter. If you don’t cherry-pick a particular range to make a point, you can even glean some information that would be useful for regional- or global-scale models. The problem really isn’t with the models, it is with the modelers.
Consider the Arctic sea-ice observations as an example of fluctuations in action.
Clearly we have an excellent signal-to-noise ratio over the last several decades. There is no doubting that a change in sea-ice extent of 40% is a good signal, see http://doc-snow.hubpages.com/hub/Sea-Ice-Loss-2012-What-Do-The-Records-Mean. This is indeed a much better signal than the 1 degree of warming (or less for SST) in global temperatures, which is much less than a percent of the average temperature on the Kelvin scale.
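The percentage comparison in this paragraph is simple arithmetic; assuming a global-mean surface temperature near 288 K (a standard round figure):

```python
# 1 K of warming as a fraction of a ~288 K mean surface temperature
warming_fraction = 1.0 / 288.0
print(f"{warming_fraction:.2%}")        # about 0.35%, well under one percent

# versus a ~40% change in Arctic sea-ice extent
ice_fraction = 0.40
print(ice_fraction / warming_fraction)  # the ice signal is ~100x larger
```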
Yet, the Arctic sea-ice extent still shows remarkable fluctuations from year-to-year. The last minimum was in 2007 and so there are fluctuations that we need to explain.
So it seems as if the assertion of someone like Cripwell is that we can hit closer and closer to a zero minimum in sea-ice extent, but as long as the fluctuations exist, that there is no basis to claim that something abnormal is happening. After all, as Cripwell says, these fluctuations “will be confused with any signal”.
Interested in Curry’s upcoming post on this topic. This is uncertainty with the emphasis on aleatory effects and almost free of epistemic errors.
The simple explanation is that as the mean of some variate changes, the variance increases accordingly. This is in keeping with the increase of extreme values with climate change. Even some Republican weathermen understand this:
NRC: … In this regard, the role of internal variability has been under-investigated in the exploration of future climate change …
Dr. Curry, the above statement is excellent; I couldn’t have said it better myself.
NASA is not only Jimmy H and Gavin S.
There is also a very clever lady Dr. Jean Dickey at JPL.
Here is what she says:
One possibility is the movements of Earth’s core (where Earth’s magnetic field originates) might disturb Earth’s magnetic shielding of charged-particle (i.e., cosmic ray) fluxes that have been hypothesized to affect the formation of clouds. This could affect how much of the sun’s energy is reflected back to space and how much is absorbed by our planet. Other possibilities are that some other core process could be having a more indirect effect on climate, or that an external (e.g. solar) process affects the core and climate simultaneously.
Of course she has to acknowledge the views of her employer too.
It’s good to see more climate enthusiasts (of all orientations) paying attention to Jean Dickey’s beautiful work.
I’ve provided links to her contributions here:
General audience members new to Jean Dickey’s work…
For an example of the clarity of her thinking and her beautiful, easy-to-read communication style, please see the following as an easy starting point:
Dickey, J.O.; Marcus, S.L.; & Chin, T.M. (2007). Thermal wind forcing & atmospheric angular momentum: Origin of the Earth’s delayed response to ENSO. Geophysical Research Letters 34, 7.
If everyone takes a few minutes to understand the basics of this one, short, easy-to-read paper, the quality of discussion here could potentially rise quite dramatically, quite quickly.
Best Regards to All!
I find it difficult to take seriously a paper that proposes a significance to conservation of angular momentum between the solid earth and its atmosphere. I tried to get a ratio of the mass of the atmosphere to the mass of the rest of the planet and my calculator ran out of zeroes to the right of the decimal point. I’ve heard of the tail wagging the dog but this is more like the tail of a flagellum wagging a great blue whale.
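The ratio the commenter’s calculator choked on is straightforward; using standard estimates of about 5.15 × 10^18 kg for the atmosphere and 5.97 × 10^24 kg for the Earth:

```python
M_ATM = 5.15e18     # mass of Earth's atmosphere, kg (standard estimate)
M_EARTH = 5.97e24   # mass of the Earth, kg (standard estimate)

ratio = M_ATM / M_EARTH
print(f"{ratio:.2e}")   # roughly 8.6e-07, i.e. about one millionth
```

Tiny as that ratio is, the quotes later in the thread note that the resulting angular-momentum exchange still shows up as measurable millisecond-scale variations in the length of day.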
And yet the Earth’s daily rotation varies by as much as a millisecond. Go figure.
Yeah, forget about the ocean, liquid mantle, tides from the sun, moon, and Jupiter, among others whose influence on LOD fluctuation is many magnitudes greater than winds. It’s the atmosphere.
Ding! Duly noted!
The effect of winds is to vary the length of day 0.7 milliseconds between February and August.
Do me a favor, buddy. Add up the number of milliseconds in six months, divide by 0.7, and write down the reciprocal as a decimal number. This may give you a more accurate idea of its significance in the grand scheme of things.
That’s the trouble with your avocation as a dingbat, David – you both miss the obvious irony and make things up as you go along.
‘As the Earth rotates beneath the tidal bulges, it attempts to drag the bulges along with it. A large amount of friction is produced which slows down the Earth’s spin. The day has been getting longer and longer by about 0.0016 seconds each century.’
I would say nah nah I win – if I were grossly stupid and immature.
[In my best Spock impersonation]
The answer is approximately 0.0000000000634195, Captain.
Tidal bulges are not winds you imbecile.
And if you learnt to read English – bozo – you would see that I said that the LOD varies by as much as a millisecond and not that it happens in a day. I think you should just pull your head in now because as well as being ignorant and rude – you’re a joke in poor taste.
Yeah, forget about the ocean, liquid mantle, tides from the sun, moon, and Jupiter, among others whose influence on LOD fluctuation is many magnitudes greater than winds.
Christ – one of those idiots. What was that again – I take the number of milliseconds in some months, divide it by 0.7 and take the inverse? In other words I take the total change in your estimation (0.7 milliseconds) and divide it by the number of milliseconds in a couple of months? To get the rate of change in milliseconds per millisecond? For God’s sake – stop to think for once in your life.
I need to do that again just to make it clear what a bozo is.
‘Tidal bulges are not winds you imbecile.
Yeah, forget about the ocean, liquid mantle, tides from the sun, moon, and Jupiter, among others whose influence on LOD fluctuation is many magnitudes greater than winds.
Christ – one of those idiots. What was that again – I take the number of milliseconds in some months divide it by 0.7 and take the inverse? In other words I take the total change in your estimation (0.7 milliseconds) and divide it by the number of milliseconds in a couple of months? To get the rate of change in milliseconds per millisecond? For God’s sake – stop to think for once – bozo.
Another pertinent question for you, dummy.
If the earth’s rotation slows by 0.7 millisecond, how many feet per second does that work out to at the equator?
Answer, about 1 foot per second or roughly 0.68 miles per hour. Over a period of six months an acceleration or deceleration of equatorial winds of less than 1mph causes exactly how much difference in wave height, surface area, and evaporation rate? If you think it’s significant I have a bridge to sell you.
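The numbers behind this exchange can be checked directly; a minimal sketch, assuming a standard equatorial circumference of about 24,901 miles and a nominal 86,400-second day:

```python
CIRCUMFERENCE_MI = 24901.0      # equatorial circumference, miles (standard figure)
DAY_S = 86400.0                 # nominal day length, seconds
DELTA_LOD_S = 0.7e-3            # 0.7 ms change in length of day

circumference_ft = CIRCUMFERENCE_MI * 5280.0
v_eq = circumference_ft / DAY_S          # equatorial surface speed, ~1520 ft/s

# Displacement the equator accumulates over one lengthened day:
lag_ft = v_eq * DELTA_LOD_S              # about 1 foot per day

# The corresponding steady speed change is far smaller:
dv = v_eq * DELTA_LOD_S / DAY_S          # on the order of 1e-5 ft/s

print(lag_ft, dv)
```

So a 0.7 ms lengthening leaves the equator about one foot “behind” over a whole day; read as a steady velocity change it is on the order of a hundred-thousandth of a foot per second, not one foot per second.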
And again the lack of English comprehension. First missing the irony, and then again even after I had spelt it out. It is the sort of tour de force of slack-jawed, drooling incomprehension that can win prizes. I suggest you enter the next slack-jawed, drooling contest you come across. You have the X factor, David.
Hey you two
The intensity of the magnetic field as observed at the surface varies continuously under the impact of geomagnetic storms.
These changes interact with the less volatile magnetic generator located in the liquid part of the Earth’s core, acting as an electromagnetic brake on its flow; since angular momentum has to be conserved, the result is the Earth’s continuous angular acceleration/deceleration.
Thus by far the greatest part of the LOD change comes from the Earth’s core magnetic properties, as Jean Dickey acknowledges:
These longer fluctuations are too large to be explained by the motions of Earth’s atmosphere and ocean. Instead, they’re due to the flow of liquid iron within Earth’s outer core, where Earth’s magnetic field originates.
And really – the winds are not caused by the change in LOD but by pressure differences, and the winds in turn cause a change in LOD. Please – don’t embarrass yourself further.
I had read this. I was just a little bored and having some fun. This guy just spouts nonsense as a matter of habit and is rude and repulsive to boot. I am not one to suffer obnoxious fools.
‘Scientists have long known that the length of an Earth day — the time it takes for Earth to make one full rotation — fluctuates around a 24-hour average. Over the course of a year, the length of a day varies by about 1 millisecond, getting longer in the winter and shorter in the summer. These seasonal changes in Earth’s length of day are driven by exchanges of energy between the solid Earth and fluid motions of Earth’s atmosphere (blowing winds and changes in atmospheric pressure) and its ocean. Scientists can measure these small changes in Earth’s rotation using astronomical observations and very precise geodetic techniques.’
Over the other timescales mentioned – I am not convinced that LOD in itself is a climate driver but may vary with other mechanisms that interact to determine the climate.
Robert I Ellison (September 15, 2012 at 7:19 am) wrote: “Over the other timescales mentioned – I am not convinced that LOD in itself is a climate driver but may vary with other mechanisms that interact to determine the climate.”
There’s a serious misunderstanding &/or misinterpretation here that needs to be promptly corrected.
No one is saying LOD drives climate.
I’ve run into this very common accidental misunderstanding (&/or deliberate misrepresentation in some cases) countless times in online climate discussions.
LOD is a very well-constrained (and thus uniquely & indispensably useful) INDICATOR of climate.
Semi-annual & annual LOD variations & envelopes inform about the nature of equator-pole and pole-pole climate field differentials. You may wish to think of Earth as a simple measurement device.
Whatever spatiotemporal chaos is going on, it’s PART OF THE INTEGRAL. Spatiotemporal chaos is NOT exempt from – and it canNOT evade – LCAM, CLT, & TWR. We can identify constraints on spatiotemporal field aggregates by studying what Earth naturally integrates.
Take the time to more carefully consider what aspects of your contributions here are and are not incompatible with the observations I volunteer. Earlier it was obvious that we were experiencing some kind of a severe miscommunication at the level of deep fundamentals. Now it’s quite obvious why. It may actually be the case that none of our ideas are incompatible. I suggest that we now afford each other time for careful independent reflection.
“These longer fluctuations are too large to be explained by the motions of Earth’s atmosphere and ocean. Instead, they’re due to the flow of liquid iron within Earth’s outer core, where Earth’s magnetic field originates.
Something important was overlooked, but I suspect it may be a long time before this narrative gets changed (because it’s currently politically convenient).
Lots of people on this blog seem to be missing the big picture that the hostess Dr. Curry understands. Moving to useful prediction models gives a reality-testing basis that can’t help but improve models. Instead of an “EXPERIMENT” whose output is global-average temperature and rain a hundred years away (which cannot be falsified, as some say here), models will be measured against the shorter-term reality of climate, if these recommendations can be pursued. Hard to do in this budget climate, but grounding in performance reality is a wonderful improvement.
Scott, I think it is more than just the potential for falsification. Decadal regional models can actually be useful. Just incorporating the PDO impact improves the local forecast probability. With mid-range models, the learning curve is shortened, allowing more internal oscillations to be identified and included in future forecasts. Like all models they will be wrong, but useful.
The current Arctic and tropical stratospheric ozone depletion could have been predicted and even the magnitude of ENSO is somewhat predictable with non-linear modeling like that used for rogue waves.
Self Organizing Criticality is a fact, not a fluke. Within limits, it is predictable.
That 1995 regime change was a precursor to the 1998 super El Niño; it should have been shocking and inspiring instead of being ignored by climatologists.
You are right about the usefulness of regional models. Can any predict ENSO variation with skill?
My Global Warming models say it should be happening.
My data says it is not.
Gradually, models yield to the dictates of data.
Just a matter of time. 90% of people couldn’t care less about the constant fear mongering hysteria coming out of NASA or NOAA, from Hansen or that looneytoon ABC weatherman.
In a world where violence is increasing, government spending is out of control, people are worried about their jobs and future, and college-graduate kids are back living in the basement, staring at “those” posters, the chance that ordinary people will ever, ever, ever give a flying phooey about what global warming/climate models predict is between zero and nil.
Either the earth is slowly boiling you or the climate alarm industry is doing the same thing.
And ‘between zero and nil’ is my estimate of the probability that any climateer will ever let their precious model be publicly tested over a reasonably short time span so that we can all see how good (or not) it is.
The last people who tried this (bravely but foolishly) were the UK Met Office. After one particularly disastrous forecast of a ‘barbecue summer’ that turned into a supposedly global warming caused cold damp flop, they gave up and conceded that such forecasts were too difficult for them. But insisted that though they can’t forecast six months ahead, their 100 year forecasts are still the bestest things since sliced bread. And the MO’s reputation has still not recovered.
If the climate modellers had wanted to truly demonstrate the quality and reliability of their predictions, they’ve had 25 years to do so. But instead they’ve ducked, dived, changed the subject, compared the models against each other (WTF did that achieve?), hidden behind the wardrobe, blamed the Big Oil Denier Be Nasty to Hard-Working Modellers and Fry The Planet Conspiracy. Anything at all to avoid that terrible moment when they actually have to compare the results against reality and face up to the awful truth that they are crap and a waste of time and money.
Easy to prove me wrong….just lay out the results of some climate model and show how all the ahead of time predictions have been shown to match with reality. No brownie points for getting one right by chance and attempting to hide the 99 wrong ones.
Tim Palmer talks about ‘Global warming in a nonlinear climate – Can we be sure?’ here. http://www.europhysicsnews.org/index.php?option=com_article&access=standard&Itemid=129&url=/articles/epn/abs/2005/02/epn05202/epn05202.html
Perhaps the one thing all physicists know about the weather is that it is chaotic – its evolution is sensitive to initial conditions. The prototype (Lorenz, 1963) chaotic model is given by the three equations:
X′ = sigma(Y − X)
Y′ = rX − Y − XZ (1)
Z′ = XY − bZ
For suitable values of the parameters (eg r = 28, sigma= 10, b = 8/3) the model has a chaotic attractor (see background in Figure 1). As discussed below, this model doesn’t describe weather at all; however, like the weather it is a nonlinear dynamical system…
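For readers who want to poke at the system themselves, here is a minimal sketch: the Lorenz (1963) equations with the parameter values from the text (r = 28, sigma = 10, b = 8/3), integrated with a fixed-step 4th-order Runge-Kutta scheme. Standard-library Python only, purely illustrative.

```python
# Toy integration of the Lorenz (1963) system quoted above, with the
# parameter values from the text (r = 28, sigma = 10, b = 8/3).

def lorenz_rhs(state, sigma=10.0, r=28.0, b=8.0 / 3.0):
    x, y, z = state
    return (sigma * (y - x), r * x - y - x * z, x * y - b * z)

def rk4_step(state, dt):
    def shift(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = lorenz_rhs(state)
    k2 = lorenz_rhs(shift(state, k1, dt / 2))
    k3 = lorenz_rhs(shift(state, k2, dt / 2))
    k4 = lorenz_rhs(shift(state, k3, dt))
    return tuple(s + dt / 6.0 * (p + 2 * q + 2 * r_ + w)
                 for s, p, q, r_, w in zip(state, k1, k2, k3, k4))

def trajectory(state=(1.0, 1.0, 1.0), dt=0.01, steps=5000):
    out = [state]
    for _ in range(steps):
        state = rk4_step(state, dt)
        out.append(state)
    return out

traj = trajectory()
# The orbit never settles down or blows up: it wanders the bounded
# butterfly-shaped attractor indefinitely.
assert all(abs(v) < 100 for p in traj for v in p)
```

Plotting x against z from `traj` reproduces the familiar butterfly shape referred to as the chaotic attractor in Figure 1 of Palmer’s article.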
The recent ensemble-based results of Murphy et al (2004) and Stainforth et al (2005) show that uncertainties in processes which are unresolved in the current generation of climate models, lead to substantial uncertainty in forecasts of global warming (even more so, uncertainties in regional climate change). As discussed above, we cannot use observations to reduce substantially the uncertainty in these parameters, because the assumption behind parametrisation theory, that unresolved small-scale climate processes can be treated by statistical mechanical methodologies, does not stand up to close scrutiny.
Palmer is hopeful that petaflop supercomputing will improve the situation. I can’t help thinking it is some decades down the track yet.
To quote myself from an article a few years ago.
Climate modelling has been undergoing a quiet revolution – and it is not one that should be allowed to go unnoticed by the long suffering public. Weather has been known to be chaotic since Edward Lorenz discovered the ‘butterfly effect’ in the 1960’s. Abrupt climate change on the other hand was thought to have happened only in the distant past and so climate was expected to evolve steadily over this century in response to ordered climate forcing.
More recent work is identifying abrupt climate changes working through the El Niño Southern Oscillation, the Pacific Decadal Oscillation, the North Atlantic Oscillation, the Southern Annular Mode, the Arctic Oscillation, the Indian Ocean Dipole and other measures of ocean and atmospheric states. These are measurements of sea surface temperature and atmospheric pressure over more than 100 years which show evidence for abrupt change to new climate conditions that persist for up to a few decades before shifting again. Global rainfall and flood records likewise show evidence for abrupt shifts and regimes that persist for decades. In Australia, there was less frequent flooding from early last century to the mid 1940’s, more frequent flooding to the late 1970’s and again a low rainfall regime to recent times.
Anastasios Tsonis, of the Atmospheric Sciences Group at University of Wisconsin, Milwaukee, and colleagues used a mathematical network approach to analyse abrupt climate change on decadal timescales. Ocean and atmospheric indices – in this case the El Niño Southern Oscillation, the Pacific Decadal Oscillation, the North Atlantic Oscillation and the North Pacific Oscillation – can be thought of as chaotic oscillators that capture the major modes of climate variability. Tsonis and colleagues calculated the ‘distance’ between the indices. It was found that they would synchronise at certain times and then shift into a new state.
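A toy sketch of the coupling/‘distance’ idea may help: measure how strongly a set of indices co-vary inside a sliding window, and watch that coupling change through time. The synthetic data below (independent noise that later acquires a shared signal) is purely illustrative; the actual method and data are those of Tsonis and colleagues, not this crude stand-in.

```python
# Toy sketch of the Tsonis-style coupling idea: rolling mean absolute
# pairwise correlation between 'indices'. Synthetic data only.
import math
import random

random.seed(0)

def pearson(a, b):
    """Plain Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def rolling_coupling(series_list, window=50):
    """Mean absolute pairwise correlation in each sliding window."""
    n = len(series_list[0])
    out = []
    for start in range(n - window):
        chunk = [s[start:start + window] for s in series_list]
        cors = [abs(pearson(chunk[i], chunk[j]))
                for i in range(len(chunk)) for j in range(i + 1, len(chunk))]
        out.append(sum(cors) / len(cors))
    return out

# Two synthetic 'indices': independent noise at first, then a shared
# oscillation - a crude stand-in for a synchronization episode.
shared = [math.sin(0.2 * t) for t in range(200)]
idx1 = [random.gauss(0, 1) for _ in range(100)] + \
       [s + random.gauss(0, 0.2) for s in shared[100:]]
idx2 = [random.gauss(0, 1) for _ in range(100)] + \
       [s + random.gauss(0, 0.2) for s in shared[100:]]

coupling = rolling_coupling([idx1, idx2])
# Coupling is low in the independent-noise era and high once the
# shared signal appears.
assert coupling[-1] > coupling[0]
```

The real analysis tracks when several such indices synchronise simultaneously and then shift into a new state.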
It is no coincidence that shifts in ocean and atmospheric indices occur at the same time as changes in the trajectory of global surface temperature. Our ‘interest is to understand – first the natural variability of climate – and then take it from there. So we were very excited when we realized a lot of changes in the past century from warmer to cooler and then back to warmer were all natural,’ Tsonis said.
Four multi-decadal climate shifts were identified in the last century coinciding with changes in the surface temperature trajectory. Warming from 1909 to the mid 1940’s, cooling to the late 1970’s, warming to 1998 and declining since. The shifts are punctuated by extreme El Niño Southern Oscillation events. Fluctuations between La Niña and El Niño peak at these times and climate then settles into a damped oscillation. Until the next critical climate threshold – due perhaps in a decade or two if the recent past is any indication.
James Hurrell and colleagues in a recent article in the Bulletin of the American Meteorological Society stated that the ‘global coupled atmosphere–ocean–land–cryosphere system exhibits a wide range of physical and dynamical phenomena with associated physical, biological, and chemical feedbacks that collectively result in a continuum of temporal and spatial variability. The traditional boundaries between weather and climate are, therefore, somewhat artificial.’ Somewhat artificial is somewhat of an understatement for a paradigm shift in climate science.
The weight of evidence is such that modellers are frantically revising their strategies. They are asking for an international climate computing centre and $5 billion (for 2000 times more computing power) to solve this new problem in climate forecasting. The monumental size of the task they have set themselves cannot be exaggerated.
James C. McWilliams of the Department of Atmospheric and Oceanic Sciences at the University of California discussed chaos and climate in a 2007 paper titled ‘Irreducible imprecision in atmospheric and oceanic simulations’. ‘Sensitive dependence and structural instability are humbling twin properties for chaotic dynamical systems, indicating limits about which kinds of questions are theoretically answerable’. Sensitive dependence refers to qualitative shifts in climate and models that occur as a result of small changes in initial states. Structural instabilities are qualitative shifts in modelled outcomes as a result of plausible (within the limits of accuracy of measurements) changes in boundary parameters.
The bottom line of all this is that the current generation of climate forecasting models cannot be relied on as accurate representations of future climate. It will be quite some time before the new models are good enough to model ‘sensitive dependence’ in climate. I doubt their chances at all; because of chaos in operation, even weather models are accurate over about 7 days at best.
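Sensitive dependence is easy to see concretely. The logistic map x → 4x(1−x) is a standard chaotic toy (not a weather model): two starting points differing by one part in a trillion stay indistinguishable for a while, then become completely unrelated, which is the same mechanism that caps useful weather forecasts at days rather than months.

```python
# Sensitive dependence in miniature with the logistic map x -> 4x(1-x).
# Illustrative toy only, not a climate calculation.

def logistic_orbit(x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.400000000000, 60)
b = logistic_orbit(0.400000000001, 60)

# Early on the orbits agree to many digits ...
assert abs(a[5] - b[5]) < 1e-6
# ... but later they have decorrelated entirely.
assert max(abs(x - y) for x, y in zip(a[40:], b[40:])) > 0.1
```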
The universe is the ultimate chaotic system. If its mass at the big bang had differed by the mass of a grain of sand, it would either have collapsed back in on itself before stars could form or have flown apart before they could form.
Turtles are chaotic systems too. So you can indeed say it’s turtles all the way down.
I’m not sure, speaking as an engineer, that there’s any practical advantage in knowing that turtles all the way down really is true.
Speaking as an engineer (as well as an environmental scientist) – if you know it’s turtles, you have the opportunity to apply the correct maths to the system. The alternative tactic is to solve the problem you know how to solve, even though it is the wrong problem – the well-known proclivity of the drunk to look for his car keys under the streetlight because that’s where the light is.
I think you may have missed a few days of physics classes, or more likely, a decade or three.
I think we have demonstrated your lack of understanding of the basics of atmospheric physics, ocean and atmospheric coupling, the physics of complex and dynamic systems, ocean circulations, chemistry, hydrology, biogeochemical cycling in the environment – even statistical mechanics as it applies – or doesn’t – to climate.
‘Climate is a profoundly nonlinear system in which variability on different time and spatial scales interact.’ http://journals.ametsoc.org/doi/abs/10.1175/BAMS-89-4-459
I really just drop that in as it is relevant to the topic. I am sure it is beyond you.
‘The recent ensemble-based results of Murphy et al (2004) and Stainforth et al (2005) show that uncertainties in processes which are unresolved in the current generation of climate models, lead to substantial uncertainty in forecasts of global warming (even more so, uncertainties in regional climate change). As discussed above, we cannot use observations to reduce substantially the uncertainty in these parameters, because the assumption behind parametrisation theory, that unresolved small-scale climate processes can be treated by statistical mechanical methodologies, does not stand up to close scrutiny.’
http://www.europhysicsnews.org/index.php?option=com_article&access=standard&Itemid=129&url=/articles/epn/abs/2005/02/epn05202/epn05202.html
You are an electrician and about as ignorant and silly a troll as they come.
A google scholar search will verify my credentials, WebHubTelescope. We can’t say the same for you of course because you’re an anonymous coward.
“Climate tipping as a noisy bifurcation: a predictive technique”
Academics have such a penchant for belaboring the obvious. Yeah, there are two great attractors in the earth’s climate system and they’re established by solid and liquid phases of water where one or the other is dominant. What else is new?
I think you may have missed the obvious David. We would like to know when the glacial axe drops beforehand ideally.
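As I understand it, the ‘predictive technique’ in noisy-bifurcation work is an early-warning signal: near a tipping point a system recovers from kicks more slowly, so the lag-1 autocorrelation of its fluctuations rises beforehand. A minimal stand-in is AR(1) noise x[t+1] = a·x[t] + e[t], where ‘a’ near 1 crudely mimics the slow recovery. The data below are synthetic and purely illustrative.

```python
# Early-warning sketch: lag-1 autocorrelation of AR(1) noise rises as
# the recovery rate slows (a -> 1), a crude proxy for nearing a tipping
# point. Synthetic data only.
import random

random.seed(1)

def ar1_series(a, n):
    x, out = 0.0, []
    for _ in range(n):
        x = a * x + random.gauss(0, 1)
        out.append(x)
    return out

def lag1_autocorr(xs):
    n = len(xs)
    m = sum(xs) / n
    num = sum((xs[i] - m) * (xs[i + 1] - m) for i in range(n - 1))
    den = sum((v - m) ** 2 for v in xs)
    return num / den

far = ar1_series(0.2, 2000)    # far from tipping: fast recovery
near = ar1_series(0.95, 2000)  # near tipping: slow recovery

# The near-tipping series shows much higher lag-1 autocorrelation -
# the proposed advance-warning indicator.
assert lag1_autocorr(near) > lag1_autocorr(far)
```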
Being a newbie, Springer apparently doesn’t understand the lingo of the Internet.
An “anonymous coward” is someone that doesn’t register a handle on a site such as SlashDot and so they automatically get the handle “anonymous coward”. All the others carefully pick their nicknames according to how much junk mail they wish to avoid :)
The McWilliams part of your narrative needs a (major) rework to be consistent with the earth orientation (observational) record. (Check for false base assumptions at the root and see Tomas Milanovic’s Climate Etc. contributions in parallel.)
Tsonis’ shifts line up PERFECTLY with the solar Hale cycle in Gaussian-averaged heliomagnetic field polarity at Earth. This raises questions that (so far) mainstream solar & climate scientists won’t or can’t answer satisfactorily.
I think turtles make more sense than you. If you have a problem with McWilliams you should take it up with him.
‘McWilliams held a Research Fellowship in Geophysical Fluid Dynamics at Harvard from 1971-1974 and afterwards worked in the Oceanography Section at NCAR where he became a Senior Scientist in 1980. In 1994, while still retaining part-time appointment at NCAR, he began his work at UCLA where he became the Louis B. Slichter Professor of Earth Sciences in the Department of Atmospheric and Oceanic Sciences and the Institute for Geophysics and Planetary Physics. In 2002, McWilliams was elected to the National Academy of Sciences. Today, he continues his career in academia at UCLA.’
Here is the paper I referred to. – http://www.pnas.org/content/104/21/8709.full – I guess like what’s his name – you reject reality and substitute your own.
I reject your unwarranted hostility and now move on.
There is nothing inconsistent in what Tomas has discussed and what the McWilliams paper discusses. You drop in a passive/aggressive and very cryptic comment about base assumptions. You criticise but do not offer any objective point on which to review. The general case with you is that you have a monomania which you believe explains everything in climate – and have great difficulty in putting things down coherently in English. The objective of communication is to explain concepts, not to obscure them. So please do move on.
A line of social intractability has to be drawn in the clearest terms when communications run up against an unwillingness to acknowledge something analogous to 1+1=2. There’s no basis for trust (whatsoever) if 1+1=2 can’t or won’t be acknowledged. Goodbye Robert.
There you go. I count upwards of 30 counter theories to the consensus science offered up by commenters on this blog. They can’t all be right. I only find it remarkable that more of these head-butting incidents don’t occur. Most of these theories mix as well as oil and water.
This is a case of Chief’s non-deterministic chaos model butting up against Vaughan’s deterministic response model.
“I only find it remarkable that more of these head-butting incidents don’t occur.”
That is one (very important) area where this blog towers over WUWT. At WUWT, clearly-intractable disagreements get nearly-infinitely protracted in insufferable time/energy drains.
I extend my appreciation to all here who maturely realize that 3 back-&-forths is generally the sensible short-term limit of worthwhile exchange. Deep issues are settled over years, not days.
Dr. Curry: Your “Make the points YOU want to make” blog rule is enlightened & eminently sensible (perhaps the most desirable attribute of the blog). Watts could potentially learn something critically important from you in this particular regard. All the best.
I’ll be more receptive to 100-year climate predictions when I get a reliable 100-hour weather prediction.
Judith, where is the accountability here! No emphasis on validation. No defined independent audits. No demand for transparency. I know of no other multi-billion enterprise where the insiders control the entire process. And taxpayers are just expected to pay the bill and shut up! Climate science is becoming scandalous! Strong words to follow!
‘More money for climate modelling’ is unlikely to win a lot of votes as a campaigning slogan.
I look forward to your strong words, which I predict I will likely agree with.
Here’s some you might care to ponder: Contempt, evade, disingenuous, unaccountable, rent-seekers, ivory-towered, untrustworthy………..
Gentlemen: I was holding these words in reserve for a few more iterations of diplomatic refinement, but your comments remind me that sometimes raw is more effective, so here goes….
Curry, J. (2012). What can we learn from climate models?
V&V = Verification & Validation (the norm in engineering & regulatory science)
“A tension exists between spending time and resources on V&V, versus improving the model.”
I have no confidence that climate modelers know how to improve representation of natural variability, so this supposed tradeoff looks imaginary.
“Concerns about fundamental lack of predictability in a complex nonlinear system characterized by spatiotemporal chaos with changing boundary conditions”
Suggestion: Pay attention to insights arising from earth orientation data.
The chaos is BOUNDED in carefully-tuned spatiotemporal aggregate. This severely restricts the collective set of permissible model states.
Ongoing ignorance of this painfully simple OBSERVATION – accessible to amateurs from well-constrained variables using any one of dozens of methods – isn’t inspiring confidence in climate science’s ability to manage (with integrity) independently, to put it mildly.
Analogy (oversimplified to crudely expedite education):
The components of a student’s course mark can vary while holding the (aggregate) course mark constant. Due to the constraint that a+b+c+d+… = constant, relations among a, b, c, d,… in a population can be theoretically shown to facilitate paradoxical relations. (Another analogy: role of a differential in a car …so bright mechanical engineers might also be able to help climate scientists with aggregate constraints on the ~5.5 boundary-layer basin-loops (0.5 is because Indian ITCZ reaches Asia in summer, eliminating the northern loop).)
Simpson’s paradox is covered early in Stat 101. Political strategists know VERY WELL of paradoxes that can arise with vote-splitting (in multi-party systems) and electoral-area boundary-redrawing (what geographers call “modifiable areal unit problem”). Why can climate scientists not see that (&/or admit that) inter-regional inter-annual aggregates are NOT exempt from aggregation paradox UNIVERSALS?
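Since Simpson’s paradox is invoked above, here it is in miniature: treatment A wins inside every subgroup, yet loses in the aggregate, purely because of how the cases are split. The numbers are the classic kidney-stone textbook example, not climate data; the point is only that aggregates can behave paradoxically relative to their components.

```python
# Simpson's paradox with the standard kidney-stone numbers:
# A beats B in every subgroup, B beats A in the aggregate.

groups = {
    "small": {"A": (81, 87),   "B": (234, 270)},  # (successes, trials)
    "large": {"A": (192, 263), "B": (55, 80)},
}

def rate(successes, trials):
    return successes / trials

# Within each subgroup, A has the higher success rate ...
for g in groups.values():
    assert rate(*g["A"]) > rate(*g["B"])

# ... yet aggregated over subgroups, B has the higher rate.
totals = {t: (sum(groups[g][t][0] for g in groups),
              sum(groups[g][t][1] for g in groups)) for t in ("A", "B")}
assert rate(*totals["B"]) > rate(*totals["A"])
```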
CREDIBLE V&V DEMANDS that climate models be able to reproduce not only earth orientation data, but also their externally-modulated (solar, lunisolar) spatiotemporal aggregates.
If climate scientists can’t EXPEDIENTLY handle the job independently, one sensible option might be to divert the lion’s share of the funding to earth orientation experts (who seem to know more about climate than climate scientists, quite interestingly).
No end to NASA’s ingenuity
1880-2012 in 30 seconds
That NASA video very nicely shows the crucial distinction between local weather fluctuations that statistically average to zero as contrasted with a planetary energy-budget that balances exactly to zero.
Thank you, vukcvic! :) :) :)
If we use the paleo data and play it backwards it would be a roller coaster ride. Not much fun though with only a couple of degrees change.
I normally don’t read your contributions (I don’t find funny any ‘comedian’ who permanently laughs at his own comments).
What the video shows is over-exaggerated and poorly understood polar amplification, which may reveal more than some of the AGW crowd are prepared to contemplate.
First they ignore you, then they laugh at you, then they fight you, then you win.
ouTSIde the box aggregation…
I suggest everyone take a careful look at vukcevic’s stimulating contribution:
I’ve done the southern graph myself (April 29, 2010 & again more recently using a different method — see http://wattsupwiththat.com/2012/05/20/open-thread-weekend-11/#comment-990925 ).
vukcevic’s graph is accurate.
Volunteer contributors like vukcevic are key ingredients of the antidote to very-well-paid, tyrannically-hostile mainstream attempts at suppression of appreciation of & respect for NATURAL beauty. (No she doesn’t need plastic surgery!!)
Something looks quite seriously wrong with the data used to construct that video. For example, follow the Southern Ocean and the SouthEast Pacific SST and compare with ERSSTv3b (available via KNMI Climate Explorer website). Maybe E.M. Smith can comment — I believe he has looked into this issue in a fair amount of detail and pointed out some rather goofy quantitative practices.
Coordinates for KNMI ERSSTv3b:
Southern Ocean (60°S – 90°S)
Southeast Pacific (160°W – 70°W, 45°S – 90°S)
Start a model in the Roman Warm Time and get it to show output that gets the cold time that followed and then gets the Medieval Warm period and then gets the Little Ice Age and then gets the modern warm time and I will expect that model to get the cold time that is coming now.
I do not think there are any Consensus Climate Models that can do that.
Climate change is more likely to be determined with data – specifically radiant flux at TOA – than any model.
For example – we have a very simple global energy budget based on the 1st law of thermodynamics – energy is conserved such that, over a period, the energy out of the global climate system is equal to the energy in. This happens even with greenhouse gases. The planet warms and the balance of outgoing and incoming energy is maintained. In the shorter term, the difference in incoming and outgoing energy results in planetary warming or cooling such that:
dS/dt = Energy in – Energy out
where dS/dt is the rate of change of energy stored in the global system over the period.
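The bookkeeping is simple enough to sketch: accumulate dS/dt = (energy in) − (energy out) step by step. The flux numbers below are illustrative round figures (order-of-magnitude W/m² at TOA), not measured data.

```python
# Minimal first-law bookkeeping: cumulative stored-energy anomaly from
# in/out flux series. Illustrative numbers only.

def integrate_storage(e_in, e_out, dt=1.0, s0=0.0):
    """Cumulative stored-energy anomaly from in/out flux series."""
    s = s0
    history = []
    for fin, fout in zip(e_in, e_out):
        s += (fin - fout) * dt   # dS = (in - out) * dt
        history.append(s)
    return history

# Toy series: a constant 0.5 W/m^2 imbalance accumulates steadily,
# warming the system even though both fluxes are individually huge.
e_in = [340.5] * 10
e_out = [340.0] * 10
storage = integrate_storage(e_in, e_out)
assert abs(storage[-1] - 5.0) < 1e-9
```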
So if the missing energy has been found in the ocean – as it has – then we can ask what the source of the energy may have been. If you look carefully – here – you will see that all of the energy increase in the von Schuckmann period happened in the short wave, while at the same time the energy from the sun decreased, as we know from SORCE. Why is it that this is such a difficult and obscure mystery? I am sure I cannot answer that question.
Hansen denies that data from his own organisation has any veracity at all. But this is surely not a new idea – very much the same thing was shown in both ERBS and ISCCP-FD data. Here is one from Wong et al 2006 – ‘Reexamination of the Observed Decadal Variability of the Earth Radiation Budget Using Altitude-Corrected ERBE/ERBS Nonscanner WFOV Data’ – from the Journal of Climate.
I asked this question of FOMBS – and of course don’t get an answer. What if Tsonis et al, Latif and Keenlyside, or Takashi Mochizuki and his legion of colleagues are right and the world doesn’t warm for a decade or three?
So the question he still hasn’t answered is: if the whole thingy falls down about his ears because he has been a hell of a lot too absurdly overconfident and specialises in flippancy – despite many much more clever people being less confident – he will not have helped the cause of addressing emissions at all. Well – Flipper – what do you have to say about your irresponsible and misleading behaviour?
A fan of *MORE* discourse | September 14, 2012 at 1:32 pm
Thanks, Fan. Is the repeated incantation of my full name supposed to have magical powers? You can call me Willis, my friend.
In any case, let’s take your claims one at a time. You say that the global computer climate models are good for:
This is totally unclear. Which results are the later models supposed to be verifying? Which models “verified” the earlier Hansen model?
Despite the lack of clarity, using one model to “verify” another doesn’t impress me. You need to use reality to verify a model, not another model.
It is well-known and agreed by all (including the modelers, but apparently you didn’t get the memo) that the models are useless on decadal scales. See the failure to forecast the recent decade-long hiatus in the warming for one of many examples.
The models say that there will be more extreme weather events as the globe warms. However, there has been absolutely no sign of such events. If the insurance companies are following the models, they are gouging you on the premiums by wildly overestimating the risks.
Climate models can’t do that. Weather models can do a bit of that, a few days in advance, but not climate models. Among other reasons the gridcells are far too large to forecast such small-scale phenomena.
Say what? What on earth is a “moral/economic balance”? Are your morals affected by economics? How does one “inform” a balance? What are “near-term” morals and “long-term” morals, and how are they different?
I’m sorry, but that last one makes absolutely no sense at all.
The general rule is that all models are wrong, but some models are useful. Unfortunately, there is no evidence to date that the GCMs are useful for long-term climate forecasts, and plenty of evidence that they are useless for short-term climate forecasts.
“See the failure to forecast the recent decade-long hiatus in the warming for one of many examples.”
Failure to forecast natural and anthropogenic variability or “noise” in the climate system does not invalidate the models. This argument as an indication of the “failure” of a model fails to grasp the purpose of models in the first place. The models are always wrong, Willis, but that does not invalidate their usefulness in showing underlying climate dynamics.
What exactly would invalidate a model?
Natural variability is “noise”? Glacial/interglacial is one goddam noisy son of a bitch, huh?
Willis Eschenbach, this oft-repeated neodenialist claim makes Climate Etc regulars smile … :) :grin: :lol: … because the claim is so wildly at-odds with the facts! :) :grin: :lol:
We appreciate this pellucid exemplar of neodenialist agnotology, Willis Eschenbach! :!:
Not an increase in severe weather. Insurance industry chose the wrong discount rate. A lot of that going around. The insolvency of everything from GM pension assets to Social Security trust fund to Fannie Mae to insurance company assets is based on someone blowing sunshine in the form of a rosy future discount rate up someone else’s ass.
Willis Eschenbach | September 14, 2012 at 4:29 pm | Reply
“Thanks, Fan. Is the repeated incantation of my full name supposed to have magical powers? You can call me Willis, my friend.”
I may have encouraged that by repeatedly responding to him with his full name John Sidles.
He’s an odd duck. Makes you look normal in comparison.
David Springer | September 14, 2012 at 2:17 pm |
Thanks, David. The only “association” that I have with Scaffetta is that I spoke out strongly against his claims here and also further down in the comments, and posted further reasons why he was wrong in “Riding a Pseudo-Cycle“.
Next, I did not post “false flattery”, some might do that but I never would. I know Steven Mosher, and indeed he is one very smart guy. He drives me mad with his cryptic posting style, but he’s nobody’s fool.
So I fear that all of your assumptions on the question are wrong …
Cryptic posting style? He posted a very long reply here yesterday, but I suspected he had been abducted by aliens and a substitution made, as not only was it not cryptic but it made perfect sense.
Scafetta just misunderstood nature’s cycles. I don’t often agree with Mosher either, but he is OK when you get to know him.
“Scafetta just misunderstood nature’s cycles”
How that whole episode appeared:
Strawman setup to provide Svalgaard opportunity to look heroic — just like we see with the Archibald posts.
You and Loehe are collaborators. Loehe and Scafetta are collaborators. It’s not that complicated.
David Springer | September 15, 2012 at 4:07 am | Reply
You and Loehe are collaborators. Loehe and Scafetta are collaborators. It’s not that complicated.
That’s as dumb as “You know Bob, Bob knows Jim, therefore you know Jim.” Sorry, David, that makes no sense at all, the world doesn’t work like that.
Since I have made a spirited, detailed, and very public attack on Scafetta’s ludicrous claims, and I have linked to that discussion here in this thread, your claim just shows that you are not paying attention.
The Skeptical Warmist (aka R. Gates) | September 14, 2012 at 4:50 pm
You need to read more carefully. I was responding to the claim that the models were useful at decadal scales. I pointed to the recent decade, where they have been a total failure, predicting warming that did not occur. I did not say it “invalidates the models”, that’s all you. I also pointed out that all models are wrong, so I’m not sure why you are repeating my statement.
Can they show “underlying climate dynamics”? We have no evidence that they can, and you yourself say that they have failed at forecasting “natural and anthropogenic variability” in the climate. If they cannot forecast the natural variability, why are you assuming that they can show us “underlying climate dynamics”?
Willis, climate models are about what creates a long term forcing or signal in the climate. Short term noise or natural variability is interesting, but says nothing at all about the models. It is not hard at all to know exactly why the troposphere temps were mostly flat these past 10 years nor why it would have been impossible for climate models to have forecast it.
i am definitely for more models. more bigger models. i think this would help.
Yr: “…more models. more bigger models.”
You know, lolowot, what I think we have here, on your part, is the expression of some over-anxious, infantile cravings that derive from your, deep-seated, profoundly unwholesome, unresolved adolescent obsession with getting to “first-base” with every taxpayer you meet, while making a sicko, pest of yourself in the process.
And so whenever some “denier”, like moi, tries to engage in constructive dialogue or build bridges with you hive-bozos, what do we get? A couple of pro forma, unctuous, trying-to-be-Mr.-
Cool-but-failing, leg-hump “lines” and then it’s an all out general-quarters drill as you launch another of your insatiable, forever probing, flaring, “I want my momma!”, sucker-orifice assaults on the chastity of our “bigger” and “more bigger” taxpayer-mammaries–which you euphemistically call “climate models.”
So, lolowot, why can’t you useless-eater, eco-dorks just get yourselves weaned and quit with all your relentless hustles, cons, scams, come-ons, play-the-victim-false-flags, scare-mongering, and hype aimed at nothing more than a good milking and a quality burp on the tit of one or another “sucker” taxpayer? If you did, you guys wouldn’t give everyone the complete creeps like you do now, you know.
Sequester all the funding for climate models.
They’ll howl like stuck pigs but it will be over and buried in a couple of days.
The economic collapse that will come if the current policies are followed for another POTUS term will kill off wasteful spending like climate models anyway. May as well just do it now and get over it.
Oh ya, the Sequester the EPA budget.
Make that double x double Sequester the EPA budget. They can’t be controlled but they can be neutered.
Models, no matter how good at showing the underlying dynamics of climate, will always reach the barrier of natural variability, and some of this variability includes the unpredictable actions of humans (i.e. activities that might increase or decrease aerosols or greenhouse gases). On decadal time scales and shorter, this natural variability could be greater than any warming caused by the buildup of greenhouse gases. Santer even suggested this period could be as long as 17 years. To expect models to be useful for shorter periods and, worse still, for regional forecasts over short periods, seems to be missing the point and purpose of climate models.
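The 17-year point can be checked with a toy Monte Carlo: a small steady trend buried in random year-to-year noise, asking how often a short window shows a negative least-squares trend. All numbers below are illustrative assumptions, not observations.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative assumptions, not observations: a 0.02 C/yr forced trend
# hidden in interannual noise with a 0.15 C standard deviation.
TREND = 0.02
NOISE_SD = 0.15

def fraction_with_negative_trend(window_years, n_trials=2000):
    """Fraction of simulated windows whose least-squares trend is negative."""
    t = np.arange(window_years)
    negatives = 0
    for _ in range(n_trials):
        series = TREND * t + rng.normal(0.0, NOISE_SD, window_years)
        if np.polyfit(t, series, 1)[0] < 0:
            negatives += 1
    return negatives / n_trials

for years in (10, 17, 30):
    print(years, "years:", fraction_with_negative_trend(years))
```

With these invented numbers a 10-year window goes the “wrong way” fairly often, while a 30-year window almost never does — the window-length point Santer was making.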
I am confused. Did humans unpredictably fail to produce enough CO2 into the atmosphere in the last decade? Are models no good in the short run, but good in the long run? Amazing science. How do you judge what is useful?
The models are about long-term forcing of the climate over many decades, and are generally going to be horrible at short-term forecasting as they don’t model the noise of short-term variability. Additionally of course, the models are not going to catch the tipping point Dragon King events which will send parts of the climate into a new dynamical regime. Thus, virtually no model (except for Maslowski’s) would have foreseen that an ice-free Arctic is now quite possible before 2020. With CryoSat-2’s confirmation that the PIOMAS sea ice thickness model has been pretty much right on target, all the “other” models are now trying to catch up after the fact. This is the nature of models and the nature of noisy chaotic systems undergoing what appears to be rather rapid change.
Dear Skeptical Warmist, in chaotic systems the distance between a predicted state and actual state keeps growing over time. Are you saying that climate modelers succeeded in creating a non-chaotic system describing the climate? Quite an achievement. Is the PIOMAS model describing the short-term correctly somehow “not general” (models … generally going to be horrible at short-term forecasting) ? You can’t have it both ways.
PIOMAS is not a climate model but a model for Arctic sea-ice thickness. You are incorrectly ascribing to models in general what should be reserved for global climate models.
Hey, monkey-boy, if you keep on ignoring the effects of GHG on the climate, the temperature will start to diverge beyond anything that will occur in a chaotic system, which according to the skeptics’ definition is a closed energy-conserving system.
That’s something you learn pretty quickly in a physics course. You may have missed that day of class.
Climate and weather models are examples of chaotic dynamical systems. The solutions diverge over time because there is a range of feasible inputs. Slight variations in initial or boundary conditions within the bounds of observation or measurement error create large variations in the solutions. The situation is shown diagrammatically here – http://s1114.photobucket.com/albums/k538/Chief_Hydrologist/?action=view&current=sensitivedependence.gif
Weather and climate change chaotically as well. But the thermodynamic properties of the system as a whole are of course those of nonequilibrium thermodynamics driven by the sun. Within this nonequilibrium energy dynamic is local variability driven by a large number of processes that collectively determine climate. ‘The global coupled atmosphere–ocean–land–cryosphere system exhibits a wide range of physical and dynamical phenomena with associated physical, biological, and chemical feedbacks that collectively result in a continuum of temporal and spatial variability. The traditional boundaries between weather and climate are, therefore, somewhat artificial.’ Hurrell et al 2009, A UNIFIED MODELING APPROACH TO CLIMATE SYSTEM PREDICTION
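The sensitive dependence described above is easy to reproduce with the classic Lorenz 1963 system — a toy chaos demo, not a climate model: two runs differing by one part in 10^8 end up far apart on the attractor. (Step size and initial state below are arbitrary choices.)

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz 1963 system (toy chaos demo)."""
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

def run(state, n_steps):
    for _ in range(n_steps):
        state = lorenz_step(state)
    return state

a = np.array([1.0, 1.0, 20.0])
b = a + np.array([1e-8, 0.0, 0.0])   # far below any conceivable measurement error

early = np.linalg.norm(run(a, 100) - run(b, 100))    # separation after t = 1
late = np.linalg.norm(run(a, 3000) - run(b, 3000))   # separation after t = 30
print(early, late)
```

The early separation is still microscopic; the late one has grown by many orders of magnitude and saturates at the size of the attractor itself — which is why individual trajectories stop being predictable while the statistics of the attractor remain well defined.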
Confused George, You ask “Are models no good in the short run, but good in the long run?”
Maybe you can answer your own question.
Suppose a climate scientist in 1970 had a model of the climate and predicted a warming of 0.2 degC per decade for the next 4 decades, that would be pretty good wouldn’t it?
But how many times, in those forty years, would climate skeptic / deniers have said that he’d got it all wrong based on the last few years of data?
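That thought experiment is simple to simulate: one realization of a steady 0.2 C/decade trend plus invented interannual noise (the noise level is an assumption), counting how many overlapping decades inside the 40 years show “no warming.”

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(40)
# 0.2 C/decade forced trend plus invented year-to-year noise (sd is assumed)
temps = 0.02 * years + rng.normal(0.0, 0.12, 40)

flat_decades = 0
for start in range(31):                  # all overlapping 10-year windows
    window = temps[start:start + 10]
    slope = np.polyfit(np.arange(10), window, 1)[0]
    if slope <= 0:
        flat_decades += 1

full_trend = np.polyfit(years, temps, 1)[0]
print(flat_decades, "flat decades; 40-year trend =", round(full_trend, 3), "C/yr")
```

Even when some individual decades look flat, the 40-year fit recovers the underlying trend — the two statements are not in conflict.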
Your “after the fact” supposition about a hypothetical climate scientist in 1970 “who predicted a warming of 0.2 degC per decade for the next 4 decades” (using the GISS temperature record) is weak for several reasons.
1. (biggest reason) it is hypothetical (i.e. it didn’t happen). Climate scientists in 1970 were more concerned about a “coming ice age” than about global warming.
2. there were no climate models in 1970, so any prognosticator would have to be using some other method to forecast future warming.
3. you “cherry-picked” 1970, when the latest 30-year warming cycle began – why not pick 1910, when a statistically indistinguishable 30-year warming cycle began?
4. the last 30-year warming cycle ended in 2000 – since then there has been no global warming.
Come up with a more plausible story, TT.
The Skeptical Warmist (aka R. Gates) (September 14, 2012 at 5:08 pm)
“To expect models to be useful for shorter periods and worse still, for regional forecasts over short periods, seems to be missing the point and purpose of climate models.”
It’s not as hopeless as you suggest. You just need some help from other fields.
I don’t have all the answers, but I can help with a few deeply fundamental observational puzzle pieces that liberate explorers from otherwise absolutely-impenetrable obstacles.
I suggest reaching out for help where you need it and projecting a positive (“can do”) attitude so your funding doesn’t get slashed.
If some of your colleagues need help reproducing the results I’ve shown here, they are welcome to contact me. I’m not going to spoonfeed upfront because students need to develop intuitive conceptual understanding independently. However: I’m more than willing to patiently coach along the learning path as capable parties making a genuine effort encounter (and accurately & concisely describe) very specific computational obstacles.
For example, they may write: “I’ve gotten this far – I’ve removed the lunisolar components and I know that the next thing I need to do is take the complex wavelet transform, but I don’t know what equation to use. Can you help?”
The answer would be: Yes.
And I would help.
Of course a problem is that people are embarrassed to ask for help …and perhaps worried someone will maliciously publicly post and ridicule the correspondence. (A negative side-effect of “climategate” is the chill it put on correspondence.)
R. Gates, you have me confused. You say:
This means that you are saying that the natural variability is not a product of the underlying dynamics of the climate … but how can the natural changes not be the direct result of the underlying dynamics? What drives the natural variations if not the underlying dynamics? And when you see a change in the climate, how can you tell what is “natural variability” and what is not?
You also say:
I’m glad that you know, not just generally, but exactly why the tropospheric temperatures were mostly flat … and if so, as far as I can tell you are the only person I’ve ever heard of that claims to know that.
Please enlighten us?
Also, while you are at it, since you know exactly why the temperatures have stopped rising … how come you are so much smarter than the models that you are extolling? Heck, you make it sound like we could get rid of all of the models and just consult you about what the next decade will bring …
I don’t know whether you’re playing intentionally dense or not, but you’ve certainly read most of the same papers as I have and certainly know that a combination of a cool phase of the PDO (dominated by La Niñas), a rather inactive sun, a series of moderate volcanoes, and increased anthropogenic aerosols from Asia have helped to mask the underlying long-term warming signal in the troposphere from anthropogenic greenhouse gas emissions (the ocean heat content is of course another story entirely). If you deny knowing this, then perhaps you are not quite the researcher I thought you were.
Of course, the tropospheric heat content is far more fickle and subject to short-term “noise” of natural variability than energy reservoirs that have a greater amount of both thermal storage and thermal inertia. Specifically, the oceans and cryosphere – both of which have displayed that the Earth’s energy system continues to be quite out of balance with the needle tipped firmly toward “accumulate”, and nothing over these past ten years has changed that.
Asian sulphate seems a little bit more complex – depending on the mixing ratio of black carbon and sulphate – http://www.nature.com/ngeo/journal/v3/n8/full/ngeo918.html – the paper can be found if you look for it outside the paywall.
ENSO is certainly more variable over longer periods – here is an 11,000-year proxy based on red sediment in a South American lake – http://s1114.photobucket.com/albums/k538/Chief_Hydrologist/?action=view&current=ENSO11000.gif – Here is a 1000-year proxy based on salt in ice at the Law Dome – http://s1114.photobucket.com/albums/k538/Chief_Hydrologist/?action=view&current=Vance2012-AntarticaLawDomeicecoresaltcontent.jpg – High salt content = La Niña. There are implications for rainfall but also for longer term global T.
Again – any change in the Earth’s heat content must be caused by changes in radiative flux at TOA. What minor warming there was in the last decade was in the SW as a result of cloud cover change – http://s1114.photobucket.com/albums/k538/Chief_Hydrologist/?action=view&current=CERES_MODIS-1.gif
broken link – http://www.nature.com/ngeo/journal/v3/n8/full/ngeo918.html
It’s a long and winding road. But a gently increasing push keeps the show on the road. We’re only just getting going, but the batphysicsmobile is on its way.
“…global climate sensitivity, defined as the global warming simulated by a climate model in response to a sustained doubling of atmospheric CO2 concentrations, still shows a similar 30 percent spread across leading models as it did 20 years ago.”
One definition of insanity is to keep doing the same thing over and over again and expecting a different outcome. Flawed assumptions = flawed outcomes.
GCMs are insanely wrong. Built upon the radiative physics of CO2, then hammered into instruments of policy, with climate modelers advocating doing the same thing only more complex and more costly. All the contrarian modelers have been winnowed out and are off doing something else.
It is time folks for another idea, really. You might start with the Farmer’s Almanac, using data already collected for the last several glaciations. Make the framework ocean heat content and ocean circulation; then spend a few shekels on learning about clouds and their regional effects; apply the physics of “shade tree mechanics”; predict the climate for Azerbaijan in 3 years. Can’t do that? Tear it apart and start all over again.
Heh, what about leaving analytic-computational climate models alone for the time being?
And concentrate efforts on the general unresolved problems of thermodynamic processes in non-equilibrium closed systems interacting radiatively with their environment (and with themselves)? It is pure basic research in physics, no more and no less.
It is quite important to note that the assumption of LTE (Local Thermodynamic Equilibrium) in semi-transparent planetary atmospheres is untenable. That is, the local kinetic (Maxwellian) temperature of a small parcel of atmospheric stuff is never equal to its Planckian temperature, due to the presence of radiation coming from far away locations, with very different thermodynamic properties, absorbed and/or scattered over that particular parcel.
Therefore no general physical theory is applicable to atmospheric processes at the moment, such theory is still to be found.
It is premature, or rather, silly to go forward to applications with no adequate physical background. That is, the “science” in this field is as far from being settled as it can ever get, it is simply non-existent yet.
Those who try to build sophisticated computational models onto this “nothing”, are wasting money, that’s the sad state of affairs.
“Therefore no general physical theory is applicable to atmospheric processes at the moment, such theory is still to be found.” That works well enough, I might add.
You can use a near-radiantless moist air system and a nearly pure radiant dry air system and sort the tough stuff out later. Of course you have two different definitions of entropy and have to consider reversible dissipation of mass and energy, but heh, that just makes things more interesting :)
Unfortunately, that leaves approximately 10% of the surface of the Earth not inside either thermodynamic system which the radiant guys don’t like. Kinda makes the global mean temperature a bit of a joke too.
Berényi Péter, are you suggesting that we should do real experiments on replicas of the atmosphere, to ascertain its physical properties, instead of relying on theory?
Non-equilibrium systems are much more complex and they may undergo fluctuations of more extensive quantities. The boundary conditions impose on them particular intensive variables, like temperature gradients or distorted collective motions (shear motions, vortices, etc.), often called thermodynamic forces. If free energies are very useful in equilibrium thermodynamics, it must be stressed that there is no general law defining stationary non-equilibrium properties of the energy as is the second law of thermodynamics for the entropy in equilibrium thermodynamics.
OK Robert, it looks like it is time to do some physics experiments!
Yup. Experiments. However, I would not go for “replicas of the atmosphere” directly, because the atmosphere is huge, would never fit into the lab. Fortunately it is not atmospheric physics we need, but a general theory of thermodynamic processes in non-equilibrium closed systems, interacting radiatively with their environment. This theory is definitely missing from the arsenal of classical physics. We still have no idea what variational principles are applicable to such systems, if any.
While a realistic model of the terrestrial climate system is not available for experimentation, there is a hard lesson to be learned from the history of science. Breakthroughs never come from this kind of modelling, but from general principles derived from idealized experimental setups, and their application to real life problems later on. A sophisticated lab pendulum does not even come close to objects found in nature; still, by studying its behavior much can be learned about Newtonian dynamics, gravitation, conservation of energy & angular momentum, and the like, very useful concepts if one e.g. goes for a detailed description of lunar motion. On the other hand, a “realistic” model of the Sun-Earth-Moon system is utterly useless in the quest for such knowledge, it is just another toy at best.
What I have in mind is a chamber full of stuff which has an optical depth close to 1 in both the optical and infrared range of spectrum, that is, it’s neither fully transparent nor opaque to EM radiation. On the other hand the walls of the container should be as transparent as possible while locking stuff safely inside and insulating it thermally from its environment. The entire setup would be located in a larger chamber, whose walls are actively cooled and are kept at a very low temperature, while the inner chamber is irradiated by a low entropy short wave radiation source. Heat exchange between the inner chamber and the walls of the outer one can happen only by radiative coupling, so the space in between is supposed to be pretty good vacuum. The SW radiation source should be powerful enough to keep the effective temperature of the inner chamber as high as allowed by the construction materials, preferably at several hundred °C. This way the entire setup can be made much smaller, with less than a thousand cubic meters volume for the inner chamber.
It would never imitate the climate system, but one could run experiments on it, studying its behavior with different parameters like irradiation, optical depth, heat capacity, chemical composition, etc. One could even introduce ingredients into the inner chamber having condensation temperature close to the operating one, with some non-zero reflectance in the condensed state. Oh, it could be rotated as well on demand.
As soon as such a system is sufficiently understood, one can try to develop computational models of it, which would predict its behaviour. The big difference compared to current GCMs is that one could have as many experimental runs as needed with closely controlled parameters. That’s the way to do science, not the other way around, by running computational models a gazillion times to fit them to a unique physical instance.
I bet we would learn enough general principles along the way to be able to return to the actual atmosphere eventually, but that may only come at the end of the road, never before we have stepped on it.
Would such an experimental device cost money? Yes, it would. Much less than a particle accelerator on the order of the LHC, but still, it would cost. But unlike the cost of software development of current GCMs and that of the supercomputer time needed to run them, it would be worth every penny, because we could return to actually doing physics instead of playing computer games.
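For a rough sense of scale (my numbers, not the commenter’s): Stefan–Boltzmann gives the SW power such a chamber would need to sustain a given effective temperature against cold surroundings. The 600 K target and 500 m² radiating surface below are purely illustrative assumptions.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def absorbed_flux_for(t_eff_kelvin):
    """SW flux (W/m^2) the inner chamber must absorb to radiate at t_eff,
    treating it as a blackbody facing negligibly cold surroundings."""
    return SIGMA * t_eff_kelvin ** 4

# Illustrative numbers only: a ~600 K (~330 C) target and 500 m^2 of
# radiating surface -- neither figure comes from the comment.
flux = absorbed_flux_for(600.0)
power_mw = flux * 500.0 / 1e6
print(round(flux), "W/m^2 ->", round(power_mw, 2), "MW source")
# -> 7348 W/m^2 -> 3.67 MW source
```

A few megawatts of radiant source is large but well within laboratory reach, which supports the claim that such a device would cost far less than an LHC-class machine.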
A fan of *MORE* discourse
LOL …please enlarge your conception of the transport-theory literature, Berényi Péter! :smile: :grin: :lol: :!:
Yes, that is some serious FUD, to assert that some rather basic physics is misguided.
The problem Fan is the assumption of “an” equilibrium. When you have a bi-stable system, something has to be reversible. In the climate models, the sensible portion of latent cooling is assumed to zero out. Condensation regains the energy lost or at least a constant percentage of the sensible energy lost in evaporation, to simplify the calculations. That is a risky assumption because the time from lost to regain is variable and the amount of regain is variable.
If the condensation falls as snow on a glacier, that is obvious. If the glacier melts, that is obvious. What is not as obvious is where the water condensates. If it condensates more in the high latitudes, more sensible heat would be lost because there is less atmospheric resistance than if it falls in the tropics. If the water vapor is transported into the stratosphere by deep convection, most of the sensible energy is lost and a portion of the mass would be lost. The water vapor can even become entrained in convection loops and pump heat like a heat pipe.
Generally, those situations would be trivial, but with the forcing of a doubling of CO2 estimated to be 3.7 Wm-2 and the increase in latent cooling estimated to be 4% or roughly 3.2 Wm-2 per degree, the sensible portion is no longer trivial. As the system approaches one of the bi-stable points, the uncertainty is seriously not trivial.
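The arithmetic behind that 4% ≈ 3.2 Wm-2 comparison, assuming a global-mean latent heat flux of about 80 Wm-2 (a commonly quoted figure, not stated in the comment):

```python
# Figures from the comment, plus one assumption: a global-mean latent
# heat flux of roughly 80 W/m^2 (a commonly quoted value).
mean_latent_flux = 80.0     # W/m^2, assumed
increase_per_degree = 0.04  # 4% more latent cooling per degree (from the comment)
co2_doubling_forcing = 3.7  # W/m^2 (from the comment)

extra_latent = mean_latent_flux * increase_per_degree
print(extra_latent, "W/m^2 of extra latent cooling vs",
      co2_doubling_forcing, "W/m^2 of forcing")
```

The two numbers are the same order of magnitude, which is why the sensible-heat bookkeeping stops being a negligible term.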
Cliffs Notes version of the NRC study:
We are about to lose our funding in a political tsunami on November 6, so…
“Overall, climate modeling has made enormous progress in the past several decades”
Acceptance of this statement is a good test to distinguish between those who also accept scientific progress and those who are just deniers.
Deniers will dismiss most computer models with comments like garbage in garbage out, they’ll say that as the models aren’t perfect they must be useless, and we’ve heard those sentiments often enough on this blog.
Not all computer models though, any that happen to forecast a CS which is acceptably low will be acceptable to the deniers too. Different models will produce somewhat different results, so the only intelligent approach at the moment is to collate all the results and try to assess the range of probabilities.
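The “collate all the results and assess the range” approach can be sketched with invented numbers — these sensitivities are placeholders, not real model output:

```python
import statistics

# Hypothetical equilibrium climate sensitivities (C per CO2 doubling) from
# different models -- invented numbers standing in for real model output.
ecs = [2.1, 2.7, 3.0, 3.2, 3.4, 3.9, 4.4]

mean = statistics.mean(ecs)
spread = statistics.stdev(ecs)
print(f"ensemble mean {mean:.2f} C, spread {spread:.2f} C,"
      f" range {min(ecs)}-{max(ecs)} C")
```

Reporting the mean together with the spread and range, rather than any single model’s value, is the point of collating: no one model is privileged, and the disagreement itself is part of the answer.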
“’Overall, climate modeling has made enormous progress in the past several decades.’
“Acceptance of this statement is a good test to distinguish between those who also accept scientific progress and those who are just deniers.”
Not at all. To debate this phrase in this way shows a fundamental ignorance of modelling.
The first stage of modelling is to model a complex system that is only partly understood, to enable further study to build up towards a greater understanding of the system and to test the work already completed. It can be used predictively to see what happens at a particular time if one parameter is varied. I would be very surprised if substantial progress has not been made in this area, although I strongly suspect that the rate of progress has started to plateau because progress on understanding the physical processes controlling climate has not kept up with modelling and available computer power. The only issue here is whether directing more funding at the real-world physics might be more productive.
The second stage of modelling is when the first stage has reached a sufficiently advanced state to allow extrapolations into the future with some degree of reliability. As a sceptic it is absolutely clear to me that modelling has not reached this stage. Using the models for prediction at this time is a classic instance of “garbage in” producing “garbage out”.
The only intelligent approach at the moment is to recognise the models’ limitations and their stage of development and to work towards filling in the huge gaps between the models and reality.
work towards filling in the huge gaps between the models and reality.
Scientist: OK I agree there was a gap but now I’ve done some work and partially filled it in.
Denier: But you’ve now gone from one gap to two gaps. It’s like I said, these models are fundamentally flawed and are even getting worse.
Someone who actually knows what they are talking about.
Weather forecasts have both demonstrable skill and appreciable error (1). Climate predictions for anthropogenic global warming are both broadly credible yet mutually inconsistent at a level of tens of percent in such primary quantities as the expected centennial change in large-scale, surface air temperature or precipitation (2, 3). Slow, steady progress in model formulations continues to expand the range of plausibly simulated behaviors and thus provides an extremely important means for scientific understanding and discovery. Nevertheless, there is a persistent degree of irreproducibility in results among plausibly formulated AOS models. I believe this is best understood as an intrinsic, irreducible level of imprecision in their ability to simulate nature.
You assume that models have small errors. Unfortunately, not so. The grid cell is of the size of Utah if not Texas, they typically have one cloud layer at most, and until recently they did not even conserve energy. Mountains of work to do there. This is hardly even a work in progress; it is a work in infancy. To base Kyoto protocols on such “science” is a sheer impudence.
‘Atmospheric and oceanic computational simulation models often successfully depict chaotic space–time patterns, flow phenomena, dynamical balances, and equilibrium distributions that mimic nature. This success is accomplished through necessary but nonunique choices for discrete algorithms, parameterizations, and coupled contributing processes that introduce structural instability into the model. Therefore, we should expect a degree of irreducible imprecision in quantitative correspondences with nature, even with plausibly formulated models and careful calibration (tuning) to several empirical measures. Where precision is an issue (e.g., in a climate forecast), only simulation ensembles made across systematically designed model families allow an estimate of the level of relevant irreducible imprecision.’
There is a vast difference between opportunistic ensembles and systematically designed model families.
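A systematically designed family, as opposed to an opportunistic collection, can be illustrated with a zero-dimensional energy-balance toy: declare a parameter range up front, sweep it, and report the spread of dT = F/λ. The range of λ below is an assumed plausible range, not from the quoted paper.

```python
import numpy as np

def equilibrium_warming(forcing, feedback):
    """dT = F / lambda for a zero-dimensional energy-balance toy model."""
    return forcing / feedback

FORCING = 3.7  # W/m^2 for a CO2 doubling (figure used in the thread)

# Systematic design: declare the parameter range up front and sweep it,
# rather than pooling whatever runs happen to be available.
feedbacks = np.linspace(0.9, 1.9, 11)  # W m^-2 K^-1, an assumed plausible range

warmings = equilibrium_warming(FORCING, feedbacks)
print(f"{warmings.min():.2f} to {warmings.max():.2f} C across the designed family")
```

Because the parameter sweep is designed rather than accidental, the resulting spread can be read as an estimate of imprecision over a stated range of assumptions — which an opportunistic ensemble cannot support.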
“Overall, climate modeling has made enormous progress in the past several decades”
The problem is the word “enormous”.
The “enormous” claim can’t be credibly made until modelers have a firm handle on natural variations.
Aren’t you a nature lover? Let’s show some basic respect for the power & beauty of nature.
When will the climate models start to incorporate the following natural climate variability and relationships?
Verdon, D. C. and S. W. Franks
Long-term behaviour of ENSO
“…phase changes in the PDO have a propensity to coincide with changes in the relative frequency of ENSO events, where the positive phase of the PDO is associated with an enhanced frequency of El Niño events, while the negative phase is shown to be more favourable for the development of La Niña events.”
Mantua et al
The Pacific Decadal Oscillation (PDO)
“The “Pacific Decadal Oscillation” (PDO) is a long-lived El Niño-like pattern of Pacific climate variability. While the two climate oscillations have similar spatial climate fingerprints, they have very different behavior in time.”
“20th century PDO “events” persisted for 20-to-30 years, while typical ENSO events persisted for 6 to 18 months;”
“Several independent studies find evidence for just two full PDO cycles in the past century: “cool” PDO regimes prevailed from 1890-1924 and again from 1947-1976, while “warm” PDO regimes dominated from 1925-1946 and from 1977 through (at least) the mid-1990’s.”
Nowhere in the article does it state
“Test the models to destruction”
Thus the models are not experiments designed to test an hypothesis, and the people who design, run and interpret the models are not scientists.
My first thought was old fashioned mainframes and tracked construction equipment. But these programs should be pushed to the limit to explore the limits of chaotic variability.
The people who brought us CAGW (global scares) now propose to bring us CARW (regional scares) and even CALW (local scares). The mind numbs at their arrogance. Or laughs.
Yes its all very amusing that scientists think they can predict the future from models. We can all have a good chuckle. These models can never predict exactly what will happen in 100 years time. They can’t even tell us when the next Hurricane, like Katrina, will occur. Even if they could tell us when, they couldn’t tell us where. And even if they could tell us when, and where, they couldn’t tell us exactly how strong the winds would be. And even if they could tell us when and where and exactly how strong the winds would be they couldn’t tell us how much rain to expect. The models are totally useless.
So there is really nothing to be concerned about. CAGW is just a figment of some computer programmer’s imagination. We can all agree about that.
I’m impressed by the quality of some of the above comments, but, to me, the NRC Strategy recommendations appear to be a somewhat verbose way of asking for more swill in the modellers’ trough. As we can never be sure that that all relevant factors have been identified, accurately quantified and correctly incorporated there is no reason to suppose that any model will ever work. It is probably time to stop flogging a dead horse. (Mixed metaphor — mea culpa)
“As we can never be sure that all relevant factors have been identified, accurately quantified and correctly incorporated, there is no reason to suppose that any model will ever work.”
Absolutely right. No reason at all. I'm sure we can always say there is at least one unidentified relevant factor. Those warmists would no doubt challenge us to say what it was. And that just goes to show just how stupid they are. If we could do that, it wouldn't be unidentified, would it?
As former President Eisenhower warned might happen in his farewell address to the nation on 17 Jan 1961, a “technological-scientific elite” has been richly blessed with public funds for telling politicians that their false illusions of grandeur are proven scientific facts. See:
Their illusions are bigger than the Sun’s “sphere of influence”
 Eisenhower’s Farewell Address (17 Jan 1961)
Maybe the climate models have successfully predicted the future already, and we only don’t know it yet. V&V comes to those who wait.
They would first have to agree, which they do not. There is no such thing as what the models predict, because different models, and even different runs of the same model, predict different things, especially at the regional and local levels, but globally too. That there is such a thing as what the models predict is a huge confusion.
That Climate Modeling 101 website is terrible to view. In order to read it you have to use the key combination Ctrl-A to highlight all the text.
Those keys apply to my Windows machine; I don't know whether other operating systems use the same shortcut.
Just so you don't have to look through the main post, here is the link to the site: http://nas-sites.org/climatemodeling/index.php
Geez! Not a very good start. Top of main page they say:
That leaves the novice reader with the idea that they have succeeded in simulating climate. That statement should be changed from “that simulate” to “that attempt to simulate”.
That they have successfully simulated global climate, past and future, is their claim. They now want money to simulate future regional and local climates. If this claim is false that must be shown.
Garbage production funding
The Skeptical Warmist (aka R. Gates) | September 14, 2012 at 6:06 pm
Please, insert your nasty insinuations in the orifice of your choice, sit on them, and spin. They are untrue, unpleasant, and childish. I do not “play dense”, I leave that to others. The fact that you suspect people of doing that speaks volumes regarding your own proclivities. Now, back to what you call science.
First, I don't have a clue what you know and don't know, and by the same token, you don't have a clue what I (or most folks) “certainly know”. Don't even bother trying; your mind-reading skills are no better than anyone else's.
Next, you are saying that the PDO and the sun are ruling the climate, and are enough to totally overpower the effects of CO2? Dang, my friend … sounds quite heretical. You sure you want to endorse “the sun did it”?
In addition, IF that is true, then have you calculated how small the temperature rise in previous decades would have been given a much more active sun plus the warm phase of the PDO? Please let us know your calculations as to the size of each of those effects, should be entertaining.
And you are claiming that volcanic eruptions have been more prevalent during 2000-2009 than they were during the previous decades? That’s simply not true. There were both more and larger volcanoes in the previous decade (1990-1999), and yet the climate warmed during that time … why is that?
Next, the usual claim about the Asian anthropogenic carbon aerosols (the Asian “brown cloud” that you refer to) is that they warm the surface … e.g. from Nature Climate Change:
Are you claiming differently, disagreeing with the Nature article, and saying the Asian aerosols are actually cooling the planet?
Finally, your accusation that if I don’t agree with you I must not be a good researcher is a pathetic joke. Which of us is correct is as yet undetermined, and will be for a while, so you might as well get used to it. You can’t unilaterally declare victory, it just makes you look foolish and unscientific.
Oh, yeah, one other point. An El Nino is when the Nino3.4 index is greater than 1; a La Nina is when the index is less than minus one. In the decade 2000-2009, 8% of the months were El Nino months, and … wait for it … 8% of the months were La Nina months.
So whatever kept the planet from warming, it wasn’t the ratio of La Nina to El Nino as you claim.
DATA SOURCE: NOAA
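The counting exercise described above can be sketched in a few lines. This is an illustrative reconstruction only, using the comment's ±1 threshold (NOAA's operational ONI definition actually uses ±0.5 °C sustained over several seasons), and the anomaly values below are made-up placeholders, not real NOAA data.

```python
# Sketch of classifying months by Nino3.4 anomaly, per the comment's +/-1 rule.
# The sample anomalies are hypothetical, not actual NOAA observations.

def classify_enso(nino34_anomalies, threshold=1.0):
    """Return (fraction of El Nino months, fraction of La Nina months)."""
    n = len(nino34_anomalies)
    el_nino = sum(1 for a in nino34_anomalies if a > threshold)
    la_nina = sum(1 for a in nino34_anomalies if a < -threshold)
    return el_nino / n, la_nina / n

# Hypothetical year of monthly anomalies (a real decade would have 120 values):
sample = [1.2, 0.3, -1.1, 0.0, 1.4, -0.2, -1.3, 0.5, 0.9, -1.2, 1.1, -0.4]
warm_frac, cool_frac = classify_enso(sample)
print(f"El Nino months: {warm_frac:.0%}, La Nina months: {cool_frac:.0%}")
# -> El Nino months: 25%, La Nina months: 25%
```

With real monthly Nino3.4 data substituted in, equal fractions on both sides of the threshold would support the comment's point that the El Nino/La Nina ratio cannot explain a decadal trend.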
Steven Mosher | September 14, 2012 at 7:48 pm |
If it is none of my business, then what on earth are you doing posting about it in an extremely public forum?
Get a room if you want a private conversation, otherwise folks will foolishly assume that you are posting in a public forum because you think your ideas might be of interest to the public. And as a result, the public, myself included, will foolishly comment on them without realizing that you meant to whisper them in David’s ears alone.
It helps if people address themselves to the specific person they want to inform if they intend having a private conversation in a public forum. Mind you, I think it helps anyway to address yourself to the person you are responding to – public or not – as otherwise conversations get confused, since the nesting can sometimes land you in the wrong place.
Tony, I believe that the protocol is that anyone can comment on anyone else’s comment. There are no private conversations here.
I totally agree – anyone can comment on anything. I'm merely pointing out that it helps the flow of conversation if you name the person you may be specifically replying to at the time, which might also help avoid some of the food fights here when someone misinterprets a message intended for someone else.
David, tell that to Mosh; he claims that his conversation and comments to you are “none of [my] business”.
I second Oliver M’s assessment of climate modeling, and I appreciate the comments by Jim Cripwell on the ineluctable limitations and impracticality of models.
Climate modeling is a waste of time and resources and is a pernicious distraction from the task at hand, which is to put a stop to AGW propaganda, of which modeling is one of the chief instruments – and that's about all it's good for: to propagate politically motivated lies.
The mathematics involved will simply never allow climate models to be useful. There are too many variables, known AND UNKNOWN, and too much unpredictability in the ranges of variation of those variables.
Computer models can never be accurate because they cannot be programmed to be. The computer can't produce useful information if its users can't get their assumptions correct, and in the case of climate modeling they never will get them straight. How do you factor in hundreds of partial derivatives with huge sigmas – often bigger than their absolute values – attached to all of them? And trying to guess and quantify the factors you haven't identified can only corrupt the process further.
It ain’t happenin,’ and let’s stop wasting time and effort and money on a useless exercise that incidentally only facilitates the spread of disinformation.
Frankly, I resent the fact that my tax monies are being stolen by charlatans to engage in this wasteful and perversely motivated rubbish.
Yeah, total waste of time… Couldn’t possibly learn anything from models. Imperfect models of quantum mechanics taught us nothing about quantum systems. Because we can’t model QM systems perfectly, we can’t make any accurate predictions, so it’s a total waste of time for all those theoreticians to even try. Too hard, too many calculations. Yup, probably ought to close down all those physics and chemistry departments that use those models too. I mean who actually believes a particle can be in two places at the same time? Or quantum tunneling… What a crock.
And while we're at it, we ought to quit this whole business of modeling the big bang or the development of string theory. Those guys are really whack. Nine dimensions? You gotta be kidding me. Why bother trying to find the connections between the universe and the subatomic. Yup, useless exercises that incidentally only facilitate the spread of disinformation…. Total waste of time and taxpayer money being stolen by those theoretician charlatans engaging in their perversely motivated rubbish.
So it can only follow that all climate modelers are cut from the same cloth. Yeah, they don't really want to learn how the climate system could be described through models…. No.. They ALL only want to use those models to push policy, ALL of them…. Every single one.
I’m with you Chad…. /sarc off/
The anti-science neodenialist vituperation that Judith’s post has attracted is aptly satirized by the following poem that was posted on Neven’s ASI weblog:
Kudos to Jim Pettit! :) :grin: :lol: :!:
Very scientific, indeed.
If unpredictable natural variability dominates on the decadal scale, as the report seems to say, then one wonders what the modelers intend to tell the farmers?
Thank you for this thoughtful question, David Wojick! :) :) :)
Answer Prepare for more heat and more drought. :shock: :!: :cry: :shock: :!: :cry:
David Wojick, what is your next question? :?: :?: :?:
More drought everywhere, Fan? So you would build reservoirs and irrigation systems where none are presently needed, throughout the world? That is what water resource managers do with droughts. How would you decide how big to make them? What an incredible waste of money. This is why no one is listening.
BTW Fan, GW is supposed to increase the water cycle, giving more precip overall. So drought cannot be the universal prediction.
Regarding water resource management, which is mostly long-term reservoir regulation, a decade ago the US National Assessment used two prominent climate models to project runoff in various large watersheds. These were the Hadley and the Canadian. In most cases the predictions were of opposite sign, in some cases dramatically so. So we will either have more rain or less, perhaps a lot but maybe not, which is useless information. Has this changed, such that the models now all agree at the spatial and time scales of interest? I doubt it. If not, then all these references to informing local managers are bogus at best.
Thank you for this thoughtful question, David Wojick! :) :) :)
Answer: Prepare for higher salinity. :shock: :!: :cry:
David Wojick, what is your next question? :?: :?: :?:
Sorry Fan, but I do not understand your point.
David Wojick, a crux question is, “What are the time-scales of interest?“
On decadal scales, climate change is only marginally relevant, eh?
On century scales, climate change is utterly dominant, eh?
What is your fore-sight time-scale, David Wojick? :?: :?: :?:
How much extra sea does that image show? 5 meters? I am sure that won't happen by 2107. Maybe 2200.
Fan: The Plan talks about informing water managers (and farmers) so the time scales of interest are annual to decadal. Nor do you address the central issue I raised, and you quoted, which is model disagreement. You are wasting our time.
Maximum storm tides for hurricanes hitting the Tampa region:
There's no easy way to eyeball them, but it looks like the maximum storm tide on the scale goes up about 6 feet per category.
A 6 ft rise in sea level (1.8 m) would seem to be equivalent to increasing the flooding from a hurricane to the next level up. So flooding from a cat 1 hurricane with a 6 feet higher sea level would be equivalent to flooding from a cat 2 hurricane today.
This probably has more influence on non-hurricane storms though. A 6 foot higher sea level would make minor storms carry the same flooding potential as a category one today.
For example, the image here shows estimated flooding from a 3 foot storm surge (must be less than cat 1) plus a 3.3 foot (1 meter) sea level rise (6.3 ft total):
The (central estimate) flooding in this image is comparable to the areas affected by a 6 foot surge in the cat 1 image.
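The equivalence being argued above is simple arithmetic. As a minimal sketch, assuming the commenter's eyeball estimate of roughly 6 ft of maximum storm tide per Saffir-Simpson category (not an official figure):

```python
# Back-of-envelope sketch of the comment's reasoning: sea level rise shifts
# the flooding impact of a storm upward by (rise / ft-per-category) categories.
# The 6 ft/category figure is the commenter's eyeball estimate, not a standard.

FT_PER_CATEGORY = 6.0  # assumed, per the comment

def effective_category(actual_category, sea_level_rise_ft):
    """Flooding-equivalent category after a given sea level rise."""
    return actual_category + sea_level_rise_ft / FT_PER_CATEGORY

print(effective_category(1, 6.0))  # cat 1 storm + 6 ft rise floods like a cat 2
print(effective_category(0, 6.3))  # a sub-cat-1 surge + rise, roughly cat 1
```

This is only the flooding dimension, of course; wind damage does not shift with sea level.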
“Sometimes I wonder whether the world is being run by smart people who are putting us on, or by imbeciles who really mean it.”
–attributed to Mark Twain
PETA-plops of power, whether or not it is just the weather.
5 meters higher sea level by 2200?
How about in 890 years (IPCC upper estimate of rate of rise) or in year 2900?
Come back down to planet Earth, lolwot.
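The "890 years" figure above is just a division. A rough check, assuming a constant rate near the upper end of the IPCC AR4 projection range (about 0.59 m per century, i.e. 5.9 mm/yr; the exact rate assumed by the commenter is not stated):

```python
# Rough arithmetic behind the comment's figure: time for 5 m of sea level
# rise at a constant assumed upper-end rate of ~5.9 mm/yr.

RISE_M = 5.0
RATE_M_PER_YR = 0.0059  # assumed for illustration

years = RISE_M / RATE_M_PER_YR
print(f"~{years:.0f} years")  # on the order of 850 years, close to the comment's 890
```

Small differences in the assumed rate move the answer by decades, which is presumably why the comment lands at 890 rather than exactly this value.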
Let me resume my professional standing for a bit – (h/t Cecil) –
Recent research suggests that the AMO is related to the past occurrence of major droughts in the Midwest and the Southwest. When the AMO is in its warm phase, these droughts tend to be more frequent and/or severe (prolonged?). Vice versa for the negative AMO. Two of the most severe droughts of the 20th century occurred during the positive values of the AMO between 1925 and 1965: the Dust Bowl of the 1930s and the 1950s drought. Florida and the Pacific Northwest tend to be the opposite: warm AMO, more rainfall.
From Atlantic Multidecadal Oscillation web page of the Atlantic Oceanographic and Meteorological Laboratory.
This is a good primer on oceans – http://oceanworld.tamu.edu/resources/oceanography-book/oceananddrought.html
We have been warning for some time of the potential for 'dust bowl' conditions to evolve. Models are of course not regionally accurate at all, but we can gain some insight from patterns of sea surface temperature. These persist for decades to centuries. If we look at this new high-resolution ENSO proxy, for instance, there is a suggestion of a centuries-long increase in the frequency of La Nina to come. This has immense implications for rainfall and temperature globally.
Robert I Ellison
I think it would be nice to have more climate modelers working on the problem as well as bigger computers to run models on.
What problem are you referring to, Lol?
they did that too. Now how is your weather today?
The link is broken.
David Wojick, your claim is just plain false!
Our farm has already been in our family for 150+ years, and we plan for its fields and soils and forests to remain in good shape for another 1000 years :lol: :lol: :lol: :!:
Short-sighted neodenialist economic valuations are not what made America great, eh David Wojick :?: :?: :?:
I do not think the Plan is referring to 1000-year forecasts when it refers to providing information to farmers. Your responses are getting sillier each time, a typical trend I think. We are trying to talk about the NRC Plan here.
David Wojick , it is plain folly to imagine that improving our decade-scale scientific understanding of climate-change can be separated from improving our century-scale scientific understanding of climate change.
Isn’t that correct, David Wojick? :) :) :)
No, it is ridiculous, but then so are you.
One way to address both decadal and millennial time scales is to spend our resources on building and analyzing as many multiproxy geologic records as possible for as much of the Pleistocene as possible. Combined with sound geologic work, it could also correlate atmospheric signals with storm/flood/fire and other events. This would give us actual observations on climate that could inform our theoretical constructs (models), as well as help refine the methods and resolution of geologic proxies.
Germany's 1000-year empire lasted from 1933 to 1945. The “plan” you are referring to is a misinformed rant. Do you actually believe that pre-industrial societies were a paradise? Read why the Babylonian and Roman empires collapsed.
I think it would be nice to cut climate forecasting-related funding by >70% and use the proceeds to solve real problems; e.g., $100M might just keep Asian carp out of the Great Lakes and avert an ecological disaster. We live in a world of finite resources; let's prioritize (sometimes wild) conjectures about the distant future lower than the known threats right in front of our faces.
What’s your problem Concerned Citizen?
Technology will find a use for Asian Carp … human technological advances always win in the end. And as it gets warmer, we will have more and more carp to provide more renewable energy to counteract dwindling fossil fuel resource supplies.
Chalk up another fallacious strawman argument to the Go!Team
I don’t understand your use of the term “Strawman.” I’d rather spend money on real issues than imagined (or yet to be proven) ones. Rather simple, really. I’m stating my own opinion, not proposing arguments.
I’d add that the shrillness here amazes me. I’ve been placed on a team with which I am unfamiliar.
Nice phony “caveman lawyer” naivete on display.
1. Didn’t see a recommendation to find more ways to “test” climate models against observations. Very important.
2. The idea that natural forces manifest themselves over relatively short time scales is an assumption at this point. That has not been proved.
Why put test in quotes?
I’m thinking of indirect tests like comparing incoming/outgoing radiation patterns as measured vs models, cloud formation frequency over time and space, stuff like that – not a direct test in the sense that the model predicts climate changes. I wouldn’t expect a perfect climate model would be able to do that due to climate’s spatio-temporal chaotic quality.
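The kind of indirect test described above amounts to comparing a modeled field against an observed one with a bulk error metric. A minimal sketch, where both series are placeholder values standing in for gridded data (not real model output or satellite observations):

```python
# Sketch of an indirect model test: bulk error between modeled and observed
# outgoing longwave radiation (OLR). Values below are hypothetical placeholders.
import math

def rmse(modeled, observed):
    """Root-mean-square error between two equal-length series."""
    if len(modeled) != len(observed):
        raise ValueError("series must be the same length")
    return math.sqrt(sum((m - o) ** 2 for m, o in zip(modeled, observed)) / len(modeled))

# Hypothetical gridpoint OLR values in W/m^2:
model_olr = [240.0, 242.5, 239.0, 241.0]
obs_olr   = [241.0, 241.5, 240.0, 240.0]
print(f"RMSE = {rmse(model_olr, obs_olr):.2f} W/m^2")
# -> RMSE = 1.00 W/m^2
```

A model can score well on such field-by-field comparisons while still failing at long-range prediction, which is exactly the distinction the comment draws.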
I'm pretty astonished that one of the key recommendations is still missing:
Models' Verification & Validation!
If models are not anchored to the real world, and then duly validated, they are useless. None of the nice climate models has ever been validated, and so far none is actually able to pass any V&V process.
Climate models? All modeling results and predictions were proven wrong as time went by. GIGO!
I know, Sam, and this is exactly why this recommendation is still missing.
Climate scientists know that all their nice models would be falsified by such a V&V process.
V&V process is a norm for all proven science and non-science. Climate modeling is not science and not even non-science, nothing more than GIGOs. Laughable warmists cry for actions based on GIGOs.
The U.S. National Research Council (NRC) has just released a new study under the auspices of BASC that takes on the challenge of A National Strategy for Advancing Climate Modeling . Here is a summary of the recommendations: 1. … 9.
What are some other sources of improvements?
As it is easy to recognize from UAH satellite temperature measurements, where the temperature anomalies of the NH and the SH correlate in their oscillations, climate is not a national matter but a global matter, and it is a global matter of science and physics. Physicists know that a simple model of a heat current needs a heat source that produces the measured, mostly phase-coherent, global temperature oscillations at frequencies of months, years, decades and centuries. The model has to calculate the heat power spectrum that results in the temperature values as measured. Because the heat driving a heat current does not come out of nothing, it seems that the Sun is the driver of the heat current.
The problem with recommendations for a national strategy is that i.) it is politics, not science; ii.) global climate cannot be separated from the terrestrial heat source, the Sun, and the dynamic solar system; iii.) no (national) democratic strategy and its work will solve any scientific challenge of the global climate's nature; moreover, iv.) it blocks the work of climate science, because scientists end up doing political work; and v.) they do not really look for new aspects in heliocentric climate science.
New aspects in climate science, ignored for two years by the modellers of climate, are correlations between solar-tide-like synodic frequencies and the measured global temperatures:
for more details of the new aspect.
Where to go from here?
My main worries about climate models:
Not enough knowledge of the CO2 molecule: its resonances or vibrations, the Q of the resonances, under what conditions and phases of the steps and stairs of quantum theory we can use classical thermodynamics rather than quantum mechanics, and what happened in 1940. Are there longer wavelengths of excitation than 14.5 microns for CO2? Was the dramatic change in the momentum of temperature in 1940 due to large concentrations of CO2 achieving quantum change together?
I’ll read Climate 101 before further comment.
Show your work.
Show your data.
– Publish the requirements for the model (high-level purpose to be achieved).
– Publish the specifications for the model (detail of how each requirement is to be achieved).
– Publish the model code and computing procedures.
– Document and publish the parameters used in the model.
– Document and publish the procedures used to set the parameters
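The checklist above can be made concrete as a machine-readable provenance record that travels with the model. A minimal sketch, where the field names, the parameter name, and the repository URL are all illustrative assumptions, not part of any existing metadata standard:

```python
# Sketch of a publishable model-provenance record covering the checklist items:
# requirements, specifications, code, parameters, and how each parameter was set.
# All names and values here are hypothetical placeholders.
import json

model_record = {
    "requirements": "URL or DOI of the high-level purpose document",
    "specifications": "URL or DOI of the detailed design document",
    "code": {"repository": "https://example.org/model.git", "commit": "abc123"},
    "parameters": [
        {
            "name": "cloud_entrainment_rate",  # hypothetical parameter
            "value": 0.1,
            "units": "1/s",
            "how_set": "tuned against 1980-1999 TOA radiation balance",
        },
    ],
}

# Serializing the record makes it easy to publish alongside the model code.
print(json.dumps(model_record, indent=2))
```

Publishing such a record with every model release would let outside reviewers reproduce not just the code but the tuning decisions behind it, which is the substance of the V&V demand made upthread.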