by Judith Curry
The calculation of atmospheric radiative fluxes is central to any argument related to the atmospheric greenhouse/Tyndall gas effect. Atmospheric radiative transfer models rank among the most robust components of climate models, in terms of having a rigorous theoretical foundation and extensive experimental validation both in the laboratory and from field measurements. However, I have not found much in the way of actually explaining how atmospheric radiative transfer models work and why we should have confidence in them (at the level of technical blogospheric discourse). In this post, I lay out some of the topics that I think need to be addressed in such an explanation regarding infrared radiative transfer. Given my limited time this week, I mainly frame the problem here and provide some information to start a dialogue on this topic; I hope that other participating experts can fill in the gaps (and I will update the main post).
Atmospheric radiative transfer models
Wikipedia provides a succinct description of radiative transfer models:
An atmospheric radiative transfer model calculates radiative transfer of electromagnetic radiation through a planetary atmosphere, such as the Earth’s. At the core of a radiative transfer model lies the radiative transfer equation that is numerically solved using a solver such as a discrete ordinate method or a Monte Carlo method. The radiative transfer equation is a monochromatic equation to calculate radiance in a single layer of the Earth’s atmosphere. To calculate the radiance for a spectral region with a finite width (e.g., to estimate the Earth’s energy budget or simulate an instrument response), one has to integrate this over a band of frequencies (or wavelengths). The most exact way to do this is to loop through the frequencies of interest, and for each frequency, calculate the radiance at this frequency. For this, one needs to calculate the contribution of each spectral line for all molecules in the atmospheric layer; this is called a line-by-line calculation. A faster but more approximate method is a band transmission. Here, the transmission in a region in a band is characterised by a set of coefficients (depending on temperature and other parameters). In addition, models may consider scattering from molecules or particles, as well as polarisation.
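To make the monochromatic, layer-by-layer bookkeeping described above concrete, here is a minimal illustrative sketch in Python (a toy three-layer atmosphere with made-up optical depths, not any operational code): the surface Planck emission is attenuated by each layer, and each layer adds its own thermal emission weighted by its emissivity.

```python
import numpy as np

# Physical constants (SI)
H = 6.626e-34    # Planck constant, J s
C = 2.998e8      # speed of light, m/s
KB = 1.381e-23   # Boltzmann constant, J/K

def planck(nu, T):
    """Planck spectral radiance B(nu, T) in W m^-2 sr^-1 Hz^-1."""
    return 2.0 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * T))

def upwelling_radiance(nu, T_surf, layer_T, layer_tau):
    """Monochromatic upwelling radiance at the top of a plane-parallel,
    non-scattering atmosphere: surface emission attenuated by each layer,
    plus each (isothermal) layer's own emission, working bottom to top."""
    rad = planck(nu, T_surf)
    for T, tau in zip(layer_T, layer_tau):
        trans = np.exp(-tau)                     # layer transmittance at this nu
        rad = rad * trans + planck(nu, T) * (1.0 - trans)
    return rad

# Toy example at a single frequency near 600 cm^-1 (~1.8e13 Hz); a real
# line-by-line code loops over very many frequencies, building each layer's
# optical depth from the spectral lines of every absorbing molecule, and then
# integrates the monochromatic results over the band of interest.
nu = 1.8e13                           # Hz
layer_T = [280.0, 250.0, 220.0]       # layer temperatures, K (bottom to top)
layer_tau = [0.8, 0.4, 0.1]           # assumed monochromatic optical depths
print(upwelling_radiance(nu, 288.0, layer_T, layer_tau))
```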
If you don’t already have a pretty good understanding of this, the Wikipedia article is not going to help much. There are a few good blog posts that I’ve spotted that explain aspects of this (notably scienceofdoom):
- Theory and experiment – atmospheric radiation
- CO2: An insignificant trace gas Part III
- CO2: An insignificant trace gas Part IV
- others?
Do you find scienceofdoom’s treatments to be beyond your capability to understand? Let’s try more of a verification and validation approach to assessing whether we should have confidence in the radiative transfer codes used in climate models.
History of atmospheric (infrared) radiative transfer modeling
I don’t recall ever coming across a history of this subject. Here are a few pieces of that history that I know of (I hope that others can fill in the holes in this informal history).
Focusing on infrared radiative transfer, there is some historical background in the famous Manabe and Wetherald 1967 paper on early calculations of infrared radiative transfer in the atmosphere. As a graduate student in the 1970’s, I recall using the Elsasser radiation chart.
The first attempt to put a sophisticated radiative transfer model into a climate model was made by Fels and Kaplan 1975, who used a model that divided the infrared spectrum into 19 bands. I lived a little piece of this history, when I joined Kaplan’s research group in 1975 as a graduate student.
In the 1980’s, band models began to be incorporated routinely in climate models. An international program of Intercomparison of Radiation Codes in Climate Models (ICRCCM) was inaugurated for clear sky infrared radiative transfer, with results described by Ellingson et al. 1991 and Fels et al. 1991 (note Andy Lacis is a coauthor):
During the past 6 years, results of calculations from such radiation codes have been compared with each other, with results from line-by-line models and with observations from within the atmosphere. Line by line models tend to agree with each other to within 1%; however, the intercomparison shows a spread of 10-20% in the calculations by less detailed climate model codes. When outliers are removed, the agreement between narrow band models and the line-by-line models is about 2% for fluxes.
Validation and improvement of atmospheric radiative transfer models
In 1990, the U.S. Department of Energy initiated the Atmospheric Radiation Measurement Program (ARM) program targeted at improving the understanding of the role and representation of atmospheric radiative processes and clouds in models of the earth’s climate (see here for a history).
A recent summary of the objectives and accomplishments is provided in the 2004 ARM Science Plan. A list of measurements (and instruments) made by ARM at its sites in the tropics, midlatitudes and the Arctic is very comprehensive. Of particular relevance to evaluating infrared radiative transfer codes is the Atmospheric Emitted Radiance Interferometer. For those of you who want empirical validation, the ARM program provides this in spades.
The ARM measurements have become the gold standard for validating radiative transfer models. For line-by-line models, see this closure experiment described by Turner et al. 2004 (press release version here). More recently, see this evaluation of the far infrared part of the spectra by Kratz et al. (note: Miskolczi is a coauthor).
For a band model (used in various climate models), see this evaluation of the RRTM code:
Mlawer, E.J., S.J. Taubman, P.D. Brown, M.J. Iacono and S.A. Clough: RRTM, a validated correlated-k model for the longwave. J. Geophys. Res., 102, 16,663-16,682, 1997 link
This paper is unfortunately behind a paywall, but it provides an excellent example of the validation methodology.
The most recent intercomparison of climate model radiative transfer codes against line-by-line calculations is described by Collins et al. in the context of radiative forcing.
There is a new international program (the successor to ICRCCM) called the Continual Comparison of Radiation Codes (CIRC), which established benchmark observational case studies and coordinates intercomparison of models.
Conclusions
The problem of infrared atmospheric radiative transfer (clear sky, no clouds or aerosols) is regarded as a solved problem (with minimal uncertainties), in terms of the benchmark line-by-line calculations. Deficiencies in some of the radiation codes used in certain climate models have been identified, and these should be addressed if these models are to be included in the multi-model ensemble analysis.
The greater challenges lie in modeling radiative transfer in an atmosphere with clouds and aerosols, although these challenges are greater for modeling solar radiation fluxes than for infrared fluxes. The infrared radiative properties of liquid clouds are well known; some complexities are introduced for ice crystals owing to their irregular shapes (this issue is much more of a problem for solar radiative transfer than for infrared radiative transfer). Aerosols are a relatively minor factor in infrared radiative transfer owing to the typically small size of aerosol particles.
However, if you can specify the relevant conditions in the atmosphere that provide inputs to the radiative transfer model, you should be able to make accurate calculations using state-of-the-art models. The challenge for climate models is in correctly simulating the variations in atmospheric profiles of temperature, water vapor, ozone (and other variable trace gases), clouds and aerosols.
And finally, for calculations of the direct radiative forcing associated with doubling CO2, atmospheric radiative transfer models are more than capable of addressing this issue (this will be the topic of the next greenhouse post).
Note: this is a technical thread, and comments will be moderated for relevance.
Ferenc Miskolczi posts papers and comments detailing the development of his quantitative Line By Line (LBL) HARTCODE program, testing it against data, and comparing it against other LBL models. He then used it to quantitatively evaluate the Earth’s Global Planck-weighted Optical depth.
For the detailed discussion of the methodology and code, see:
F.M. Miskolczi et al.: High-resolution atmospheric radiance-transmittance code (HARTCODE). In: Meteorology and Environmental Sciences Proc. of the Course on Physical Climatology and Meteorology for Environmental Application. World Scientific Publishing Co. Inc., Singapore, 1990. 220 pg.
He provides an extensive theoretical basis and experimental foundations. He printed the complete code in Appendix C (for A. Lacis and others who might care to learn how LBL models calculate).
For comparative performance, see: Kratz-Mlynczak-Mertens-Brindley-Gordley-Torres-Miskolczi-Turner: An inter-comparison of far-infrared line-by-line radiative transfer models. Journal of Quantitative Spectroscopy & Radiative Transfer No. 90, 2005.
Miskolczi published the first quantitative evaluation of the Global Optical Depth. He found it to be effectively stable over the last 61 years. See:
The Stable Stationary Value of the Earth’s Global Average Atmospheric Planck-weighted Greenhouse-Gas Optical Thickness, Energy & Environment, Special issue: Paradigms in Climate Research, Vol. 21 No. 4 2010, August. See his Context Discussion:
Using a quantitative Planck-weighted Optical Depth, Miskolczi found:
I recall Nullius’s milk analogy, in which a small increase in the concentration of milk in water can noticeably affect transparency given enough depth (at least that’s how I remember it), as an illustration of the effect of small increases in the concentration of CO2 in the atmosphere. Does optical depth refer to this? If it does, and the optical depth has remained constant over 60 years, what does that say about the analogy?
See Ferenc Miskolczi’s latest April 2011 results:
The stable stationary value of the Earth’s global average atmospheric infrared optical thickness Presented by Miklos Zagoni, EGU2011 Vienna
From quantitative Line By Line evaluations of the global optical depth using the best available data from 1948-2008, Miskolczi finds:
Thus Miskolczi finds there is NO correlation of global optical depth with CO2, only with H2O.
Furthermore, the global average is about constant – with very little trend.
So would “stationary” or “static” be better words than “saturated”?
Some will argue that the TIGR2 data is flawed. What better is there?
Miskolczi has also adjusted his calculations to match satellite data.
Has anyone else taken the effort to actually quantitatively evaluate the global optical depth and how it changes – or explain why it does not?
The term Planck-weighted greenhouse-gas optical thickness (PWGGOT) means that the optical thickness is evaluated for diffuse thermal radiation from a black surface (Planck weighting) at the bottom of the atmosphere for transmission to space. Only the greenhouse-gas effects are considered, and the calculation does not include explicit effects from clouds.
David Hagen’s sentence “Thus Miskolczi finds there is NO correlation of global optical depth with CO2, only with H2O” means that there was no linear trend of the global year-round average PWGGOT over the 61-years. It would misread David’s sentence to suppose it meant that immediate CO2 effects do not affect the PWGGOT; of course they do. It is the 61-year linear trend that he refers to.
David’s sentence says that Miskolczi found a trend effect only from H2O. Not so. Miskolczi’s Figure 11 also shows a trend effect from temperature, which Miklos Zagoni’s presentation (to which David refers) does not mention, and there is also a trend effect from CO2. These two trend effects (CO2 and temperature) must make real contributions to the calculated values of the PWGGOT, in the sense that those values are determined partly by the quantities of CO2 and temperature and by the method of calculation of the PWGGOT. But the magnitudes of those contributions cannot be regarded as showing a statistically significant linear trend when one judges only from the 61-year time series of PWGGOT values, without that a priori information. It is the overall value of the PWGGOT that shows no significant linear trend, and this trend cannot be predicted from the method of calculation of the PWGGOT alone, because it depends essentially on the trends in the radiosonde time series, which contain the information of the CO2, temperature, and H2O profile time series.
Miskolczi entitled his 2010 paper ‘The stable stationary value of the earth’s global average atmospheric Planck-weighted greenhouse-gas optical thickness’ and his paper does not use the term ‘saturated’. Christopher Game
Miskolczi provides further details, comparing linear trends in the NOAA time series for the first and last 50 years, i.e., 1948-1997 versus 1959-2008. The first shows a small decline, while the latter shows a small rise, in the global optical depth.
Errata: HARTCODE
http://miskolczi.webs.com/hartcode_v01.pdf
The Stable Stationary Value of the Earth’s Global Average Atmospheric Planck-weighted Greenhouse-Gas Optical Thickness
http://multi-science.metapress.com/content/c171tn430x43168v/?p=ad1e44ae55754e548ae474618bfb4102&pi=8
Planck-weighted Optical Depth definition.
http://miskolczi.webs.com/tau.jpg
I will just summarize some areas to fill in on the post.
The radiative transfer models in GCMs are band or “integral-line” type models that are calibrated from line-by-line models that themselves are calibrated on theory and direct measurement of spectral lines from radiatively active gases. In this way, this part of the GCM has a direct grounding in physics, and is very easy to verify with spectroscopic measurements. These models are crucial for quantifying the forcing effects of increased CO2 and H2O, and cloud-forcing effects in the atmosphere. The radiative transfer codes get their clouds from other parts of the GCM physics, and are not responsible for clouds per se, but can affect clouds through the processes of radiative heating and cooling in and on the surfaces of clouds that may impact the cloud lifetimes and development. More sophisticated GCMs also carry dust and other aerosols and the interaction of radiation with those directly or via their cloud effects. Obviously another place that radiative forcing is important, apart from the atmosphere, is the ground where longwave and shortwave fluxes interact with the land or ocean energy budgets. Radiation also helps to define the tropospheric and stratospheric temperature profiles, and how well GCMs reproduce these is an important metric. For climate models, GCMs have to obey a realistic global energy budget, as viewed from space, and this is mainly down to their radiative transfer model and cloud distribution.
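To illustrate the calibration step described above, here is a hedged sketch of one common way band coefficients are derived from line-by-line information: sort the fine-grid absorption coefficients within a band into a cumulative (k-) distribution and keep only a few representative values and weights. The absorption coefficients below are hypothetical Lorentz lines, not HITRAN data, and real schemes such as correlated-k are considerably more elaborate.

```python
import numpy as np

# Hypothetical "line-by-line" absorption coefficients across one band:
# three Lorentz lines with assumed strengths and a common half-width.
nu = np.linspace(0.0, 10.0, 20001)                 # cm^-1 within the band
centers, strengths, gamma = [2.0, 5.0, 7.5], [1.0, 5.0, 0.5], 0.1
k_lbl = sum(S * (gamma / np.pi) / ((nu - c)**2 + gamma**2)
            for c, S in zip(centers, strengths))

def band_transmission_lbl(u):
    """'Exact' band-averaged transmission from the fine-grid coefficients."""
    return np.mean(np.exp(-k_lbl * u))

def make_k_distribution(k, n_bins=8):
    """Reduce the fine-grid coefficients to a few (weight, k) pairs by
    sorting them into a cumulative distribution."""
    k_sorted = np.sort(k)
    edges = np.linspace(0, len(k_sorted), n_bins + 1, dtype=int)
    weights = np.diff(edges) / len(k_sorted)
    k_rep = np.array([k_sorted[a:b].mean() for a, b in zip(edges[:-1], edges[1:])])
    return weights, k_rep

def band_transmission_fast(u, weights, k_rep):
    """Cheap band transmission from the reduced coefficient set."""
    return np.sum(weights * np.exp(-k_rep * u))

w, kr = make_k_distribution(k_lbl)
for u in [0.1, 1.0, 10.0]:          # absorber amounts, arbitrary units
    print(u, band_transmission_lbl(u), band_transmission_fast(u, w, kr))
```

The few (weight, k) pairs play the role of the band coefficients that a GCM stores as functions of temperature and pressure, in place of the full line-by-line sum.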
Thanks!
Here’s a good site for the ocean’s absorption of SW vs. LW for non-scientists. Unfortunately, I don’t see any reference to heating of the ocean from down-welling LWR. From the site:
“Note that only 20% of insolation reaching Earth is absorbed directly by the atmosphere while 49% is absorbed by the ocean and land. What then warms the atmosphere and drives the atmospheric circulation shown in Figure 4.3? The answer is rain and infrared radiation from the ocean absorbed by the moist tropical atmosphere. Here’s what happens. Sunlight warms the tropical oceans which must evaporate water to keep from warming up. The ocean also radiates heat to the atmosphere, but the net radiation term is smaller than the evaporative term. Trade-winds carry the heat in the form of water vapor to the tropical convergence zone where it falls as rain. Rain releases the latent heat evaporated from the sea, and it heats the air in cumulus rain clouds by as much as 500 W/m2 averaged over a year (See Figure 14.1).
At first it may seem strange that rain heats the air. After all, we are familiar with summertime thunderstorms cooling the air at ground level. The cool air from thunderstorms is due to downdrafts. Higher in the cumulus cloud, heat released by rain warms the mid-levels of the atmosphere causing air to rise rapidly in the storm. Thunderstorms are large heat engines converting the energy of latent heat into kinetic energy of winds.
The zonal average of the oceanic heat-budget terms (Figure 5.7) shows that insolation is greatest in the tropics, that evaporation balances insolation, and that sensible heat flux is small. Zonal average is an average along lines of constant latitude. Note that the terms in Figure 5.7 don’t sum to zero. The areal-weighted integral of the curve for total heat flux is not zero. Because the net heat flux into the oceans averaged over several years must be less than a few watts per square meter, the non-zero value must be due to errors in the various terms in the heat budget.”
http://oceanworld.tamu.edu/resources/ocng_textbook/chapter05/chapter05_06.htm
Jim
you said:
“………….The radiative transfer models in GCMs are band or “integral-line” type models that are calibrated from line-by-line models that themselves are calibrated on theory and direct measurement of spectral lines from radiatively active gases……..”
Please could you explain (in layman’s terms) a little bit more about this calibration process. What is it? What is done to ensure that errors are not carried over from one type of model into the next and then magnified iteratively? Or have I misunderstood…in which case sorry in advance:)
Jim D:
You say;
“For climate models, GCMs have to obey a realistic global energy budget, as viewed from space, and this is mainly down to their radiative transfer model and cloud distribution.”
Hmmm.
Yes, that is literally true, but it is misleading.
As I have explained on two other threads of this blog, the “radiative transfer model” in each GCM is significantly affected by the climate sensitivity to CO2 in each model, and agreement with the “global energy budget” is obtained by adopting an appropriate degree of aerosol forcing (i.e. “cloud distribution”) in each model.
The values of climate sensitivity and aerosol forcing differ by ~250% between the models. Hence, the GCMs emulate very different global climate systems.
The Earth has only one global climate system.
Richard
Yes, a part of the radiative transfer model is how it handles aerosols, both in clear air and in clouds. This includes pollution and volcanic emissions. Global observations of aerosol effects haven’t been enough to constrain this very well, and it is a complex effect, especially when clouds are involved. Given this lack of observational constraints the aerosol part of the radiative model and aerosol amounts have some wide error bars (as we see honestly portrayed in the IPCC report). As long as models do something within these error bars, they are plausible, but this is an area where more observations are needed and are being actively carried out (e.g. in the DOE ASR program) to do it better. It is closely tied to the cloud-albedo variation.
Regarding the “confidence” in radiation models, Ferenc Miskolczi addresses radiation errors in his comments on Kiehl-Trenberth 1997/IPCC and in Trenberth-Fasullo-Kiehl (TFK) 2009:
For earlier color graphs, see Zagoni’s 2008 presentation on Miskolczi.
Miskolczi here is just quibbling with how the arrows are partitioned on these global energy budget summary diagrams. It is nothing fundamental about the radiative transfer models themselves.
Jim D
That difference is “only” a 30% error in attribution of upward emission St (to the top of atmosphere) that is not absorbed and re-emitted — identifying a major error in atmospheric radiation absorption/emission! Is that typical of GCM/energy balance accuracy expected? (Most other parameters were fairly close.) I thought Curry noted above:
I seem to recall a climategate email bewailing: “The fact is that we can’t account for the lack of warming at the moment and it is a travesty that we can’t”, with some clarification.
See the paper by: Kevin E Trenberth An imperative for climate change planning: tracking Earth’s global energy, Current Opinion in Environmental Sustainability 2009, 1:19–27
Trenberth notes the observed energy flow is 145 while the “residual” (error) is 30–100 * 10^20 J/year (i.e., 21-29% unaccounted-for “residual”).
Perhaps this 30% radiation error that Miskolczi identified together with the factor of 2 error in precipitable water column could help refine/find Trenberth’s missing energy?
He redefined what the window region was, and says the previous paper was wrong because they defined it differently from him. Seems like an opinion piece.
The other part about not accounting for interannual variability with existing data sources is well known. Is it random error or bias? It would be good to know because random error decreases with the length of the time series. Bias implies we need more or better instrumentation.
Judith
Re: Global Optical Depth & Precipitable Water
Planck-weighted Global Optical Depth
Ferenc Miskolczi has evaluated:
1) The Planck-weighted Global Optical Depth (tau_A = −Ln(T_A) = 1.874)
2) The sensitivity of Optical Depth versus water vapor
3) The sensitivity of Optical Depth versus CO2
4) The sensitivity of Optical Depth versus temperature, and
5) The trend in Optical Depth for 61 years – very low.
Each of these parameters is testable against each of the Global Climate Models, and vice versa. These would provide independent objective tests of the “confidence” in the radiative code and atmospheric properties in each of the GCMs and in Miskolczi’s evaluations and 1D model.
Precipitable Water
In his work, Miskolczi reevaluated the atmospheric profile, obtaining water vapor, CO2, and temperature vs depth (pressure). In doing so he found:
6) Precipitable water u=2.533 prcm
7) This precipitable water is a factor of two higher than in the standard atmospheric profile USST-76 (u=1.76 prcm).
e.g., see Fig. 5 in Miskolczi, “Greenhouse effect in semi-transparent planetary atmospheres”, Quarterly Journal of the Hungarian Meteorological Service, Vol. 111, No. 1, January–March 2007, pp. 1–40.
A. Lacis (below) noted that:
Global climate models have been found to perform poorly, especially in predicting precipitation. See Anagnostopoulos, G. G., Koutsoyiannis, D., Christofides, A., Efstratiadis, A. and Mamassis, N., ‘A comparison of local and aggregated climate model outputs with observed data’, Hydrological Sciences Journal, 55:7, 1094–1110.
GCMs appear to markedly overpredict warming compared with global temperature changes. See:
McKitrick, Ross R., Stephen McIntyre and Chad Herman (2010) “Panel and Multivariate Methods for Tests of Trend Equivalence in Climate Data Sets”. Atmospheric Science Letters, DOI: 10.1002/asl.290.
Global Climate Models apparently do not predict the significant correlation between precipitation / runoff and the 21-year double sunspot (Hale) solar cycle. See:
WJR Alexander et al. Linkages between solar activity, climate predictability and water resource development Journal of the South African Institution of Civil Engineering, Volume 49 Number 2 June 2007
Miskolczi’s reported water vapor results, together with this evidence of poor GCM performance, raise key questions:
9) What precipitable water vapor do the GCMs use, relative to Miskolczi’s atmospheric column (u=2.53 prcm) and the USST-76 standard atmospheric column (u=1.76 prcm)?
10) Which atmospheric profiles were used to tune the GCMs?
11) Could tuning to USST-76 and similar profiles with low water vapor cause GCMs to overpredict climate feedback and poorly predict precipitation?
I look forward to learning more on these issues.
Recommend running another thread just on the uncertainties in atmospheric profiles.
Errata: Miskolczi’s papers at
http://miskolczi.webs.com/
Miskolczi (2010) Table 2
http://miskolczi.webs.com/Table2.jpg
Miskolczi 2007
http://met.hu/idojaras/IDOJARAS_vol111_No1_01.pdf
Alexander et al 2007
http://www.lavoisier.com.au/articles/greenhouse-science/solar-cycles/Alexanderetal2007.pdf
Miklos Zagoni gave a Poster presentation at the European Geosciences Union General Assembly, Vienna, 7 April 2011
This includes updated results and a description of Miskolczi’s Planck weighted global optical depth. See especially methodology on Slide 6, and results on Slides 16 and 17.
David,
Your unmitigated faith in Ferenc nonsense does not speak well of your own understanding of radiative transfer.
I include below an excerpt from my earlier post on Roger Pielke Sr’s blog http://pielkeclimatesci.wordpress.com/2010/11/23/atmospheric-co2-thermostat-continued-dialog-by-andy-lacis/
The basic point being that of all the physical processes in a climate GCM, radiation is the one physical process that can be modeled most rigorously and accurately.
The GISS ModelE is specifically designed to be a ‘physical’ model, so that Roy Spencer’s water vapor and cloud feedback ‘assumptions’ never actually need to be made. There is of course no guarantee that the model physics actually operate without flaw or bias. In particular, given the nature of atmospheric turbulence, a ‘first principles’ formulation for water vapor and cloud processes is not possible. Because of this, there are a number of adjustable coefficients that have to be ‘tuned’ to ensure that the formulation of evaporation, transport, and condensation of water vapor into clouds, and its dependence on wind speed, temperature, relative humidity, etc., will be in close agreement with current climate distributions. However, once these coefficients have been set, they become part of the model physics, and are not subject to further change. As a result, the model clouds and water vapor are free to change in response to local meteorological conditions. Cloud and water vapor feedbacks are the result of model physics and are thus in no way “assumed”, or arbitrarily prescribed. A basic description of ModelE physics and of ModelE performance is given by Schmidt et al. (2006, J. Climate, 19, 153–192).
Of the different physical processes in ModelE, radiation is the closest to being ‘first principles’ based. This is the part of model physics that I am most familiar with, having worked for many years to design and develop the GISS GCM radiation modeling capability. The only significant assumption being made for radiation modeling is that the GCM cloud and absorber distributions are defined in terms of plane parallel geometry. We use the correlated k-distribution approach (Lacis and Oinas, 1991, J. Geophys. Res., 96, 9027–9063) to transform the HITRAN database of atmospheric line information into absorption coefficient tables, and we use the vector doubling adding method as the basis and standard of reference for GCM multiple scattering treatment.
Direct comparison of the upwelling and downwelling LW radiative fluxes, cooling rates, and flux differences between line-by-line calculations and the GISS ModelE radiation model results for the Standard Mid-latitude atmosphere is shown in Figure 1 below. (available on Roger Pielke Sr’s blog)
As you can see, the GCM radiation model can reproduce the line-by-line calculated fluxes to better than 1 W/m2. This level of accuracy is representative for the full range of temperature and water vapor profiles that are encountered in the atmosphere for current climate as well as for excursions to substantially colder and warmer climate conditions. The radiation model also accounts in full for the overlapping absorption by the different atmospheric gases, including absorption by aerosols and clouds. In my early days of climate modeling when computer speed and memory were strong constraints, the objective was to develop simple parameterizations for weather GCM applications (e.g., Lacis and Hansen, 1974, J. Atmos. Sci., 31, 118–133). Soon after, when the science focus shifted to real climate modeling, it became clear that an explicit radiative model responds accurately to any and all changes that might take place in ground surface properties, atmospheric structure, and solar illumination. Thus the logarithmic behavior of radiative forcings for CO2 and for other GHGs is behavior that has been derived from the GCM radiation model’s radiative response (e.g., the radiative forcing formulas in Hansen et al., 1988, J. Geophys. Res., 93, 9341–9364) rather than being some kind of a constraint that is placed on the GCM radiation model.
Climate is primarily a boundary value problem in physics, and the key boundary value is at the top of the atmosphere being defined entirely by the incoming (absorbed) solar radiation and the outgoing LW thermal radiation. The global mean upwelling LW flux at the ground surface is about 390 W/m2 (for 288 K), and the outgoing LW flux at TOA is about 240 W/m2 (or 255 K equivalent). The LW flux difference that exists between the ground and TOA of 150 W/m2 (or 33 K equivalent) is a measure of the terrestrial greenhouse effect strength. We should note that the transformation of the LW flux that is emitted upward by the ground, to the LW flux that eventually leaves the top of the atmosphere, is entirely by radiative transfer means. Atmospheric dynamical processes participate in this LW flux transformation only to the extent of helping define the atmospheric temperature profile, and in establishing the local atmospheric profiles of water vapor and cloud distributions that are used in the radiative calculations.
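As a quick back-of-the-envelope check of the fluxes quoted above (not part of ModelE, just the Stefan-Boltzmann law):

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

surface_flux = SIGMA * 288.0**4   # ~390 W/m2 emitted by a 288 K surface
toa_flux = SIGMA * 255.0**4       # ~240 W/m2 at the 255 K effective emission temperature
print(round(surface_flux), round(toa_flux), round(surface_flux - toa_flux))  # ~390 240 150
```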
Armed with a capable radiative transfer model, it is then straightforward to take apart and reconstruct the entire atmospheric structure, constituent by constituent, or in any particular grouping, to attribute what fraction of the total terrestrial greenhouse effect each atmospheric constituent is responsible for. That is where the 50% water vapor, 25% cloud, and 20% CO2 attribution in the Science paper (for the atmosphere as a whole) came from. “Follow the money!” is the recommended strategy to get to the bottom of murky political innuendos. A similar approach, using “Follow the energy!” as the guideline, is an effective means for fathoming the working behavior of the terrestrial climate system. By using globally averaged radiative fluxes in the analysis, the complexities of advective energy transports get averaged out. The climate energy problem is thereby reduced to a more straightforward global energy balance problem between incoming (absorbed) SW solar energy and outgoing LW thermal energy, which is fully amenable to radiative transfer modeling analysis. The working pieces in the analysis are the absorbed solar energy input, the atmospheric temperature profile, surface temperature, including the atmospheric distribution of water vapor, clouds, aerosols, and the minor greenhouse gases, all of which can be taken apart and re-assembled at will in order to quantitatively characterize and attribute the relative importance of each radiative contributor.
Validation of the GCM climate modeling performance is in terms of how well the model generated temperature, water vapor, and cloud fields resemble observational data of these quantities, including their spatial and seasonal variability. It would appear that ModelE does a generally credible job in reproducing most aspects of the terrestrial climate system. However, direct observational validation of the GCM radiation model performance to a useful precision is not really feasible since the atmospheric temperature profile and absorber distributions cannot all be measured simultaneously with available instrumentation to the required precision that would lead to a meaningful closure experiment. As a result, validation of the GCM radiation model performance must necessarily rely on the established theoretical foundation of radiative transfer, and on comparisons to more precise radiative transfer bench marks such as line-by-line and vector doubling calculations that utilize laboratory measurements for cloud and aerosol refractive indices and absorption line radiative property information.
Thanks Andy. I agree that that validation is a mess owing to specifying the atmospheric characteristics, but clear sky validation has been done quite successfully by the ARM program.
Hi A Lacis,
Given your background as a GISS climate modeler, I am interested in your view of Willis Eschenbach’s analysis of the performance of various models at http://wattsupwiththat.com/2010/12/02/testing-testing-is-this-model-powered-up/#more-28755.
Thanks,
Chip
Andy,
Does the GISS ModelE cover all wavelengths between 200 nm (UV) and 50000 nm (longer IR) ? Or are there gaps? More specifically, does it cover the 2400 -3600 nm wavelengths?
http://www.giss.nasa.gov/tools/modelE/modelE.html#part3_3
and if you look through the code you see contributions from Andy.
A. Lacis
Judith has called for “Raising the level of the game”
Will you rise to the level of professional scientific conduct?
Or need we treat your comments as partisan gutter diatribe?
Your diatribe on “unmitigated faith in Ferenc nonsense does not speak well of your” professional scientific abilities or demeanor. I have a physics degree and work with combustion where most heat transfer is from radiation. (“Noise” can distort a round combustor into a triangular shape!) Though not a climatologist, I think I am sufficiently “literate” to follow the arguments – and challenge unprofessional conduct when I see it.
Per professional science, I see Miskolczi has created a world-class software program to professionally calculate radiative exchange in his Line By Line HARTCODE. He then validates his HARTCODE LBL code and compares it against other LBL models in round-robin intercomparisons in peer-reviewed published papers.
Miskolczi’s HARTCODE was used at NASA to correct/interpolate satellite data: AIRS – CERES Window Radiance Comparison, AIRS-to-CERES Radiance Conversion
He then applies this radiative code using published data to evaluate atmospheric radiative fluxes, analyze them, and form a 1D planetary climate model.
He takes the best/longest available 61 year TIGR radiosonde data and the NOAA reconstruction data. Miskolczi then calculates a “Planck-weighted spectral hemispheric transmittance using M=3490 spectral intervals, K=9 emission angle streams, N=11 molecular species, and L=150 homogeneous atmospheric layers.” That appears to be the first quantitative evaluation of the Global Optical Depth and absorption.
He has posted preliminary results showing that NASA satellite data show supporting trends parallel to his analysis. See Independent satellite proof for Miskolczi’s Aa=Ed equation. Note that the TIGR relation Su = Ed + St is linear in the surface upward flux Su, and the NASA data is parallel to that.
From the scientific method, I understand that professionally you could challenge:
1) The data
2) The code
3) The uncertainty, and bias, or
4) Missing/misunderstood physics.
You say: radiation “is the part of model physics that I am most familiar with, having worked for many years to design and develop the GISS GCM radiation modeling capability.” You could have evaluated Miskolczi’s definition of the “greenhouse effect”, his Planck Weighting, or calculation of the optical depth or atmospheric absorption. As you didn’t, I presume they are correct.
Having developed them, you presumably have the tools to conduct an alternative evaluation to check on the accuracy of Miskolczi’s method and results. Judith cites your paper: “Line by line models tend to agree with each other to within 1%; however, the intercomparison shows a spread of 10-20% in the calculations by less detailed climate model codes. When outliers are removed, the agreement between narrow band models and the line-by-line models is about 2% for fluxes.” Presumably your radiation model has no more than double the error of Miskolczi’s HARTCODE (<2% vs <1%).
Why then do you not try to replicate Miskolczi’s method in a professional scientific manner? Are you afraid you might confirm his results?
You observe: “The reason this “Miskolczi equality” is close to being true is that in an optically dense atmosphere (the atmospheric temperature being continuous) there will be but a small difference in the thermal flux going upward from the top of a layer compared to the flux emitted downward from the bottom of the layer above.” Spencer makes a similar criticism. Miskolczi (2010) quantitatively measures and shows a small difference between upward and downward flux. For the purpose of Miskolczi’s 1D planetary greenhouse model, that difference between upward and downward flux appears to be a second-order effect that does not strongly bear on the primary results of his global optical depth or absorption. He still accounts for the atmospheric variation in temperature, water vapor, and CO2 in the empirical data.
The only serious issue you implied in your posts is how representative the TIGR and NOAA atmospheric profiles are of the global atmosphere: “. . .the atmospheric temperature profile and absorber distributions cannot all be measured simultaneously with available instrumentation to the required precision that would lead to a meaningful closure experiment.”
Regarding Miskolczi’s Planck weighted Global Optical Depth and Absorption, other issues that have been raised are how well he treats clouds, and the accuracy of the TIGR/NOAA data.
You could quantitatively show contrary evidence that
1) there are systematic spurious trends in the data over the last 61 years;
2) there are major errors due to how clouds and convection are treated in this 1D model; or
3) poor data distribution causes major errors in his results.
Instead I see you respond with scientific slander, asserting that Miskolczi imposed the results of his subsequent simplified climate model on his earlier calculations.
In your comments on Ferenc Miskolczi’s greenhouse analysis: you said that “There is no recourse then but to rely on statistical analysis and correlations to extract meaningful information.” You claim that “radiative analyses performed in the context of climate GCM modeling, have the capability of being self-consistent in that the entire atmospheric structure (temperature, water vapor, ozone, etc. distributions) is fully known and defined.”
You go on to state: A Lacis | December 5, 2010 at 12:12 pm
“We also analyze an awful lot of climate related observational data. Data analysis probably takes up most of our research time. Observational data is often incomplete, poorly calibrated, and may contain spurious artifacts of one form or another. This is where statistics is the only way to extract information.” Consequently you claim: “And this climate model does a damn good job in reproducing the behavior of the terrestrial climate system, . . .”
However, when another scientist, Miskolczi, conducts such “statistical analysis and correlations to extract meaningful information” you scientifically slander him, asserting that he did not conduct his analysis as you believe it should. Yet he conducted a much more detailed analysis along similar lines to your proposed method.
You assert: “Instead of calculating these atmospheric fluxes, Miskolczi instead assumes that the downwelling atmospheric flux is simply equal to the flux (from the ground) that is absorbed by the atmosphere.”
You claim that he imposed the results of a consequent simplified model on his detailed calculations. I challenged you that you were asserting the exact opposite of his actual published method. When confronted, you refused to check his work or to correct your statement, or show where I was wrong. I find your polemical diatribes to sound like the Aristotelians criticizing Galileo. Your words border on professional malpractice.
You state: “Because of this, there are a number of adjustable coefficients that have to be ‘tuned’ to ensure that the formulation of evaporation, transport, and condensation of water vapor into clouds, and its dependence on wind speed, temperature, relative humidity, etc., will be in close agreement with current climate distributions. . . . However, once these coefficients have been set, they become part of the model physics, and are not subject to further change.”
Yet when Miskolczi does the same “tuning” of the atmospheric profiles with the available TIGR and NOAA data to obtain empirical composition, temperature, pressure and humidity, you accuse him of imposing his simple model on the calculations.
You have not shown ANY error in the radiative physics he built on, nor in the coding of his HARTCODE software, nor in the atmospheric profiles he generated.
You say: “The working pieces in the analysis are the absorbed solar energy input, the atmospheric temperature profile, surface temperature, including the atmospheric distribution of water vapor, clouds, aerosols, and the minor greenhouse gases”. I understand Miskolczi to include these with the effect of “clouds and aerosols” in the atmospheric profiles.
You state: “validation of the GCM radiation model performance must necessarily rely on the established theoretical foundation of radiative transfer, and on comparisons to more precise radiative transfer bench marks such as line-by-line and vector doubling calculations that utilize laboratory measurements for cloud and aerosol refractive indices and absorption line radiative property information.”
It appears that Miskolczi has done exactly that, with independent data. It appears that when he comes up with results opposite to your paradigm, you conduct a vicious ad hominem attack, scientifically slandering him instead of responding in a scientifically professional, objective way.
You again accuse “Roy Spencer” of making “water vapor and cloud feedback ‘assumptions’”. I understood Spencer to have done the opposite. In his recent papers, he actually measures dynamic phase/feedback magnitudes from satellite data and phase angle.
In your cited Pielke post you state: “there is really nothing that is being assumed about cloud and water vapor feedbacks, other than clouds and water vapor behave according to established physics. Climate feedbacks are simply the end result of model physics.” However, you make assumptions on the stability of Total Solar Insolation, on the variability of clouds with cosmic rays, and on the cause and magnitude of ocean oscillations. The cause and magnitude of natural CO2 changes vary strongly with each of those assumptions. The strong difference between your results and those of Miskolczi and Spencer raises serious questions about your results and your (low) estimates of the uncertainties involved.
At the International Conference on Climate Change
Ferenc Miskolczi presented: Physics of the Planetary Greenhouse Effect
http://www.heartland.org/bin/media/newyork08/PowerPoint/Tuesday/miskolczi.pdf
And
Miklos Zagoni, presented: Paleoclimatic Consequences of Dr. Miskolczi’s Greenhouse Theory
http://www.heartland.org/bin/media/newyork08/PowerPoint/Tuesday/zagoni.pdf
Physicist Dr. Ir. E. van Andel has addressed The new climate theory of Dr. Ferenc Miskolczi.
Physicist Miklos Zagoni has presented on: The Saturated Greenhouse Theory of Ferenc Miskolczi.
There are numerous technical posts on Miskolczi at Niche Modeling.
From preliminary reading Miskolczi’s work appears professional and it had been peer reviewed. I would have preferred further refinement of its language. I do not have the time or tools to professionally review, reproduce or test Miskolczi’s work. I may be wrong in my “lay scientific” perspective. However antiscientific diatribes don’t persuade.
You have the radiation modeling tools. Use them. You refused to read, or follow Miskolczi’s methods and calculations. Your actions speak much louder than your words. Until you provide credible qualitative or quantitative scientific rebuttal, I will stick to believing in the scientific method. I continue to hold that peer reviewed published papers like Miskolczi’s and Spencer’s have more weight than alarmist polemical posts.
Will you step up to the challenge of “Raising the level of your game” with a professional scientific response?
David,
In my remarks about Miskolczi’s paper, I never claimed that his line-by-line HARTCODE results were erroneous. I am not familiar with HARTCODE, so I said, let’s assume that Miskolczi is doing his line-by-line calculations correctly. I am more familiar with the line-by-line results of radiation codes like FASCODE and LBLRTM, which agree with our line-by-line model to better than 1%. Our GCM ModelE radiation model agrees with our line-by-line model also to better than 1%.
The real problem with Miskolczi’s results is not HARTCODE, but what he uses it for. Why on Earth would anybody want to calculate all of atmospheric absorption in the form of a useless “greenhouse gas optical thickness” parameter? You should know that atmospheric gaseous absorption is anything but uniform. You need to preserve as much of the spectral absorption coefficient variability as possible, and to treat this spectral variability properly in order to calculate the radiative transfer correctly. This kind of approach is something that might have been used a hundred years ago when there were no practical means to do numerical calculations.
Can Miskolczi calculate the well established 4 W/m2 radiative forcing for doubled CO2, and its equivalent 1.2 C global warming, with his modeling approach?
And you are misinterpreting Roy Spencer’s remarks about the results of our Science paper where we zeroed out the non-condensing greenhouse gases. Roy thought that the subsequent collapse of the greenhouse effect with water vapor condensing and raining out was because we had “assumed” that water vapor was a feedback effect. I just pointed out that we had made no such assumption. Water vapor in the GISS ModelE is calculated from basic physical relationships. The fact that water vapor condensed and precipitated from the atmosphere was the result of the thermodynamic properties of water vapor, and not assumptions of whether we wanted the water vapor to stay or to precipitate.
A. Lacis
Thanks for your response and queries. It is encouraging to hear that: “Our GCM ModelE radiation model agrees with our line-by-line model also to better than 1%.” That is very respectable compared to the 2004 intercomparison of LBL models using HITRAN 2000. I believe performance and resolution have improved since then. E.g., Miskolczi uses: “wavenumber integration [is] performed numerically by 5th order Gaussian quadrature over a wavenumber mesh structure of variable length. At least Δν_j ≈ 1 cm−1 spectral resolution is required for the accurate Planck weighting.”
AL: “Why on Earth would anybody want to calculate all of atmospheric absorption in the form of a useless “greenhouse gas optical thickness” parameter.”
DH: One foundational reason is to uphold the very integrity of science against authoritarianism. A second foundational reason is to provide an independent check on the validity of predictions of Catastrophic Anthropogenic Global Warming (CAGW) compared to natural causes for climate change. What are the relative magnitudes of these competing factors?
The orthodox model dominated by “anthropogenic CO2 warming + H2O Feedback” implies an increasing optical absorption or optical depth with increasing fossil combustion CO2. Alternative theories seek to quantify natural causes of climate change including the 21 year (“double”) magnetic Hale solar cycle, perturbations of Jovian planets on the sun and in turn the earth, solar modulation of galactic cosmic rays impacting clouds, rotation of the Milky Way galaxy affecting galactic cosmic rays, etc. One or more of these are being tested to explain the strong correlation between variations in the Earth’s Length Of Day (LOD) and climate, etc. Miskolczi (2004) (Fig 18, 19) demonstrates greater accuracy of LBL evaluations of atmospheric emission compared to conventional bulk models. That helps evaluation of global energy budgets. Quantitative LBL based 1D models can also provide an independent test for the “runaway greenhouse” effect.
Both NASA and Russia lost rockets and spacecraft due to programming errors. We the People are now being asked for $65 trillion for “climate mitigation”. Many scientists, engineers and concerned citizens are asking for “a second opinion” and for exhaustive “kicking the tires” tests.
AL:
DH: In light of this validation difficulty you noted, Miskolczi’s comprehensive detailed quantitative calculation of the Planck weighted global optical depth directly from original TIGR radiosonde or summary NOAA data provides an independent test of this primary CO2 radiation climate sensitivity with the best available data extending back 61 years. If the global optical depth does NOT vary as expected, then that suggests other parameters have greater impact than conventionally modeled. To my knowledge, these Planck-weighted global optical transmittance and absorption parameters have never been calculated before. Nor have they been used as an independent test of GCM results. 1) Please let us know of any other publications that have done so.
AL: “You should know that atmospheric gaseous absorption is anything but uniform.”
DH: I agree. Miskolczi (2004), Miskolczi (2008) and Miskolczi (2010) show detailed nonlinear results of the absorption for water vapor vs CO2, for the surface, as a function of altitude, at Top of Atmosphere, as a function of latitude, grouped for atmospheric WINdow, MID Infra Red, Far Infra Red, for atmospheric down emission and up emission.
2) Can you refer us to any other paper(s) that provide equal or higher detail on this non-uniformity? – especially with full LBL quantitative calculations?
AL: You need to preserve as much of the spectral absorption coefficient variability as possible, and to treat this spectral variability properly in order to calculate the radiative transfer correctly.
DH: I agree. Miskolczi retains full “spectral absorption coefficient variability” for each absorbing gas species, across 3459 spectroscopic ranges; as a function of atmospheric column including altitude calculated over 150 layers, as a function of latitude, and as a function of radiant angle.
AL: This kind of approach is something that might have been used a hundred years ago when there were no practical means to do numerical calculations.”
DH: It is precisely because high power computational resources are now available that Miskolczi is now able to conduct these extremely detailed quantitative computations (compared to prior highly simplified bulk calculations; I would never have dreamed of doing this on my slide rule). Miskolczi calculates absorption for individual cells as a function of altitude and latitude, incorporating all the variations in temperature, pressure, and water vapor, as a function of wavelength including direct short wave (visible) absorption, and reflected absorption (when the surface is not black); as well as Long Wave (broken into sub groups of the atmospheric WINdow, Mid and Far IR). Each cell views IR emissions from other cells along 11 directions. Miskolczi calculates Planck weighted absorption to give a true solar weighting. All this is then integrated to obtain a Planck-Weighted Global Optical transmission tau, and then the corresponding Planck-weighted Global Optical Absorption Aa or Optical Depth. See Miskolczi 2010, Fig 10. This is then repeated for each of the 61 years of available TIGR/NOAA data.
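For readers who want to see what a Planck-weighted transmittance and the corresponding optical depth tau_A = -ln(T_A) look like in code, here is a minimal single-column sketch at normal incidence with hypothetical spectral optical depths (the published calculation also integrates over emission angles and uses full radiosonde profiles, so this is illustrative of the definition only):

```python
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # SI constants

def planck(nu_hz, T):
    """Planck spectral radiance B(nu, T), W m^-2 sr^-1 Hz^-1."""
    return 2.0 * H * nu_hz**3 / C**2 / np.expm1(H * nu_hz / (KB * T))

def planck_weighted_optical_depth(nu_cm, tau_nu, T_surf=288.0):
    """Weight the monochromatic transmittance exp(-tau_nu) by the surface
    Planck function (uniform wavenumber grid assumed), then take
    tau_A = -ln(T_A)."""
    nu_hz = nu_cm * 100.0 * C                 # wavenumber (cm^-1) -> frequency (Hz)
    B = planck(nu_hz, T_surf)
    T_A = np.sum(B * np.exp(-tau_nu)) / np.sum(B)
    return T_A, -np.log(T_A)

# Hypothetical spectral optical depths: fairly opaque in the CO2/H2O bands,
# nearly transparent in the 800-1200 cm^-1 window (illustrative numbers only).
nu_cm = np.linspace(100.0, 2500.0, 2401)
tau_nu = np.where((nu_cm > 800.0) & (nu_cm < 1200.0), 0.2, 3.0)
print(planck_weighted_optical_depth(nu_cm, tau_nu))
```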
From your description, I understand GCMs to only calculate simplified absorption and approximate this quantitative level of detail to reduce computational effort.
3) Do you know of any other publications calculating transmission and absorption to this high LBL level of detail? Have any others reported this 61-year mean, giving an atmospheric optical depth tau_A = 1.868754 and atmospheric absorption Aa = 0.84568?
AL: Can Miskolczi calculate the well established 4 W/m2 radiative forcing for doubled CO2, and its equivalent 1.2 C global warming, with his modeling approach?
DH: Yes, Miskolczi (2010) Sect 2 pg 256-247 addresses that and calculates detailed sensitivities for other parameters – only possible by high resolution LBL calculations:
See also his detailed discussion of sensitivities in Sect. 4 p 259-260.
4) Do GCMs accurately calculate the full Beer-Lambert law with increasing saturation? See Jeff Glassman below.
AL: the results of our Science paper where we zeroed out the non-condensing greenhouse gases.
DH: Thanks for the clarification. How have you addressed the enormous buffer capacity of the ocean with numerous related salts? E.g., see Tom V. Segalstad. Segalstad highlights the anorthite-kaolinite buffer, clay mineral buffers, and calcium silicate-calcium buffer which are at least three orders of magnitude above the ocean’s carbonate solution buffers. CO2 variation is thus highly constrained within “small” dynamic variations over geologically “short” periods.
Miskolczi finds the atmospheric long wave (LW) transmission and Planck weighted Global Optical Depth (absorption) are highly stable. This suggests other factors such as solar & cosmic modulation of clouds and ocean oscillations may have much higher primary impact than currently modeled. There may be major systematic trend errors in the available TIGR data that would likely contaminate both GCM and Miskolczi’s models, and combinations thereof.
5) We look forward to seeing how well GCM’s can reproduce and/or disprove Miskolczi’s results. It will be fascinating to discover the causes of these dramatic differences in these 61 year long term sensitivities based on the best available data.
Andy Lacis:
Validation of the GCM climate modeling performance is in terms of how well the model generated temperature, water vapor, and cloud fields resemble observational data of these quantities, including their spatial and seasonal variability. It would appear that ModelE does a generally credible job in reproducing most aspects of the terrestrial climate system.
Hi Andy. Does the GISS ModelE GCM successfully reproduce the fall in tropospheric relative humidity since 1948 empirically measured by radiosonde balloons?
If so, does this help explain the non-appearance of the tropospheric hotspot predicted by earlier, unrealistic models?
I’m very pleased to see your comment on the ‘best of greenhouse’ thread that your models now take account of oceanic oscillations.
How much of the warming from 1975 to 2003 is now being attributed to them?
Thanks.
A. Lacis
“Because of this, there are a number of adjustable coefficients that have to be ‘tuned’ to ensure that the formulation of evaporation, transport, and condensation of water vapor into clouds, and its dependence on wind speed, temperature, relative humidity, etc., will be in close agreement with current climate distributions.”
This is the major problem I have with climate models. I count 6 inter-related parameters that have to be calibrated, plus an unknown number of unknown parameters (cosmic rays, electromagnetic effects such as lightning, turbulence, droplet size, and ???). That would require n factorial (I’m not sure) interaction matrices that have to be produced by measurements (which I am sure no one has done, unlike the radiation transfer models). If at any time during the model solution a parameter ends up outside its measured range the whole calculation falls apart. You can’t project fitted data outside its measured range. I don’t see how such a model can be made reliable and testable. The fact that it can be tuned to mimic a series of measurements doesn’t prove a thing about its ability to produce accurate results in a long term prediction.
The radiative transfer through the atmosphere is only one part of how energy is transferred and at what rate. Just compare the rate of energy transfer from radiation, a few hundred watts per m^2, to that moved by evaporation and convection. Yes, the clear sky problem is likely OK. Unfortunately that is only a small part of the problem.
Very interesting thread.
For line-by-line calculations, absorption is computed on a prescribed spectral grid (at every model pressure and temperature level), with the equations of radiative transfer used to calculate upwelling/downwelling radiative fluxes. Most of the absorption arises from molecular transitions, which from quantum physics are discrete, giving rise to absorption lines. Despite the discrete nature of molecular absorption and emission, this process is not monochromatic, however (confined to a single wavelength). Rather, absorption for a given transition is strongest at the line center and decays away from the center due to various ‘broadening’ mechanisms, which arise from the Heisenberg uncertainty principle, pressure effects (dominant especially in the low atmosphere), or the motions of the molecules (where absorption is Doppler shifted relative to an observer). The convolution of pressure and Doppler effects gives rise to the so-called Voigt line shape.
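The Voigt shape just described is conventionally evaluated via the complex Faddeeva function; a minimal sketch with illustrative (not HITRAN) widths:

```python
import numpy as np
from scipy.special import wofz   # Faddeeva function w(z)

def voigt_profile(x, sigma, gamma):
    """Voigt line shape: convolution of a Gaussian (standard deviation sigma,
    Doppler broadening) and a Lorentzian (half-width gamma, pressure
    broadening), via the real part of the Faddeeva function."""
    z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
    return np.real(wofz(z)) / (sigma * np.sqrt(2.0 * np.pi))

# Illustrative widths in cm^-1: pressure broadening (gamma) dominates low in
# the atmosphere, Doppler broadening (sigma) dominates at high altitude.
x = np.linspace(-1.0, 1.0, 5)      # distance from line center, cm^-1
print(voigt_profile(x, sigma=0.02, gamma=0.08))
```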
There is some absorption, however, that is much more continuous and stronger, more so than is explained by local absorption lines. This is the ‘continuum absorption’, and some mechanisms for this feature, depending on the gas or region of the EM spectrum, are still a matter of uncertainty. It is important in some areas of the H2O and CO2 regimes, however, and becomes important especially in dense atmospheres where collisions between molecules can allow transitions to occur that are otherwise forbidden. In fact, in some atmospheres (such as Titan or the outer planets), even diatomic molecules like H2 can become very good greenhouse gases due to this process.
There are a few different approaches to representing the continuum absorption in practical modelling. Even for LBL calculations, several parameter choices must be made concerning the formulation of the continuum, or the sub-Lorentzian (or super-Lorentzian, depending on the gas) absorption features in the wings of the spectral lines. With these choices, good fits can be made between LBL calculations and observational spectra. One problem is that for people interested in climates on, say, early Earth or Mars, the radiative transfer issue even in clear skies is far from resolved.
Another approximate method involves band models, which are basically fits to transmission spectra generated by LBL calculations. Band averaging groups together many molecular transitions, and there are also other methods such as the correlated k-distribution (which I’m sure Andy Lacis will talk about, given his authorship of the 1991 paper with Oinas). One of the objectives of the well-known Myhre et al. 1998 study, which gives the ‘5.35*ln(C/Co)’ radiative forcing equation for CO2, was to compare LBL results against previous narrow-band and broad-band models, which have different treatments of solar absorption or the vertical structure of the gas.
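The correlated-k idea mentioned here can be illustrated with a toy calculation: build a fake band from a handful of Lorentz lines, compute the band-averaged transmission frequency point by frequency point, then reproduce it from the sorted (cumulative) distribution of absorption coefficients with only a few quadrature points. This is only a sketch of the principle, not the Lacis and Oinas (1991) scheme; all line parameters are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake band: absorption coefficient k(nu) from 20 random Lorentz lines.
nu = np.linspace(0.0, 10.0, 20001)                    # arbitrary spectral units
k = np.zeros_like(nu)
for centre, strength in zip(rng.uniform(0, 10, 20), rng.lognormal(0, 1, 20)):
    k += strength * (0.05 / np.pi) / ((nu - centre) ** 2 + 0.05 ** 2)

u = 1.0                                               # absorber amount (arbitrary)

# "Line-by-line": average the monochromatic transmission over frequency.
T_lbl = np.mean(np.exp(-k * u))

# Correlated-k: sort k so transmission becomes a smooth integral over g in [0,1],
# which a modest number of quadrature points captures.
k_sorted = np.sort(k)
g = (np.arange(k.size) + 0.5) / k.size
g_quad = np.linspace(0.025, 0.975, 20)                # crude 20-point quadrature
T_ck = np.mean(np.exp(-np.interp(g_quad, g, k_sorted) * u))

print(f"band transmission: LBL = {T_lbl:.4f}, correlated-k = {T_ck:.4f}")
# the two agree closely, though not exactly, since only 20 g-points are used
```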
For global warming applications, the strength of an individual greenhouse gas depends upon its spatial distribution, the temperature structure, its absorption characteristics, and its concentration. For very low concentrations of greenhouse gases, the absorption is approximately linear with mixing ratio in the air, weakening to a square-root and then eventually to a logarithmic dependence. This is why methane is often quoted as being a ‘stronger greenhouse gas than CO2.’ It’s not an intrinsic property of the gas, but rather a consequence of methane existing at much lower concentrations, so it can provide a greater warming potential, molecule for molecule, in the current atmosphere.
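The linear-to-square-root transition can already be seen for a single pressure-broadened line, as in the toy "curve of growth" below (line parameters are illustrative; the further transition to a logarithmic dependence needs the full band structure, with line strengths decaying away from the band centre, which this toy omits).

```python
import numpy as np

nu = np.linspace(-200.0, 200.0, 400001)               # distance from line centre, cm^-1
gamma = 0.07                                          # Lorentz half-width, cm^-1 (illustrative)
k = (1.0 / np.pi) * gamma / (nu ** 2 + gamma ** 2)    # unit-strength Lorentz line
d_nu = nu[1] - nu[0]

for u in (0.001, 0.01, 100.0, 10000.0):               # absorber amounts (arbitrary units)
    W = np.sum(1.0 - np.exp(-k * u)) * d_nu           # band-integrated absorption (equivalent width)
    print(f"u = {u:>8}: equivalent width = {W:9.4f}")
# In the optically thin limit a factor of 10 in u gives a factor of 10 in W;
# once the line centre saturates, a factor of 100 in u gives only roughly a
# factor of 10 (square-root growth in the Lorentzian wings).
```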
Without considering feedbacks, the temperature response to a given forcing (such as a doubling of CO2) can be almost purely determined by the radiative forcing for CO2, since the no-feedback sensitivity is merely given by the derivative of the Planck function with respect to temperature. The ‘forcing’ depends obviously on the way forcing is defined, with some authors using varying definitions, and these differences must be kept in mind when comparing literature sources. The IPCC AR4 defines ‘radiative forcing’ as the change in net irradiance *at the tropopause* while allowing the stratosphere to re-acquire radiative equilibrium (which occurs on timescales of months). Once this is done, it is found that the radiative forcing for a doubling of CO2 is ~3.7 W/m2. A comparison of 9 of the 20 GCMs used in the IPCC AR4 shows differences in the net all-sky radiative forcing between 3.39 and 4.06 W/m2. See e.g.,
http://journals.ametsoc.org/doi/pdf/10.1175/JCLI3974.1
In contrast, Table 1 of http://www.gfdl.noaa.gov/bibliography/related_files/bjs0601.pdf shows that the Planck feedback response has very little spread amongst models (which agrees well with simple back-of-the-envelope calculations), indicating a rather robust response for a no-feedback atmosphere. It follows from all of this that most of the uncertainty associated with a doubling of CO2 comes from the various feedback effects, especially low cloud distribution, and these feedbacks involve not just radiative transfer but also dynamics and microphysics.
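The "back of the envelope" Planck response mentioned above takes only a few lines to check. The sketch below assumes the usual effective emitting temperature of ~255 K and the ~3.7 W/m2 forcing quoted earlier; it is the simplest possible estimate, not a model diagnostic.

```python
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
T_EFF = 255.0             # assumed effective emitting temperature of Earth, K
FORCING_2XCO2 = 3.7       # W m^-2, from line-by-line calculations (quoted above)

planck_response = 4.0 * SIGMA * T_EFF ** 3      # d(sigma*T^4)/dT, ~3.8 W m^-2 K^-1
lambda_0 = 1.0 / planck_response                # no-feedback sensitivity, K per W m^-2

print(f"Planck response   ~ {planck_response:.2f} W/m2 per K")
print(f"lambda_0          ~ {lambda_0:.2f} K per W/m2")
print(f"no-feedback dT2x  ~ {FORCING_2XCO2 * lambda_0:.2f} K")   # ~1 K
```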
Thanks Chris.
Thanks Judith,
I think it needs to be dumbed down even further at least for a start.
RTE, I would hope, would be the one aspect of AGW science that all sides can come to agree upon. One thing I’ve tried (and failed, of course) to convey to my friends on the skeptical side is that RTE is really not up for serious debate. The angle I take is that this physics is actually used to design things that work. That might be an angle some skeptics would appreciate: how RTE get used in engineering. We used to not be able to talk about it (or get shot), but it would make a nice entry point for your readers to invite somebody who uses RTE in their current engineering job.
So, I would start with observations (measurements and experiments), then practical applications, then theory. That fits the particular bent of thinking I see in skeptical circles.
Maybe Willis can have a go at this :)
How RTE get used in engineering.
Happens every day in semiconductor fabs. Automated critical dimension measurement and control equipment is always housed in temperature-controlled microenvironments.
It’s also easy to demonstrate. Take any industrial microscope at high magnification and look at something (e.g. a feature of say 40 – 100 nanometers) and place your hand within 4 or so inches from the stage assembly. Your image feature will warp out of focus solely from the heat of your hand. Remove your hand, and focus eventually returns. The heat from your hand affects the microscope stage via RTE.
Designing automation with this type of equipment requires tight control of the thermal environment for this reason (and probably spells out why it’s automated in the first place.)
Yeah, I know that by “engineering” you probably meant something a little fancier or more esoteric as opposed to everyday stuff… but this is, if nothing else, a simple yet practical demo of RTE knowledge being used in everyday engineering.
I hope I’ve contributed here as opposed to being intrusive.
thanks, this is the kind of example that people can relate to
That’s exactly the kind of thing I am referring to.
Communications engineers, engineers in defense (I worked in stealth, broadband stealth), sensor specialists: we all know that RTE are accurate, they work, we rely on them day in and day out. We are not taking crazy pills, and neither have we been subverted by some global watermelon conspiracy. For me it’s a threshold question in having discussions with skeptics. If they won’t learn enough to accept this, then we really can’t have a meaningful dialogue. So I just ask them: can you at least accept physics that has worked for years and protected your freedom of speech?
When a skeptic uses a chip (his computer chips) to deny a science that makes chips possible, then he’s got a tangible example of the problem with his position.
So, Judith, how many industrial examples can we come up with?
Since what I described is the human version of a basic apartment radiator, I doubt you really need engineering examples. This is why I’d wondered if I was submitting TOO simple an example. Surely nobody really can question RTE physics ?!?
I’m thinking you can use the temperature measurement record to prove the exact same concept *and* prove that GW is real. In fact you and I discussed this (very very briefly) some time back elsewhere. All you have to do is look at the tMin from a handful of desert (no humidity) stations. If tMin rises over time and rh doesn’t, it ain’t water vapour doing it, it’s GHG. There aren’t any other factors. IIRC you said you had the data (I don’t.)
If I’m wrong about this I’d appreciate knowing why.
If not, I’d like you to consider this because everyone here seems to be in agreement that it’s the simple irrefutable stuff, like you’re wanting here, that gets the message across. We don’t need juvenile pictures, really, that’s too dumbed down.
I should actually go back and revisit that desert study with a full dataset and my improved R skills!
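For what it is worth, the trend check being proposed here is only a few lines of code once station data are in hand. The sketch below is in Python rather than R, and the file name and column names are hypothetical placeholders, not a real dataset.

```python
import numpy as np
import pandas as pd

# Hypothetical input: one arid station's daily minima, columns "year" and "tmin".
df = pd.read_csv("arid_station_tmin.csv")

# Annual-mean minimum temperature, then a simple linear trend over the record.
yearly = df.groupby("year")["tmin"].mean()
slope, intercept = np.polyfit(yearly.index.values, yearly.values, 1)

print(f"Tmin trend: {slope * 100:.2f} C per century")
```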
Thanks.
On the RTE thing I would just start with the basics of radiative transfer. I got my introduction on the job working with radar cross sections and then moved on to IR work. Just getting people to understand transmission windows and absorption and scattering and reflection would be a good start.
We also did cool things to enable short range covert communications because of windows, or the lack of windows, to be more precise.
Yes Steve, you do need to understand reflection.
I have absolutely no problem with understanding how this works planet wide.
It is the actual physical changes that the planet is displaying blamed on Global Warming I have a problem with.
The evidence is far closer to pressure build-up than planetary warming. But again that is not CO2 analysis totally tied to temperature and not physical changes.
Ocean water does have defences it has built to prevent overheating and mass evaporation.
>Surely nobody really can question RTE physics ?!?<
Well, no actually, but if it pushes the website closer to:
>It follows from all of this that most of the uncertainty associated with a doubling of CO2 comes from the various feedback effects, especially low cloud distribution, and these feedbacks involve not just radiative transfer but also dynamics and microphysics.< [Colose, December 5th above]
then it's worth its space here. I vaguely remember Judith promising a thread on feedbacks sometime
I'm still interested in Dessler's use of the equation:
DTf = 1.2C/(1-f)
where f = sum of feedbacks (both signs)
ian, what are you interested in? I could probably help you
Thanks Chris
I put up a post on Dessler’s use of that equation some threads ago, but no takers then so I’m most happy for you to answer here
So, DTf (sensitivity) = 1.2C/(1-f)
1) 1.2 C is Dessler’s preferred temperature rise from CO2 doubling alone – I’ve found the range reported as 0.8 C to 1.2 C – how correct is this range?
2) Dessler included four (4) feedback elements to sum to “f” –
f(wv) water vapour = +0.6
f(lr) lapse rate = -0.3
f(ia) ice albedo = +0.1
f(cl) clouds = +0.15
(are there more than 4 ?)
so “f” (summed) = +0.55
so DTf = 1.2/(1 - 0.55) ≈ 2.7
but if 0.8 C is used, DTf = 0.8/(1 - 0.55) ≈ 1.8, and roughly a 50% further increase in sensitivity would be needed to reach 2.7
and, what bothers me most:
why can’t “f” (summed) equal 1.0, in which case DTf is infinite (nonsensical)? Is there something wrong with the equation as laid out by Dessler?
Last point: Dessler claimed (October 2010, MIT) that the various f values above are observed data. By nature and very long experience, I am more inclined to observed data than theory, no matter how elegant, so my obvious questions are on the reliability, range and methods for these observed data
If you could kindly work through these questions, I am very interested
Ian,
1) The temperature value of “1.2 C” is determined by the product of two things. First, it is dependent on the radiative forcing (RF) for CO2 (or the net change in irradiance at the TOA, or tropopause, depending on definition). It’s also dependent on the magnitude of the no-feedback sensitivity parameter (call it λo to be consistent with several literature sources). Therefore, dTo = RF*λo. So we can re-write the Dessler equation (although this goes back a while; Hansen, 1984 is one of the earliest papers I know of to talk about things in this context, and Lindzen has some other early ones) as dT = (RF*λo)/(1-f). (I will use the “o” when we are referring to no feedbacks.) It turns out virtually all uncertainty in dTo is due to the radiative forcing for CO2 (~+/- 10%; the best method to calculate this is from line-by-line radiative transfer codes, which give ~3.7 W/m2). The uncertainty in the no-feedback sensitivity is considerably less. The 1/(1-f) factor is really at the heart of the sensitivity problem. (A short numerical check of this bookkeeping follows after these numbered points.)
2) I’m not aware of any other radiative feedbacks than these
3) The f → 1 limit is a good question, and a large point of confusion, even for people who work on feedbacks. You are right that it is nonsensical for dT to go to infinity (or negative infinity if the forcing is causing a cooling), but there are a couple of things to keep in mind. First, the equation you cite is strictly an approximation. It neglects higher-order derivative terms (think Taylor series), and thus one can add second-order and third-order powers to the equation, and solving for the new dT with these new terms will leave, say, a quadratic expression that needn’t blow up to infinity.
Physically, f=1 does not necessitate a runaway snowball or runaway greenhouse, nor is ‘f’ a constant that remains put across all climate regimes. Rather, it corresponds to a local bifurcation point, and it is fully possible to equilibrate to a new position past the bifurcation (although you need further information than the linear Dessler equation to determine where). Think of a situation where you are transitioning into a snowball Earth, and the ice line reaches a critical (low) latitude that makes the ice-albedo feedback self-sustaining. In this case one needn’t create any new ‘forcing’ to complete the snowball transition. Rather, the previous state (with partial ice cover) was unstable with respect to the perturbation, and the snowball Earth is a stable solution for the current solar and greenhouse parameters. Thus, just a little nudge in one of these forcings can tip the system past a bifurcation point, to end up in a new stable (say, ice-covered) regime. But once the planet is ice-covered, or ice-free, you can’t get any more ice-albedo feedback, so the temperature won’t change forever.
4) On the observational side, the water vapor and lapse rate feedbacks are well diagnosed to be positive and negative, respectively. They are usually combined by experts in this area to be a WV+LR feedback, since they interplay with each other. Brian Soden or Isaac Held (along with Dessler, and others if you follow references around…the AR4 is a good start) have many publications on this. I’m not personally familiar with how well quantified the ice-albedo feedback at the surface is observationally (maybe an ice expert here can chime in). The sign is robust, but in any case as Dessler noted, it’s a pretty small feedback. It is important for the Arctic locally, but not very big on a global scale. There’s a lot of papers talking about the ice-albedo feedback though and its role especially in sea ice loss.
Clouds are the big uncertainty, especially low clouds. The longwave component of cloud feedbacks is pretty much always positive in models, and pretty good theories have been put out to explain this (Dennis Hartmann especially), so the shortwave (albedo) component is the big player here. I’m not in a great position to talk about every new update to cloud observations, but I don’t really know how much value they have right now for diagnosing climate sensitivity. Even more so, we don’t know the 20th century radiative forcing with high confidence, mostly because of aerosols.
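As a quick check on the arithmetic in point 1, here is the bookkeeping with the feedback values from the question plugged in. This is only a sketch of the algebra; the caveats in point 3 about the f → 1 limit still apply.

```python
dT0 = 1.2   # C, the quoted no-feedback 2xCO2 warming (= RF * lambda_0)

# Feedback values as listed in the question above.
feedbacks = {"water vapour": 0.6, "lapse rate": -0.3, "ice albedo": 0.1, "clouds": 0.15}

f = sum(feedbacks.values())
dT = dT0 / (1.0 - f)

print(f"f = {f:.2f}  ->  dT = {dT:.1f} C")   # f = 0.55  ->  dT = 2.7 C
```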
Thank you for your reply – it’s sufficient for me to plough through for now
As for f >= 1, I had suspected
>First, the equation you cite is strictly an approximation<
in any case. I rather thought the "=" in the equation should have been "proportional to" and we were missing some other factor(s)
Chris Colose
Thanks for the comments. Re: “Physically, f=1 does not necessitate a runaway snowball or runaway greenhouse, nor is ‘f’ a constant that remains put across all climate regimes. Rather, it corresponds to a local bifurcation point, and it is fully possible to equilibrate to a new position past the bifurcation (although you need further information than the linear Dessler equation to determine where).”
Essenhigh explored the potential for a bifurcation point with increasing CO2. He concluded:
Energy Fuels, 2006, 20 (3), 1057-1067 • DOI: 10.1021/ef050276y
Can you refer to any studies that do find a bifurcation point from increasing CO2?
By nature and very long experience, I am more inclined to observed data than theory, no matter how elegant,
Funnily enough I’m completely that way myself when it comes to climate science, unlike computer science where I’m the total theorist.
I think it must be because in a computer we can account for every last electron, whereas the climate is subject to many effects completely beyond our ability to see, such as magma sloshing around in the mantle, inscrutable ocean isotherms, rate of onset of permafrost melting radically changing the feedbacks, etc.
Digital climate models have something to offer, but so do analog models. The mother of all analog models of climate is mother nature herself.
We can learn a lot from closer studies of that ultimate analog model!
(Apologies to all the pessimists here for my being so upbeat about that.)
Spacing went goofy on that post. I’d written:
>>Surely nobody really can question RTE physics ?!?<
Well, no actually, but if it pushes the website closer to:
[Colose quote]
then it's worth its space here … etc
Hi RandomE
Is the following the type of study you’re looking for?
“The arid environment of New Mexico is examined in an attempt to correlate increases in atmospheric CO2 with an increase in greenhouse effect. Changes in the greenhouse effect are estimated by using the ratio of the recorded annual high temperatures to the recorded annual low temperatures as a measure of heat retained (i.e. thermal inertia, TI). It is shown that the metric TI increases if a rise in mean temperature is due to heat retention (greenhouse) and decreases if due to heat gain (solar flux). Essentially no correlation was found between the assumed CO2 atmospheric concentrations and the observed greenhouse changes, whereas there was a strong correlation between TI and precipitation. Further it is shown that periods of increase in the mean temperature correspond to heat gain, not heat retention. It is concluded that either the assumed CO2 concentrations are incorrect or that they have no measurable greenhouse effect in these data.”
The above is from a paper by Slade Barker (24 July 2002):
“A New Metric to Detect CO2 Greenhouse Effect
Applied To Some New Mexico Weather Data”
Here is the link http://www.john-daly.com/barker/index.htm
hope this helps
P.S. You say… “I’m thinking you can use the temperature measurement record to prove the exact same concept *and* prove that GW is real.”
Have I misunderstood that sentence or is that what’s called confirmation bias?
regards
Is the following the type of study you’re looking for?
No, I don’t think so, but an interesting find regardless. Barker appears to be doing something else entirely.
However… thank you, anyway.
In what I spoke to Mosher about, only the min temps are needed, and these need to be looked at from a variety of ultra-low-humidity stations around the world. Essentially the idea is to deal with only the coldest temps in super arid conditions, which would hopefully minimize the effect of water vapor. Rising min temps (which are expected, and from what I gather, what’s observed) would be attributable to GHG — I think, anyway. Maybe one of the experts here can say yea or nay before Mosh starts mucking about with data.
Have I misunderstood that sentence or is that whats called confirmation bias?
You understood it and no, it’s not confirmation bias. AGW is already proven. The goal is to help prove it to deniers.
You’re well aware climate stuff is a political battle. Bad guys (and no, they’re not climate scientists) do exist, largely in the form of political interests, and the goal is to keep them from silly things like ruining the economy. Whether you presume ruination to be in the form of (left) socialist creep or (right) corporate sellout is none of my business, but regardless of which direction ruination approaches from, they’re going to be wielding knowledge as their weapon of choice. Read your Sun Tzu: you can choose to embrace the weapon yourself, or you can get clubbed with it. Denying the existence of the weapon guarantees the latter outcome.
Randomengineer:
You say:
“AGW is already proven.”
Really? By whom, where and how?
Or do you mean effects of UHI and land use changes?
I and many others (including the IPCC) would be very grateful for your proof of AGW.
Richard
Or do you mean effects of UHI and land use changes?
All of the above. Soot, pollution, land use, and yes, CO2 all play a role. Call it climate change if you prefer. 6 billion souls with fire and technology and agriculture would be hard pressed to NOT change the environment.
There’s plenty of room to be skeptical of magnitude and/or whether or not there’s a *problem* while still accepting the reality of the basic physics. For all we know the low feedback guys are correct and the effect of CO2 is minimal. That’s a great deal different, however, than claiming physics doesn’t work or that CO2 has no effect at all.
It’s like Mosher says; let’s get past the silly unwinnable physics arguments and move on to what’s important.
Randomengineer:
Please accept my sincere thanks for your good and clear answer to my question. Your response invites useful discussion, and it is a stark contrast to the typical response from AGW-supporters to such questions.
As you say;
“There’s plenty of room to be skeptical of magnitude and/or whether or not there’s a *problem* while still accepting the reality of the basic physics. For all we know the low feedback guys are correct and the effect of CO2 is minimal. That’s a great deal different, however, than claiming physics doesn’t work or that CO2 has no effect at all.”
Absolutely right!
If only more people would adopt this rational position then most of the unproductive ‘climate war’ could be avoided.
The important issue to be resolved is whether or not AGW is likely to be a serious problem. You, Judith Curry and some others think that is likely, while I and a different group of others think it is very unlikely. Time will reveal which of us is right and to what degree because GHG emissions will continue to rise if only as a result of inevitable industrialisation in China, India, Brazil, etc..
Without getting side-tracked into the importance of the land-use issues that interest Pielke, the real matters for discussion in the context of AGW are
(a) to what degree anthropogenic GHG emissions contribute to atmospheric CO2 concentration
and
(b) how the total climate system responds to increased radiative forcing from e.g. a doubling of atmospheric CO2 concentration equivalent.
So, in my opinion, the discussion in this thread needs to assess the possible effects on the climate system of changes to atmospheric GHG concentrations. And, as you say, this leads directly to the issue of climate sensitivity magnitude.
I think most people would agree that doubling atmospheric CO2 concentration must have a direct warming effect that – of itself – implies a maximum of about 1.2 deg.C increase to mean global temperature (MGT) for a doubling of atmospheric CO2 concentration. The matter to be resolved is how the climate responds to that warming effect; i.e. is the net resultant feedback positive or negative?
If the net feedback is negative (as implied by Lindzen&Choi, Idso, Douglas, etc.) then those on my ‘side’ (please forgive the shorthand) of the debate are right because a maximum climate sensitivity of 1.2 deg.C for a doubling of CO2 would not provide a problem. But if your ‘side’ of the debate is right that the net feedback is positive then there could be a significant future problem.
So, the behaviour of the climate system (in terms of changes to lapse rates, clouds and the hydrological cycle, etc.) need to be debated with a view to discerning how they can be understood. And that is the debate I think we should be having.
Again, thank you for your answer that I genuinely appreciate.
Richard
I’m glad we can talk, Richard. It beats name calling and such.
So, in my opinion, the discussion in this thread needs to assess the possible effects on the climate system of changes to atmospheric GHG concentrations.
You’re getting ahead of things just a bit. Let’s continue this on the next thread or so when our host starts discussing that part of things; I think this is planned already.
For now, let’s concentrate on the topic du jour: we pretty much agree that humans change their environment in myriad ways. We know CO2 is a GHG and we know why. We also know that we put a lot of CO2 in the air, which according to basic equations ought to result in some warming. Whether this is a fraction of a degree or more isn’t relevant at this stage. What’s relevant is just knowing that the understanding is basically correct.
Now, don’t tell your friends, but you’re in the “AGW is real” camp, same as me. That doesn’t mean we’re going to start campaigning for polar bear salvation or writing letters to the editor about imminent catastrophe. Dealing with reality and succumbing to fanaticism are different things.
Is this a fair summary?
If so, welcome to the dark side. We have cookies.
Randomengineer:
I, too, am glad that we can talk. Indeed, I fail to understand why so many of your compatriots think name calling is appropriate interaction, especially when I have always found dialogue is useful to mutual understanding (although it usually fails to achieve agreement).
You ask me if you have correctly summarised my position, and my answer is yes and no.
My position is – and always has been – that it is self-evident that humans affect climate (e.g. it is warmer in cities than the surrounding countryside) but it seems extremely unlikely that GHG-induced AGW could have a discernible effect (e.g. the Earth has had a bi-stable climate for 2.5 billion years despite radiative forcing from the Sun having increased ~30% but the AGW conjecture is that 0.4% increase of radiative forcing from a doubling of CO2 would overcome this stability).
So, I agree with you when you say,
“we pretty much agree that humans change their environment in myriad ways. We know CO2 is a GHG and we know why. We also know that we put a lot of CO2 in the air, which according to basic equations ought to result in some warming.”
But we part company when you say,
“Whether this is a fraction of a degree or more isn’t relevant at this stage. What’s relevant is just knowing that the understanding is basically correct.”
I think not. As I see it, at this stage what we need to know – but do not – is why the Earth’s climate seems to have been constrained to two narrow temperature bands despite immense tectonic changes and over the geological ages since the Earth’s atmosphere became oxygen-rich. This has to have been a result of the global climate system’s response to altered radiative forcing. So, as I said, we need to debate the response of the climate system to altered radiative forcing in terms of changes to lapse rates, clouds, the hydrological cycle, etc..
Please note that I am on record as having repeatedly stated this view for decades.
Hence, I am not and never have been on “the dark side”. But if adequate evidence were presented to me then I would alter my view. As the saying goes, if the evidence changes then I change my view.
To date I have seen no evidence that warrants a change to my view. All I have seen are climate models that are so wrong I have published a refereed refutation of them, assertions of dead polar bears etc., ‘projections’ of future catastrophe that are as credible as astrology (I have published a refereed critique of the SRES), and personal lies and insults posted about me over the web because I do not buy into the catastrophism. And the fact of those attacks of me convinces me that everything said by the catastrophists should be distrusted.
So, give me evidence that climate sensitivity is governed by feedbacks that are positive and not so strongly negative that they have maintained the observed bi-stability over geological ages despite ~30% increase in solar radiative forcing. At present I can see no reason of any kind to dispute the null hypothesis; viz. nothing has yet been observed which indicates recent climate changes have any cause(s) other than the cause(s) of similar previous climate changes in the Holocene.
Richard
Richard, I think I missed something.
I’d said : “We also know that we put a lot of CO2 in the air, which according to basic equations ought to result in some warming. Whether this is a fraction of a degree or more isn’t relevant at this stage. “
This isn’t a conclusion of rampant runaway warming, but a statement that merely adding GHG with all things being equal ought to raise the temp.
Your position then switches to discussion of paleoclimate, and as I read things I think what you’re saying ultimately is that the GHGs are naturally suppressed such that the increase of GHG doesn’t cause warming due to damping.
I don’t see a problem with asserting a natural state in real life working against a temp rise, but I find it somewhat unconvincing that the natural state that damps a temp rise is damping a temp rise that doesn’t happen in the first place.
It seems to me that the logical position is that yes there SHOULD be a temp rise, BUT the temp rise isn’t happening due to [unspecified factors.]
So I’m a bit confused regarding the position. Let me ask this then: should there be a temp rise with adding CO2 that doesn’t happen for [some reasons] or is adding CO2 something that never results in a temp rise?
Randomengineer:
I apologise for my lack of clarity. It was not deliberate.
I attempted – and I clearly failed – to state what I think to be where we agree and where we disagree. It seems that I gave an impression that I was trying to change the subject, and if I gave that erroneous impression then my language was a serious mistake.
So, I will try to get us back to where we were.
Please understand that I completely and unreservedly agree with you when you assert:
“I’d said : “We also know that we put a lot of CO2 in the air, which according to basic equations ought to result in some warming. Whether this is a fraction of a degree or more isn’t relevant at this stage. “
This isn’t a conclusion of rampant runaway warming, but a statement that merely adding GHG with all things being equal ought to raise the temp.”
Yes, I agree with that. Indeed, I thought I had made this agreement clear, but it is now obvious that I was mistaken in that thought.
And I was not trying to change the subject when I then explained why I think this matter we agree is of no consequence. The point of our departure is your statement – that I stress I agree – saying,
“merely adding GHG with all things being equal ought to raise the temp.”
But, importantly, I do not think “all things being equal” is a valid consideration. When the temperature rises nothing remains “equal” because the temperature rise induces everything else to change. And it is the net result of all those changes that matters.
As I said, a ~30% increase to radiative forcing from the Sun (over the 2.5 billion years since the Earth has had an oxygen-rich atmosphere) has had no discernible effect. The Earth has had liquid water on its surface throughout that time, but if radiative forcing had a direct effect on temperature the oceans would have boiled to steam long ago.
So, it is an empirical fact that “merely adding GHG with all things being equal ought to raise the temp.” is meaningless because we know nothing will be equal: the climate system is seen to have adjusted to maintain global temperature within two narrow bands of temperature while radiative forcing increased by ~30%.
Doubling atmospheric CO2 concentration will increase radiative forcing by ~0.4%. Knowing that 30% increase has had no discernible effect, I fail to understand why ~0.4% increase will have a discernible effect.
I hope the above clarifies my view. But in an attempt to show I am genuinely trying to have a dialogue of the hearing, I will provide a specific answer to your concluding question, which was:
“So I’m a bit confused regarding the position. Let me ask this then: should there be a temp rise with adding CO2 that doesn’t happen for [some reasons] or is adding CO2 something that never results in a temp rise?”
Anything that increases radiative forcing (including additional atmospheric CO2 concentration) will induce a global temperature rise. But the empirical evidence indicates that the climate system responds to negate that rise. However, we have no method to determine the response time. Observation of temperature changes following a volcanic cooling event suggests that the response time is likely to be less than two years. If the response time is that short then we will never obtain a discernible (n.b. discernible) temperature rise from elevated atmospheric CO2. But if the response time is much longer than that then we would see a temperature rise before the climate system reacts to negate the temperature rise. And this is why I keep saying we need to determine the alterations to clouds, the hydrological cycle, lapse rates, etc. in response to increased radiative forcing. We need to know how the system responds and at what rate.
Alternatively, of course, I may be completely wrong. The truth of that will become clear in future if atmospheric CO2 concentration continues to rise. (All scientists should remember Cromwell’s plea, “I beg ye in the bowells of Christ to consider that ye may be wrong”.)
In conclusion, I repeat my gratitude for this conversation. Disagreement without being disagreeable is both pleasurable and productive.
Richard
All clear now, thanks. Summary: your position seems to be that RTE and the GHE work as advertised (in a lab condition anyway), but where it concerns the real world, there are factors changing the general rule:
But, importantly, I do not think “all things being equal” is a valid consideration.
…which I think is fair enough.
I can further condense this.
Napoleon was fond of the aphorism (stolen from a Roman general) — “no battle plan ever survives contact with the enemy.” There’s nothing wrong with noting that history is replete with examples of being utterly wrong and basing one’s starting position on this.
I’m happy to see that where we part ways is where things get murky rather than where things are clearly visible via lab experiment and the like. I’ll see you on the next GHG forcing thread. Bring your A game. :-)
Cheers.
This is in reply to your comment below.
Napoleon? Unnamed Roman General? Build railroads son! Try von Moltke.
randomengineer, I’m one of those sticklers for protocol and, expressing the sentiment of Australian poet C. J. Dennis’s Sentimental Bloke, worry when people are “not intrajuiced” properly. I therefore wanted to introduce Mr Courtney to you, but I regret to say that googling your pseudonym turned up nothing useful.
So I’m sorry but I can only help with the other direction . If you google “Richard S. Courtney” you can best meet the good gentleman by skipping over the first few pages and going straight to page 5.
I hasten to point out the risk of confusing him with another Richard S. Courtney, of the Kutztown University Department of Geography in Pennsylvania. The former appears as the 20th name on the list of more than 100 scientists rebuking Obama as ‘simply incorrect’ on global warming. Unlike the latter Courtney, the former is listed among those 100 distinguished scientists as
“Richard S. Courtney, Ph.D, Reviewer, Intergovernmental Panel On Climate Change”
So you should realize you are dealing here with someone who knows what he’s talking about. To my knowledge no one has invited the Pennsylvania professor to serve as a reviewer for the IPCC.
One way to keep these two gentlemen straight is that the former has a Diploma in Philosophy (presumably what the Ph.D. stands for) from some (thus far unnamed) institution in the UK city of Cambridge. In the normal scheme of things a Diploma in Philosophy would constitute progress towards a degree in divinity, while the Pennsylvania professor is a Doctor of Philosophy from Ohio State University, whose 1993 dissertation is mysteriously titled, “A Spatio-temporo-hierarchical Model of Urban System Population Dynamics: A Case Study of the Upper Midwest.” In his dissertation he employs Casetti’s Expansion Method to redefine a rank-size model into a 27-parameter equation capable of identifying spatial, temporal, and hierarchical dimensions of population redistribution, which he uses to study the urban system of the Upper Midwest.
All sounds like complete gibberish to me so I suggest you stick with the divine Mr Courtney, distinguished reviewer for the IPCC with a Diploma in Philosophy, and not let his namesake distract you.
As I’ve never met either in person I’m afraid I have no other way of distinguishing them so you’re on your own there.
randomengineer, you say:
You should turn in your engineer’s badge for this whopper. You are telling us that there is only one thing on the entire planet that affects minimum desert temperatures — GHGs. I don’t think even you believe that.
You say “If I’m wrong about this I’d appreciate knowing why.” You are wrong because there isn’t a single place on earth where the climate is only affected by one single factor. Every part of the planet is affected by a variety of feedbacks and forcings and teleconnections. The desert may warm, for example, from an El Niño … and that fact alone is enough to destroy your theory that the desert temperatures are only affected by one isolated factor, GHGs.
Rising over time = 120 years. An el nino isn’t going to change this. It certainly isn’t going to change this from a variety of stations from around the planet. Desert = low humidity. Antarctica counts.
Yes there are always things that will affect temps, but factor out LOCAL humidity effects (over time in a desert) and you will see a better picture of the local temps unaffected by humidity.
Land stations otherwise suffer from land use change; are they really reflecting a global temp or are they a proxy for land use? Rural stations will show temp increase once irrigation is used.
Engineers do their best to control for the one variable they are interested in seeing. In my case, looking at temp without local land use interference. Factor out local humidity.
If temp rises over time sure maybe there’s more water vapour (potent GHG) globally but this can be factored in/out via plenty of other studies. If temp rises and GLOBAL water vapour is factored out then what’s left is mostly effects from remaining GHG.
I’m a “believer” and a skeptic. A lukewarmer. I get physics. I’m skeptical that the alarmists are correct. My guess is that the desert/low-humidity study will show ~0.2 degrees over the 120 years attributable to CO2. Others may expect higher. I don’t.
Rising over time = 120 years. An el nino isn’t going to change this.
I was going to point out the same thing but you beat me to it.
It’s even true of the Atlantic Multidecadal Oscillation, which is 10-20 times slower than El Nino events and episodes (with the variability being entirely in the latter) and therefore sneaks up on you, unlike ENNO and ENSO, but in a major way. So Willis could ding you on that one too except that I found a way to iron out the AMO as well.
What you do is use exactly a 65.5-year moving average, and that minimizes the r² when fitting the composite Arrhenius-Hofmann Law, AHL, to the HADCRUT record.
One might expect the r² to keep shrinking with yet wider smoothing, but instead it bottoms out at 65.5 years and starts climbing again.
(Caveat: that level of smoothing impacts the CO2 rise itself a small amount, easily corrected for by applying it to both the HADCRUT data and the AHL when fitting the latter to the former to get the climate sensitivity. It doesn’t make the AHL any smoother, just distorts it in the same way it distorts the record.)
Stephen:
Atmospheric RTE’s based on MODTRAN deal with relatively low levels of CO2 (if I’m not mistaken, in the order of 100 to 200 bar.cm for CO2). Combustion engineering deals with levels that can get much higher. The graph at google books here:
http://tinyurl.com/2cgg6p6
Page 618,
does not fully reconcile with the MODTRAN reconstruction. Leckner’s curves for emissivity peak at a level of CO2 and the MODTRAN work seems to increase forever.
So I’ll throw it back to you. Where is Leckner’s mistake?
Cheers
JE
MODTRAN doesn’t increase forever. The upper limit is the emissivity of a black body at the same temperature when absorptivity = emissivity = 1. You can see that at the center of the CO2 band in atmospheric spectra which has an effective brightness temperature of about 220 K. I’ve calculated total emissivity for CO2 using SpectralCalc ( http://www.spectralcalc.com ) and the results agree quite well with both Leckner and MODTRAN. You just have to make sure you’re using the same mass path. The CO2 mass path for the atmosphere is ~300 bar.cm. The main difference between atmospheric emission using MODTRAN and using the Leckner model is that the atmosphere isn’t isothermal.
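The "effective brightness temperature" mentioned here is just the Planck function run in reverse. A small sketch follows, with SI constants; the example radiance is generated by round-tripping the Planck function at 220 K rather than taken from a measured spectrum.

```python
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # Planck, speed of light, Boltzmann (SI)

def planck_wavenumber(nu_cm, T):
    """Planck spectral radiance, W m^-2 sr^-1 per cm^-1, at wavenumber nu_cm (cm^-1)."""
    nu = nu_cm * 100.0                               # cm^-1 -> m^-1
    return 100.0 * 2.0 * H * C**2 * nu**3 / (np.exp(H * C * nu / (KB * T)) - 1.0)

def brightness_temperature(nu_cm, radiance):
    """Temperature of the blackbody whose radiance matches the given value."""
    nu = nu_cm * 100.0
    b = radiance / 100.0                             # back to per m^-1
    return H * C * nu / (KB * np.log(2.0 * H * C**2 * nu**3 / b + 1.0))

# Round trip at the 667 cm^-1 CO2 band centre: a 220 K blackbody radiance
# should invert back to a 220 K brightness temperature.
r = planck_wavenumber(667.0, 220.0)
print(f"radiance ~ {r:.4f} W m^-2 sr^-1 (cm^-1)^-1, "
      f"T_b ~ {brightness_temperature(667.0, r):.1f} K")
```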
Sounds good to me, Steven, even though I’m not wholly sceptical in this area. I just wanna understand the basics so that I can start keying in better to the kinds of things being said earlier in the thread. If Willis felt tempted to have a bash, so much the better!
This is a fascinating thread, including its historical elements. Among the various informative links, one that caught my eye was the first of the Science of Doom links provided by Dr. Curry, which references the following article by Dessler et al and reproduces a figure from that article showing the tight correlation between modeled and observed OLR values as a function of atmospheric temperature and water vapor –
Dessler et al
Because the radiative properties of water vapor are critical to an understanding of both greenhouse effects per se and positive feedbacks estimated from warming-mediated increases in water vapor overall and in critical regions of the upper troposphere, the concordance between predicted and observed relationships linking water vapor to OLR struck me as worthy of special notice.
Four points:
1: For an excellent layman’s introduction to radiative heat transfer in the atmosphere, I recommend the scienceofdoom blog.
2: A question for Steve Mosher or Willis E et al. If one increases the partial pressure of CO2 in the atmosphere (300 ppm, 400ppm, etc), will the absorbance of EMR continue to increase up to the point where there is so much CO2 that dry ice forms?
3: What is the difference between HITRAN and MODTRAN? (may not be relevant).
4: In blast furnace calcs, etc., we use Leckner’s curves. These provide a graph of delta q versus [CO2] that is fairly similar to the IPCC version (5.35 ln(C/Co)), but these do not continue to increase in the same way beyond 200 ppm CO2. Why does climate atmospheric absorbance differ from engineering atmospheric absorbance? I realize there is a temperature and pressure gradient, but the explanations I’ve seen do not fully explain this disconnect. See “the path length approximation” post on my blog. Please feel free to tear it apart.
I’ll take #3.
HITRAN is the database.
http://www.cfa.harvard.edu/hitran/
“HITRAN is a database, not a “simulation” code. It provides a suite of parameters that are used as input to various modeling codes, either very high-resolution line-by-line codes, or moderate spectral resolution band-model codes. These codes are based on the Lambert-Beers law of attenuation and may include many more features of modeling, for example line shape, scattering, continuum absorption, atmospheric constituent profiles, etc. ”
MODTRAN (and LOWTRAN) is a simulation code. It’s quick and dirty. You might use it to estimate, for example, the IR signature a plane would present to a sensor on the ground. Like this:
http://en.wikipedia.org/wiki/Northrop_YF-23 which we optimized for stealthiness using RTE.
It really is a sexy beast; my favorite in the experimental hangar at the museum.
Going back to my sixth-form physics over forty years ago (just scraped a bare pass at A-level! :-)), I remember the two types of spectra: absorption and emission.
WRT absorption spectra, IIRC, then when you shine full-frequency light through a gas, and at the other side analyse the emerging light using a prism, you find that, depending on what the gas was, there are black lines in the spectrum. What has happened is that certain photons of a particular frequency/wavelength have been absorbed by the gas, kicking electrons of its constituent atoms into higher-energy states. So those photons don’t get through the gas, accounting for the black lines. Where you get the lines is characteristic for a given gas. Hope I’m right so far.
WRT emission spectra, you get these when you heat elements up. The extra energy input causes emission of photons of specific frequencies/wavelengths, and so what you get in the spectrum is mostly black, with bright lines due to these extra photons. The pattern you get is characteristic for a given element. I think I recall this is the way that one can determine the elements in distant stars. I also remember Doppler effects when an emitting source is in motion, which shifts the location of the lines and enables estimates of velocity to be made.
When people talk about CO2 “trapping” energy, I have this idea of it being selectively absorptive of specific frequencies of electromagnetic radiation, and that that can be observed spectroscopically. I’m assuming absorption spectra apply here. But it’s a bit confused because an excited electron won’t stay excited forever and may drop back to a lower quantum energy, re-emitting the photon it received (at the same frequency/wavelength?).
And that radiation may in its turn excite some other CO2 molecule, and so on. We can’t talk of a single photon fighting its way through countless CO2 molecules before it hits the ground or alternatively finds its way into space. To me, it seems like what makes it through is the energy that that single photon represents; there might have been millions of re-emissions of (same-energy?) photons in the interim.
I get the impression that this relates to the term “optical density”, and intuitively, I’d guess that the greater the optical density, the more CO2 molecules per unit volume, and the more delay there is in the system. Moreover, it seems to be the delay which accounts for the rise in temperature of the atmosphere. So the chain of logic seems to be: more CO2/unit volume => greater optical density => greater delay in photons escaping the atmosphere => increase in temperature of the atmosphere.
I know there are other things involved, too. Such as conduction (apparently not a big factor?), convection, reflection and refraction, and so on.
Looking at the Wikipedia article, it is talking about spectral lines which I’m kind of guessing are for absorption spectra, and it’s also talking about “monochromatic” and “line-by-line”. So I’m getting this idea of cycling through different frequencies (each one being a “chrome” or colour, I suppose, though I realise we may not be talking about visible light, e.g. infra-red or ultra-violet) and in some way picking up the lines for all the different constituents of the atmosphere.
I’m laying all this bare even if I might have it hopelessly wrong just so some kind soul can perceive how I’m thinking and intervene where it’s necessary, at the right sort of level for me, which I think will be about the same level for anyone with (somewhat sub-par) A-level knowledge of physics (for Americans, A-levels are qualifications you need to get into university; so substitute whatever qualifications you guys need).
I’m trying to focus on a level that isn’t too high, but then again, not too low. All this talk of panes of glass or blankets is too low, and hopefully I will have indicated what is too high, although I definitely want to go higher if possible :-)
Long question, so I’ll just answer a bite-sized part. Absorption and emission as applied to the atmosphere.
When you look up in clear sky in the IR you see emission lines of CO2. This is because the CO2 is warmer than the background, which is cold space (simplified a bit), so the CO2 emission is larger (brighter) than the background.
If you look down from a satellite in the IR you see absorption lines of CO2. This is because the background blackbody radiation from the ground is warmer than the atmosphere, but in the CO2 bands you see only the last emission, which is from higher in the atmosphere and colder, so it appears as a dark line in the spectrum.
Thank you, Jim D.
I found that a valuable comment; I hadn’t realised that we would be talking about absorption and emission spectra according to whether we were looking from the ground or from space.
If I might ask a question relating to that, would I be correct to assume that the absorption/emission lines would be in the same location?
As you were, Jim D. I infer from a reply I got from A Lacis that the two spectra, if overlapped, would eliminate any dark lines, so that they are presumably in the same location.
As you were, Jim D. I infer from a reply I got from A Lacis that the two spectra, if overlapped, would eliminate any dark lines, so that they are presumably in the same location.
Did Andy take Stokes shift into account?
Yes, they are the same wavelengths which are an intrinsic property of the molecules.
I have yet to see any consistency between these discussions of emission by CO2 and Kasha’s rule that the only emitting transitions are those from the lowest excited state of any given multiplicity (transitions preserve multiplicity). Corollaries of the rule are the independence of emission from absorption, and Stokes shift quantifying this. This has been known for 60 years, why does it never come up in these radiation discussions?
Using directly observed absorption spectra to indirectly infer emission spectra seems a bad idea. Emission spectra should either be observed directly or calculated, whichever is more convenient provided it yields reasonable answers.
Kasha’s rule has a large effect when the excitation is done by high-energy radiation (high compared to the temperature). I do not think it is important when the gas is in thermal equilibrium.
Ok, but is there any correlation between absorbed and emitted photons in CO2?
And if not, can one assume that the emission probabilities are in proportion to the line strengths?
I’d like to calculate the mean free path of a photon emitted from a CO2 molecule when the level is 390 ppmv, at STP. For photons emitted from the strongest line I’m getting a mean free path of around 4 cm based on the strengths and line widths at STP.
For weaker lines the mfp will be longer, but the probability of such photons will be smaller, so it’s not obvious whether the sum over all 100,000 lines or so converges quickly or slowly.
Kasha’s rule is particularly relevant to this since if it were applicable the series would converge faster.
Has anyone done this calculation before?
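A rough version of the requested estimate, for a single line only, can be set up as below. The line strength and half-width are stand-in values of roughly the right magnitude (of the order of the strongest 4.3 µm band lines), not HITRAN numbers; anyone doing this seriously should take the actual line parameters, and the answer scales inversely with the line strength used.

```python
import math

N_AIR_STP = 2.5e19      # molecules of air per cm^3 at ~STP
X_CO2 = 390e-6          # CO2 mole fraction (390 ppmv)
S_LINE = 3.5e-18        # line strength, cm^-1/(molecule cm^-2)  -- stand-in value
GAMMA_L = 0.07          # Lorentz half-width at ~1 atm, cm^-1    -- stand-in value

n_co2 = N_AIR_STP * X_CO2                   # CO2 number density, cm^-3
k_centre = S_LINE / (math.pi * GAMMA_L)     # line-centre absorption cross-section, cm^2
mfp_cm = 1.0 / (n_co2 * k_centre)           # mean free path at line centre, cm

print(f"n_CO2 ~ {n_co2:.2e} cm^-3, k_centre ~ {k_centre:.2e} cm^2, mfp ~ {mfp_cm:.1f} cm")
# a few cm at line centre, the same order as the ~4 cm quoted above;
# for weaker lines the mean free path is correspondingly longer
```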
I do not think it is important when the gas is in thermal equilibrium.
So are you claiming that the absorption lines work equally well as emission lines below 300 K?
Kasha himself in his 1950 paper says nothing that would imply this. Do you have any source for your claim?
Michael
Here is a nice graph from the top of the tropopause looking down.
It looks very like an absorption graph to me.
The large bite out of the blackbody envelope is caused by thermalisation of the 15 µm radiation.
http://wattsupwiththat.files.wordpress.com/2010/12/ir_window_anesthetics.png
Radiative heat transfer is very important in modeling combustion.
E.g. in modeling fires and developing firefighting techniques; for boilers in electric power plants; and in modeling internal combustion in engines and gas turbines. Modeling “gray” atmospheres with combustion is particularly challenging and computationally intensive. E.g. see: CFD-Based Compartment Fire Modeling
In gas turbines, design errors on temperature and noise can result in a combustor being “liberated”, with a few million dollars of damage per turbine blade set from downstream damage in the turbine.
errata
http://forefire.univ-corse.fr/ecoleComb2010/01062010-1830-Codes%20incendies.pdf
John Eggert said on December 5, 2010 at 11:44 pm:
“Atmospheric RTE’s based on MODTRAN deal with relatively low levels of CO2 (if I’m not mistaken, in the order of 100 to 200 bar.cm for CO2). Combustion engineering deals with levels that can get much higher. The graph at google books here:
http://tinyurl.com/2cgg6p6 – Page 618,
does not fully reconcile with the MODTRAN reconstruction. Leckner’s curves for emissivity peak at a level of CO2 and the MODTRAN work seems to increase forever.
So I’ll throw it back to you. Where is Leckner’s mistake?
Well, the vertical axis of his graph is annotated wrongly. But on a serious note it’s important to understand the basics.
What is emissivity?
Emissivity is a value between 0 and 1 which describes how well a body (or a gas) emits compared with a “blackbody”. Emissivity is a material property. If emissivity = 1, it is a “blackbody”.
The Planck law shows how spectral intensity (which is a continuously varying function of wavelength) of a blackbody changes with temperature.
When you know the emissivity it allows you to calculate the actual value of spectral intensity for the body under consideration. Or the actual value of total flux.
Emissivity is sometimes shown as a wavelength-dependent graph. In the Leckner curves the value is averaged across the relevant wavelengths for various temperatures. (This makes it easier to do the flux calculation).
Now some examples:
-A surface at 300K with an emissivity of 0.1 will radiate 46W/m^2.
-A surface at 1000K with an emissivity of 0.1 will radiate 5,670 W/m^2.
Same emissivity and yet the flux is much higher for increased temperatures.
Leckner wasn’t wrong. The question is confused.
How come the government income tax rate reaches a maximum and yet the more I earn, the more the government takes from me in tax?
I believe the question in your mind is about “saturation”. Maybe try CO2 – An Insignificant Trace Gas? – Part Eight – Saturation.
Thanks for this, scienceofdoom. I am hoping to graduate from this thread so that I can launch into your site!
You say:
Now some examples:
-A surface at 300K with an emissivity of 0.1 will radiate 46W/m^2.
-A surface at 1000K with an emissivity of 0.1 will radiate 5,670 W/m^2.
Okay. So does this relate to the Stefan-Boltzmann equation: j = εσT^4, where ε is the emissivity, σ the proportionality constant, and T is in kelvin?
Anything less than perfect emissivity (where ε = 1, so that we have a blackbody): would this be the “grey body” that I hear about so often? Is the effectiveness of a grey body quantified according to the value of ε?
Elementary questions, I know, but that is what this thread is about for me and so I hope you will indulge me.
You are correct; these are calculated using the Stefan-Boltzmann equation. Plug the numbers in and you will get the answers I did.
You are mostly correct about “grey body” – although generally it is used for the special case where the body (or gas) is not radiating as a blackbody, yet the emissivity is constant across wavelength.
This doesn’t really happen in practice but can be useful to calculate the results of simple examples.
For a graph of how emissivity/absorptivity varies with wavelength see the comments in The Dull Case of Emissivity and Average Temperatures.
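"Plug the numbers in" is literally a two-line exercise; for completeness, a sketch of the two examples quoted above:

```python
SIGMA = 5.67e-8                     # Stefan-Boltzmann constant, W m^-2 K^-4

for T in (300.0, 1000.0):
    flux = 0.1 * SIGMA * T ** 4     # j = emissivity * sigma * T^4, emissivity = 0.1
    print(f"T = {T:6.0f} K, emissivity 0.1 -> {flux:7.0f} W/m^2")
# T =    300 K, emissivity 0.1 ->      46 W/m^2
# T =   1000 K, emissivity 0.1 ->    5670 W/m^2
```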
Thanks once again, SoD. A valuable point about emissivity not necessarily being constant across wavelength. I wouldn’t have thought about that had you not mentioned it.
SOD:
I’ve read your section a number of times over the last number of months. It is a good reference when I’m talking to people about these things. The fact remains. If you are measuring how much hotter a gas will get in a blast furnace off gas, there comes a point when increasing CO2 no longer increases the heat of the gas. What you are saying is this doesn’t happen. But it does. And there is no confusion in the question. Either the calculation of atmospheric absorbance in combustion engineering is fundamentally the same or it is fundamentally different from the calculation of atmospheric absorbance in climate. One curve has an asymptote and the other doesn’t. Otherwise, there is very little difference between the two.
I’ve read your section a number of times over the last number of months. It is a good reference when I’m talking to people about these things. The fact remains. If you are measuring how much hotter a gas will get in a blast furnace off gas, there comes a point when increasing CO2 no longer increases the heat of the gas. What you are saying is this doesn’t happen. But it does. And there is no confusion in the question. Either the calculation of atmospheric absorbance in combustion engineering is fundamentally the same or it is fundamentally different from the calculation of atmospheric absorbance in climate.
I’ll take a stab at this one – I don’t think there is any fundamental difference here. If I understand that figure correctly (the earlier linked image from google books), I think those “effective emissivity” curves are integrated over all wavelengths (Eq. 8.76 in that book). This will weight the spectral emissivity of the gas by the blackbody (Planck) curve. So, what happens is when you get very high temperatures, the curve is peaking at shorter wavelengths and the CO2 absorption bands at 15 and 4 microns become less important. I don’t think there are any CO2 absorption bands at wavelengths shorter than 4 microns so it would just keep dropping for larger temperatures.
I guess in climate applications those effects aren’t usually considered, since even for “huge” temperature increases (say 10K), there is no significant shift in the peak of the blackbody curve.
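To make the Planck-weighting idea concrete, here is an illustrative sketch (my own construction; the band positions, widths and the in-band emissivity of 0.8 are invented placeholders, not Leckner’s or Hottel’s data) of why a Planck-weighted “effective” emissivity of CO2-like bands falls off at combustion temperatures:

```python
import numpy as np

# Toy Planck-weighted "effective emissivity": the spectral emissivity is
# averaged with the blackbody curve as the weight. The two CO2-like bands
# (near 15 um and 4.3 um) and the in-band emissivity of 0.8 are invented
# placeholders for illustration only.
H, C, KB = 6.626e-34, 2.998e8, 1.381e-23

def planck(wl, T):
    """Blackbody spectral radiance at wavelength wl (m) and temperature T (K)."""
    return (2.0 * H * C**2 / wl**5) / (np.exp(H * C / (wl * KB * T)) - 1.0)

def effective_emissivity(T):
    wl = np.linspace(1e-6, 50e-6, 20000)                  # 1-50 micron grid
    in_band = (np.abs(wl - 15e-6) < 1.0e-6) | (np.abs(wl - 4.3e-6) < 0.2e-6)
    eps = np.where(in_band, 0.8, 0.0)                     # toy spectral emissivity
    w = planck(wl, T)                                     # Planck weighting
    return np.sum(eps * w) / np.sum(w)                    # uniform grid, so sums suffice

# The Planck peak moves to shorter wavelengths as T rises, away from the
# CO2-like bands, so the effective (band-weighted) emissivity drops:
print(effective_emissivity(300.0), effective_emissivity(1500.0))
```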
Michael,
You pretty well have the basics of absorption line formation and line emission. The detailed mechanics of how a molecule emits or absorbs a photon of a particular frequency can be a bit complicated. Any single (say CO2) molecule will be in some specific vibration-rotation energy state. If a photon comes by with just the right frequency to raise it to an allowed higher vibration-rotation energy state, there is some specified probability that that photon will be absorbed, raising that molecule to its new vibration-rotation energy state. The molecule will sit in that state for an exceedingly brief time before a photon (of the same wavelength) is spontaneously emitted, and the molecule returns to its original energy level. But before it got a chance to radiate, that molecule might have undergone a collision with another molecule that might knock it into a different vibration-rotation energy state.
Fortunately, all these complications do not directly factor into calculating radiative transfer. A single cubic inch of air contains close to a billion billion CO2 molecules, so a statistical approach can be taken to doing practical radiative transfer modeling.
I like to use the example of an isothermal cavity (with a small pinhole to view the emerging radiation) to illustrate some basic principles of radiation. As you might expect, the radiation emerging from the pinhole will be continuous Planck radiation at temperature T (emitted by the back wall of the cavity). If we now place CO2 gas (also at temperature T) into the cavity, Kirchhoff’s radiative law states that the radiation emerging from the pinhole will still be continuous Planck radiation at temperature T. This is because in a strictly isothermal cavity, everything is in thermodynamic equilibrium, meaning that every emission-absorption transition and every collisional interaction must be in equilibrium (otherwise the temperature of some part of the cavity will change).
If this parcel of CO2 gas is pulled from the cavity, it will continue to emit radiation representative of temperature T, which, if viewed against a very cold background, will appear as emission lines. If the background is heated to temperature T, the emission lines will still be there, but there will be superimposed absorption lines at the same spectral positions and of the same strength, yielding a featureless continuous Planck spectrum of temperature T just as in the isothermal cavity. If the background is now heated to a hotter temperature, absorption will win out over emission, and the resulting spectrum will be a pure absorption spectrum.
The line spectrum that CO2 exhibits will depend on the local pressure and temperature of the gas. Pressure generally only broadens the spectral lines, without shifting their spectral position. Temperature, on the other hand, changes the equilibrium (collisionally maintained) distribution of vibration-rotation energy states, which can make some spectral lines stronger, others weaker. Thus a flame spectrum of CO2 will be quite different from the ‘cold atmosphere’ spectrum that is relevant to current climate applications.
The basic atmospheric spectral line compilation is the HITRAN data base that contains line spectral position, line strength, line width, and line energy level information for more than 2.7 million spectral lines for 39 molecular species. This is the information that goes into a line-by-line model such as LBLRTM or FASCODE, together with the pressure, temperature, and absorber amount profile information that describes the atmospheric structure for which the line-by-line spectrum is to be computed. The line-by-line codes require significant computer resources to operate, but they are the ones that give the most precise radiative transfer results.
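As a rough schematic of what happens at each frequency in such a calculation (my own illustration, not LBLRTM or FASCODE code; the three “lines” in the list are invented placeholders rather than real HITRAN entries):

```python
import numpy as np

# Schematic of the core line-by-line step: sum pressure-broadened Lorentz
# lines from a line list to get a monochromatic cross-section, then apply
# Beer-Lambert for a single homogeneous layer.
line_list = [
    # (center wavenumber cm^-1, strength cm/molecule, halfwidth cm^-1 at 1 atm)
    (667.0, 3.0e-19, 0.07),
    (668.3, 1.0e-19, 0.07),
    (669.5, 5.0e-20, 0.07),
]

def cross_section(nu, p_atm):
    """Total cross-section (cm^2/molecule) on wavenumber grid nu at pressure p_atm."""
    sigma = np.zeros_like(nu)
    for nu0, strength, gamma_ref in line_list:
        gamma = gamma_ref * p_atm  # Lorentz halfwidth scales ~linearly with pressure
        sigma += strength * (gamma / np.pi) / ((nu - nu0) ** 2 + gamma ** 2)
    return sigma

nu = np.arange(660.0, 675.0, 0.01)      # cm^-1 grid
column = 8.0e21                         # molecules/cm^2, a rough CO2-like column (illustrative)
tau = cross_section(nu, 1.0) * column   # monochromatic optical depth of the layer
transmission = np.exp(-tau)             # Beer-Lambert transmission, frequency by frequency
print(transmission.min(), transmission.max())
```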
MODTRAN is a commercially available radiative transfer program that computes atmospheric radiation at moderate spectral resolution (one wavenumber) and with somewhat lesser accuracy. To assure maximum precision and accuracy, we use line-by-line modeling to test the performance of the radiation model used in climate modeling applications.
A Lacis,
Thank you for your extensive post. It is so valuable to know I have the basics approximately right – that means I can build further on that.
I understood your first two paras very well.
Para 3:
I had to look up “isothermal” and Kirchhoff’s law so I could catch your drift. It occurred to me that maybe others at my level are lurking and learning too, so:
(from Wikipedia):
“An isothermal process is a change of a system, in which the temperature remains constant: ΔT = 0. This typically occurs when a system is in contact with an outside thermal reservoir (heat bath), and the change occurs slowly enough to allow the system to continually adjust to the temperature of the reservoir through heat exchange. In contrast, an adiabatic process is where a system exchanges no heat with its surroundings (Q = 0). In other words, in an isothermal process, the value ΔT = 0 but Q ≠ 0, while in an adiabatic process, ΔT ≠ 0 but Q = 0.”
Right. So it looks like we are talking about thermal equilibrium in your example of an isothermal cavity.
(Kirchhoff’s law – from Wikipedia):
At thermal equilibrium, the emissivity of a body (or surface) equals its absorptivity.
Right. So as much energy is coming in as is going out; Delta T = 0. The inner surface of your isothermal cavity seems to be acting as a black body (“Planck radiation”).
Para 4:
I’m assuming that “pulling CO2” from the cavity isn’t meant literally. It’s a thought experiment, right?
You seemed to have answered my earlier question to Jim D about whether absorption and emission spectra for the same gas would be complementary WRT the position of their lines. At least, that’s what I thought, but…
Para 5:
Hmm. Pressure relates to density of CO2, i.e. locally, the number of molecules per unit volume. The broadening of the lines where the pressure is greater – is that an intensity rather than frequency change?
I can see that a flame (presumably emission?) spectrum would be quite different from a cold atmosphere spectrum, but what I am not sure about is whether you’re saying some lines might disappear and new ones appear according to circumstances. In view of what SoD told me above, I understand that emissivity isn’t necessarily constant across wavelength. Overall, I’m a little confused about this point (probably my fault more than anyone else’s).
Paras 6 and 7: Thanks. Removes some of the mystery from what the heck HITRAN and MODTRAN are all about!
“The broadening of the lines where the pressure is greater – is that an intensity rather than frequency change?”
Normally, according to quantum theory, for a molecule to absorb a photon, the photon’s energy must exactly match the energy involved in the transition from one energy level to another within the molecule – e.g., the excitation of a vibration mode in CO2. However, if an encounter of a CO2 molecule with a neighboring molecule (N2, O2, etc.) adds or subtracts a small amount of energy, that amount can make up for a difference between the energy of the incoming photon and the energy needed for a quantum transition. This permits the total incoming energy to match what is needed.
The higher the density of molecules (i.e., the higher the atmospheric pressure), the greater will be the likelihood of an encounter that creates the needed energy adjustment. This means that at high pressure, photons slightly different in energy from the “exact match” energy will be more likely to be absorbed, so that an absorption line at a particular energy level will broaden to encompass these additional photons whose energy doesn’t quite match the line center.
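Fred’s description can be put in numbers with the usual scaling of the collision-broadened (Lorentz) halfwidth with pressure; a small sketch of mine, where the reference halfwidth and temperature exponent are typical-looking placeholder values rather than data for any particular line:

```python
# The collision-broadened (Lorentz) halfwidth of a line grows roughly in
# proportion to pressure, with a weak temperature dependence. The reference
# halfwidth (0.07 cm^-1 at 1 atm) and exponent n = 0.7 are placeholders.
def lorentz_halfwidth(p_atm, t_k, gamma_ref=0.07, t_ref=296.0, n=0.7):
    """Halfwidth (cm^-1) scaled from a reference value at 1 atm and t_ref kelvin."""
    return gamma_ref * p_atm * (t_ref / t_k) ** n

print(lorentz_halfwidth(1.0, 288.0))   # near the surface: ~0.07 cm^-1
print(lorentz_halfwidth(0.1, 220.0))   # around 16 km up: roughly a tenth as wide
```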
Thanks for this, Fred – you put it beautifully and I can understand it very well. One more small piece of the jigsaw! :-)
Andy,
May I ask: According to your data and calculations, how much is the annual global mean atmospheric longwave absorbed radiation?
Thank you in advance
Judith, you write “And finally, for calculations of the direct radiative forcing associated with doubling CO2, atmospheric radiative transfer models are more than capable of addressing this issue (this will be the topic of the next greenhouse post).”
When I first read Myhre et al, it states that 3 radiative transfer models were used. I have read the definition of radiative forcing in the TAR, and I was surprised that Myhre did not seem to discuss WHY radiative transfer models were suitable to estimate change in radiative forcing. It has never been obvious to me that radiative transfer models ARE suitable to estimate change in radiative forcing. Can anyone direct me to a published discussion as to why radiative transfer models are suitable to estimate change in radiative forcing?
Jim, for clear sky radiative forcing, this thread just provided tons of documentation that the best radiation codes do a good job of simulating the spectral distribution and broad band radiative fluxes. In terms of forcing, the models have been validated from the tropics to the arctic, with over an order of magnitude difference in total water vapor content. While the models have not been validated observationally for a doubling of CO2, we infer from the above two tests that they should perform fine. The Collins et al. paper referenced here directly addresses this issue (points out that some of the radiation transfer codes used in climate models do not perform well in this regard), but the best ones do.
Thank you Judith, but that is not my problem. My problem relates to the definition of radiative forcing in the TAR Chapter 6; viz
“The radiative forcing of the surface-troposphere system due to the perturbation in or the introduction of an agent (say, a change in greenhouse gas concentrations) is the change in net (down minus up) irradiance (solar plus long-wave; in Wm-2) at the tropopause AFTER allowing for stratospheric temperatures to readjust to radiative equilibrium, but with surface and tropospheric temperatures and state held fixed at the unperturbed values”.
As I see this, the atmosphere is in two different states; one part has adjusted to radiative equilibrium and one has not. I assume that radiative transfer models reproduce this difference, but I have not been able to find out how. Can you direct me to a publication which explains this please?
Jim Cripwell
For a discussion of various sensitivities using a 1D line-by-line radiation model based on radiosonde data, see Miskolczi (2010), Sect 2, pg 256-247 http://www.friendsofscience.org/assets/documents/E&E_21_4_2010_08-miskolczi.pdf
I think there needs to be a few words of caution regarding visually interpreting spectra.
I am not the best person to do this so I welcome correction.
Check the coordinates:
Are you looking at wavenumbers or wavelengths?
Wavenumber is the reciprocal of wavelength, and so varies in proportion to frequency.
More importantly, are you looking at:
Transmission (%)
Absorption (%)
Cross-sections (cm^2/molecule)
Line Strengths (cm/molecule)?
It is the last one that can give rise to the most misleading of visual interpretations. Line Strength (Integrated Intensity) lacks the line shape component. It is a useful abstraction as it gives a measure of the total “strength” (the area under the curve of the line shape) which is a measure of the dipole strength of the transition. It can be misleading as such a spectrum has pin-sharp (zero width) lines, which makes it look like there are big non-absorbing gaps between the lines, which is not the case.
The actual units vary but line strengths (as in HITRAN line lists) should boil down to cm/molecule after manipulation and scaling.
Alex
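Alex’s unit bookkeeping above can be written out explicitly; if I have the conventions right, the line strength is just the monochromatic cross-section integrated over wavenumber:

```latex
S \;=\; \int \sigma(\tilde{\nu})\, d\tilde{\nu},
\qquad
\mathrm{cm\,molecule^{-1}} \;=\; \mathrm{cm^{2}\,molecule^{-1}} \times \mathrm{cm^{-1}},
```

so a stick plot of line strengths only turns back into a real absorption spectrum once the line shape (and its pressure and temperature dependence) is put back in.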
The failure of catastrophic climate change as an idea is not in the basic physics.
Just as the failure of eugenics was not in the theory of evolution.
Dr. Curry said:
“However, if you can specify the relevant conditions in the atmosphere that provide inputs to the radiative transfer model, you should be able to make accurate calculations using state-of-the art models. The challenge for climate models is in correctly simulating the variations in atmospheric profiles of temperature, water vapor, ozone (and other variable trace gases), clouds and aerosols.”
What about ocean cycles?
Ocean cycles may influence what the clouds and temperatures actually are, but once you can specify (or predict) the clouds etc., the radiative transfer models are up to the job of predicting the radiative fluxes. Actually predicting the clouds, water vapor, etc. is at the heart of the problem (the radiation codes themselves are not the problem, which is what this thread is about).
Discussing those topics further will be very interesting. You are moving through this issue in an effective, fascinating way.
Your students are very fortunate.
Well, I guess many of us here are your students, in effect….
Dr. Curry
Ocean cycle models using the past record are inherently flawed. This is applicable to both the Pacific and the Atlantic oceans’ currents.
According to my findings there are short-term semi-periodic oscillations with an uncertain frequency (decades) and long-term (centuries) components which may or may not be reversible. None of these appear to be predictable.
The North Pacific has a number of critical points; here I show what I think are the most important.
http://www.vukcevic.talktalk.net/NPG.htm
All of them may to a certain extent contribute to the PDO with an unspecified time factor and weighting. You may indeed notice that one (red coloured) is possibly the major contributor to the PDO, some 10-12 years later.
I’m preparing a thread next week on decadal scale prediction; I will be referencing your site.
Thanks; that’s fine as long as you think it merits mention. I am still working on the SSW; some interesting results there, and I may have a possible answer to the Antarctica’s 2002 SSW riddle
http://www.knmi.nl/~eskes/papers/srt2005/png/o3col20020926_sp.png
that the papers on the subject missed.
Anyone,
Can I conclude from Dr. Curry’s post that the rise in temperature from radiative forcing is 1.2 C when the concentration of CO2 is doubled?
I’m not asking about what the others have posted, just whether, if what Dr. Curry says is correct, this is the result.
Thanks
This will be the topic of the next greenhouse thread.
Dr. Curry, you make a very strong statement:
“Atmospheric radiative transfer models rank among the most robust components of climate model, in terms of having a rigorous theoretical foundation”
I am not sure that we already have such a theory. At least it is not used in GCMs. The “theoretical” models of spectral line shapes and their behaviour are just fitted semi-empirical analogy models. Closest to a theoretical model are M. Chrysos & al.
http://blogs.physicstoday.org/update/2008/07/collisions_between_carbon_diox.html
http://prl.aps.org/abstract/PRL/v100/i13/e133007
http://pra.aps.org/abstract/PRA/v80/i4/e042703
Your reference gives a good agreement of 2% between models. The NBM and other simplified methods that I have seen have been satisfied with 10% accuracy compared to LBL. When we take into account that the HITRAN database claims 5% accuracy, is that accurate enough?
This is really interesting. I am not an expert on the nuances of HITRAN or line-by-line codes, so I would like to learn more about how accurate you think these are. My statement was relative to other climate model components. What is accurate enough in terms of fitness for purpose? I would say calculation of the (clear sky, no clouds or aerosol) broadband IR flux to within 1-2 W m-2 (given perfect input variables, e.g. temperature profile, trace gas profiles, etc.). Also calculation of flux sensitivity to the range of CO2 and H2O variations of relevance, e.g. water vapor ranging from tropical to polar atmosphere, and doubling of CO2, within 1-2 W m-2.
This is a very good topic for discussion, thank you for bringing it up.
Michael,
The isothermal cavity is basically an idealized thought experiment. In application to radiative transfer, it is not so much about heat transfer as it is about establishing the statistical population distribution of the molecular vibrational-rotational energy states under conditions of full thermodynamic equilibrium. When the absorption spectrum of a gas is measured in the laboratory, it is done under carefully controlled pressure and temperature conditions so that both the amount of gas and its thermodynamic state are accurately known.
A similar gas parcel in the free atmosphere is said to be in local thermodynamic equilibrium (LTE) because conditions are not isothermal, there being typically a small temperature gradient. But the population of its energy states will be close enough to those under thermodynamic equilibrium conditions that the spectral absorption by the gas will be essentially what was measured in the laboratory. It is only at high altitudes (higher than 60 km) that molecular collisions become too infrequent to maintain LTE; there, corrections have to be made for a different population of energy states under non-LTE conditions. Also, water vapor condensed into a cloud droplet, or CO2 in the form of dry ice, will have very different absorption characteristics compared to the gas under LTE conditions.
The isothermal cavity along with Kirchhoff’s radiation law is a useful concept to demonstrate that emissivity must be equal to absorptivity, that only a totally absorbing (black) surface can emit at the full Planck function rate, and that the emissivity of a non-black surface will be one minus its albedo, in order to conserve energy.
A Lacis,
Thank you for this refinement of what you said earlier. It helps a lot in transitioning conceptually from “the ideal” to the real world. I really am most grateful for your help in improving my understanding.
The stratosphere is predicted to cool with increased CO2 concentrations in the troposphere.
Is this because less IR leaves the troposphere?
I think this is wrong. Does anyone have a conceptual explanation for this?
Thanks
CO2 will increase equally in the stratosphere, so it is a local effect there where it radiates heat more efficiently with more CO2. Heat there comes from ozone absorption of solar radiation, not surface convection.
Just to elaborate slightly on Jim D’s explanation, an increase in a particular greenhouse gas molecule such as CO2 will increase the ability of a given layer of the atmosphere to absorb infrared (IR) radiation – the layer’s “absorptivity” – and equally increase its ability to emit IR – its “emissivity”. If that type of molecule is the only factor operating, absorptivity and emissivity will increase commensurately, and the net effect turns out to be a slight warming. On the other hand, most of the absorptivity in the stratosphere derives from the ability of ozone to absorb solar radiation in the UV range, where CO2 does not absorb. This is responsible for most of the stratospheric heating, and so CO2 contributes little, because its absorption of IR from below is a lesser source of heating. In other words, additional CO2 does not increase stratospheric absorptivity substantially. On the other hand, most radiation from the stratosphere at the temperatures there is in IR wavelengths, where CO2 is a strong emitter. As a result, CO2 increases stratospheric emissivity more than absorptivity, with a resultant cooling effect.
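A deliberately crude way to see the emissivity-versus-absorptivity argument above (a toy single-layer sketch of my own, with invented numbers, not a real stratosphere model):

```python
# Toy single-layer "stratosphere": the layer is heated by a fixed amount H of
# absorbed solar UV (ozone, unaffected by CO2) plus the fraction eps of the
# upwelling IR flux F that it absorbs, and it emits eps*SIGMA*T^4 both upward
# and downward. Raising eps (more CO2) raises emission faster than absorption,
# so the equilibrium temperature falls.
SIGMA = 5.670e-8

def layer_temperature(eps, H=12.0, F=240.0):
    """Solve H + eps*F = 2*eps*SIGMA*T^4 for the layer temperature T (K)."""
    return ((H + eps * F) / (2.0 * eps * SIGMA)) ** 0.25

print(round(layer_temperature(0.10), 1))  # smaller IR emissivity -> warmer layer
print(round(layer_temperature(0.12), 1))  # larger IR emissivity  -> cooler layer
```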
Jim and Fred: Thanks Much.
Dr. Curry,
May I address my question also to you: According to your data and calculations, how much is the annual global mean atmospheric longwave absorbed radiation?
Thanks in advance
~ 150 W/m2
I think we should welcome Miklos Zagoni to this website and I hope he gets a reply to his question from Andy Lacis and Judith Curry.
Welcome Miklos, and thank you for joining this debate.
“how much is the annual global mean atmospheric longwave absorbed radiation?” ? ?
What exactly is this question really supposed to be asking? If we are talking about the outgoing LW radiation at the top of the atmosphere – all of that radiation is emitted radiation, some of it having been emitted by the ground surface, most of it having been emitted from some point in the atmosphere, depending on the wavelength and atmospheric temperature and absorber distribution.
If we are talking about the LW radiation emitted by the ground surface – some of that radiation will escape directly to space without being absorbed by the atmosphere, depending on the atmospheric absorber distribution. Under cloudy sky conditions, virtually all of the radiation emitted by the ground surface will be absorbed by the atmosphere, unless the clouds are optically thin cirrus clouds.
LW radiation gets emitted and absorbed multiple times within the atmosphere. That is what radiative transfer is all about. What is important for climate applications is calculating the heating/cooling that takes place at the ground and within each atmospheric layer, and the total energy that is radiated out to space – all required to keep a detailed energy balance/budget at the ground surface, in each atmospheric layer, and for the atmospheric column as a whole. One can in addition keep some spectral information in the process of doing the atmospheric radiative transfer, which might be useful for diagnostic comparisons with observational data.
Otherwise, the question by Miklos Zagoni makes no sense.
chriscolose:
Dear Chris,
Thank you, but you must refer to another quantity. LW atmospheric absorption, according to KT97(=IPCC2007 WG1 energy budget) is about 350 W/m2, while the updated distribution (TFK2009) gives 356 W/m2.
My question is: are these values generally accepted here, amongst radiative transfer specialists.
Thanks
Dear Andy,
We are talking about the greenhouse effect here (or, at least, we have it in mind in the background), so I think my question is about the quantification of the general (~global annual mean) effect of the presence of IR absorbers (=GHG’s) in the air…
Thanks, Miklos
Miklos,
I am sorry that I totally misunderstood what your question was about.
With respect to the IPCC2007 KT97 figure with 350 W/m2 of atmospheric absorption versus 356 W/m2 in an updated TFK2009 version, I would say that both figures are there primarily for illustrative purposes, rather than presenting technical results.
Note that the KT97 figure implies a planetary albedo of 0.313 = 107/342, as the ratio of reflected to incident solar energy, with 235 W/m2 as the corresponding LW outgoing flux. This figure also illustrates a somewhat stronger greenhouse effect of 390 – 235 = 155 W/m2. This compares to the often cited nominal greenhouse effect value of 390 – 240 = 150 W/m2, which corresponds to a planetary albedo of 0.3, with absorbed solar radiation of 240 W/m2. Both cases imply a global mean surface temperature of 288 K (390 W/m2).
In our recent Science paper using the GISS ModelE, we reported 152.6 W/m2 for the total atmospheric greenhouse effect. In the Schmidt et al. (2006) paper describing ModelE, three different model versions are shown with planetary albedos of 0.293, 0.296, and 0.297. These will produce slightly different outgoing LW fluxes. Observational evidence puts the likely planetary albedo of Earth between 0.29 and 0.31. This uncertainty exists because it is very difficult to make more precise measurements from satellite orbit with existing instrumentation.
However, this uncertainty does not adversely affect climate modeling results and conclusions. But this is one reason why climate modeling studies are conducted by running a control run and an experiment run simultaneously, to subtract out biases and offsets that will be common to both runs.
Similarly, these potential biases and uncertainties in planetary albedo will affect the values of LW fluxes. So, the absolute value of model fluxes may differ. Accordingly, it does not make sense to compare the “accuracy” of atmospheric fluxes in an absolute sense between different models, since the reasons for the differences may be complex and do not really have an impact on the conclusions drawn, since the effect of these differences will be largely subtracted out by differencing the experiment and control runs.
Instead, the accuracy of atmospheric fluxes is better assessed by comparing model flux results with line-by-line calculations for the same atmospheric temperature-absorber structure.
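For what it is worth, the bookkeeping quoted a few paragraphs above (KT97 versus the “nominal” budget) can be reproduced directly; a small sketch using only the figures given there:

```python
# Reproducing the arithmetic quoted above (all numbers as given in the comment):
SIGMA = 5.670e-8

incident, reflected = 342.0, 107.0        # KT97 solar fluxes, W/m^2
albedo_kt97 = reflected / incident        # ~0.313
surface = SIGMA * 288.0 ** 4              # ~390 W/m^2 for Ts = 288 K
olr_kt97, olr_nominal = 235.0, 240.0      # outgoing LW fluxes, W/m^2

print(round(albedo_kt97, 3))              # 0.313
print(round(surface - olr_kt97))          # ~155 W/m^2 greenhouse effect (KT97 numbers)
print(round(surface - olr_nominal))       # ~150 W/m^2 (nominal case, albedo 0.30)
```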
A. Lacis
You note:
Yet Kevin Trenberth notes 23%-69% discrepancy in energy budgets between observed and accounted for. See above.
Is that level of discrepancy what is considered “properly” included?
An increasing number of evaluations are finding climate models projecting temperatures substantially above observed global temperatures, e.g.
McKitrick, Ross R., Stephen McIntyre and Chad Herman (2010) “Panel and Multivariate Methods for Tests of Trend Equivalence in Climate Data Sets”. Atmospheric Science Letters, DOI: 10.1002/asl.290.
How are we to understand/explain these substantial divergences?
Errors in modeling? In data? In statistics?
I posted a comment on Dec 6 at 7.10 am. I have replied to my own second comment. Let me put it here at the end of the comments for emphasis.
Let me put this a little more strongly. Radiative transfer models deal with a real atmosphere. According to the IPCC definition of radiative forcing, one deals with a hypothetical atmosphere. Therefore whatever Myhre et al in 1998 estimated, it was NOT radiative forcing. And the same applies to all the other estimates that have been done ever since. What the numbers are, I have no idea. All I know is that they are NOT radiative forcing.
Radiative transfer is a standard consideration in design of microwave, millimeter wave, and infrared band systems. Fire control, missile, and communication systems’ performance hinges on the electromagnetic absorption along the path from transmitter to reflectors to receiver. Clear sky spectral absorption data from published and other sources has proven up to the task. Sometimes low absorption is needed, as in long range operations, and sometimes high absorption is sought, as when designing systems to operate concealed behind an atmospheric absorption curtain. Regardless, an accurate prediction is important.
Curry writes, “The challenge for climate models is in correctly simulating the variations in atmospheric profiles of temperature, water vapor, ozone (and other variable trace gases), clouds and aerosols.” These confounding effects beyond a standard dry atmosphere were too much for military and communication system design and analysis. Perfection was hopeless, and more complicated situations utterly unpredictable.
Curry then says, “And finally, for calculations of the direct radiative forcing associated with doubling CO2, atmospheric radiative transfer models are more than capable of addressing this issue”. Radiative forcing in a limited sense applies radiative transfer, but it is not the same. Radiative Forcing is the paradigm IPCC selected for its climate modeling, and staunchly defended against critics. In radiative transfer, a long path in the atmosphere, as in hundreds of kilometers horizontally, or from the surface to the top of the atmosphere, is modeled end-to-end by an absorption spectrum. It represents the entire path, one assumedly tolerant of the temperature lapse rate. It also avoids microscopic modeling of molecular absorption and radiation. IPCC models the atmosphere in multiple layers, each with its own peculiar temperature, and hence temperature lapse rate, with radiative forcing characteristics, and in some considerations, molecular physics.
Radiative Forcing has severe limitations, some of which may be fatal to ever showing success at predicting climate. It doesn’t account for heat capacity, and therefore can provide no transient effects. A prime alternative to an RF model is a Heat Flow Model (a redundant term, though universally accepted — heat is already a flow). In a Heat Flow model, the environment is represented by nodes, each with its own heat capacity, and with flow resistance to every other node. A heat flow model can represent transient effects, and variable attenuation for cyclic phenomena, such as seasons, the solar cycle, and ocean currents.
Radiative Forcing has no flow variable, no sinks, and no sources. A heat flow model does. Feedback is a sample of energy, displacement, mass, rate, or information from within a system that flows back to the driving signal to add to, or subtract from, it. Without a flow variable, the RF model must account for feedbacks without representing them. Consequently, IPCC redefined feedback, and produced a most confused explanation in TAR, Chapter 7. To IPCC, feedback loops are imaginary links between correlated variables. This is a severe restriction for the RF paradigm, especially because IPCC has yet to account for major feedbacks in the climate system, including the largest feedback in all of climate, the positive and negative feedback of cloud albedo, and the positive feedback of CO2 from the ocean that frustrates IPCC’s model for accumulating ACO2. It doesn’t have the carbon cycle or the hydrological cycle right. The RF model looks quite unrepairable.
Curry talks about “doubling CO2”. This is an assumption by IPCC that greatly simplifies its modeling task, while simultaneously exalting the greenhouse effect. IPCC declares that infrared absorption is proportional to the logarithm of GHG concentration. It is not. A logarithm might be fit to the actual curve over a small region, but it is not valid for calculations much beyond that region like IPCC’s projections. The physics governing gas absorption is the Beer-Lambert Law, which IPCC never mentions nor uses. The Beer-Lambert Law provides saturation as the gas concentration increases. IPCC’s logarithmic relation never saturates, but quickly gets silly, going out of bounds as it begins its growth to infinity.
Under the logarithm absorption model, an additional, constant amount of radiative forcing would occur for every doubling (or any other ratio) of CO2 or water vapor or any other GHG. Because the logarithm increases to infinity, the absorption never saturates. This is most beneficial to the scare tactic behind AGW. Secondly, the additional radiative forcing using the Beer-Lambert Law requires one to know where the atmosphere is on an absorption curve. This is an additional, major complexity IPCC doesn’t face.
Judging from published spectral absorption data, CO2 appears to be in saturation in the atmosphere. These data are at the core of radiation transfer, and that the “doubling CO2” error slipped through is surprising.
The big guns are riding into town.
Welcome Dr Jeff Glassman!
Let the serious debate begin. :)
Jeff Glassman
Thanks for the physics/chemistry perspective:
“The Beer-Lambert Law provides saturation as the gas concentration increases. . . .the additional radiative forcing using the Beer-Lambert Law requires one to know where the atmosphere is on an absorption curve.. . . CO2 appears to be in saturation in the atmosphere.”
The quantitative Line By Line, Planck-weighted Global Optical Depth calculations by Miskolczi (2010) show remarkably low sensitivity to CO2, and even lower combined variability for both CO2 and H2O, given the available 61-year TIGR radiosonde data and NOAA data. See Fig. 10, Sections 3 and 4. I would welcome your evaluation of Miskolczi’s method and results.
A copy of Miskolczi 2010 is available at:
http://www.friendsofscience.org/assets/documents/E&E_21_4_2010_08-miskolczi.pdf
Jeff – The images from Channels 1-7 in my Tyndall gas effect post illustrate directly that CO2 is not in saturation in the atmosphere.
I recommend the replies by Lacis and Moolten to you below. I will talk about radiative fluxes here, since your concern appears to be a lack of a flow variable. In fact radiation schemes do computations over multiple atmospheric layers, as you say, and what they compute for each level are upwards and downwards radiative fluxes (W/m2). It is the convergence or divergence of these fluxes that result in radiative heating or cooling in a layer, which also depends on the heat capacity of that layer. So in fact fluxes are central to these schemes, and their impact on the atmosphere.
Jeffrey,
Why do you say “Radiative Forcing” doesn’t account for heat capacity? There’s an energy equation which enforces energy balance in each cell, including that which comes and goes via radiative transfer, and the internal energy is calculated via specific heat.
In your proposed “Heat Flow Model”, do you really have flow resistance to every other node? Even between nodes far apart? With then a dense matrix to solve? What about transmission through the atmospheric window, say?
@glassman: Judging from published spectral absorption data, CO2 appears to be in saturation in the atmosphere. These data are at the core of radiation transfer, and that the “doubling CO2″ error slipped through is surprising.
When the same person posts ten paragraphs each eminently refutable, where do you begin? My theory, yet to be proved, is that the other nine paragraphs are best shot down by shooting down the tenth, which is the one quoted above.
Since this paragraph is stated simply as a fact, that I know from the data to be blatantly false, let me simply ask Mr. Glassman to support his statement, which in the interests of decorum in this thread I’ve refrained from attaching any other epithet to than “false.”
“Judging from published spectral absorption data, CO2 appears to be in saturation in the atmosphere. “
Jeffrey – the misconception inherent in your comment dominated thinking about the role of CO2 until about 60 years ago, when geophysicists realized that the atmosphere could not be represented as a simple slab wherein a “saturating” concentration of CO2 precluded any further absorption and warming, but rather had a vertical structure. Within that structure, absorbed photon energy is subsequently re-emitted (up and down) until a level is reached permitting escape to space. For CO2, this is a high altitude in the center of the 15 um absorption band, but much lower as one moves into the wings, which are essentially unsaturable.
This blog has a couple of informative threads on the greenhouse effect that address this phenomenon in some detail, and the links in the present thread are also valuable. I can see from your comment that you are well informed in some areas of energy transfer and radiation, but I suspect you have not had an opportunity to reconcile your knowledge with the principles of radiative transfer within the vertical profile of the atmosphere, and the resources I suggest may help. Others may be able to offer further suggestions.
This was intended as a reply to Jeff Glassman.
Jeff,
I am sure that you will agree that Beer’s Law exponential absorption only applies to monochromatic radiation. When spectral absorption by atmospheric gases varies by many orders of magnitude at nearby wavelengths, you specifically have to take that fact into account. Line-by-line calculations do that. So do correlated k-distribution calculations (which is what is being used in many climate models). Calculating “greenhouse optical depths” averaged over the entire spectrum like Miskolczi does, makes absolutely no sense at all.
You should take the time to become better informed on how climate models handle energy transports – radiative, convective, advective, etc. There is no heat capacity, and there are no sinks, sources, flow variables, or feedbacks in radiative transfer calculations. It is only the temperature profile, surface temperature, atmospheric gas, aerosol, and cloud distributions (and their associated absorption and scattering parameters) that enter into radiative transfer calculations. Radiative energy transfer is incomparably faster than energy transported by the other physical processes. Radiative transfer calculations provide the instantaneous heating and cooling rates that the climate GCM takes into account as it models all of the hydrodynamic and thermodynamic energy transports in a time marching fashion. All energy transports are properly being included in climate modeling calculations.
IPCC does not assume that infrared absorption is proportional to the logarithm of GHG concentration. Radiative transfer is being calculated properly for all atmospheric gases. The absorption behavior by some of the gases, for example CO2, happens to be close to logarithmic under current climate conditions, but the absorption for GHGs is nowhere close to being saturated except for the central portions of the strongest absorption lines. There are many more weak than strong lines.
Radiative forcings need to be understood for what they are, and what they aren’t. A radiative forcing is simply the flux difference between two different states of the atmosphere. It helps if the atmospheric state that is used as the reference is taken to be an atmosphere that is in radiative/convective equilibrium. The second atmospheric state may be the same atmosphere but with the CO2 profile amount doubled. A comparison of radiative fluxes between the two atmospheric states will show flux differences from the ground on up to the top of the atmosphere. The flux difference at the tropopause level is typically identified as the “instantaneous radiative forcing” (which for doubled CO2 happens to be about 4 W/m2). Since doubled CO2 decreases the outgoing LW flux to space, this is deemed to be a positive radiative forcing, since the global surface temperature will need to increase to re-establish global energy balance. If no feedback effects were allowed, an increase of the global-mean surface temperature by about 1.2 C would re-establish global energy balance. In the presence of full atmospheric feedback effects, the global-mean surface temperature would need to increase by about 3 C before global energy balance was re-established. And by the way, the climate feedbacks are not prescribed, they are the result of the physical properties of water vapor, as they are modeled explicitly in a climate GCM.
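As a rough cross-check of the numbers in that last paragraph: the simplified expression of Myhre et al. (1998), already cited in this thread, gives dF = 5.35 ln(C/C0), and dividing by a no-feedback (Planck) response of about 3.2 W m-2 K-1 (a round value I am assuming here, not a number taken from Andy’s comment) recovers roughly the quoted 4 W/m2 and 1.2 C:

```python
import math

# Simplified CO2 forcing (Myhre et al. 1998) and a crude no-feedback warming.
def co2_forcing(c_ppm, c0_ppm=280.0):
    """Forcing (W/m^2) for a CO2 change from c0_ppm to c_ppm."""
    return 5.35 * math.log(c_ppm / c0_ppm)

dF = co2_forcing(560.0)                 # doubled CO2
planck_response = 3.2                   # W m^-2 K^-1, assumed round value
print(round(dF, 1))                     # ~3.7 W/m^2, close to the ~4 quoted
print(round(dF / planck_response, 1))   # ~1.2 C no-feedback warming
```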
A. Lacis Re:
Please read and understand Miskolczi before again mischaracterizing him.
Miskolczi actually does the detailed LBL calculations that you advocate:
Miskolczi also performs the LBL calculations with much finer spatial and frequency resolution, and in much greater detail than you have described in posts here. After he calculates the detail, he then performs a Planck-weighted global integration to evaluate the global optical depth. That now gives a manageable parameter to quantitatively track global absorption over time.
Have I understood your objections, or what am I missing in what you so strongly disagree with/advocate?
David,
The detail of Miskolczi’s line-by-line results is not the issue. It is how he uses his line-by-line modeling results to come to the erroneous conclusions about the greenhouse effect and the global warming due to increasing CO2. That is where the problem is.
Here is a simple test to see how really useful Miskolczi’s greenhouse method is. Can Miskolczi reproduce the well established 4 W/m2 radiative forcing for doubled CO2 with his methodology, and its equivalent 1.2 C global warming?
Years spent on case studies in the History of Science taught me that “well established” doesn’t provide a logical guarantee of being correct.
That a community of scientists doesn’t see it as a possibility that it could be otherwise is the whole reason it is “well established”.
Yet the wailing and the gnashing of teeth over Trenberth’s “missing heat” could be an indication that the atmospheric window may be wider open than has been previously thought. Even that it might also vary, a la Iris theory of Lindzen.
Anyone looking up at the ever varying cloudscape would conclude that variation might be the rule rather than the exception.
Yet the wailing and the gnashing of teeth over Trenberth’s “missing heat” could be an indication that the atmospheric window may be wider open than has been previously thought.
This is an excellent point.
Even that it might also vary, a la Iris theory of Lindzen.
I like the way the Wikipedia article on the iris hypothesis says “However, there has been some relatively recent evidence potentially supporting the hypothesis” and then cites papers by Roy Spencer and Richard Lindzen.
Very noble of the foxes to volunteer for sentry duty at the hen house.
Heh, given the presence of the gatekeepers’ guard dogs on Wikipedia’s global warming section I’m surprised the foxes managed to sneak it in. ;)
Right, I reckon Connolley either dozed off or neglected to put that article on his watch list. Even alarmists like me can’t get past Connolley and Arritt.
Thanks for clarifying. As I understand your objection to Miskolczi, it is with the methodology and conclusions of his greenhouse planetary theory, not with the LBL evaluation of atmospheric profiles leading to the Planck-weighted Global Optical Depth.
As I understand Miskolczi, his method of evaluating the Global Optical Depth can be applied to any atmospheric profile, including doubled CO2 etc from which you can prescribe insolation and other parameters to evaluate conventional forcing methodology.
See my detailed response to you above. Please see Miskolczi’s (2010) detailed evaluation of numerous sensitivities. Of particular interest is his evaluation of actual CO2 and H2O sensitivities derived from the available 61-year empirical TIGR radiosonde data.
As to his overall model, how do you evaluate how well he has fit the actual optical absorption measurements to various atmospheric fluxes?
Are his simplified correlations between those fluxes reasonable approximations to the actual ratios of those flux curves?
How do you evaluate Bejan’s constructal law approach to modeling climate with thermodynamic optimization? See:
Thermodynamic optimization of global circulation and climate
INTERNATIONAL JOURNAL OF ENERGY RESEARCH
Int. J. Energy Res. 2005; 29:303–316 DOI: 10.1002/er.1058
Andy,
As I can see, your approach to the greenhouse problem is through Ramanathan’s G (= Su – OLR), or g (= G/Su) greenhouse functions. Empirically, it gives you the 396-239 = 157 (g=0.4) all-sky and 396-264=132 W/m2 (g=1/3) clear-sky factors, with about 33 (and 27) K greenhouse temperatures.
The question is how you get these numbers for G, or g (with OLR given) from the measured amounts and distributions of GHG’s and temperature. This is the task of radiative transfer calculations. The result will depend on the global average atmospheric absorbed LW, or on the surface transmitted (St, “Atmospheric Window”) radiation. According to the (monochromatic) Beer law, the global average frequency-integrated tau is a given (logarithmic) function of St/Su .
As we all want to have exact numbers for the greenhouse effect, we must calculate precisely the global average infrared absorption, or the “window”. Having this, one can establish a theoretically sound G(tau), or, if you like, a G(St) function.
That’s why approximate flux estimations are not acceptable, and that’s why I ask the radiative experts here to present their most accepted numbers for LW absorption, window and downward radiation.
When we agree on the actual fluxes, we can step forward to the possible effects of future composition changes.
A. Lacis wrote: “The line-by-line codes require significant computer resources to operate, but they are the ones that give the most precise radiative transfer results.”
This sounds very reassuring. A few questions are however in order. From what I learned, if the atmosphere were hypothetically isothermal, what would be the “radiative forcing” from CO2 doubling? Zero, right?
Now, if the atmosphere went with the same lapse rate all 45 km up, what would be the radiative forcing from 2xCO2? Probably a lot; M. Allen says 80-98W/m2 :-) Or 50W/m2.
Further, if a certain strongly-absorbing band emits from the stratosphere, wouldn’t the forcing be negative?
What we see is that the forcing from CO2 doubling can be anything from negative to about 50W/m2, which fundamentally depends on the shape of the vertical temperature profile. Therefore, the whole reassuring precision of RT codes comes down to how accurately we know (or represent) the “global temperature profile”. So, my question is: how accurately do you know it?
Another question: is the resulting OLR linear with respect to T(z)?
There are more questions…
Cheers,
– Al Tekhasski
A. Lacis wrote: “… reproduce the well established 4 W/m2 radiative forcing for doubled CO2”
That’s another good question. Do you mean it is well established by Myhre et al. (1998), where “three vertical profiles (a tropical profile and northern and southern hemisphere extratropical profiles) can represent global calculations sufficiently”?
I gave up on Miskolczi when I saw him using the factor of 2 for the ratio of average potential energy to average kinetic energy of the molecules of a planetary atmosphere. That’s valid for gravitationally bound bodies that collide very infrequently. For air molecules that assumption is so far from true as to be a joke. The mean free path of air molecules is around 70 nanometers near the Earth’s surface, and with that value the ratio is not 2 but 0.4.
Miskolczi is simply pulling the wool over people’s eyes by writing impressive-sounding rubbish.
And the mean free path at the altitude of the mid stratosphere is around 1m or so? So Miskolczi’s average is calculated over what altitude change?
You accuse Ferenc Miskolczi of deliberately trying to deceive us by “pulling the wool over people’s eyes”. That’s a very serious charge to level against a theoretical physicist. I hope you have good strong evidence, or you are going to look petty and vindictive in the eyes of many.
And the mean free path at the altitude of the mid stratosphere is around 1m or so? So Miskolczi’s average is calculated over what altitude change?
There are fewer joules in the mid stratosphere than in Queen Elizabeth’s crown.
Vaughan Pratt
On the virial theorem, see references below for Toth, Pacheco and Essenhigh.
Your problem is with classical thermodynamics. Search scholar.google.com for “virial theorem”.
When you use ad hominem diatribe, you’ve lost credibility as a scientist. Please address the scientific issues here, not partisan politics or yellow journalism.
David, Ferenc already put Vaughan straight here:
http://judithcurry.com/2010/12/05/confidence-in-radiative-transfer-models/#comment-19575
The missing word in the parentheses is intriguing. I wonder if it is ‘reviewed’ :)
David, Ferenc already put Vaughan straight here:
One good turn deserves another. I straightened Ferenc out here.
A few more rounds of this and astronomers will be using the two of us to measure the curvature of space.
You accuse Ferenc Miskolczi of deliberately trying to deceive us by “pulling the wool over people’s eyes”. That’s a very serious charge to level against a theoretical physicist. I hope you have good strong evidence, or you are going to look petty and vindictive in the eyes of many.
If you think the ratio is 2 then you’re the one looking stupid.
If I complained about your use of F = 3ma for Newton’s second law would you call me vindictive? You’re out to lunch, guy.
I don’t know, which is why you see the question marks in my comment. This blog is a wonderful opportunity for nonviolent interaction between scientists on both sides of the debate. Why not ask Miklos Zagoni, who is intimately acquainted with Miskolczi’s work, if he can help explain the apparent problem, rather than jumping in with both feet making unwarranted accusations?
Why not ask Miklos Zagoni, who is intimately acquainted with Miskolczi’s work, if he can help explain the apparent problem, rather than jumping in with both feet making unwarranted accusations?
What’s your basis for “unwarranted?” I watched Zagoni’s video a couple of months ago. He was simply parroting Miskolczi. You seem to be doing the same. Stop parroting and start thinking. This is junk physics.
Oh dear Vaughan.
I’m not “parroting” anyone. As I pointed out above, I ask questions and think about replies. I recommended you ask Miklos how Ferenc arrived at the ratio of energy you had an issue with. But instead, you seem content to make unsupported assertions about the quality of his work. You fear the consequences of his theory, so you attack details without exploring how the whole fits together.
Ah well. Be happy with whatever you believe.
I recommended you ask Miklos how Ferenc arrived at the ratio of energy you had an issue with. But instead, you seem content to make unsupported assertions about the quality of his work.
There’s nothing “unsupported” about it, see e.g. this paper which should confirm what I said (and there are even shorter proofs and moreover applicable to more general situations than considered by Toth).
Putting morons on pedestals like this only makes you a moron. Miskolczi and Zagoni are heroes only to climate deniers.
I’d be interested in a discussion of the virial theorem component to this. I’ve encountered another (unpublished) paper that addresses the virial theorem in the context of the earth’s atmosphere that I found intriguing. Don’t ask me to defend this (I’m not up on this at all), but would be interested in a discussion on the relevance of the virial theorem.
Judy
On the virial theorem relative to Miskolczi & planetary greenhouse theories, see:
Viktor T. Toth, The virial theorem and planetary atmospheres
arXiv:1002.2980v2 [physics.gen-ph] 6 Mar 2010
http://arxiv.org/PS_cache/arxiv/pdf/1002/1002.2980v2.pdf
He derives the atmospheric virial theorem for diatomic molecules in a homogeneous gravitational field, valid for varying temperature, where the ratio of potential energy U to kinetic energy K is the gas constant R divided by the product of the specific heat cV and the molar mass Mn:
U/K = R/(cV*Mn) (Equation 34)
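If I am reading Toth’s notation correctly (cV the specific heat at constant volume per unit mass, Mn the molar mass), then for a diatomic ideal gas cV*Mn is the molar heat capacity (5/2)R, and Eq. 34 gives

```latex
\frac{U}{K} \;=\; \frac{R}{c_V M_n} \;=\; \frac{R}{\tfrac{5}{2}R} \;=\; \frac{2}{5} \;=\; 0.4 ,
```

which is the same 0.4 that Vaughan Pratt quotes elsewhere in this thread, rather than a factor of 2.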
See also:
PACHECO A. F. ; SANUDO J. The virial theorem and the atmosphere, Geophysics and space physics ISSN 1124-1896, 2003, vol. 26, no3, pp. 311-316, 6 page(s) (4 ref.)
The most detailed extension of the virial theorem to a full thermodynamic model for a planetary atmosphere column with gas absorption that I know of is by:
Robert H. Essenhigh, Prediction of the Standard Atmosphere Profiles of Temperature, Pressure, and Density with Height for the Lower Atmosphere by Solution of the (S−S) Integral Equations of Transfer and Evaluation of the Potential for Profile Perturbation by Combustion Emissions, Energy Fuels, 2006, 20 (3), 1057-1067 • DOI: 10.1021/ef050276y
http://pubs.acs.org/doi/abs/10.1021/ef050276y
Essenhigh addresses water and CO2 absorption bands as well etc.
Miskolczi (2008) applied the classical virial theorem:
(Part of the difficulty some readers have with Miskolczi 2008 is his use of astronomical language etc from other applications of the virial theorem. His statement: “the radiation pressure of the thermalized photons is the real cause of the greenhouse effect” got bloggers off onto the force of photons and satellites, rather than recognizing Miskolczi’s effort to explain the atmospheric pressure by application of the virial theorem, together with absorption of solar radiation to surface temperature. Need to check if there is a small difference in the virial coefficient versus the gas between Miskolczi, Toth, Pacheco, and Essenhigh.)
Victor Toth’s paper has been published:
The virial theorem and planetary atmospheres
Időjárás – Quarterly Journal of the Hungarian Meteorological Service (HMS), Vol. 114, No. 3, pp. 229-234
http://www.met.hu/download.php?id=2&vol=114&no=3&a=6
For virial connoisseurs see:
Lambert M. Surhone, Miriam T. Timpledon, Susan F. Marseken, Virial Theorem,
206 pages, Betascript Publishing (August 4, 2010) ISBN-10: 6131111472; ISBN-13: 978-6131111471
It would help if some reader could check this out and review what it has to say on the application to a planetary atmosphere with diatomic and multiatomic gases, per the issues on Toth, Essenhigh & Miskolczi.
For history buffs:
Henry T Eddy, An extension of the theorem of the virial and its application to the kinetic theory of gases, 1883
See a common lecture on the Virial theorem
http://burro.cwru.edu/Academics/Astr221/LifeCycle/jeans.html
For the astronomic applications of the virial theorem, see:
James Lequeux, The Interstellar Medium
ISBN 3-540-21326-0 Springer Berlin Heidelberg NewYork
http://astronomy.nju.edu.cn/~ygchen/courses/ISM/Lequeux2003.pdf
14.1.1 A Simple Form of the Virial Theorem with No Magnetic Field nor External Pressure, p 330 – p 323
Equations (14.1) to (14.11)
14.1.3 The General Form of the Virial Theorem
(This includes bulk velocity, density, pressure and gravitational potential as pertinent to a planetary atmosphere, -as well as magnetic field which may not be significant for planets.)
14.1.4 The Stability of the Virial Equilibrium
(Note the use of a polytropic equation of state.)
14.1.5 The Density Distribution in a Spherical Cloud at Equilibrium
(Adapt for a gas around a planet with a given radius and mass.)
@curryja: I’d be interested in a discussion of the virial theorem component to this. I’ve encountered another (unpublished) paper that addresses the virial theorem in context of the earth’s atmosphere that i found intriguing. Don’t ask me to defend this (I’m not up on this at all), but would be interested in a discussion on the relevance of the virial theorem.
Would it be interesting enough to start up a separate thread on the virial theorem on your blog? Although I’ve been reluctant to be a guest on any other topics, that’s because the ratio of my time required for a guest post, divided by the expertise of others on that topic, has not been large enough so far.
In the case of the virial theorem, from what I’ve read so far in the literature my impression is that no one alive on the planet really understands it. A guest post in which I pretend to explain it might be the most effective way of pulling the real experts on the virial theorem out of the woodwork, if there are any. Viktor Toth could do this ok, but I imagine I could do it at least as well.
I would be delighted to be let off that hook if Viktor volunteered for that duty (I love nothing more than being let off hooks).
YES!!! Please send me an email, let’s start a thread on the virial theorem. I will send you an email also.
Judy, Nick Stokes, & Vaughan Pratt
Regarding the virial coefficient for the atmosphere (2 vs 3/2 vs 5/2 for Kinetic Energy/Potential Energy), see the following, which states another coefficient of 6/5 for hydrogen as a diatomic gas:
“The virial theorem which applies to a self-gravitating gas sphere in hydrostatic equilibrium, relates the thermal energy of a planet (or star) to its gravitational energy as follows:
alpha * Ei + Eg = 0
with alpha = 2 for a monoatomic ideal gas or a fully non-relativistic degenerate gas, and alpha = 6/5 for an ideal diatomic gas. Contributions arising from interactions between particles yield corrections to the ideal EOS (see Guillot 2005). The case of alpha = 6/5 applies to the molecular hydrogen outer region of a giant planet.”
Jonathan J. Fortney et al., Giant Planet Interior Structure and Thermal Evolution, invited chapter, in press for the Arizona Space Science Series book “Exoplanets”, Ed. S. Seager
arXiv:0911.3154v1
http://arxiv.org/PS_cache/arxiv/pdf/0911/0911.3154v1.pdf
Ref Guillot 2005 Annual Review of Earth and Planetary Sciences, 33, 493
How did they calculate their 6/5 for a diatomic gas? What steps and assumptions did they use?
David,
The first thing to note there is that the factor 2 carries the opposite sign. That is significant, because the gravitational energy referred to is the energy relative to infinity, not ground.
The factor 6/5 arises through the same logic as in Toth’s paper. Monatomic gases have just translational KE with 3 degrees of freedom. Diatomic gases have two extra dof of rotational KE, making 5. The ratio of PE to translational KE is still -2, but with equipartitioning, there is thus 5/3 times as much KE in total, and the ratio is -2 * 3/5= -6/5.
Poor Pratt, seems you do not know much about the virial concept. For the global average TIGR2 atmosphere the P and K totals (sum of about 2000 layers) are P=75.4*10^7 and K=37.6*10^7 J/m2; the ratio is close to two (2.00). You may compute it yourself if you are able to, but you may ask Viktor Toth about the outcome of our extended discussions about the topics (after I his paper).
Welcome Ferenc – I very much look forward to Mr Pratt engaging you directly on the substance of his remarks.
Um, I’ve just re-read this. Vaughan, I apologise if I used the wrong term to address you.
Welcome Ferenc – I very much look forward to Mr Pratt engaging you directly on the substance of his remarks.
Um, I’ve just re-read this. Vaughan, I apologise if I used the wrong term to address you.
Not a problem, at Harvard they would call me “Mr Pratt,” at MIT and Stanford “Vaughan.” So your first address would be fine for Harvard, while on your second you’ve inadvertently used the correct form of address for the only two institutions I’ve taught at for more than a decade each.
However I recently took a CPR course as part of the autonomous vehicle project I’m the Stanford faculty member on (Stanford has liability worries about the car lifts and heavy machinery we use), so feel free to call me “Dr Pratt” in case you need the Heimlich maneuver or cardiopulmonary resuscitation. (Both my parents were medical doctors. If you’re lucky that’s hereditary, if not then the Good Samaritan law kicks in to render me innocent of your premature demise, so either way I’m safe even if you aren’t.)
Ferenc, welcome to the blog. Please could you answer a question for me?
Do you think that the convergence of the results on your stable value for Tau confirms the validity of the empirical radiosonde data? If your theory is correct, would it enable you to correct or assign error bars to the empirical data?
Thanks
Ferenc, thank you very much for stopping by to discuss your work.
My goodness Judith, you are attracting some top people here.
Long may it continue.
seems you do not know much about the virial concept.
Clearly one of us doesn’t, since we get such wildly different results.
For the global average TIGR2 atmosphere the P and K totals (summed over about 2000 layers) are P = 75.4*10^7 and K = 37.6*10^7 J/m2; the ratio is close to two (2.00)
Neglecting mountains (which increases K very slightly by postulating atmosphere in place of the mountains), your value for P is easily computed from the center of mass of a column of atmosphere, which on any planet is at the scale height of that planet’s atmosphere, suitably averaged as a function of temperature. If the centroid is at height h then P for that column is mgh, where m = 10130 kg/m2, g = 9.81 m/s2, and h starts at about 8500 m at sea level and drops to 7000 m or less in the troposphere depending on the temperature. A good average figure for h is around 7600 m, and so P = mgh = 10130*9.81*7600 = 755.3 MJ/m2. This is essentially what you got, so we’re in excellent agreement there.
Now if Toth and I are getting 0.4 to your 2 then your figure for K must be about 20% of what we’d imagine it to be. Now the specific heat of air at constant pressure is 0.718 kJ/K/kg. Our column has mass m = 10130 kg/m2 as noted above so we have .718*10130 = 7.27 MJ/K/m2. In order to reach your figure of 376 MJ/m2 you would need the temperature of the atmosphere (suitably averaged) to be 376/7.27 = 51.7 K (°K, not Kinetic energy of course).
I don’t know how you calculated the KE of the Earth’s atmosphere, but at that temperature every constituent of it would be solid. Check your math. I would be more comfortable (literally!) with a KE of 1.885 GJ/m^2 corresponding to a typical atmospheric temperature of 250 K and then you’d get the 0.4 that Toth and I got.
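For readers who want to check this arithmetic, here is a minimal Python sketch (illustrative only; the round numbers for m, g, h, cv and T are the ones assumed in the comments above, not independently sourced):

```python
# Rough check of the column energies discussed above, per square metre.
m  = 10130   # mass of an atmospheric column, kg/m^2
g  = 9.81    # gravitational acceleration, m/s^2
h  = 7600    # assumed average centroid height of the column, m
cv = 718     # specific heat of air at constant volume, J/(kg K)
T  = 250     # a typical mass-weighted atmospheric temperature, K

P = m * g * h     # potential energy relative to the surface, J/m^2
K = cv * m * T    # kinetic (thermal) energy of the column, J/m^2

print(P / 1e6)    # ~755 MJ/m^2, matching the figure quoted above
print(K / 1e9)    # ~1.8 GJ/m^2, the order of magnitude quoted above
print(P / K)      # ~0.42, close to the 0.4 ratio of Toth and Pratt
```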
The virial theorem’s ratio of 2 is exact for any collision-free gravitationally bound system of point particles. When collisions are frequent, as in the atmosphere where the typical mean free path at sea level is 70 microns, or when the particles are large, as with a satellite orbiting Earth, the ratio changes significantly. With frequent collisions, air molecules quickly lose track of whatever orbit they were briefly on and their dynamics is completely different from that of a solitary air molecule orbiting an otherwise airless planet. And with a big particle like Earth one must define potential energy with respect to the Earth’s surface, otherwise particles in a ground energy state have absurd PE’s, but in that case the KE of a satellite in orbit is a great many times its PE (imagine it in an orbit 1 meter above the surface of an airless spherical planet).
You now have me wondering what you think the virial theorem means.
Vaughan Pratt
Kindly do us all a favor by reading the actual papers detailing the virial theorem by Toth etc. in the references cited.
Please apply the Virial Theorem as derived by Toth.
No, I think you should read Toth’s paper. Vaughan’s arithmetic agrees with it, and not with Ferenc. The point of Toth’s paper is that the ratio PE/KE, where KE includes rotational energy, is 0.4, not 2 as Ferenc claims. If you restricted to translational KE, the ratio would be 2/3 (still not 2).
The paper you quote by Pacheco gives the same ratio as Toth. I’m surprised that you quote these results without noticing that they contradict FM’s claim.
You might also like to note Toth’s tactful aside
“whether or not the postulate was correctly applied in [1] is a question beyond the scope of the present paper”.
Indeed the biggest mystery is what the PE/KE ratio has to do with IR fluxes. This has never been explained.
Indeed the biggest mystery is what the PE/KE ratio has to do with IR fluxes. This has never been explained.
I’d formed the impression that FM was trying to gradually back away from that mystery. His problem is how to do so in a no-fault way. He’s not handling this very well on this blog.
Stop ascribing motive and insinuating unscientific behaviour!
We’ve seen more than enough of it over the last 20 years. Pack it in!
Stop ascribing motive and insinuating unscientific behaviour!
I was ascribing motive to FM? All I said was that he was trying to back away from his claimed applicability of the virial theorem, which even Zagoni has not been able to apply.
We’ve seen more than enough of it over the last 20 years. Pack it in!
What happened 20 years ago? I can only think of the George C. Marshall Institute starting up. Did you have something else in mind?
Nick Stokes & Vaughan Pratt
Thanks Nick for clarifying the issue. Mea culpa. I was reacting to language, not checking the substance.
In my post above giving references in response to Curry’s query on the virial theorem, I noted:
“Need to check if there is a small difference in the virial coefficient versus the gas between Miskolczi, Toth, Pacheco, and Essenhigh.”
Thanks for raising the issue of the PE/KE coefficient: “The point of Toth’s paper is that the ratio PE/KE, where KE includes rotational energy, is 0.4, not 2 as Ferenc claims.”
Re: “biggest mystery is what the PE/KE ratio has to do with IR fluxes. ”
I assume that may affect the atmospheric profile of pressure, temperature, density and composition. See Essenhigh above where he shows differences between relative pressure and relative density with elevation.
A full analysis to <0.01% variation would need to account for variations of heat capacity with temperature, with corresponding variations in composition, pressure, temperature, and gravity with elevation.
Nick Stokes & Vaughan Pratt
See my note above regarding a coefficient of 6/5 stated for diatomic hydrogen. http://judithcurry.com/2010/12/05/confidence-in-radiative-transfer-models/#comment-20326
Now the specific heat of air at constant pressure is 0.718 kJ/K/kg.
Sorry, I meant constant volume (the number I gave is correct). Constant volume measures just the kinetic energy; constant pressure measures kinetic plus potential energy. This is because work is done in the constant-pressure case, which goes into potential energy, whereas no work is done in the constant-volume case, which keeps the added energy in the system like a wound-up spring. In the case of the atmosphere the work done at constant pressure is used to raise the air above, which then becomes potential energy mgh again.
Christopher Game posting about the virial theorem. It seems I am a bit late to post the following, but here it is.
We are here interested in Miskolczi’s relation between the time averages of kinetic energy and potential energy, namely
2 ⟨KE⟩ = ⟨PE⟩ ,
where ⟨ ⟩ denotes a time average.
The principle of equipartition of energy can be stated
The average kinetic energy to be associated with each degree of freedom for a system in thermodynamic equilibrium is kT / 2 per molecule.
The virial theorem of Clausius (1870, “On a mechanical theorem applicable to heat”, translated in Phil. Mag. Series 4, volume 40: 122-127) states on page 124 that
The mean vis viva of the system is equal to its virial.
The virial theorem is about a spatially bounded system, that is to say, a system for which all particles of matter will stay forever within some specified finite region of space.
Clausius allows a diversity of definitions of kinetic energy. For him, it was allowable to define a kinetic energy for any specified set of degrees of freedom of the system. We are here acutely aware that various writers use the permitted diversity of definitions of kinetic energy, and get a diversity of statements as a result.
The virial theorem of Clausius makes no mention of potential energy. Potential energy is about forces. Under certain conditions, the virial of Clausius turns out to be very simply related to a potential energy.
Clausius (1870) makes it clear that the terms of his proof may refer to all or to selected degrees of freedom of the system, as defined by Cartesian coordinates.
Because of its generality, the virial theorem can relate to a theoretical atmosphere of fixed constitution sitting on the surface of a planet, and how this is so is indicated in the original paper of Clausius (1870).
Remembering to be careful about appropriately specifying the degree of freedom of interest, and the potential energy of interest, the reader of this blog will find that Miskolczi’s formula for the atmosphere
2 ⟨KE⟩ = ⟨PE⟩ is correctly derivable from the virial theorem of Clausius. Much of the physical content of the formula can be seen in a simple model, to be found in various books, papers, and on the internet, as follows.
An elastic ball of mass m is dropped from rest at an altitude h, with g being constant over the altitude up to h (near enough). It will bounce on a perfectly elastic floor at altitude 0 at time T.
The ball’s velocity (positive upwards) at time t in [0,T] is – gt.
Its kinetic energy at time t is mg^2t^2 / 2 .
Its average kinetic energy over the time [0,T], the time it takes to fall from altitude h to altitude 0, is
⟨KE⟩ = (1/T) Integral(0,T) (m g^2 t^2 / 2) dt = m g^2 T^2 / 6 .
We recall that T = √(2h / g).
Thus the average kinetic energy over the time [0,T] is
⟨KE⟩ = m g^2 (2h / g) / 6 = mgh / 3 .
Referred to altitude zero for the zero of potential energy,
the ball’s potential energy at time t is mgh − m g^2 t^2 / 2 .
Its average potential energy over the time [0,T] is
⟨PE⟩ = mgh − (1/T) Integral(0,T) (m g^2 t^2 / 2) dt = 2 mgh / 3 .
Thus we have 2 ⟨KE⟩ = ⟨PE⟩ .
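A quick numerical check of these bouncing-ball averages (an editorial sketch, not part of the original comment; the values of m, g and h are arbitrary):

```python
# For a ball dropped from height h, averaged over one fall time T = sqrt(2h/g),
# the time-averaged KE should be mgh/3 and the time-averaged PE 2mgh/3.
import numpy as np

m, g, h = 1.0, 9.81, 10.0
T = np.sqrt(2 * h / g)
t = np.linspace(0.0, T, 100001)

ke = 0.5 * m * (g * t) ** 2    # kinetic energy at time t
pe = m * g * h - ke            # potential energy at time t (zero at the floor)

print(np.trapz(ke, t) / T, m * g * h / 3)      # average KE  vs  mgh/3
print(np.trapz(pe, t) / T, 2 * m * g * h / 3)  # average PE  vs  2mgh/3
```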
Christopher Game about the virial theorem again. Typographical problem. Please excuse.
I didn’t use the angle brackets safely, and the formula was lost.
I meant to write
2 average( kinetic energy) = average(potential energy)
I think it will be obvious how to put in the proper terms.
Christopher Game
Christopher,
Toth, in the paper cited frequently here, deals with exactly this example. But as he says, in a real gas, there are three degrees of translational motion, and the KE is equipartitioned. Your argument still works for the vertical component, but now the total KE is three times larger, and the ratio is 2:3, not 2:1.
He goes on to argue that for diatomic molecules, rotational KE should be counted as well, so the ratio is 2:5.
Christopher Game replying to Nick Stokes’ post of December 8, 2010 at 7:34 am, about the virial theorem.
Dear Nick,
Thank you for your comment.
With all due respect, I am referring to the virial theorem of Clausius.
Clausius is quite clear, and one can check his proof, that for the purposes of his definition of vis viva, one can work with one degree of freedom at a time, in Cartesian coordinates. What we are here calling ‘the potential energy’ is dealt with by Clausius in terms of components of force, and that is to be matched against the degree(s) of freedom chosen for the vis viva. I think the proof used by Clausius is appropriate for the present problem. Clausius does not require the use of the total kinetic energy.
For his version of “the virial theorem”, Viktor Toth cites, not Clausius, but Landau and Lifshitz. They give only a very brief account of the theorem, and their proof is less general than the one that Clausius offered.
As I noted in my previous post, Clausius allows diverse choices for the definition of the vis viva, appropriate to the problem. And various choices of definition lead to various results. I think the choice made by Miskolczi, though different from the one you are considering in your comment, is appropriate for the present problem. Your choice might perhaps be relevant to a different problem.
Here we are interested only in the gravitational degree of freedom, and the appropriate component of vis viva has also just one degree of freedom. As allowed by Clausius, we are not interested in the other degrees of freedom. The vertically falling bouncing ball really does tell the main physics here.
I think you will agree with me about this when you check the method used by Clausius.
Yours sincerely,
Christopher Game
Christopher,
Before starting to discuss which form is appropriate “to this problem,” somebody should give a reason why some form of the virial theorem is appropriate at all. As far as I can see nobody has ever presented any reason for that, including Miskolczi.
To evaluate the radiation fluxes, we need to know the atmospheric profile of temperature, pressure and composition. The virial theorem gives a basis for modeling temperature and pressure vs elevation with gravity. Anyone have a better explanation?
No, but I don’t like that one. You’d have to explain how “The virial theorem gives a basis for modeling temperature and pressure vs elevation with gravity.“. I can’t see it. And as for “the atmospheric profile of temperature, pressure and composition“, that’s what this radiosonde database is supposed to tell you.
But the 2007 FM paper just plucks numbers out of the virial theorem and puts them into IR flux equations. There’s nothing about the results being mediated through atmospheric variables. But no other explanation either.
David is correct about the relevance of the virial theorem. Here’s the explanation he asked for.
In the search for Trenberth’s missing heat, one wants to know how much of the total energy absorbed by the planet goes into heating the ocean, the land, and the atmosphere.
For the atmosphere, if you assume that all the heating energy is converted into kinetic energy of the molecules, which are moving at some 450 m/sec, and calculate this energy from the elevation in temperature of the atmosphere, you will get an answer that is only 5/7 of the correct answer. This is because you neglected the increase in potential energy resulting from raising the center of mass of the atmosphere when it expands due to the warming.
Since this is a significant error, one might wonder why everyone neglects this increase in potential energy. The answer is that they don’t, it is simply hidden in the difference between the specific heat capacity of air at constant volume, vs. that at constant pressure. The first line of this table gives the former as 20.7643 joules per mole per degree K, and the latter as 29.07. Notice that 29.07*5/7 = 20.7643 in the table. (It is most likely that the latter was simply computed in this way from the former; one would hardly expect agreement with theory to six decimal places from observation, particularly since the former is given only to four places, and the composition of air is more variable than that of any particular constituent such as nitrogen or argon.)
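A quick check of the 5/7 relation quoted above (an illustrative sketch; the 29.07 J/mol/K figure is the one cited in the comment, and the ideal-gas constant is the only added value):

```python
# The constant-volume heat capacity should be 5/7 of the constant-pressure one
# for a diatomic ideal gas, and also equal to cp - R.
R      = 8.314    # ideal-gas constant, J/(mol K)
cp_mol = 29.07    # molar heat capacity of air at constant pressure, J/(mol K)

print(cp_mol * 5 / 7)   # ~20.764, the constant-volume value quoted above
print(cp_mol - R)       # ~20.756, the ideal-gas relation cv = cp - R
```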
Heating any region of the atmosphere, whether a cubic meter or a cubic kilometer, is done at constant pressure because the pressure is determined by the mass of air above the region, which is unchanged by heating, whether or not the heating is applied to the whole atmosphere or just the region in question. Hence the applicable heat capacity is the one for constant pressure.
But this choice automatically factors in the elevation in altitude of the air, since a gas heated at constant pressure must increase its volume. The work done in moving the boundary against the equal and opposite pressure outside the heated region is in this case converted to the potential energy of the raised atmosphere.
So a virial theorem is at work here, namely the one in Toth’s paper that gives the ratio of PE to KE as 2/5. But this is ordinarily buried in the distinction between constant pressure and constant volume, and so passes unnoticed. Toth makes this connection at the end of his paper.
But even if one were to neglect the potential energy and only account for the increase in kinetic energy of the atmosphere, the heat capacity of the atmosphere is equivalent to only 3 meters of ocean, whence the error from omitting PE is tiny and cannot possibly make a dent in the amount of missing heat.
I know of no other relevance of Toth’s virial theorem to the climate. In particular I cannot imagine any role for it in Miskolczi’s claimed effect that CO2-induced warming is offset by water-vapor-induced cooling.
Dear Pekka Pirilä,
I am addressing the problem of calculating the ratio of a kinetic to potential energy.
Yours sincerely, Christopher
Hi Christopher,
It has been a while. It seems to me that according to some of these analyses (Toth’s included), a 5 kg parcel moving along a certain trajectory at a given velocity will have a different kinetic energy than a cannon ball of the same mass, trajectory and temperature.
Of course it’s harder to convince oneself that the kinetic energy in lattice vibrations of the cannon ball is in any way relevant to the kinetic energy that we are interested in.
@CG: Of course it’s harder to convince oneself that the kinetic energy in lattice vibrations of the cannon ball is in any way relevant to the kinetic energy that we are interested in.
The joules coming from the Sun wind up in the cannon ball. However the “cannon ball” is really an N-atom molecule. Each atom has 3 DOF (degrees of freedom) whence the molecule in principle has 3N DOF. However quantum mechanics forbids certain DOFs as having too low energy, so for example the diatomic O2 with 6 DOF’s in principle only has 5 DOFs below around 500 K.
The relevance of the non-translational DOFs is that n watts of heat from the Sun distributes itself equally between all non-forbidden DOFs, whence the specific heat capacity of any given gas allows it to absorb more watts than one would expect from the translational DOFs alone.
In particular if it has 2 non-translational or bound DOFs then 5 watts from the Sun will distribute itself 1 watt to each of those 5 DOFs.
If the applicable virial theorem promises p joules of PE to every 1 joule of KE then you need to supply 1+p joules of energy to the system in order to raise its time-averaged kinetic energy by 1 joule. In the case of the atmosphere Toth has shown p = 0.4 (which I noticed independently of Toth but several months later so there is no question as to his priority).
The significance of this for global warming is that if the atmosphere gains K joules of kinetic energy when its temperature is raised 1 °C, 1.4K joules must have been supplied to achieve that effect since potential energy must consume the other 0.4K joules, namely by raising the average altitude of the atmosphere as a consequence of expanding due to the temperature rise.
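As a sketch of that 1.4 factor per square metre of atmosphere (illustrative only; the column mass and constant-volume heat capacity are the round numbers used earlier in the thread):

```python
# Energy needed to warm an atmospheric column by 1 K when 0.4 J of PE
# accompanies every 1 J of added KE.
m  = 10130    # column mass, kg/m^2
cv = 718      # J/(kg K): the part of the heating that shows up as kinetic energy
p  = 0.4      # Toth's PE/KE ratio for the atmosphere

dKE    = cv * m * 1.0      # kinetic energy gained per 1 K of warming, J/m^2
dTotal = (1 + p) * dKE     # total energy that must be supplied, J/m^2

print(dKE / 1e6)           # ~7.3 MJ/m^2
print(dTotal / 1e6)        # ~10.2 MJ/m^2, essentially cp*m with cp ~ 1005 J/(kg K)
```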
Christopher Game replying to Vaughan Pratt’s post of December 9, 2010 at 3:31 am.
Dear Vaughan Pratt,
Thank you for your comment. I was referring specifically to the virial theorem of Clausius (1870). It is a rather general theorem, and can be used for many purposes. It makes no mention of potential energy, and uses a particular definition of vis viva for a specified degree of freedom, not quite the same as the modern definition of total kinetic energy. You are interested in its “significance for global warming”. I was referring to its use for the gravitational degree of freedom as allowed by Clausius.
Yours sincerely,
Christopher Game
I understand that the virial theorem relates one component of kinetic energy to the same component of potential energy. Thus one can e.g. relate the height (or density profile) of the atmosphere to its temperature profile.
I fail, however, to see any such connection between radiation and other variables that would justify the equations that Miskolczi has presented. For me those equations are just some random formulae without any given justification. (Years ago I was doing research in theoretical physics and I have also taught some aspects of thermodynamics. Thus a good justification should be understandable to me.)
Christopher Game replying to the post of Pekka Pirilä of December 9, 2010 at 2:41 pm.
Dear Pekka Pirilä,
Thank you for your comment.
Again I say that it would be a pity if the observation of empirical fact were ignored for lack of an accepted theoretical explanation.
Yours sincerely, Christopher Game
Christopher Game replying to Pekka Pirilä’s post of December 9, 2010 at 8:29am.
Dear Pekka Pirilä,
Thank you for your comment. I am glad it seems that we agree that one may consider one degree of freedom at a time. We are not alone in this. Chandrasekhar (1939) in “An Introduction to the Study of Stellar Structure” on page 51 writes as his equation (92)
2 T + Ω = 0 ,
where T denotes the kinetic energy of the particles and Ω denotes their potential energy. He is referring the potential energy to a zero at infinite diffusion, and is using Newton’s law of gravity, and this accounts for the sign difference of the ratio. He writes: “Equation (92) expresses what is generally called the ‘virial theorem’ “.
This is as far as my post went.
You and Nick Stokes and I think others also raise the further question of the connection between this and Miskolczi’s empirical observation that Su = 2 Eu. This observation was made in figure 20 of Miskolczi and Mlynczak 2004. The fit between
the curve y(tG) = σ tG^4 / 2 = Su(tG) / 2, and the data
points plotted as y(tG) = Eu(tG) is not too bad, as you may see by looking at the figure. The data points are each from a single radiosonde ascent, from 228 sites scattered around the earth. Perhaps a little surprisingly, Miskolczi was not the first to notice a relationship like this. Balfour Stewart’s 1866 steel shell atmosphere gave the same result. It is simple to explain Balfour Stewart’s result, but not so simple to explain Miskolczi’s observation. I think that Miskolczi noticed the likeness of Chandrasekhar’s theoretical result (92) to the phenomenological formula Su = 2 Eu that was empirically indicated by the radiosonde data that he had analyzed.
One may distinguish between enquiry into the factual question of the goodness of fit between the data and Miskolczi’s empirical phenomenological formula, and enquiry into the theoretical question of its physical explanation. It is not clear to me how far people distinguish these two questions. I have not seen an explicit attempt, based on other data, or another analysis of the same dataset, to challenge the empirical fact of goodness of fit. Perhaps you can enlighten me on that point?
The earth has only about half cloud cover, so Balfour Stewart’s theoretical explanation will hardly work for the earth. Perhaps you know of others who have done better? Perhaps your expertise in theoretical physics will enable you to do better? It would be a pity to see the observation of empirical fact ignored for lack of an accepted theoretical explanation.
Yours sincerely, Christopher Game
There exist all kinds of empirical regularities. Some of them are precise and some approximate. For me it is incomprehensible that somebody picks one formula from a completely different physical context and proposes, without good justification, that it provides the reason for the observation. All the present discussion on the virial theorem has been related to kinetic energy and potential energy in a gravitational field. It is for me completely obvious that these theories have absolutely no bearing on the behaviour of electromagnetic radiation or radiative energy transfer. These are obviously very different physical processes that follow their own physical laws. The fact that both have their connections to the thermal energy of the atmosphere does not change this.
Until somebody presents me with a valid reason to reconsider the issue, my judgment is that all these formulae of Miskolczi are complete nonsense and lack all physical basis. The fact that Miskolczi’s theory has been used in deriving results that contradict well known physics does not help.
Andy Lacis, Pekka Pirilä and I all view Miskolczi’s paper along the lines of Pekka’s succinct summary:
all these formulae of Miskolczi are complete nonsense and lack all physical basis. The fact that Miskolczi’s theory has been used in deriving results that contradict well known physics does not help.
But we’re obviously biased by virtue of being fans of the global warming team. Judging by those commenting here on Judith’s somewhat nicely organized website, the other team has at least as many fans.
Now when attending a football game or ice hockey match or cricket tournament, it has never occurred to me to question the enthusiasm of the two teams’ respective supporters in terms of their understanding of the principles of the sport in question. The only thing that matters is how strongly they feel about their adoptive teams.
By the same reasoning I see no need to do so here. We should judge the merits of the respective sides on the basis of their enthusiasm, not on whether they have even half a clue about the science, which is nothing more than a feeble attempt at social climbing by those with Asperger’s syndrome.
Henceforth anyone bringing up scientific rationales for anything should be judged as simply not getting it.
I may do so myself, which will make it clear I don’t get it.
Vaughan Pratt
“Putting morons on pedestals like this only makes you a moron.”
Please desist from ad hominem attacks and raise your performance to professional scientific conduct. You demean both yourself and science by such diatribe. Address the substance, don’t attack the messenger. Otherwise you are not welcome for your demeaning the discussion and wasting our time.
Happy to oblige. (I was trying to adapt the adage “arguing with idiots makes you an idiot” by replacing “arguing with” with “put on pedestal” but misremembered the epithet. Dern crickets.)
In less inflammatory language, my point in case you missed it was that whoever you put on a pedestal says something about you regardless of their ability. If they’re capable then you deserve some credit for recognizing and acknowledging that. If they’re not then you’ve shown yourself to be a poor judge of ability.
I’ll leave it to others to judge whether my point constitutes an ad hominem attack (Baa Humbug seemed to think so). If Miskolczi has indeed come up with an impossibly low kinetic energy for the atmosphere, by a factor of 5 as I claim, my point would then seem to apply, and I would then be disinclined to take seriously anyone who takes Miskolczi seriously. One could predict remarkable properties of any atmosphere with that little energy. If that counts as an ad hominem attack then that’s what I’ve just gone and done.
Vaughan,
As I said to Fred Moolten, I keep an open mind on all the competing theories, because they all have their assumptions, uncertainties, lacunae and insights. Also, because I’m not an expert in the field, I’m cautious about throwing in my lot with anyone’s particular theory, including my own.
My higher academic qualification and training is in assessing these theories by looking at their conceptual basis, methodology, data gathering practice, logical consistency and a number of other factors. That is why I’m interested in delving into the work of Ferenc Miskolczi, along with a specific interest in trying to find out whether the radiosonde data might be better than previously thought. It could be, for example, that something valuable can come from Ferenc’s work, even if he turns out to be wrong in some specific detail.
My higher academic qualification and training is in assessing these theories by looking at their conceptual basis, methodology, data gathering practice, logical consistency and a number of other factors
Oh, well then we’re on level ground here since that’s my area too. Physics is just something I used to do many decades ago before I found these other things more interesting.
It could be, for example, that something valuable can come from Ferenc’s work, even if he turns out to be wrong in some specific detail.
Lots of luck with that. Usually this only happens when the worker can keep the details straight. As Flaubert said long ago, Le bon Dieu est dans le detail.
True, but nonetheless, even if a fatal flaw is found which falsifies a theory, and the jury is still hearing evidence in Miskolczi’s case as far as I can tell, it is always possible that a new technique or method used in some supporting aspect of a paper can contribute something which may be valuable elsewhere.
Which is why I wasn’t impressed by your;
“I gave up once I spotted what I thought was an error”
By the way, my previous vocation was engineering in the fluid dynamics field, so I’m sure we’ll be able to argue well into the future. ;)
Which is why I wasn’t impressed by your;
“I gave up once I spotted what I thought was an error”
I wasn’t either, which is why I returned to Miskolczi’s paper to see whether that was the only problem and whether there might be something more fundamentally wrong with his conclusion, that more CO2 is offset by less water vapor, whence global warming couldn’t happen on account of rising CO2. The problem with this line of reasoning is that if there is also less flow of water vapor into the atmosphere, that reduces the 80 W/m² of evaporative cooling, raising the temperature. Although Miskolczi mentions this he does nothing to show the flow is not reduced. That seems to me a more substantive flaw in the paper than quibbling over potential energy.
De l’oeuvre de Flaubert, I’m afraid I missed an enormous part. Moreover, my skills in the field of radiative physics are so low that I’m ashamed to send a public letter here (not to mention Judith’s demand for technical relevance…). Yet I’m quite sure Dr. Vaughan Pratt made at least one basic mistake here, for we Frenchies say:
“Le diable est dans les détails” (the devil is in the details).
Laissons donc le bon dieu à sa place (so let’s leave the good Lord in his place). ;)
The quote is only attributed to him. (Mises said it for sure.)
What Flaubert is sure to have said (to George Sand) is that
> Est-ce que le bon Dieu l’a jamais dite, son opinion ? (Has the good Lord ever given it, his opinion?)
which means, roughly, what Sam just said.
Interestingly, Flaubert said that to justify his realism, i.e. his urge to describe without judging.
May we all have Flaubert’s wisdom.
For all I know the French may have gotten this from the English, who in a moment of devout atheism may have paraphrased “God is in the details,” which came from von Mises, who may have gotten the idea from Flaubert.
Or it may have been “in the air” at the time, which is not unheard of.
What goes around comes back with someone else’s lipstick on its collar.
Vaughan,
Do you really mean the Austrian economist von Mises or the German born architect Mies van der Rohe mentioned in your link?
Interesting sequence there: willard writes “Mises” and without thinking my subconscious puts “von” in front instead of correcting it to “Mies” and putting “van” after. In my original post I was going to attribute it to Ludwig Mies (Mies van der Rohe) but then it occurred to me to ask the Oracle at Google whether it was original with him, and I learned for the first time about Flaubert’s alleged priority.
Sam’s four-word conclusion, “probably we’ll never know” follows the old style of giving the proof before enunciating that which is to be proved. The new style, theorem before proof, saves the bother of reading the proof when you feel confident you could easily prove it yourself should the need ever arise, and more succinctly to boot.
The question of the number of gods seems uppermost on many people’s minds. From what I’ve read it appears to be generally believed to be a very small perfect square, but more than that seems hard to say. In future I’ll punt on the question and say “Sam is in the details,” of which there can be no doubt.
‘Sam is in the details‘
I love that one! Thanks a lot.
Sounds like welcome. :) Isn’t it a nice compliment to get from a scientist? By the way, I just had a quick look at your CV, really impressed. Particularly the logical stuff. Wow.
Now, if you please, the conclusion was mainly a kind of humour, despite my wish to stay coherent with cool logic here… You’re right, though: apart from that gentle story about an imaginary quotation, the object partly remains to be made explicit. That’s why you can consider those four words a foreword.
Dr. Pratt, you wrote:
“God is in the details,” which came from von Mises, who may have gotten the idea from Flaubert
Seems a remarkable piece of historical research… Rarely have I heard of such rigorous work.
For a start, talking about details, I would say that, in your formula, both commas are in excess.
Secondly, which von Mises are you talking about? Richard Edler von Mises (1883–1953), scientist (fluid mechanics, aerodynamics, statistics)? Ludwig von Mises (1881–1973), economist? Another one? Anyway, it’s a shame I couldn’t find any von Mises who happened to write down “God is in the details,” “the Devil is in the details,” or anything of the kind, nor even one reported to have said so.
Besides, the very source you quoted only tells us about a certain Ludwig Mies van der Rohe (1886–1969). Is it worth noting that the difference also makes a pair of details?…
As for Flaubert’s words, you too have noticed that nobody has ever been able to find the sentence in his writings. The hell… (or that version supposed to get inspiration from Heaven).
Moreover, had Flaubert ever pronounced the mysterious formula, it seems to me highly doubtful that he’d have chosen the word “God” instead of “the Devil.” As for Flaubert’s use of the term “good God” in a similar sentence, that would have been even more amazing, knowing what his views were. Flaubert’s opinions regarding religions, “God” and “the Devil” were expressed at length in his writings, and have been widely commented on since. You’ll quickly find a lot of delectable ones on the web.
In the first place, I thought better of disturbing this scientific debate much longer with that – we all know it could last forever (‘Les voies du seigneur sont impénétrables’ – the ways of the Lord are inscrutable… aren’t they?).
But next I thought it was probably worth drawing at least a coarse snapshot here.
I’d say the very idea of a “good God” was simply absurd to Flaubert. And so was that of a God being up to no good.
Any possible God would have no imperfection Himself, the simple fact of being moved by desire being one, of course.
Nor is God playing with men. See what Flaubert wrote when he was 17 (my own translations, very sorry): “One often spoke about Providence and celestial kindness; I hardly see any reason to believe in it. A God who would have fun tempting men to see how far they can suffer, wouldn’t He be as cruelly stupid as a child who, knowing that the cockchafer will die some day, tears off its wings first, then the legs, then the head?”
So you can be sure Flaubert’s thought is merely ironical, even at the first degree, whenever he used the term “good God” (as in the sentence Willard quoted, which I indeed think is the most relevant, “tout bien pesé” – all things considered). In other words: in Flaubert’s mind, “good God” precisely indicates… a non-existent god – (“This word which one calls God, what a highly inane and cruelly buffoonish thing!” (1838)).
It could be one of those many idols the detested religions intend to serve with one of those hateful dogmas: “I’ll soon have finished with my readings on magnetism, philosophy and religion. What a heap of silly things! […] Such pride in any dogma!” (1879); “Priests: castration recommended. Sleep with their housemaids and have children with them, whom they call their nephews. Still, some of them are good, all the same.” (Dictionary of Accepted Ideas). See also the many scenes of horror prevailing in Salammbô (crucifixion of men and animals, terrible diseases, carnage, and in particular the atrocious one Flaubert pleasantly called “the grill of the kids,” where the Carthaginians throw their small children into the burning belly of Moloch)…
As for what possible thing deserving the name of God in Flaubert’s mind, it would precisely be invisible in the details… to men.
The following is a larger quotation (for precision’s sake) than the one usually made of a famous passage (correspondence with George Sand): “As for my ‘lacks of conviction’, alas! the convictions choke me. I burst with anger and pent-up indignation. But in the ideal view I have of Art, I believe that the artist should show nothing of his own person, and should no more appear in his work than God does in nature.”
Anyway, wouldn’t it seem amazing for God to be said to be in the details, if not in a context that puts God in the whole in the first place? I’d add: one need not be Flaubert to avoid such a strange position.
That leaves the Devil (who will of course finish the story with a great laugh…). And of him, contrary to God, I’m quite sure Flaubert would have expected to be in the details. Yet we’re still waiting for the evidence… so here we could finish by saying: probably we’ll never know. ;)
Can I interest anyone in a knock-down argument that global warming is happening?
The reason we can’t see it happening is that there are many contributors to global temperature besides CO2. Obviously there are the seasons on their highly predictable 12-month basis, but there are also the much less predictable El Nino events and episodes sporadically spaced anywhere from 2 to 7 years apart. Then there are the solar cycles on more like a 10-year period, also quite variable though not as much as El Nino.
There are furthermore the completely unpredictable major volcanoes of the past couple of centuries, a couple of dozen at least, each of which tends to depress the temperature by up to 0.4 °C for 2 to 10 years depending on its VEI number.
Last but not least, there is the Atlantic Multidecadal Oscillation or AMO. This appears to be running on a relatively regular period of around 65 years, and can be seen centuries back by looking for example at tree rings.
Now anthropogenic CO2 supposedly began in earnest back in the 18th century. Today we’re putting around 30 Gt (gigatonnes) of CO2 into the atmosphere each year, around 40% of which remains there, with nature sucking back the remaining 60%, a proportion that has not changed in over a century. This amount has been increasing in proportion to the population, which as Malthus pointed out a while back is growing exponentially.
But so is our technology. Hence even if it takes 60-80 years to double our population, it takes something like half that to double our CO2 output, or around 30-40 years.
Ok, so let’s look at all these time constants. The 65 years for the AM oscillation dwarfs everything except the CO2 growth, which has been going on for centuries.
Let’s now digress into mathematics land for a bit. If you smooth a sine wave of period p with a moving average of exactly width p, you flatten it absolutely perfectly (try it). If it’s not exact then traces remain.
Furthermore, all frequency components with period less than about p/4 also vanish almost entirely, no matter what their phase.
So: here’s how to look at the climate. Smooth it with a moving average as close to the period of the AMO as you can get. This will kill the AMO as argued above. And it will also kill all those other contributors to variations in global temperature, none of which have a time constant more than a quarter of the AMO. (Solar cycles are about 1/6, ENSO events and episodes even less, and of course the seasonal variations are 1/65 of that.)
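To see the smoothing claim above in action, here is a minimal Python sketch (illustrative only; the 780-month window and the synthetic sine waves are stand-ins, not the WoodForTrees data):

```python
# A moving average whose window equals the period of a sine wave flattens it
# almost completely, and much shorter periods are also strongly attenuated.
import numpy as np

n, p = 12000, 780                          # monthly samples; a ~65-year period
t = np.arange(n)
amo_like   = np.sin(2 * np.pi * t / p)     # stand-in for the AMO
solar_like = np.sin(2 * np.pi * t / 128)   # ~11-year cycle, period << p

window = np.ones(p) / p                    # p-month moving average
print(np.abs(np.convolve(amo_like,   window, mode='valid')).max())  # ~0
print(np.abs(np.convolve(solar_like, window, mode='valid')).max())  # ~0.015
```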
This would be heavy lifting were it not for UK programmer Paul Clark’s wonderful website Wood for Trees. When you click on this link, the 785-month period of the AMO will have been filled in for you, and you can see for yourself what happens to the global climate when smoothed in a way calculated to kill not only the AMO but everything else below it, by the above reasoning. (The phrase “killAMO” is all you need to remember for this URL, which can be reached as http://tinyurl.com/killAMO .)
Now I claim that this curve is very closely following Arrhenius’s logarithmic dependency of Earth’s surface temperature on CO2 level, under the assumption that nature is contributing a fixed 280 ppmv and we’re adding an amount that was 1 ppmv back in 1790 and has been doubling every 32.5 years. (These numbers are not mine but those of Hofmann et al in a recent paper but if this is behind a paywall for you, you can more easily access Hofmann’s earlier poster session presentation.)
All that remains is to specify the climate sensitivity, and I’m willing to go out on a limb and say that for the instantaneous observed flavor of climate sensitivity (out of the very wide range of possibilities here), it is 1.95 °C per doubling of CO2. (The IPCC has other definitions involving waiting 20 years in the case of transient climate response, or infinity years in the case of equilibrium climate sensitivity, etc. Here we’re not even waiting one nanosecond. The IPCC definitions get you closer to the magic number of 3 degrees per doubling of CO2 depending on which definition you go for.)
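A minimal coding of the fit described above (a sketch, not the original analysis; the 280 ppmv baseline, 1790 start, 32.5-year doubling time and 1.95 °C/doubling sensitivity are simply the numbers quoted in this comment):

```python
# Arrhenius log law composed with a Hofmann-style CO2 growth curve.
import numpy as np

def co2_ppmv(year):
    """Natural 280 ppmv plus an anthropogenic term doubling every 32.5 years."""
    return 280.0 + 2.0 ** ((year - 1790.0) / 32.5)

def temp_anomaly(year, sensitivity=1.95):
    """Arrhenius log law: degrees C per doubling of CO2, relative to 280 ppmv."""
    return sensitivity * np.log2(co2_ppmv(year) / 280.0)

for y in (1900, 1950, 2000, 2010):
    print(y, round(co2_ppmv(y), 1), round(temp_anomaly(y), 2))
```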
Unfortunately Paul Clark’s website is not flexible enough to allow you to enter a formula. This is unfortunate because my own website shows that theory and practice are incredibly close. This is a far more accurate fit than one is accustomed to seeing in climate science, but this is how science makes progress.
The key to this knockdown argument is the regularity of the 65-year-period AMO cycle and the much shorter time constants of all other relevant factors. Any doubt about this can be dispelled by using some other number than 785 for the WoodForTrees smoothing at http://tinyurl.com/killamo.
From IPCC AR4:
“For example, in a model comparison study of six climate models of intermediate complexity, Brovkin et al. (2006) concluded that land-use changes contributed to a decrease in global mean annual temperature in the range of 0.13–0.25°C, mainly during the 19th century and the first half of the 20th century, which is in line with conclusions from other studies, such as Matthews et al. (2003). ”
Deforestation in tropical zones, the most common form of deforestation recently, does not have a cooling effect but rather a warming effect. https://e-reports-ext.llnl.gov/pdf/324200.pdf
There is also urbanization to take into account.
Altogether, it is quite possible that by correcting early land use changes and later land use changes you can change the difference in temperature from the 19th century to the present by up to 0.3 °C. Chronology matters.
If the proper adjustments have been made regarding the known land use changes, this argument would be invalid. I am not aware of a study indicating, from the proportion of upward versus downward adjustments, that it has been properly accounted for.
I understand this is off topic and so leave it to the moderator if she would like to eliminate the comment or not.
I should clarify that this is only true regarding hypotheses involving a very long time period to reach equilibrium. The equilibrium temperature would be above the current temperature so you affect the rates of warming much more than the actual temperature achieved.
Vaughan, I assume that ‘lb’ is a typo for ‘ln’?
At first glance the fit looks impressive but you have to be careful about fitting.
The smoothed Hadcrut curve is only changing slowly, so in terms of its Taylor expansion it can be described by a quadratic (it is easy to check that it can be fitted just as well by a quadratic as by your function), so it has 3 ‘degrees of freedom’. One of these you have chosen with your choice of 1.95, which as you admit comes from a wide range of sensitivity values. Also you have fitted the additive constant in your graph to get the curves to match up (fair enough, it is a plot of anomalies). So of the three parameters in the smoothed data, you have chosen two of them to fit, which makes the fit not quite so impressive.
I assume that ‘lb’ is a typo for ‘ln’?
Two lines before the formula I wrote “using binary rather than natural logs, lb rather than ln.” This has been a standard abbreviation for log base 2 for, I think, at least a couple of decades. If I used ln I would then have to multiply the result by ln(2) to convert to units of degrees per doubling instead of degrees per multiplication by e, which is what ln gives. The latter is of course more natural mathematically but it’s not what people use when talking about climate sensitivity.
The smoothed Hadcrut curve is only changing slowly so in terms of its Taylor expansion it can be described by a quadratic term (it is easy to check that it can be fitted just as well by a quadratic as by your function) so it has 3 ‘degrees of freedom’
Ah, thanks for that, I realize now that I should have drawn more attention to what happens with even more aggressive smoothing. If you replace 65-year smoothing with 80-year smoothing you get a curve that would require a fourth-degree polynomial to get as good a fit as I got with 65 years.
So of the three parameters in the smoothed data, you have chosen two of them to fit, which makes the fit not quite so impressive.
Not true. First, I had no control over the choice of curve, which composes the Arrhenius logarithmic dependency of temperature on CO2 with the Hofmann dependency of CO2 on year, call this the Arrhenius-Hofmann law. (My syntax for Hofmann’s function slightly simplifies how Hofmann wrote it, but semantically, i.e. numerically, it is the same function.)
If I’d had the freedom to pick a polynomial or a Bessel function or something then your point about having 3 degrees of freedom would be a good one. However both these functions are in the literature for exactly this purpose, namely modeling CO2 and temperature. I did not choose them because they gave a good fit, I chose them because theory says that’s what these dependencies should be.
Since we agree about anomalies the one remaining parameter I had was the multiplicative constant corresponding to “instantaneous observed climate sensitivity” which can be expected to be on the low side compared to either equilibrium climate sensitivity or transient climate response as defined by the IPCC.
Now I had previously determined that 1.8 gave the best fit of the Arrhenius-Hofmann law to the unsmoothed HADCRUT data, with of course a horrible fit because the latter fluctuates so crazily, but it is the best fit and so I’ve been going with it.
65-year smoothing completely obliterates all the other contributors to climate change (though 80-year smoothing puts back a chunk of the AMO as I said earlier, though nothing else), but it also transfers some of the high second-order-derivative on the right of the temperature curve over to the left, which the slight increase from 1.8 to 1.95 was to compensate for.
So I really didn’t have any free parameters, other than the exact amount needed to compensate for the smoothing that artificially makes the left of the curve seem to increase faster than it actually does.
If I’d had not only two free parameters but also the freedom to pick any type of curve I wanted such as a polynomial, then as you say it wouldn’t be so impressive. However that would miss the further point that with 80-year smoothing you don’t get anywhere near as good a match to the Arrhenius-Hofmann law. That there exists any size window yielding a log-of-raised-exponential curve can be seen to be something of a surprise when you consider that neither 80-year nor 50-year smoothing do so.
I stupidly wrote: major volcanoes of the past couple of centuries, a couple of dozen at least, each of which tends to depress the temperature by up to 0.4 °C
Another darned cricket behind the chair keeping me awake all night, causing me to slip a decimal point. Should have been 0.04 °C. (Krakatoa and Mt Pelee seemed to be closer to 0.06 but plenty of volcanoes can easily cool the climate by one or two hundredths of a degree, easily observable in the HADCRUT temperature record for most significant volcanoes after subtracting the AMO, global warming, solar cycles, and everything shorter.)
Hi Professor Pratt, I’m putting on my skeptic hat for this, mainly due to being unconvinced that +2C is necessarily a *bad* thing. Please be kind.
Your graph would seem to show that GHE works as theorized. What it doesn’t show is the human footprint.
If you were to correlate your graph with human population and/or acreage under plow or some other *reliable* historical data then it could or might or maybe show anthropogenic cause. The idea that there’s a smooth upswing when human technology runs in fits and starts is interesting, especially when the count of anthros at the left side of the graph is N and exponentially higher at the other. How exactly DOES one impute anthropological cause again?
Moreover, your graph doesn’t say much about technology, which is always the bogeyman. Data on coal fires? Trains? Anything? One could just as easily claim human population exhalation and every other guy has a fire and the numbers would look the same. To prove that this is a clean anthropological signal, wouldn’t we need to see the corresponding tech and outputs in GTonne, etc. ??
Would you mind please explaining how the human footprint part works?
Thanks
Would you mind please explaining how the human footprint part works?
That’s in Hofmann’s papers (the singly-authored poster session version and the journal version with two coauthors). Hofmann explains that his formula for CO2, which replaces the quadratics and cubics that NOAA had previously used, was based on considerations of exponentially exploding population deploying exponentially improving technology. His doubling period of 32.5 years is a reasonable match to a doubling period of 65 years for each of population growth and technology growth.
We are currently adding 30 gigatonnes of CO2 to the atmosphere each year, of which nature is removing some 17-18 gigatonnes, kind of a leaky-bucket effect. The rest stays up there, which we know because we have monitoring stations like the one at Mauna Loa that keep tabs on the CO2 level. All these numbers are known fairly accurately.
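A back-of-envelope check of these figures (an editorial sketch; the ~7.8 Gt CO2 per ppmv conversion is an assumed standard factor, not from the comment):

```python
# Airborne fraction and implied annual rise in atmospheric CO2.
emitted    = 30.0     # Gt CO2 emitted per year, as quoted above
removed    = 17.5     # Gt CO2 taken back up by nature per year (mid-range of 17-18)
gt_per_ppm = 7.8      # assumed conversion: Gt CO2 per ppmv of atmospheric CO2

airborne = emitted - removed
print(airborne / emitted)       # ~0.42, the "roughly 40% stays up there"
print(airborne / gt_per_ppm)    # ~1.6 ppmv per year added to the atmosphere
```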
The logarithmic dependence of temperature on CO2 has been known since Arrhenius first derived it in 1896.
One could just as easily claim human population exhalation
Good point. Around 1700 the human population had reached 1/10 of what it is today, a level at which human exhalation was breaking even with volcanic CO2. Human exhalation is now an order of magnitude more than volcanic CO2. However that’s less than ten percent of all human CO2 production from power stations, vehicles, etc.
mainly due to being unconvinced that +2C is necessarily a *bad* thing.
I’m not objecting to the rise, merely interested in improving our ability to predict it. I think +2C would be a neat experiment, the last time temperatures were that high was many millions of years ago. If it does really bad things to the environment we (that is, our great-grandchildren) might need to get really creative really fast, but meanwhile let’s do the experiment. Also drop ocean pH to remove Antarctic krill and copepods, who ordered them?
Capitalism and communism were neat experiments, today we know capitalism works better than communism. This was somewhat less obvious in the 19th century. At least communism Russian-style, it doesn’t look like China-style communism has been disproven yet, but then is it really communism?
Posted by TRC Curtin on another thread:
“What is missing from all statements on growth rates of CO2 emissions (by the IEA et al. and the IPCC) is any statistical analysis of the uptakes of those large gross increases in CO2 emissions (other than the misleading and error strewn section in AR4 WG1 Ch.7:516). These provide a large positive social externality by fuelling the concomitant increases in world food production over the period since 1960 when the growth in CO2 emissions began to rise.
Thus although gross emissions have grown by over 3% pa since 2000, the growth of the atmospheric concentration of CO2 has been only 0.29% p.a. (1959 to 2009) and still only 0.29% p.a. between October 1958 and October 2010 (these growth rates are absent from WG1 Ch.7). The growth rates of [CO2] from October 1990 to now and from October 2000 to now are 0.2961 and 0.2966 respectively, not a very terrifying increase when one has to go to the 4th decimal point to find it, but to acknowledge this would not have got one an air fare to Cancun.
These are hardly runaway growth rates, and fall well (by a factor of 3) below the IPCC projections of 1% p.a. for this century. The fortunate truth is that higher growth of emissions is matched almost exactly by higher growth of uptakes of [CO2] emissions, because of the benevolent effect of the rising partial pressure of [CO2] on biospheric uptakes thereof (never mentioned by AR4).
You will of course NEVER see these LN growth rates of [CO2] in any IPCC report or in any other work by climate scientists.”
These are hardly runaway growth rates, and fall well (by a factor of 3) below the IPCC projections of 1% p.a. for this century. The fortunate truth is that higher growth of emissions is matched almost exactly by higher growth of uptakes of [CO2] emissions, because of the benevolent effect of the rising partial pressure of [CO2] on biospheric uptakes thereof (never mentioned by AR4).
Actually nature was taking back 60% a century ago and is still doing so today. The remaining 40% continues to accumulate in the atmosphere as it has been doing all along, easily confirmed by the Keeling curve data, which shows an exponentially growing rise superimposed on the natural level of 280 ppmv. 60% is not “almost exactly” in my book, and the estimate of 40% remaining in the atmosphere is well borne out by the data.
Thanks, Professor. I have a few comments to make, after which I may have a followup question.
Hofmann explains that his formula for CO2, which replaces the quadratics and cubics that NOAA had previously used, was based on considerations of exponentially exploding population deploying exponentially improving technology.
This is the interesting part, which is essentially an equation based on correlation and an assumption, not necessarily something determined via historical data, i.e. hard data evidence, etc. It would seem to also have some presumptions about how long CO2 hangs in the atmosphere.
During the early 1900s there really wasn’t much in the way of carbon-emitting tech in the world. The Occident was using trains and burning coal, sure. In the Orient though… not so much. Man didn’t really have widespread worldwide adoption of major carbon tech (read: enough cars to even cause a blip) until nearly 1930, and this was still mostly a western-world phenomenon. (And I’d argue that you’d at least need 1950s levels of cars to be able to have any effect at all, just due to the numbers.)
And yet temps rose, as did presumably CO2. Now what’s interesting here is that this rise was already occurring BEFORE the adoption of automobiles as we see today, and so on. If you look at this particular hockey stick —
http://en.wikipedia.org/wiki/File:Extrapolated_world_population_history.png
It shows what I’m referring to. There’s a correlation of human population to temp rise to CO2. And yet… the technology we have to emit carbon didn’t really start ratcheting exponentially until after WWII. A tech explosion, if you will. There’s no “knee” that one would expect to see on the graphical representation of CO2 as the result of all of this rapid carbon release. It just keeps rising, nice and slow.
What I get from looking at the big picture here is that the correlation of CO2 and temp pre-1930 doesn’t imply anthropogenic cause based solely on CO2 release given the lack of a knee when human CO2 emissions from (energy and vehicle) technology adoption really kicked in. It could be argued that pre-1930 anthropogenic influence includes land use and deforestation, i.e. changing the nature of CO2 sinks, but it doesn’t seem that correlating hypothetical formula derived CO2 emissions and temp records is worth much prior to the modern era. In fact the data by itself is ambiguous; you could just as easily use it to say that pre-modern era CO2 rise was reaction to temp rise.
I’m very much interested in the notion of the A in AGW being solely tied to CO2 emission when clearly CO2 and temp was rising before CO2 emission was at a level that could be detectable. I find this doubly interesting given the data showing that MWP temps were close to that of today (or above, depending on whose work you trust.) Clearly there’s a lot that isn’t understood.
Yes, professor, I know this is very un-PC for one who gets GHE and thinks that we’re adding to GHG’s. I get the physics part. And I’m aware that we’re running an open ended experiment re emission. Yes I agree that we need to e.g. adopt nuclear energy etc and decommission coal plants.
So, my followup query is as follows: it seems that Hofmann’s formulae are incorrect and imply causation unsupported by fact — there’s no slow adoption of technology as implied. There ought to be a modern era CO2 knee *if* the A in AGW is based on the modern era explosion in emissions, and there’s no knee. Why?
Quick followup note:
Just to be ridiculously clear, the assertion that the climate was sensitive enough in 1890 to show temp rise following CO2 also says there was no sink of the extra CO2 (otherwise, why would it rise at all?)
At the time of massive CO2 belching starting in the 1940s-’50s, this same “sensitive” climate unable to sink CO2 in 1900 would still be unable to sink it. CO2 should have gone right through the roof, as would the temp.
Didn’t happen. Tres confusing. The climate is sensitive or it is not. But there was no linear rise in human tech and emission of CO2.
Just to be ridiculously clear, the assertion that the climate was sensitive enough in 1890 to show temp rise following CO2 also says there was no sink of the extra CO2 (otherwise, why would it rise at all?)
(Can’t remember if I replied to this.) Who’s asserting that? Although CO2 was raising the temperature in 1890, by an amount computable from Hofmann’s formula, it was not doing so discernibly because the swings due to natural causes such as the AMO, solar cycles, and volcanoes were much larger. Today CO2 has outpaced all these natural causes.
During the early 1900s there really wasn’t much in the way of carbon emitting tech in the world. The Occident was using trains and burning coal, sure. In the Orient though… not so much. Man didn’t really have widespread worldwide adoption of major carbon tech (read: enough cars to even cause a blip) though until nearly 1930, and this was still mostly a western world phenomenon. (And I’d argue that you’d at least need 1950′s levels of cars to be able to have any effect at all, only due to the numbers.)
I fully agree. How much less CO2 did you have in mind for 1900? 10% of what it is today? That’s what Hofmann’s formula gives.
If you think it was even less than 10% then you may be underestimating the impact of coal-burning steam locomotives, which were popular world-wide: the first locomotive in India was in 1853, in Brazil 1881 or earlier, Australia 1854, South Africa 1858, etc. etc.. Everyone used them: without automobiles, rail was king. The transcontinental railroad was the Internet of 1869, connecting the two coasts of the US and paying for the school I teach at. My wife’s book club is reading Michael Crichton’s The Great Train Robbery about the theft in 1855 of £12,000 worth of gold bars from a train operated by an English train company founded in 1836.
You also didn’t mention electric power stations, which were largely coal powered in those days and were introduced by Edison and Johnson in 1882. By the early 20th century electricity had caught on big time around the world. Today electricity accounts for some 5 times the CO2 contributed by automobiles. In 1900 it was obviously much more than automobiles.
And you didn’t mention the transition from sail to steam, which happened early in the 19th century. Ships today produce twice the CO2 of airplanes, and obviously far more than that ratio in 1920 and infinitely more in 1902.
There is also human-exhaled CO2, which worldwide today is only about 60% of automobile-exhaled CO2 but in 1900 was obviously a far bigger percentage.
And there is one cow for every five people, and cows belch a lot of methane, which has many times the global warming potential of CO2.
Rice paddies produce even more methane, and predate even steam ships.
So I don’t think 10% of today’s greenhouse-induced warming is an unreasonable figure for what humans were responsible for in 1900.
And yet temps rose, as did presumably CO2. Now what’s interesting here that this rise was already occuring BEFORE the adoption of automobiles as we see today, and so on. If you look at this particular hockey stick –
What you’re seeing there is a correlate of the Atlantic Multidecadal Oscillation, which dwarfed global warming prior to mid-20th-century. In 1860 it was raising global temperature 8 times as fast as CO2. In 1892 CO2 had grown very slightly but the AMO meanwhile was on a downward swing that the CO2 reduced by only about 20%. In 1925 CO2 warming was at twice the rate it had been in 1860, which therefore added 25% to the AMO rise.
Not until 1995 did CO2 raise the temperature at the same rate as the AMO. From now on CO2 is going to dominate the AMO swings assuming business as usual.
It is interesting to consider what would happen if we continued to add 30 gigatonnes of CO2 a year to the atmosphere. The CO2-induced rise in temperature would slow down and eventually become stationary, with the CO2 stopping at perhaps 500 ppmv. That’s because the system would then be in equilibrium. Well before then the AMO would have regained its title as world champion multidecadal temperature changer.
Fortunately for those of us interested in seeing the outcome of this very interesting experiment, this isn’t going to happen. With business as usual CO2 will continue to be added to the atmosphere at the same exponentially increasing rate as over the last two centuries, pushing it to 60 gigatonnes of CO2 a year by 2045. (In 1975 we were only adding around 15 gigatonnes of CO2 a year.)
Would add that by 1900, two of the largest fossil-fuel-based fortunes, Carnegie’s and Rockefeller’s, were solidly in place. Those homes that were not being illuminated with gas lighting (gas made from coal) were being illuminated with Rockefeller’s kerosene. He had a booming global lighting business by the 1870s. As for Carnegie’s steel, how did he make it? So there was a pretty significant bloom in fossil-fuel CO2 before 1900.
According to Encyclopedia Britannica (link in Wikipedia) (based on 1911 data) the world coal production was very close to 1000 million tons in 1905. In 2009 the world coal production was 6940 million tons, oil production 3820 million tons, and natural gas production 2700 mtoe (million tons oil equivalent).
In 1905 oil and gas were negligible compared to coal. Thus the annual CO2 emissions from fossil fuels were in 1905 8-9% of their present level.
In 1905 oil and gas were negligible compared to coal. Thus the annual CO2 emissions from fossil fuels were in 1905 8-9% of their present level.
Add some for slash and burn, exhalation from humans and their livestock, methane from rice paddies which degrades to CO2, and that should get us reasonably close to Hofmann’s formula, which gives total anthropogenic CO2 as being 10.7% of its level today.
” At least communism Russian-style, it doesn’t look like China-style communism has been disproven yet, but then is it really communism”
Who controls and uses the guns?
Vaughan, firstly I am on the side of AGW and certainly have also long supported the idea of the log CO2 temperature rise. My only thought about this knock-down AMO argument is that you give the AMO too much credit. My own sense of things is that 1910-1940 rises faster than this log because of a solar increase at that time, and the 1950-1975 flattening is due to global dimming (aka aerosol haze expansion near industrial/urbanized areas due to increasing oil/fossil burning). I don’t think the AMO amplitude is much compared to these other factors that give the illusion of a 60-year cycle in the last century. The last 30 years is behaving free of these influences and is parallel to a growth given by 2.5 degrees per doubling.
Very much appreciate your feedback, Jim, as it will steer me towards what needs more emphasis or more clarification if and when I come to write up some of these ideas.
My own sense of things is that 1910-1940 rises faster than this log because of a solar increase at that time
Can you estimate the contribution of this increase to global warming over that period? That would be interesting to see. If it’s large enough I need to take it into account. It does seem to be sufficiently sustained that 12-year smoothing isn’t enough to erase it.
and the 1950-1975 flattening is due to global dimming (aka aerosol haze expansion near industrial/urbanized areas due to increasing oil/fossil burning).
This question of whether it’s aerosols or the AMO downswing would make a fascinating panel session. I would enthusiastically promote the latter. (But I’m always enthusiastic so one would have to apply the applicable discount.) I just recently bought J. Robert Mondt’s “The History and Technology of Emission Control Since the 1960s” to get better calibrated on this.
I estimate that the RAF’s putting Dresden, Pforzheim, and Hamburg into the tropopause, plus those interminable air sorties by all sides during WW2, had the cooling power of three Krakatoas. One Krakatoa per German city perhaps.
I don’t think the AMO amplitude is much compared to these other factors that give the illusion of a 60-year cycle in the last century.
I don’t put much trust in estimates based on 30-year windows of the temperature record. I much prefer every parameter to be estimated from the full 160 year HADCRUT global history. The NH record goes back a couple of centuries further and it would be interesting to coordinate that with the 160 year global record for an even more robust estimate.
Currently I estimate the AMO amplitude in 1925 at around 0.125 °C, meaning a range of 0.25 °C, and rolling off gradually on either side, being around .08 in 1870 and 1980. The r^2 for this fit is a compelling 0.02, rising to 0.024 if you don’t let it roll off, suggesting the roll-off is significant. Not only is the roll-off a better fit but it’s consistent with the disappearance of the AMO signal in the 17th century inferred from tree ring data.
The last 30 years is behaving free of these influences
That’s the CO2 speaking. ;)
You have to remove the CO2 to see it because the CO2 is so steep by then.
and is parallel to a growth given by 2.5 degrees per doubling.
Using Hofmann’s doubling time of 32.5 years for anthropogenic CO2, above a natural base of 280, the current 390 ppmv level should be over 1000 ppmv by 2100, i.e. about 1.4 doublings of the current level. 2.5 × 1.4 gives a rise of about 3.5 degrees over this century. Is that what you’re expecting, or do you expect less CO2 than that in 2100?
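For anyone who wants to rerun that arithmetic, here is a minimal Python sketch. It assumes the Hofmann parameters quoted later in this thread (a 280 ppmv natural base, anthropogenic CO2 reaching 1 ppmv around 1790, and a 32.5-year doubling time) plus a hypothetical sensitivity of 2.5 °C per doubling of total CO2; the function and variable names are mine, for illustration only.

```python
from math import log2

def hofmann_co2(year, base=280.0, onset=1790.0, doubling=32.5):
    """Hofmann-style raised exponential: a natural base plus an anthropogenic
    component that reached ~1 ppmv around `onset` and doubles every `doubling` years."""
    return base + 2.0 ** ((year - onset) / doubling)

co2_now = hofmann_co2(2010)            # ~389 ppmv
co2_2100 = hofmann_co2(2100)           # ~1020 ppmv, i.e. "over 1000"
doublings = log2(co2_2100 / co2_now)   # ~1.4 doublings of the current level
print(round(co2_now), round(co2_2100), round(doublings, 2), round(2.5 * doublings, 1))
```

The last number printed is the implied rise over this century, roughly 3.5 °C under those assumptions.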
I’m projecting +2C in 2100 but considerably more if this warming releases a significant amount of methane before then. I’m neither a pessimist nor an optimist when it comes to global warming, I’m just an uncertaintist.
I’m not wedded to any of this, if my perspectives shift so may my estimates of these parameters.
I’m not enthusiastic about introducing more parameters, but methane considerations will certainly force at least one more. Anyone here with a model that has a clue about likely methane emissions in 2030? (Just asking, I’d love it if there were.)
If you gents are interested in the solar contribution, you might consider the cumulative nature of the retention of solar energy in and dissipation of energy from the oceans (which controls atmospheric temperature in the main), and have a look at this post on my blog.
http://tallbloke.wordpress.com/2010/07/21/nailing-the-solar-activity-global-temperature-divergence-lie/
have a look at this post on my blog.
On your indicated blog, tallbloke, you write “what a load of rubbish.”
Had I written that, global warming deniers would be all over it in a flash and Willard would be agonizing about how I’ll never live that down.
Something should be done about the hypocrisy in this debate.
At the very least you could have written “what a load of rubbish (pardon my French)” as an exculpatory measure.
> Something should be done about the hypocrisy in this debate.
Speaking of which:
> I wonder why tallbloke is commenting on this blog, after accusing me of dishonesty on his own.
Source: http://scienceofdoom.com/2010/12/04/do-trenberth-and-kiehl-understand-the-first-law-of-thermodynamics-part-three-and-a-half-%E2%80%93-the-creation-of-energy/#comment-8015
Vaughan, I agree with your CO2 formula. Mine goes 280+90*exp[(year-2000)/48] which also gets to 1000 ppm at 2100. Using 2.5 C per doubling this gives 2100 warmer than 2000 by 3.5 C. Like I said, the gradient fits the last 30 years very well.
Regarding solar effects in 1910-1940, I estimated this is +0.2 C, and for aerosols 1950-75 -0.4 C. This gives 20th century attribution as 0.7 C total = 0.9 C due to CO2 + 0.2 C due to solar – 0.4 due to aerosols.
The solar estimate comes from the TSI estimate on climate4you, but it has to assume a reasonable positive feedback to get from 0.2 W/m2 to 0.2 C, which is somewhat similar to what is required to explain the Little Ice Age with the same TSI estimate.
Vaughan, I agree with your CO2 formula. Mine goes 280+90*exp[(year-2000)/48]
Ah, excellent, thanks for that! (Actually it’s not mine, it’s David Hofmann’s at NOAA ESRL Boulder, who came up with it I think a couple of years ago.) Your formula is extremely close to his; here’s yours minus his at the quarter-century marks:
1950 1.235
1975 1.446
2000 1.355
2025 .4444
2050 -2.38
2075 -9.35
2100 -24.8 (Hofmann is 1027.65 then)
Those differences are insignificant for predictive purposes, and moreover are a fine fit to past history. But as a purely academic question the differences disappear essentially completely when you change your 90 to 89 and 48 to 47.
As it happens I do have a formula for CO2, namely 260 + exp((y – 1718.5)/60). I came up with this before seeing Hofmann’s and switching to his. Mine fits the Keeling curve more exactly, especially in the middle. Its derivative is also a better fit to the slope; in particular the derivative of your and Hofmann’s formula at 2010 predicts a rise of 2.3 ppmv between July 2009 and July 2010 while mine predicts only 2.1 ppmv. The latter is much closer to what we actually observed.
I have no evidence for CO2 being 260 in July 1718 (the meaning of those numbers) other than the goodness of fit to the Keeling curve, which when extrapolated backwards as an exponential seems to asymptote more to 260 than 280. Absent any more compelling reason for 260 I figured I’d just switch to Hofmann’s formula since I didn’t want to undermine my uses for the formula with questionable choices of parameters.
Mine incidentally predicts only 840 ppmv for 2100, which I suppose makes me an AGW denier in the same sense that reducing the ocean’s pH from 8.1 to 8.0 makes it acidic.
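A quick numerical check of the two formulas just mentioned, as a sketch; it simply plugs in the parameter values stated above (280 + 2^((y – 1790)/32.5) for the Hofmann-style formula, 260 + exp((y – 1718.5)/60) for the alternative) and compares the 2100 values and the one-year rise around mid-2010.

```python
from math import exp

def hofmann(y):
    """Hofmann-style formula: 280 ppmv base, anthropogenic part doubling every 32.5 years."""
    return 280.0 + 2.0 ** ((y - 1790.0) / 32.5)

def alt_260(y):
    """The 260-ppmv-asymptote alternative discussed above."""
    return 260.0 + exp((y - 1718.5) / 60.0)

for f in (hofmann, alt_260):
    value_2100 = f(2100)                  # ~1024 vs ~837 ppmv
    yearly_rise = f(2010.5) - f(2009.5)   # ~2.3 vs ~2.1 ppmv per year
    print(f.__name__, round(value_2100), round(yearly_rise, 2))
```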
I think 840 is close to the A2 scenario used by IPCC which is their most extreme one. Our others are pessimistic compared to this.
I think 840 is close to the A2 scenario used by IPCC which is their most extreme one. Our others are pessimistic compared to this.
The IPCC is in an unenviable position. The slightest error brings a hail of protest. The science and politics of global warming live on opposite sides of a widening ice crevasse, while the IPCC stands awkwardly with one foot on each side. The scientists can afford to err on the side of pessimism, the politicians optimism.
One wants to throw the IPCC a skyhook. They cope by hedging their bets. Don’t expect the IPCC to pick the scientifically most probable number when there’s a wide selection, they will prefer the most expedient for the circumstances.
The only way they can make everyone happy is to make no one happy.
We can only hope that the running down of oil burning due to dwindling resources is not compensated for by an increase in coal, gas, shale oil, and other fossil-fuel burning; otherwise we are headed for 1000 ppm by 2100.
Allow me to throw one small fly in the ointment. If we had data and could do a similar temp series as killAMO from 1780-1870 I suspect we would get a very similar slope. This is simply based on historical, geological, and archeological evidence that NH glaciers and polar ice were retreating faster during the 1800s (ref: John Muir) than they are today. This period was well before significant influence from anthropogenic CO2.
Here is my question: Given similar warming trends, What caused the rapid warming of the 1800s? If not CO2, then what?
If we had data and could do a similar temp series as killAMO from 1780-1870 I suspect we would get a very similar slope. This is simply based on historical, geological, and archeological evidence that NH glaciers and polar ice were retreating faster during the 1800s (ref: John Muir) than they are today
(killAMO is what to append to tinyurl.com/ to see the graph in question.)
But the curve that the smoothed data fits so well is not merely a slope, it bends up, and moreover in a way consistent with it having the form log(1+exp(t)) for suitable scales of time and temperature. This curve asymptotes to a straight line of slope 1, which in more familiar units corresponds a few centuries hence to a steady rise of 1 °C every 18 years (assuming business as usual, meaning unlimited oil and continued population growth).
The odds of the 60-year-smoothed record for 1780-1870 fitting log(1+exp(t)) that well are zip. If it did it would strongly imply an exponential mechanism of comparable magnitude to global warming, which would be extraordinary.
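For what it’s worth, that 1 °C per 18 years falls straight out of the shape itself: lb(280 + 2^(t/32.5)) tends to t/32.5 once the exponential dominates, so at the 1.8 °C per doubling used elsewhere in this thread the asymptotic slope is 1.8/32.5 ≈ 0.055 °C per year, i.e. roughly 1 °C every 18 years. A tiny sketch, assuming only those two parameters (both taken from later comments in this thread):

```python
from math import log2

SENSITIVITY = 1.8   # deg C per doubling of CO2, as used elsewhere in the thread
DOUBLING = 32.5     # years for anthropogenic CO2 to double (Hofmann)

def temp_curve(t):
    """Arrhenius log of a Hofmann raised exponential (the log(1+exp(t)) shape), t in years after 1790."""
    return SENSITIVITY * log2(280.0 + 2.0 ** (t / DOUBLING))

# Slope over a decade, progressively further out: approaches 1.8/32.5 ~ 0.055 C/yr,
# i.e. about 1 deg C every 18 years.
for t in (200, 400, 600):
    print(t, round((temp_curve(t + 10) - temp_curve(t)) / 10, 4))
```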
Nice dance around the question. A key question it is when attempting to determine if our current warming is “unprecedented” and if indeed atmospheric CO2 is the primary culprit. Allow me to restate for clarification:
So if the observed NH ice meltdown was indeed faster in the 1800s than current NH ice meltdown both in terms of volume and extent, as indicated by historical record, why the rapid meltdown then? Do those same unknown climate forcings exist today? If not CO2, then what?
A key question it is when attempting to determine if our current warming is “unprecedented”
By “unprecedented” I mean hotter. The 10-year-smoothed HADCRUT record shows no date prior to 1930 that was warmer than any date after 1930. Furthermore the temperature today is 0.65 °C hotter than in 1880, which was the hottest temperature reached at any time prior to 1930.
So if the observed NH ice meltdown was indeed faster in the 1800s than current NH ice meltdown both in terms of volume and extent, as indicated by historical record, why the rapid meltdown then? Do those same unknown climate forcings exist today? If not CO2, then what?
What are you talking about? The Northwest Passage has been famously impassable for 500 years, ever since Henry VII sent John Cabot in search of a way through it in 1497.
Here’s some relevant reading. If you have equally reputable sources that contradict these stories as flatly as you claim I’d be very interested in seeing them.
European Space Agency News 2007
BBC News 2007
Kathryn Westcott, BBC News, 2007
Joshua Keating, Foreign Policy (need to register but the account is free)
Vaughan Pratt
1) Re: “By “unprecedented” I mean hotter.”
Yet the caveat: “hotter than any prior temperature in the 10-year-smoothed HADCRUT record”
See definitions for “unprecedented”, e.g. Webster’s 1913 and Macmillan.
Unprecedented does not have the same scientific meaning as your caveat.
The global Medieval Warm Period would qualify as a precedent.
See: The Medieval Warm Period – a global phenomenon, unprecedented warming, or unprecedented data manipulation?
See also the Vikings settling in Greenland.
“By the year 1300 more than 3,000 colonists lived on 300 farms scattered along the west coast of Greenland (Schaefer, 1997.)”
2) Re:”The Northwest Passage has been famously impassable for 500 years, ever since Henry VII sent John Cabot in search of a way through it in 1497.”
False!
e.g., see articles on “Northwest passage” at WUWT
It was sailed in 1853 by Robert McClure, the HMS Investigator
Norwegian explorer Roald Amundsen traversed the NW passage between 1903 to 1906.
etc.
Regards
Unprecedented does not have the same scientific meaning as your caveat.
Quite right, David, I freely admit that I was merely following the example set for us by the president two weeks ago. Mea culpa, henceforth I vow to faithfully serve the dictionary instead of the president. You have your priorities right.
But with the scientific definition why should you and I have to quarrel over whether Easterbrook was telling the truth when we can go further back to a time where we can amicably agree over a beer that the present level is far from unprecedented?
Rejoice, we are on our way back to the ice-free temperatures that obtained before the Azolla event 49 million years ago. During the 800,000 years of that event, CO2 plummeted from a tropical 3.5‰ (3500 ppmv) to a bone-chilling 0.65‰, a decline of 3.6 ppmv per millennium.
To the best of our understanding of geology, that sustained rate of decline was unprecedented in your—sorry, the dictionary’s—scientific sense.
Today CO2 is rising at a rate of 2100 ppmv per millennium. Comparing that with the scientifically unprecedented 3.6 ppmv per millennium of 49 million years ago, I call for scientific recognition of a new world record for planet Earth, of the fastest rate of climb of CO2 in the planet’s 4.5 billion year history.
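The two rates being compared, spelled out; the only inputs are the figures already quoted above (a 3500 → 650 ppmv drawdown over roughly 800,000 years, versus a present-day rise of about 2.1 ppmv per year).

```python
# Azolla-event drawdown quoted above: 3500 ppmv -> 650 ppmv over ~800,000 years
azolla_rate = (3500 - 650) / 800.0   # ~3.6 ppmv per millennium
# Present-day Keeling-curve rise: ~2.1 ppmv per year
modern_rate = 2.1 * 1000             # 2100 ppmv per millennium
print(round(azolla_rate, 1), modern_rate, round(modern_rate / azolla_rate))  # ~590x faster
```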
Ferns were the previous record holder. Humans have proved themselves superior to ferns. And it only took us 49 million years.
My wife the botanist has been using the Internet to monitor the fern traffic. She suspects they’re plotting a rematch. If they can break our record she’s figured that any such whiplash event will break our collective necks.
Those numbers don’t look right; I think you might have dropped a zero somewhere.
Those numbers don’t look right; I think you might have dropped a zero somewhere.
1% = .01, 1‰ = .001, i.e. parts per thousand. I prefer ‰ to % because it groups digits in threes. I’ve often seen people convert 389 ppmv to .389%; that mistake is harder to make when using ‰.
The decline was indeed from 3500 ppmv to 650 ppmv, look it up.
I didn’t notice you were using parts/thousand
Still dancing I see. Perhaps we should clarify a few terms. Warming implies rate of temp increase. Hotter implies higher measured temperature. Rapid meltdown implies rate of ice mass loss.
Clearly we have been in a step warming trend since emerging from the LIA circa 1800. One does not need advanced degrees in climatology to understand that in a step warming trend over a period of 200 years it will likely be “hotter” towards the end of the warming period. Very little of this trend has been attributed to AGW. What exactly did cause that first 150 years of warming if not CO2? We also expect there to be much less NH ice mass after this 200 year warming trend. As the earth warms, ice melts. No surprises.
Re: RATE of ice mass loss… Your links regarding the current cryosphere simply do not address the question of 19th century rapid rate of ice mass loss, or its causes, at all. It was greater between 1850-1890 and briefly 1940-1950 than it is today. Relevant reading as to 1800s ice mass loss and the historical temp record? Why yes, I believe we do:
Historical evidence:
http://www.nps.gov/glba/historyculture/people.htm
Paleo evidence:
http://westinstenv.org/wp-content/postimage/Lappi_Greenland_ice_core_10000yrs.jpg
Re: The Northwest passage… It has been choked with ice from the LIA for the last 600 years. Of course no one has been sailing through there. Perhaps this year, after 200 years of melting ice, we will discover additional archeological evidence of Viking explorers who were navigating these high arctic waters 1000 years ago during the MWP.
Yes indeed. I do understand and agree with the physics of radiative transfer but there are still many flies in the AGW ointment.
Still dancing I see. […] Clearly we have been in a step warming trend since emerging from the LIA circa 1800. One does not need advanced degrees in climatology to understand that in a step warming trend over a period of 200 years it will likely be “hotter” towards the end of the warming period. Very little of this trend has been attributed to AGW. What exactly did cause that first 150 years of warming if not CO2?
What are you talking about, ivpo? One glance at the gold standard for global land-sea temperature, the HADCRUT3 record, for the 45-year period 1875-1920 with 16-year smoothing, shows that the temperature was plummeting during the period CO2 was having no influence.
Seems to me you’re the one who’s dancing fancy numbers in front of us that don’t hold up under examination.
(Only those who’ve been following my killAMO stuff will see the trick I’m pulling here. Fight fire with fire.)
Oops, sorry, forgot to give the link to the HADCRUT3 record for 1875-1920.
So at the end of all that dancing, all those scientific links, you still have no explanation for the extraordinary NH ice melt off during the 1800s. Nor can you differentiate the cause of the 1800s melt off from our current ice melt off. We don’t really know why. And there is no sound evidence that those same forcings are not in effect today. I think you made my point Vaughan.
I actually believe your killAMO smoothing has merit but it is still very much open to misinterpretation. It does demonstrate long term warming. It does not tie that warming to CO2 until we can isolate and quantify all other causes for long term warming (such as the rapid NH ice melt off during the 1800s).
you still have no explanation for the extraordinary NH ice melt off during the 1800s.
You may have missed my answer the other day to this. I cannot explain what did not happen.
It was the quality of the hindcast that got me over the line on this one :)
Hi Vaughan,
fresh start for us?
A few observations about your ‘knockdown argument’, in no particular order:
1) Human produced emissions of co2 didn’t make much difference to atmospheric levels before 1870.
2) The recovery of global temperature from the little ice age started around 1700
3) Even if the match between co2 and temperature were good (it isn’t). Correlation is not causation.
4) Changes in co2 level lag behind changes in temperature at all timescales. You can prove this to yourself on woodfortrees too.
5) Because gases follow the Beer Lambert law not a logarithmic scale, co2 does not keep adding the same amount of extra warming per doubling.
6) My own solar model matches the temperature data better, without the need for heavy smoothing of data.
7) You haven’t considered longer term cycles such as the ones producing the Holocene climatic optimum, the Mycean, Roman, Medieval and Modern warm periods. These give you the real reason for the rise in temperature from the low point of the little ice age to now, though we don’t yet fully understand the mechanism, but we’re working on it.
8) I’ll stop here, because I reached the smiley number.
Hi Vaughan,
fresh start for us?
Deal.
1) Human produced emissions of co2 didn’t make much difference to atmospheric levels before 1870.
Since 1870 is 13 years before my graph begins, how is this relevant here?
3) Even if the match between co2 and temperature were good (it isn’t).
Define “good.” Are you looking for an accuracy of one part in a thousand, or what? I would have thought any normal person would have seen my match as fantastic. I could hardly believe it myself when I saw it.
Correlation is not causation.
I never claimed otherwise. Maybe the temperature is driving up the CO2. Or maybe Leibniz’s monads are at work here. (Remember them?)
4) Changes in co2 level lag behind changes in temperature at all timescales. You can prove this to yourself on woodfortrees too.
What are you talking about? You seem wedded to the concept that CO2 cannot raise temperature. Do you imagine either Miskolczi or Zagoni believes that?
5) Because gases follow the Beer Lambert law not a logarithmic scale, co2 does not keep adding the same amount of extra warming per doubling.
I have two problems with this. You can fix the first one by correcting the Wikipedia article, which says that the law “states that there is a logarithmic dependence between the transmission (or transmissivity), T, of light through a substance and the product of the absorption coefficient of the substance, α, and the distance the light travels through the material (i.e. the path length), ℓ.”
For the second one, gases don’t follow Beer Lambert, radiation does. Beer Lambert is applicable to any material through which radiation might pass, whether solid, liquid, gas, or plasma.
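Since the Beer-Lambert law keeps coming up, here is what it says for a single frequency, in sketch form: transmitted intensity falls off exponentially with absorber amount along the path, whatever phase the absorber is in. The cross-section and density below are purely illustrative placeholders, not real CO2 line parameters.

```python
from math import exp

def transmittance(sigma, n, path_length):
    """Beer-Lambert for a monochromatic beam: I/I0 = exp(-sigma * n * L).
    sigma: absorption cross-section [m^2], n: absorber number density [m^-3], L: path [m]."""
    return exp(-sigma * n * path_length)

sigma = 1.0e-26   # hypothetical cross-section at one frequency
n = 1.0e22        # hypothetical absorber number density
for L in (100.0, 1000.0, 10000.0):
    print(L, round(transmittance(sigma, n, L), 3))
```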
6) My own solar model matches the temperature data better, without the need for heavy smoothing of data.
Fantastic. Your point?
7) You haven’t considered longer term cycles such as the ones producing the Holocene climatic optimum, the Mycean, Roman, Medieval and Modern warm periods.
Excellent point. Which of these are hotter than today?
These give you the real reason for the rise in temperature from the low point of the little ice age to now, though we don’t yet fully understand the mechanism, but we’re working on it.
Good to know you’re working on it. Let me know how it turns out. (I’m not holding my breath.)
Pratt, you did not answer tallbloke’s question 4. Why do not you try to come up with some scientific explanation? By the way, I do not believe, but I know, and I can prove that under the conditions on the Earth the atmospheric absorption of the IR radiation is not increasing. The CO2 greenhouse effect does not operate the way you (or the IPCC) thinks.
Pleasure meeting you here on JC’s blog, Ferenc. Hopefully you’re sufficiently used to unkind words from others as not to mind mine, for which I can offer condolences, though the only apology I can offer is that we Australians can be disconcertingly blunt at times.
you did not answer tallbloke’s question 4. Why do not you try to come up with some scientific explanation?
Your “try to come up with” implies that the world is waiting with bated breath for someone to show that CO2 can elevate global temperature. I’d explain it except Tyndall already did so a century and a half ago and it would be presumptuous of me to try to add anything to Tyndall’s explanation at this late date.
Those who’ve used the HITRAN data to estimate the impact of CO2 on climate more accurately, taking pressure broadening as a function of altitude into account, have added something worthwhile. If I think of something equally worthwhile at some point I’ll write it up. (I’ve been meaning to write up my thoughts on TOA equilibrium for some time now, and was pleased to see Chris Colose expressing similar thoughts along those lines on Judith’s blog, though I was bothered by his reluctance to take any credit for them, instead attributing them to “every good textbook” which AFAIK is not the case.)
Now, I have a question for you. Regarding your argument that the heating effect of increasing CO2 is offset by a decrease in the flow of water vapor into the atmosphere, I would be very interested in two things.
1. An argument worded for those like me with only half Einstein’s IQ as to how that would happen.
2. An explanation of why reduced water vapor flow would cool rather than heat. Figure 7 of the famous 1997 paper by Kiehl and Trenberth shows more loss of heat by “evapotranspiration” than by net radiation, namely 78 vs. 390-324 = 66, in units of W/m^2. In other words the same mechanism by which a laptop’s heatpipe carries heat from its CPU to its enclosure is used by evaporation to carry heat from Earth’s surface and dump it in clouds, thereby bypassing the considerable amount of CO2 between the ground and the clouds, and this mechanism removes even more heat from the Earth’s surface than net radiation! Any impairment of that flow will tend to heat the surface (but cool the clouds).
It is certainly the case that the *level* of atmospheric water vapor regulates heat, by virtue of water vapor consisting of triatomic molecules and therefore being a greenhouse gas. However flux and level are to a certain extent independent: you can lower the flow of water vapor from the ground to the clouds without changing the level of atmospheric water vapor simply by raining less. The water vapor then continues to heat the Earth as before, but now you’ve lost the cooling benefit of the heat pipe.
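To make the comparison in the Kiehl & Trenberth numbers quoted above explicit, here is the arithmetic laid out (all values in W/m², exactly as cited):

```python
# Kiehl & Trenberth (1997) global-mean surface fluxes quoted above, in W/m^2
surface_emission = 390.0    # upward longwave emitted by the surface
back_radiation = 324.0      # downward longwave from the atmosphere
evapotranspiration = 78.0   # latent heat carried aloft by the "heat pipe"

net_radiative_loss = surface_emission - back_radiation   # = 66 W/m^2
print(net_radiative_loss, round(evapotranspiration / net_radiative_loss, 2))  # latent heat ~1.2x net LW
```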
Your problem, Ferenc, is that you have very complicated explanations that no one can follow, perhaps not even you. Granted climate science is rocket science, but rocket science starts with the idea that you can increase the momentum and hence velocity of a rocket by ejecting a little burnt fuel with equal and opposite momentum. Climate science needs to start with equally simple ideas, and you’re not helping.
I’m an Australian and I can distinguish between being blunt and being rude. We especially frown upon playing the man, not the ball, and pushing in the back is always penalized heavily.
Lift your game mate
Should that be lift your game, Ponting!!
lol The 2 decade domination was bound to end
Since we’re into concern troll territory, it would be interesting to know how to interpret this one:
> Poor Pratt […]
Let’s not forget this one too:
> You may compute it yourself **if you are able to** […]
Source: http://judithcurry.com/2010/12/05/confidence-in-radiative-transfer-models/#comment-19575
Since these two come from a short post of three sentences or so, and the longest one is an appeal from authority, that’s not a bad ratio, don’t you think?
Being coy does not help to sound anything else but rude, here. Quite the contrary.
***
In any case, it’s not about lifting the game, but picking an efficient strategy. Vaughan Pratt chose to be open and state his feelings candidly. This is fair game when speaking among friends. This is fairly normal considering the scientific background.
This strategy will play against him here. It’s not a friendly discussion. It’s not even a scientific one, at least not for most readers, I surmise. Let’s not forget that these words might get read and repeated for ever and ever.
Decorum matters. The strategy to pick should be a little more closed. For the chessplayers: think 1. Nf3 with Kramnik’s repertoire, not 1. e4 with Shirov’s.
@willard This strategy will play against him here. It’s not a friendly discussion. It’s not even a scientific one, at least not for most readers, I surmise.
David Hagen astutely observed (and BH complained in like vein) that I had “attacked the messenger” (by whom I suppose DH could have meant either himself or Miskolczi, or both) when I objected to his putting Miskolczi on a pedestal with the reasoning that doing so dragged him down to Miskolczi’s level. While not pleading either guilty or not guilty, I interpreted Andy Lacis’s comment to DH,
Your unmitigated faith in Ferenc nonsense does not speak well of your own understanding of radiative transfer.
as essentially the same “attack,” minus my gratuitous “morons” epithet. Since no such objection was raised to Andy’s comment, I am led to wonder whether it was really my trash-talking that bothered them rather than this alleged ad hominem attack.
But just because Andy and I are in agreement on the quality of Miskolczi’s work doesn’t make us right. For unassailable evidence of “Ferenc nonsense” we need go no further than the two numbers Miskolczi offered yesterday.
For the global average TIGR2 atmosphere the
P and K totals (sum of about 2000 layers) P=75.4*10^7 and K=37.6*10^7 J/m2, the ratio is close to two (2.00) You may compute it yourself if you are able to
(These two numbers are of course PE = 754 and KE = 376 whether expressed in megajoules per square meter or joules per square millimeter. I tend to think in terms of the latter, and to include “E” for disambiguation when using ASCII. Had Unicode been an option I’d have used the respective Khmer symbols ព and គ for these two consonants if the morons programmers at Redmond hadn’t made them so ridiculously tiny.)
One infers from Miskolczi’s second sentence a commendable passion for the sort of attention to detail that analyzing 2000 layers of atmosphere must entail. God is in the details, after all. Miskolczi’s thought that I might be incapable of mustering such passion is right on the money: my brain scrambles at the mere mention of even 1000 layers.
But unless you belong like me to the small sect that worships the back of an envelope, God is not where I scribbled PE = mgh = 10130*9.81*7600 = 755.3 MJ/m2 where m = 10130 kg is the mass of a 1 m^2 air column, g = 9.81 is the acceleration due to gravity, and h = 7600 is a reasonable estimate of the altitude of the center of mass of a column of air, which a little calculus shows is the same thing as the scale height of the atmosphere (integrate x exp(-x) from 0 to infinity and beyond). While I may well be unable to duplicate the many thousands of calculations Miskolczi must have needed to arrive at PE = 754 megajoules from 2000 layers of TIGR2 atmosphere, third grade must have been the last time I could not multiply three numbers together, and the outcome in this case gave me no cause to doubt Miskolczi’s imputed Herculean labors in his computation of PE.
Room remained on the envelope for two more multiplications: KE = 0.718*10.13*250 = 1818 MJ/m2 where 0.718 is the constant-volume specific heat capacity of air, 10.13 is the mass in tonnes of a square meter column of air, and 250 K (see why we needed KE?) is a rough guess at the average temperature of the atmosphere.
This is about five times what Miskolczi calculated.
Multiplying my figure by the 510 square megameters of the planet’s surface gives 927 zettajoules, just under one yottajoule as the total kinetic energy of Earth’s atmosphere.
With Miskolczi’s number we get only 192 zettajoules.
Hmm, maybe there’s an error in my math. Let’s try a completely different back-of-the-envelope way. At a room temperature of 300 K, air molecules move at an RMS velocity of 517 m/s (and a mean velocity of 422 m/s but for energy we want RMS, and the Maxwell-Boltzmann distribution makes this quite different). Since much of the atmosphere is colder than this, a more likely RMS velocity averaged over the whole atmosphere would be something like 465 m/s or 1040 mph, twice the speed of a jet plane. The atmosphere has mass m = 5140 teratonnes, allowing us to compute the translational portion of KE as ½mv² = 0.5*5140*465² = 557 zettajoules. But translation is only 3 DOF, air molecules have two more DOFs so we should scale this up by 5/3 giving 5*557/3 = 928 zettajoules.
Ok, I admit it, I cheated when I “guessed” 465 m/s for the typical RMS velocity of air molecules. But to get Miskolczi’s 192 zettajoules the velocity would have to be 212 m/s or 474 mph. If air molecules slowed to that speed they’d be overtaken by jet planes and curl up on the ground in a pile of frozen oxygen, nitrogen, and humiliation.
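Here are those back-of-the-envelope numbers gathered into one rerunnable sketch. Every input is one already stated above (10130 kg/m² column mass, ~7600 m scale height, cv = 0.718 kJ/kg·K, ~250 K mean temperature, 5140 teratonnes of atmosphere, 510 square megameters of surface); only the variable names are mine.

```python
from math import sqrt

g = 9.81          # m/s^2
m_col = 10130.0   # kg of air above each square meter
h = 7600.0        # m, scale height ~ altitude of the column's centre of mass
cv = 718.0        # J/(kg K), constant-volume specific heat of air
T = 250.0         # K, rough mean temperature of the atmosphere
area = 510.0e12   # m^2, surface area of the Earth
M = 5.14e18       # kg, mass of the whole atmosphere

PE_col = m_col * g * h       # ~7.55e8 J/m^2, i.e. ~755 MJ/m^2 (cf. Miskolczi's 754)
KE_col = cv * m_col * T      # ~1.82e9 J/m^2, about 5x Miskolczi's 376 MJ/m^2
KE_total = KE_col * area     # ~9.3e23 J, just under a yottajoule

# Cross-check via molecular speeds: KE_total = (5/3) * (1/2) * M * v_rms^2 for 5 degrees of freedom
v_rms = sqrt(2 * 0.6 * KE_total / M)        # ~465 m/s
v_if_192_zj = sqrt(2 * 0.6 * 192e21 / M)    # ~212 m/s if the total KE were only 192 ZJ
print(round(PE_col / 1e6), round(KE_col / 1e6), round(KE_total / 1e21), round(v_rms), round(v_if_192_zj))
```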
What kind of climate scientist would discount the energy of the Earth’s atmosphere to that extent?
Ok, you say, so Miskolczi merely overlooked a factor of 5 in some tremendously complicated calculation of the kinetic energy of the atmosphere. So what else is new, people make these sorts of mistakes all the time in complicated calculations. If that’s the only mistake Miskolczi ever made he’s well ahead of the game.
Except that (i) a climatologist really does need to be able to calculate the energy of the atmosphere more accurately than that, and (ii) Miskolczi has been claiming this for years, even when the mistake is pointed out to him. Instead he has been trying to convince Toth that the error is on Toth’s side, not Miskolczi’s. And that for a paper that Toth wrote eight months ago and that has now been accepted for publication.
By my standards I think Andy was very kind to limit himself to “Ferenc nonsense.” Being me I would have used stronger language like “idiot” or “moron.” (Hmm, come to think of it, I did.)
Let’s not forget that these words might get read and repeated for ever and ever.
I wish. Just so long as their meaning is not changed by misquoting them or taking them out of context. ;)
I like learning new stuff, and for that reason I prefer being proved wrong over right. One learns nothing from being proved right; I’m not invincible and am always happy to be vinced. On the other hand being contradicted is not the same thing as being proved wrong. But one also learns nothing from being proved wrong when one is deliberately wrong. (“I’m shocked, shocked to find that lying about climate science is going on in here.” “Your Heartland grant, sir.”)
Decorum matters. The strategy to pick should be a little more closed. For the chessplayers: think 1. Nf3 with Kramnik’s repertoire, not 1. e4 with Shirov’s.
Can’t be chess or we could have ended this vicious little game long ago with either one of the threefold repetition rule or the fifty-move rule (no material progress after fifty moves).
Unfortunately wordpress turns out not to offer the overstrike capability. Please read the first word of “morons programmers” in my preceding comment as having been overstruck.
Vaughan,
Just look at what you wrote!
> While I may well be unable to duplicate the many thousands of calculations Miskolczi must have needed to arrive at PE = 754 megajoules from 2000 layers of TIGR2 atmosphere, third grade must have been the last time I could not multiply three numbers together, and the outcome in this case gave me no cause to doubt Miskolczi’s imputed Herculean labors in his computation of PE.
This is WAY better than saying that FM is a moron, don’t you think? Style! Zest! Gusto! A really entertaining rejoinder to his low jab, in my most humble opinion.
If only I had a professor like that when I was younger, I too would worship the back of the envelope! Too late for me, I now prefer the back of a napkin:
http://www.thebackofthenapkin.com/
More seriously, here is how your trash-talking gets recycled into editorials:
http://judithcurry.com/2010/12/05/confidence-in-radiative-transfer-models/#comment-19979
So now “scientists are mean.” This is good news, if we’re to compare it to “scientists are not even wrong” or “scientists are indoctrinators.” Still, this means that your back of the envelope shows numbers that can’t be contested. The only way out is to play the victim: see how scientists treat us, mere mortals!
Please do not leave them that way out.
Hoping to see more and more back-of-the-envelope doodles,
Best regards,
w
PS: The chess analogy works better if we separate the bouts. It’s not impossible to make a career of repeating the same openings, over and over again. Imagine a tournament, or a series of tournaments, not a single game of chess… Besides, if one repeats oneself too much, one becomes predictable and loses, unless one’s simply driving by to cheerlead and leave one’s business card with Apollo on it ;-)
Vaughan Pratt is nothing but fun. It would be an honor to be called a moron by Vaughan Pratt. If only I could rise to that level.
Willard, why do people harp on Aristotle’s fallacies? To me they’re quaint and all, but just the cowboy in me, I’m bringin’ my ad hominem attacks and my tu quoques to a bar fight. This appears to be a dust up.
> It would be an honor to be called a moron by Vaughan Pratt.
Agreed.
Nonetheless, one must then pick up on the editorializing that ensues. Just below here, for instance:
http://judithcurry.com/2010/12/05/confidence-in-radiative-transfer-models/#comment-20271
Or elsewhere, not far from here:
http://judithcurry.com/2010/12/06/education-versus-indoctrination-part-ii/#comment-20019
There are many other places. In fact, simply put this into your G spot:
site: judithcurry.com arrogance
Yes, even Judith is using that trick.
Damn rhetorics!
Vaughan Pratt
Re: “as essentially the same “attack,” minus my gratuitous “morons” epithet. Since no such objection was raised to Andy’s comment, I am led to wonder whether it was really my trash-talking that bothered them rather than this alleged ad hominem attack.”
I did raise an objection to A. Lacis.
BOTH your “trash-talking” AND your ad hominem attacks are unbefitting of professional scientific conduct. I address to you again what I said to A. Lacis:
Please raise your conduct to professional scientific discourse rather than waste our time and distort science.
I acknowledged I misunderstood the core of your concerns over the coefficient in the virial theorem.
Please read up on how astronomy applies the classic virial theorem to calculate gas pressure, temperature and density versus radius. e.g., Advanced Astrophysics, by Nebojsa Duric Section 2.4.2 p 35
> Please raise your conduct to professional scientific discourse **rather than waste our time and distort science**. [Our emphasis]
How professional and befitting.
Vaughan Pratt
Re: “I like learning new stuff,” – My compliments.
Re: “But just because Andy and I are in agreement on the quality of Miskolczi’s work doesn’t make us right.” Agreed:
The one who states his case first seems right,
until the other comes and examines him. Proverbs 18:17 ESV
Re: “ and for that reason I prefer being proved wrong over right.”
OK per your desire:
Re: “Unless you belong like me to the small sect that worships the back of an envelope,” That is too small an object to worship.
The danger of worshiping your envelope is in missing critical big picture details. You observe: “This is about five times what Miskolczi calculated.”
Your error appears to be in calculating the conventional TOTAL thermal energy that you thought Miskolczi had calculated – RATHER than the single-degree-of-freedom vertical component of the kinetic thermal internal energy that Miskolczi had actually calculated.
See further comments on my post of Aug 16, 2011.
Professional courtesy includes first asking the author to see if I made an error, before trumpeting his “error”. Miskolczi and Toth communicated back and forth with each other and colleagues and concluded that they agreed with each other’s virial theorem for a diatomic gas within an algebraic conversion factor. So if I have made an error, please clarify and link to the evidence & solution.
Best wishes on our continued learning.
“However flux and level are to a certain extent independent: you can lower the flow of water vapor from the ground to the clouds without changing the level of atmospheric water vapor simply by raining less.”
And this works the other way too. It’s possible to have less water vapour resident in the atmosphere without lowering the flow or precipitation.
And your point about Miskolczi’s theory was?
And this works the other way too. It’s possible to have less water vapour resident in the atmosphere without lowering the flow or precipitation. And your point about Miskolczi’s theory was?
That he didn’t say which.
(Don’t complain that I set you up, you did it to yourself.)
Given the context of his theory, he doesn’t need to spell out which.
Except to you apparently. ;)
You are the one claiming that your observation constitutes disproof, I am merely pointing out that it doesn’t.
Given the context of his theory, he doesn’t need to spell out which.
It would appear you’re not following. If it’s one then Earth’s surface cools, if it’s the other it warms. Why do you believe he doesn’t need to spell out which?
1) Human produced emissions of co2 didn’t make much difference to atmospheric levels before 1870.
Since 1870 is 13 years before my graph begins, how is this relevant here?
In your original post you said:
“Now anthropogenic CO2 supposedly began in earnest back in the 18th century.”
This is potentially misleading. You could say that human emission of co2 began in earnest with the start of the industrial revolution in C18th Europe, but it’s not thought the atmospheric level was much affected by humans until the late C19th or even early C20th. So the problem for your explanation of temperature rise is accounting for the rising temperature from circa 1700 to circa 1880. What do you propose? Longer cycles with as yet unknown causation? (I won’t hold my breath for your explanation), or solar activity? Or something else?
Let’s do these one at a time so the posts don’t get too long.
the problem for your explanation of temperature rise is accounting for the rising temperature from circa 1700 to circa 1880.
How is this a problem? If you believe Arrhenius’s logarithmic dependence of the temperature of the Earth’s surface on atmospheric CO2, and Hofmann’s raised-exponential dependence of atmospheric CO2 on year, then a simple calculation at a couple of years, say 1800 and 1900, confirms the impression of those who doubt, as you correctly say, that “the atmospheric level was much affected by humans until the late C19th or even early C20th”.
Using n = 280 ppmv for the natural base (the part of Hofmann’s formula that raises the exponential), o = 1790 as the onset year in which Hofmann says human CO2 remaining in the atmosphere reached 1 ppmv, and d = 32.5 as the number of years it then took to double to 2 ppmv, all due to Hofmann, and using binary rather than natural logs, lb rather than ln, for Arrhenius’s formula so that the answer comes out in units of degrees per doubling rather than degrees per increase by a factor of e, we have lb(n + 2^((y-o)/d)) = 8.136 and 8.182 for y = 1800 and 1900 respectively. That’s an increase of .046 during the whole of the 19th century. If we use a climate sensitivity of 1.8, which is what’s needed for this formula to account for the temperature rise during the 20th century, then the rise during the 19th century would have been 1.8*0.046 = .083 °C, of which 0.021 °C would (assuming this formula) have been in the first half of that century and .062 °C in the second half.
Your impression of what people either observed or believed is confirmed by the theory.
The same formula applied to the period from 1957 to 1990 predicts a rise in temperature of 0.28 °C. Consulting the 12-year (144-month) smoothed HADCRUT3 temperature curve at WoodForTrees.org, we see a rise of exactly 0.28 °C.
Coincidence? Well, let’s back up to an intermediate period: 1892 to 1925. The formula promises a rise of .08 °C. Consulting the same smoothed data confirms that the rise was exactly that.
One more try: 1957 should be 0.15 °C hotter than 1925. Bingo! On the nose yet again.
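For anyone who would like to rerun these checks, a minimal sketch that uses exactly the parameters stated above (n = 280 ppmv, onset o = 1790, doubling time d = 32.5 years, and 1.8 °C per doubling of total CO2):

```python
from math import log2

N, ONSET, D = 280.0, 1790.0, 32.5   # Hofmann: natural base, onset year, doubling time
SENS = 1.8                          # deg C per doubling of total CO2, as above

def co2(y):
    return N + 2.0 ** ((y - ONSET) / D)

def rise(y1, y2):
    """Arrhenius log of Hofmann CO2: predicted warming between years y1 and y2."""
    return SENS * (log2(co2(y2)) - log2(co2(y1)))

for y1, y2 in [(1800, 1900), (1892, 1925), (1925, 1957), (1957, 1990)]:
    print(y1, y2, round(rise(y1, y2), 2))
# -> 0.08 deg C for the whole 19th century, then 0.08, 0.15 and 0.28 deg C for the
#    later intervals, the rises read off the 144-month-smoothed HADCRUT3 curve above.
```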
Caveat: these dates are at (or near, pace Judy) zero crossings of the Atlantic Multidecadal Oscillation or AMO. Other dates don’t match the formula as well unless the AMO is incorporated into the formula.
One should also take into account the larger volcanoes, along with the extensive aerosols created when the RAF transplanted entire cities like Hamburg, Pforzheim, and Dresden into the atmosphere during World War II, not to mention the emissions from the interminable air sorties, tanks, etc., in order to get truly accurate results.
World War I on the other hand consisted largely of 70 million infantry running around and a few bricks being thrown from planes, whose aerosols had no impact on climate whatsoever while the combined heavy breathing of the infantry may have raised the CO2 a tiny amount. (World War II might be described as World War I “done right” in the view of its instigators, with the benefit of the great advances in military technology in the intervening quarter century.)
El Nino, solar cycles, seasonal fluctuations and other short term events, episodes, and phenomena can also be neglected because the 144-month smoothing completely removes their influence from the temperature record. This is not to understate their influence, which is large, and noticeably more traumatic by virtue of happening more suddenly!
All very interesting, so co2 rise does follow temperature rise quite closely, once we remove the rising and falling of the AMO, the solar cycles and other natural oscillations. Presumably its residence time flattens out these shorter fluctuations?
Ok onto number two.
2) The recovery of global temperature from the little ice age started around 1700
No reply.
Number three:
3) Even if the match between co2 and temperature were good (it isn’t). Correlation is not causation.
You agreed to this, which is great.
4) Changes in co2 level lag behind changes in temperature at all timescales. You can prove this to yourself on woodfortrees too.
No reply, as Ferenc Miskolczi so kindly pointed out to you.
5) Because gases follow the Beer Lambert law not a logarithmic scale, co2 does not keep adding the same amount of extra warming per doubling.
I see S.o.D. has refuted this claim, which I picked up off Jeff Glassman. No doubt there are gritty details tucked in here, but I’m happy to leave it for now.
Look at:
Vaughan Pratt | December 7, 2010 at 12:48 pm
you did not answer tallbloke’s question 4. Why do not you try to come up with some scientific explanation?
May I point out the obvious: that radiative forcing cannot be measured with current technology. So a Michelson/Morley type event has not occurred, and is unlikely to occur into the indefinite future. Neither side of this theoretical argument can prove their case.
However, the IPCC cannot use the “scientific method” to prove it is right. By the same token, the opposing arguments cannot prove that the IPCC is wrong. It is just that, with billions of dollars at stake, it seems to me that we need to wait for proof that the IPCC is right. Which is what most of our politicians have NOT done.
Jim Cripwell
Regarding “proving”, there are methods to check. Scientists are now checking how well IPCC projections match subsequent temperatures etc. – they show increasing divergence.
Miskolczi (2010) above is a method of evaluating if the Global Optical Depth is increasing as expected from IPCC models. His results suggest not.
It will also help to move IPCC to adopt stringent Principles for Scientific Forecasting for Public Policy. See
http://www.forecastingprinciples.com/index.php?option=com_content&task=view&id=26&Itemid=129/index.html
David, Thank you for the support. The reason I think that this is important is that the next discussion is with respect to the rise in global temperature as a result of the change of radiative forcing, without feedbacks. Here the lack of experimental data is even more definite; it is impossible to measure this temperature rise with our atmosphere. The whole IPCC estimation of climate sensitivity has no experimental basis whatsoever. Yet, somehow, we are supposed to genuflect and pretend that the science is solid.
People wondering whether climate science just doesn’t understand the basics might wonder whether Jeff Glassman on December 6, 2010 at 6:37 pm is pointing out some clear flaws.
I’ll pick one claim which is easily tested:
Have a read of 6.3.4 – 6.3.5 of the IPCC Third Assessment Report (2001) – downloadable from http://www.ipcc.ch:
Here is the start of 6.3.5:
The paper that the IPCC refers to – New Estimates of Radiative Forcing due to Well-Mixed Greenhouse Gases by Myhre et al., GRL (1998) – has the same graph and the logarithmic expression; you can see these in CO2 – An Insignificant Trace Gas? Part Seven – The Boring Numbers.
And you can read the whole paper for yourself.
Myhre comments on the method used to calculate the values that appear on the graph: “Three radiative transfer schemes are used, a line by line model, a narrow band model and a broadband model… New coefficients are suggested based on the model results.”
The IPCC curve in the 2001 TAR report follows the values established by Myhre et al. Myhre et al simply use the radiative transfer equations to calculate the difference between 300 ppm and 1000 ppm of CO2. Then they plot these values on a graph. They make no claim that this represents an equation which can be extended from zero to infinite concentrations.
Myhre et al and the IPCC following them are not making some blanket claim about logarithmic forcing. They are doing calculations with the radiative transfer equations over specific concentrations of CO2 and plotting the numbers on a graph.
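For concreteness, the simplified expression Myhre et al. (1998) fitted to those line-by-line and band calculations is ΔF = 5.35 ln(C/C0) W/m². A two-line check of what it gives over the 300-1000 ppm range plotted in the TAR, and for a doubling:

```python
from math import log

def co2_forcing(c, c0):
    """Myhre et al. (1998) simplified fit to radiative transfer results, in W/m^2."""
    return 5.35 * log(c / c0)

print(round(co2_forcing(1000, 300), 2))  # ~6.4 W/m^2 over the range plotted in the TAR
print(round(co2_forcing(560, 280), 2))   # ~3.7 W/m^2 for a doubling of CO2
```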
Beer-Lambert law of absorption:
The radiative transfer equations do use the well-known Beer-Lambert law of absorption along with the well-known equations of emission. You can see this explained in basics and with some maths in Theory and Experiment – Atmospheric Radiation.
So the claim: “the Beer-Lambert Law, which IPCC never mentions nor uses” is missing the point of what the IPCC does. You won’t find the equations of radiative transfer there either. You will hardly find an equation at all. But you will find many papers cited which use the relevant equations.
In fact, in publishing results from people who use this Beer Lambert law, the IPCC does use it.
So when people write comments like this it indicates a sad disinterest in understanding the subject they appear so passionate about.
I recommend people to read the relevant sections of the IPCC report and the Myhre paper. Well worth the effort.
scienceofdoom, thanks for this lucid clarification.
Seconded.
This is a far too good opportunity to miss, for a bit of indulgence.
Change in the Arctic temperature has a resonance response in the ‘global climate elasticity’.
The geomagnetic field, made up of the Earth’s internal field (99.9%) and a solar contribution (0.1%), is not affected by AMOs, PDOs, ENSOs, CO2, water vapour, ozone holes or any other of the many climatic parameters.
And the enigmatic Arctic temperature has a ‘superb’ correlation with the geomagnetic field.
http://www.vukcevic.talktalk.net/NFC1.htm
Solve the Arctic, the rest will fall into place.
Fascinating work Vukcevic.
I noticed the periodicity in the curve of around 74 years, which matches well with the cycle Harald Yndestad found in the North Atlantic, which also coincides with lunar nodal cycles linked with long-term tides.
http://tallbloke.wordpress.com/2009/11/30/the-moon-is-linked-to-long-term-atlantic-changes/
thanks SoD.
Your site and your style are a welcome relief. I especially appreciated your discussions of the Myhre paper.
Perhaps the most critical scientific question relevant to future climate behavior as a function of CO2 concentration involves “climate sensitivity” – the temperature response to changing CO2, typically expressed in terms of a CO2 doubling. This combines both the initial forcing, as determined with the aid of the radiative transfer principles discussed in this thread, and the feedbacks initiated by the temperature response to the forcing. In addition to the Planck response (a negative feedback dictated by the Stefan-Boltzmann equation but part of the calculations implicit in the response to the forcing alone), the most salient, at least on relatively short timescales, are the water vapor, lapse rate, ice-albedo, and cloud feedbacks. The last of these has been a subject of some controversy but has generally been estimated as positive, the ice-albedo feedback is generally considered positive but quantitatively modest, the lapse rate feedback is negative, and the water vapor feedback is generally thought to be the dominant positive feedback, responsible for significant amplification of the temperature response to CO2. This positive water vapor response is expected based on the increase in water vapor generated by the warming of liquid water.
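As a rough illustration of the Planck response mentioned above (a back-of-envelope sketch, not Fred’s calculation; it assumes an effective emission temperature of about 255 K and the commonly quoted 3.7 W/m2 forcing for doubled CO2):

```python
sigma = 5.67e-8                               # Stefan-Boltzmann constant, W m^-2 K^-4
T_eff = 255.0                                 # assumed effective emission temperature, K
forcing = 3.7                                 # commonly quoted forcing for doubled CO2, W m^-2
planck_parameter = 4.0 * sigma * T_eff**3     # dF/dT ~= 4*sigma*T^3, about 3.8 W m^-2 K^-1
print(forcing / planck_parameter)             # roughly 1 K of warming before any feedbacks
```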
Expectations aside, and despite the above, the sign of the water vapor feedback has been challenged. A negative feedback due to declining water vapor in response to CO2-mediated warming would have major implications for climate behavior. Dr. Curry has stated that she plans a new thread on climate sensitivity, and so extensive discussion might be withheld until that appears. However, water vapor is relevant to some upthread commentary asserting a high degree of climate self-stabilization based on negative water vapor feedbacks. In anticipation of the future thread, I think it’s worth pointing out here that substantial observational data bear on this question. These data include satellite measurements demonstrating that tropospheric water vapor at all levels is increasing in relation to increasing temperatures. In addition, longer term data are available from radiosonde measurements. Despite some conflicting results (at times cited selectively), these too indicate that as temperatures rise, atmospheric water vapor content increases, including increases in the upper troposphere where the greenhouse effect of water vapor is most powerful. At this point, the convergence of evidence strongly supports a positive water vapor feedback capable of amplifying rather than diminishing the initial effects of CO2 changes alone. The validity of theories dependent on a countervailing negative response (a decline in atmospheric water) cannot be excluded with absolute certainty, but appears to be highly improbable. I’ll provide more data and extensive references in the relevant upcoming thread.
Hi Fred,
I see plenty of uncertainty so i keep an open mind on all theories in play, plus I have one of my own, which is that humidity might not be dancing to co2’s tune either positively or negatively, but might be dancing to the beat of a different drum.
Nothing conclusive yet, but I made this plot of the NCEP reanalysis of the radiosonde data for specific humidity at the 300mb level, around the height where most radiation to space occurs, and solar activity.
http://tallbloke.files.wordpress.com/2010/08/shumidity-ssn96.png
I’ve been touting it around in the hope someone might have something worthwhile to say about it, so please do.
I’ll discuss the NCEP-NCAR reanalysis in the upcoming thread. It’s an outlier, with the other reanalyses, plus the satellite data, all showing increasing specific humidity. A major problem was changing instrumentation, which improved (i.e., shortened) the response time of the instruments, so that more recent data based on measurements at a high altitude were increasingly less contaminated with residual measurements from lower, wetter altitudes.
I think we can agree that the issue isn’t settled with absolute certainty, but attempts to conclude that humidity has not been increasing will have to overcome a rather large body of evidence to the contrary, particularly as satellite-based trends have begun to supplement the radiosonde data.
Thanks Fred,
My feeling is we should try to salvage what we can from the radiosonde data since it goes back twice as far as satellite data. Speaking of which, can you point me to any nice sites with easily accessible satellite data for such things as specific humidity?
Thanks again.
You’ll have to forgive me for not replying in detail, but I will try to review more of the references a bit later. In the meantime, you might check out some of the references in AR4 WG1, Chapter 3. There’s nothing there since early 2007, but the chapter does include some interesting text and references, including the brightness temperature data comparing O2 signals with water vapor signals, based on the relatively unchanging atmospheric O2 concentration as a baseline.
Any time data on this or any other unsettled topic are cited, it’s important to ask whether the data cited are inclusive, or whether they omit important sources that imply a different interpretation. To the best of my knowledge, the NCEP/NCAR reanalysis is the only source of extensively sampled data that conflicts with the other sources (I’m referring to observational data, not theoretical arguments, although these of course tend to go mainly with increasing humidity). If there are other important sources of observational data reinforcing the NCEP/NCAR conclusions, I hope someone will call attention to them.
Fred,
This is why I asked Ferenc Miskolczi whether he believed his analysis of the radiosonde data which found a constant value for Tau confirmed the validity of the data. I hope he calls back to reply.
Global rainfall records are hard to come by, but Australia has seen a decline in precip since 1970.
My feeling is we should try to salvage what we can from the radiosonde data since it goes back twice as far as satellite data.
I agree with tallbloke on this point.
Fred Moulton
Thanks for the overview. You note:
1) Per this thread, any comments on the confidence/uncertainty on those radiation evaluations?
2) You note increasing water vapor. Yet Miskolczi (2010) above applies the available radiosonde data showing decreasing water vapor. That appears to be one major issue over his finding of stable global optical depth. Similar declining humidity trends were reported by:
Garth Paltridge, Albert Arking & Michael Pook,
“Trends in middle- and upper-level tropospheric humidity from NCEP reanalysis data”, Theor. Appl. Climatol., 4 February 2009, DOI 10.1007/s00704-009-0117-x
Gilbert gives theoretical basis to support this:
William C. Gilbert, “The Thermodynamic Relationship Between Surface Temperature and Water Vapor Concentration in the Troposphere”,
Energy & Environment, Vol. 21, No. 4, 2010
http://www.eike-klima-energie.eu/uploads/media/EE_21-4_paradigm_shift_output_limited_3_Mb.pdf#page=105
Appreciate any references addressing those differences. (With the main discussion for Curry’s future post on water vapor.)
PS Judy – feel free to move these last two to a different post.
Dear David and Fred,
Let me repeat here: It is often said that with increasing CO2 the water vapor amount must decrease to support Ferenc’s constant. No: not the amount of GHGs but their global absorbing capacity determines the greenhouse effect. The stability of tau can be maintained by changes in the distribution of water vapor and temperature too. If there is a physical constraint on it (as Ferenc states), the system has just enough degrees of freedom to accommodate itself to this limit, and will ‘know’ which one is the energetically more ‘reasonable’ way (satisfying all the necessary minimum and maximum principles) to choose.
Thanks,
Miklos
Theoretically, I agree that water vapor could be redistributed so as to reduce optical thickness despite rising overall levels. However, the distribution of increased humidity includes the mid to upper troposphere where the greenhouse effect is most powerful, and the increased optical thickness (tau) is already evident from direct measurements relating water vapor to OLR. There will be more discussion of this in the upcoming climate sensitivity thread, but the link I cited in my first comment in this thread (not quite halfway down from the top) provides one piece of evidence.
At this point, the climate sensitivity issue revolves mainly around the cloud feedback values (generally thought to be positive, but controversial). The positive water vapor feedback appears to be on solid ground and unlikely to be overturned, in my view, for reasons I’ve stated above.
SoD wrote:
“Myhre comments on the method used to calculate the values that appear on the graph: “Three radiative transfer schemes are used, a line by line model, a narrow band model and a broadband model.. New coefficients are suggested based on the model results.”
Actually, you are talking about the wrong paper; it contains hardly any information or details. Better to look at their previous paper, which gives some details on the method:
http://folk.uio.no/gunnarmy/paper/myhre_jgr97.pdf
Then pay attention to Figure 2. I don’t know where they got the idea that the “choice of tropopause level” makes only a 10% difference. What I see from their Figure 2 is that, depending on where you stop calculating (and which profile is used, polar or tropical), the CO2 forcing may differ by 300%. That’s why I asked the question: to what extent has it been “established” that the global CO2 forcing is 3.7 W/m2 (which, BTW, implies an accuracy of determination of better than 3%)?
Al – Not sure where you’re getting your 300%. For example, the OBIR mls curve in Fig. 2 of Myhre and Stordal (1997) has an irradiance increase of 0.11 W/m2 at an altitude of 8 km for a 5 ppm increase in CO2, and an irradiance increase of 0.10 W/m2 at an altitude of 20 km, a difference of 10% rather than 300%. 8 km is a minimum tropopause height, and 20 km is a maximum tropopause height, so values within that altitude range are the only ones that matter.
In practice, plausible choices for global-mean tropopause height do not have nearly so broad a range, so the sensitivity of CO2 radiative forcing to the choice of tropopause height is much less than 10%. Myhre and Stordal’s 10% refers to the sensitivity for CFCs and other low-concentration Tyndall gases, not for CO2.
Finally, this is not a physical uncertainty, but an uncertainty associated with an arbitrary definitional choice. The “tropopause” is an arbitrary concept with multiple definitions, so the precise value of radiative forcing at the tropopause will depend upon what definition one chooses for the tropopause. None of this affects the vertical profile of altered irradiance, the physical response of the atmosphere to the altered irradiance, nor any model-simulated response to the altered irradiance.
??? For the polar profile, the “forcing” (the difference between the two OLRs) is less than 0.035 W/m2 for a distant observer. If you stop calculations at 10 km for the tropics, you have 0.115 W/m2. This is a ratio of 3.28, or 328%. Or 228%, whatever.
What do you mean, “not a physical uncertainty”? The planet gets some SW radiation, then it emits some LW to outer space, as a distant observer would measure. To get a steady state, the OLR must have a certain magnitude, regardless of your definitions. If the composition is altered, a new OLR will be established, and the system must react to re-establish the balance. How could it be an arbitrary concept if the system needs to react either to 0.035 W/m2 or to 0.115 W/m2? I understand that you can throw in various mixes of profiles (three of them :-)), and that will lead to a narrower range of possible forcings, but stopping the calculations at 8 km is not justifiable when it is known that the rest of the atmosphere would cut this imbalance in half, as Figure 2 suggests.
Al Tekhasski said on December 7, 2010 at 6:14 pm
“SoD wrote:
“Myhre comments on the method used to calculate the values that appear on the graph: “Three radiative transfer schemes are used, a line by line model, a narrow band model and a broadband model.. New coefficients are suggested based on the model results.”
Actually, you are talking about the wrong paper; it contains hardly any information or details. Better to look at their previous paper, which gives some details on the method...”
How can it be the “wrong paper”?
It is the paper cited by the IPCC for the update of the “famous” logarithmic expression. It doesn’t explain the radiative transfer equations because these are so well-known that it is not necessary to repeat them. The paper does contain references for the line by line and band models.
Imagine coming across a paper about gravitation calculations 50 years and 5,000 papers after the gravitational formula was discovered. People in the field don’t need to derive the formula, or even repeat it.
Where did they get the result that the choice of tropopause definition makes only a 10% difference?
From “Greenhouse gas radiative forcing: Effects of averaging and inhomogeneities in trace gas distribution”, Freckleton et al., Q. J. R. Meteorol. Soc. (1998).
And as John N-G correctly says on December 8, 2010 at 1:52 am:
” The “tropopause” is an arbitrary concept with multiple definitions, so the precise value of radiative forcing at the tropopause will depend upon what definition one chooses for the tropopause.”
Picture it like this.
Suppose you live some distance north of New York City. Some people define the center of New York City as the Empire State Building. Some define it as the location of City Hall.
Suppose the distance from your house to New York City is 50 miles with the center defined as the Empire State Building. If someone wants to know how far from your house to New York City with the center defined as City Hall, they just have to add 3 miles.
The choice of “center of NY” is arbitrary. The distance from your house to the Empire State Building is always the same. The distance from your house to the City Hall is always the same.
I don’t know if others feel this way, but I cannot see how these theoretical discussions can ever get us anywhere. The proponents of CAGW will always want to believe that there are sound theoretical reasons for believing that a high value for the change of radiative forcing exists when CO2 doubles. Skeptics will want to believe the opposite. Without any observed data, I cannot see how the argument can be resolved. And this, of course, is the weakness of the IPCC approach. Without hard measured data, they can never use the “scientific method” to show that CAGW is real.
I don’t know if others feel this way, but I cannot see how these theoretical discussions can ever get us anywhere. The proponents of CAGW will always want to believe that there are sound theoretical reasons for believing that a high value for the change of radiative forcing exists when CO2 doubles. Skeptics will want to believe the opposite. Without any observed data, I cannot see how the argument can be resolved. And this, of course, is the weakness of the IPCC approach. Without hard measured data, they can never use the “scientific method” to show that CAGW is real.
My feeling exactly, as I’ve said repeatedly on Judy’s blog.
Only when one sees the temperature increasing logarithmically with CO2 level can one possibly begin to believe all these cockamamie theories that it “ought to.”
What impresses me is the number of people who will deny seeing exactly that when it’s pointed out to them. Their response is, “Oh, that could be any curve,” without actually offering an alternative curve to the logarithmic one.
Deniers are wedged into denier mode, data will not change their minds no matter how good the fit to theory.
Vaughan Pratt:
You assert:
“Deniers are wedged into denier mode, data will not change their minds no matter how good the fit to theory”.
But
Catastrophists are wedged into catastrophist mode, data will not change their minds no matter how bad the fit to theory.
So, your point is?
Richard
Nothing you will read in this thread will change your opinion one jot.
When I was a young staff engineer, long before personal computers or even remote terminals, I worked on a large campus of Hughes Aircraft Company. We had a wildly popular football pool of a dozen or so games per week, handicapped with point spreads. I had a computer in my office that I used for my picks, which I posted on my office door midweek. People from different buildings would gather at my door, pads and pencils in hand, to get the computer picks. I didn’t tell them that I had used a random number generator.
Myhre, et al. (1998) did the same thing, making the picks look more genuine by graphing some with lines and some as if they were data points for the lines. To make it more convincing, they labeled a couple of curves “approximation” and a couple of them “fit”, as if to say approximation to data or fit to data. Gunnar Myhre was an IPCC Lead Author for both the TAR and AR4.
The equation scienceofdoom gives on his blog is ΔF =K*ln (C/C0), where K = 5.35 W/m^2 and C0 = 278 ppm. It is the bottom curve labeled “IPCC type fit to BBM results” with an rms fit of 1.08E-14 W/m^2, digitized uniformly from end-to-end with 15 points.
The curve “IPCC type fit to NBM results” is for all practical purposes the same as the NBM results, which is logarithmic with K = 5.744 and C0 = 280.02, with an rms error = 2.46E-16 over 13 points.
The curve “IPCC (1990) approximation” has K = 6.338, C0 = 278.555, with an rms error = 8.88E-16 over 16 points.
The curve “Hansen (1998) approximation” has K = 6.165, C0 = 288.328, with an rms error = 7.74E-15 over 19 points.
The data set “BBM results” has K = 5.541, C0 = 281.537, with an rms error of 6.427E-15 over all 11 legible points.
The data set “NBM TREX atmosphere” has K = 5.744, C0 = 280.024, with an rms error of 2.463E-16 over all 13 legible points.
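For readers unfamiliar with this kind of curve fitting, here is an illustrative sketch of the procedure being described: least-squares estimation of K and C0 in dF = K*ln(C/C0) from digitized points. The sample points below are synthetic, generated for the example; they are not the digitized values from Myhre et al.

```python
import numpy as np
from scipy.optimize import curve_fit

def log_forcing(C, K, C0):
    """Logarithmic forcing expression dF = K*ln(C/C0)."""
    return K * np.log(C / C0)

# Synthetic "digitized" points: generated from K = 5.35, C0 = 278 plus a little noise.
rng = np.random.default_rng(0)
C = np.linspace(300.0, 1000.0, 15)                      # ppm
dF = 5.35 * np.log(C / 278.0) + rng.normal(0.0, 0.01, C.size)

(K_fit, C0_fit), _ = curve_fit(log_forcing, C, dF, p0=(5.0, 280.0))
rms = np.sqrt(np.mean((log_forcing(C, K_fit, C0_fit) - dF) ** 2))
print(K_fit, C0_fit, rms)
```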
In summary, these are all models of the failed conjecture that RF depends on the logarithm of the CO2 concentration. Who among the authors stands first to take the credit? Hansen? Scienceofdoom says,
“Myhre et al and the IPCC following them are not making some blanket claim about logarithmic forcing.”
Yes, they are. The message is just disguised with ornaments.
Sod says,
“They make no claim that this represents an equation which can be extended from zero to infinite concentrations.”
Of course not — not explicitly. The result is obvious, and besides that observation makes the claim of the logarithm quite silly.
Using the least IPCC curve, the one for which sod gives an equation, when the concentration gets to 1.21E16, the RF is equal to the entire radiation absorbed on the surface, 168 W/m^2. IPCC, AR4, FAQ 1.1, Figure 1.1. When the concentration gets to 1.61E30, the RF is 342 W/m^2, IPCC’s TSI estimate. Id.
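For readers who want to check that arithmetic, a short sketch using the expression quoted above, dF = 5.35*ln(C/C0) with C0 = 278 ppm, solved for the concentration C = C0*exp(dF/5.35):

```python
import math
C0, K = 278.0, 5.35                        # ppm, W/m^2
for dF in (168.0, 342.0):                  # the two radiative flux values quoted above, W/m^2
    print(dF, C0 * math.exp(dF / K))       # roughly 1.2e16 and 1.6e30 ppm
```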
Of course, the most the CO2 concentration can be is 100%, or 1E6 parts per million, but the log equation doesn’t know that! And furthermore, the most the RF from CO2 can be is less than the ratio of the total of all CO2 absorption bands to the total band of blackbody radiation, and much less than one. But the log equation doesn’t know that either. Clearly the logarithm has limited value. How should the relation behave?
Beer and Lambert took these and other considerations into account when they developed their separate theories that jointly proved to be a law. The band ratio is but one conservative, upper bound to the saturation effect, which can be calculated from the Beer-Lambert Law, line-by-line or band-by-band, as one might have the need and patience for accuracy and resolution.
The logarithm model makes the CO2 concentration an exponential function of the forcing (absorption); the Beer-Lambert Law makes the absorption a saturating exponential function of the CO2 concentration.
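A small illustrative sketch of that contrast (the absorption coefficient here is arbitrary, chosen only to show the shape of the curves): a single monochromatic Beer-Lambert absorption saturates at 100%, while the fitted logarithm keeps growing.

```python
import numpy as np
C = np.array([300.0, 1e3, 1e4, 1e5, 1e6])         # ppm
absorbed = 1.0 - np.exp(-1e-3 * C)                # Beer-Lambert at one frequency, arbitrary k*L
log_fit = 5.35 * np.log(C / 278.0)                # the fitted logarithm, W/m^2
for c, a, f in zip(C, absorbed, log_fit):
    print(f"{c:>9.0f} ppm   absorbed fraction {a:5.3f}   log expression {f:6.1f} W/m^2")
```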
None of Myhre’s traces comprise measurements, not even the output of the computer models, which will surprise some. You can depend on these models, the NBM and BBM, to have been tuned to produce satisfactory results in the eyes of the modeler; they are in no way double-blind simulations. Just like GCMs.
Certainly, no tests have ever been conducted over the range of the graphs, 300 to 1000 ppmv. No theory supports the logarithm function, the long held, essential but false conjecture of AGW proponents. The applicable physics, the Beer-Lambert Law, is not shown by Myhre, et al., of course. And if any investigator ever made estimates qualitatively different from the logarithm, IPCC would not have selected his results for publication, nor referenced a work that did. He probably couldn’t have been published anyway, but if he had been, the folks at IPCC would deem the journal not to have been peer-reviewed, so the journal and its papers could be shunned.
Likewise, when challenged with physics, sod turns ad hominem:
“So when people write comments like this it indicates a sad disinterest in understanding the subject they appear so passionate about.”
“scienceofdoom, thanks for this lucid clarification.”
Jeff – You are arguing the equivalent of “the global temperature cannot possibly be increasing linearly at the rate of 0.1 C/decade, because that would mean that 2881 years ago the mean global temperature would have to have been below absolute zero“.
Make that “28801 years ago”.
Jeff,
Scientistofdoom, Chris Colose, Vaughan Pratt and several other ‘proper scientists’ on this blog use petty insult and high-handed sarcasm quite freely, as if it were somehow their right as ‘proper scientists’ when talking to ‘mere sceptics’. Many of whom are equally well or better qualified to discuss the topics Judith has been posting about…
Here is Jeff Glassman’s first paragraph:
> When I was a young staff engineer, long before personal computers or even remote terminals, I worked on a large campus of Hughes Aircraft Company. We had a wildly popular football pool of a dozen or so games per week, handicapped with point spreads. I had a computer in my office that I used for my picks, which I posted on my office door midweek. People from different buildings would gather at my door, pads and pencils in hand, to get the computer picks. I didn’t tell them that I had used a random number generator.
Let’s wonder what is the purpose of this story.
Scientistofdoom, Chris Colose, Vaughan Pratt and several other ‘proper scientists’ on this blog use petty insult and high-handed sarcasm quite freely, as if it were somehow their right as ‘proper scientists’ when talking to ‘mere sceptics’
I’ll never live that down. ;)
But do I want to? I try my derndest to treat “mere sceptics” with the utmost kindness and discretion. But we’re not dealing with mere sceptics here, but bad ones.
There are two ways to be a bad sceptic. One is to insist that scientific facts are not subject to democratic voting since they are immutable. The other is to acknowledge the need for scientific consensus but to insist that facts run for re-election periodically, just as we make senators run for office periodically, as a measure of protection against their becoming “inconvenient.”
Scientific facts are neither of these, they are more like Supreme Court justices who must run the gamut of Congress but are appointed if at all for life. Theories are proposed, sceptically examined, and eventually rejected or accepted.
Any subsequent successful impeachment is likely to lead to a Nobel prize.
The former kind of sceptic states their version of the facts and when it is pointed out that this is not the scientific consensus objects that science is not a voting matter.
The latter kind forces re-elections in order to vote on the matter so they can “kick the bums out,” namely the incorrect laws of science, and replace them with what they insist are the true laws. To this end they form their own contrarian party which they populate somehow with enough scientists proclaiming the new order to create the appearance of a majority opinion. There is a wide range of types of such scientists: TV weathermen, the nonexpert, Wall Street Journal subscribers, those who haven’t kept up with the literature, the confused, the willfully wrong, the uncredentialed, the dead, the pretend scientists, the fictional, and so on. And some (only a few I hope) have discovered that the trick that worked so well for getting good grades, to give the answer the professor wanted in preference to the one in the text when there’s a conflict, generalizes to research funding. (I should stress that the converse is definitely false: there are those who take oil industry money who nevertheless insist that anthropogenic global warming is a serious concern—but which of the 84 on that last list claims this and if so how did they end up on that list?)
I view both kinds as being mean to science. I therefore have no compunctions about being mean to them.
But not everyone who does not believe in global warming is necessarily a sceptic. They might simply not understand the material and would like it explained to them. Scientists (those in the teaching profession anyway) should not be mean to students having difficulties, they should be friendly and helpful and insightful and patient and understanding and give everyone A’s (erm, not the last)
But students and others who willfully feign ignorance, or insist on contradicting you with data you know from years of experience is completely wrong, count for me as sceptics. They are obstructing science, and then scientists start to act more like the police when dealing with someone who’s obstructing justice. The more egregious the offence, the meaner a scientist can get in defending what that scientist perceives to be the truth. Not all scientists perhaps, but certainly me, I can get quite upset about that sort of behaviour.
I do try to be tolerant when I’m not 100% sure which of these two types I’m dealing with, whether interested students having difficulties or willful obstructers of science. In the case of Arfur Bryant, with whom I had many exchanges elsewhere on Judy’s blog, I tried to remain patient with what I came to suspect was his pretense at being the former kind. We reciprocated each other’s politeness, but eventually PDA suggested I was wasting my time on him, just about the time I’d come to that conclusion myself.
Ferenc Miskolczi is a very different case from Arfur Bryant. Unlike Bryant, who claimed ignorance in the subject, Miskolczi claimed competence when his paper suggested the opposite. His misapplication of the virial theorem was (for me) only the first hint of this.
Andy Lacis found enough errors to satisfy himself that FM’s paper did not hold water. The big problem I had was Miskolczi’s failure to address the independence of the rate of the water cycle and the amount of water vapor held in the atmosphere. You might have a huge amount of vapor going from the ground to the clouds and immediately precipitating back down leaving virtually no water vapor up there, or it might hang around up there for a long time piling up like cars in a massive traffic jam and seriously blocking infrared. Or only a small trickle might go up, but again you have the choice of how long it hangs around.
The flow is relevant because it acts like a laptop’s heat pipe, which transfers heat from the CPU to the enclosure via evaporation at the CPU which passes along the pipe to the enclosure where it condenses. In the atmosphere the roles of CPU and enclosure are played by the Earth’s surface and the clouds respectively. Less flow means less cooling.
But the level of water vapor in the atmosphere is also relevant because it’s a greenhouse gas. A lower level means more cooling.
Miskolczi claimed to have shown that more CO2 would be offset by less water vapor. But without also calculating how this would impact the rate of flow no conclusion can be drawn about the impact of the “Miskolczi effect” on global warming. This is because if the flow is also reduced then you lose the 78 W/m^2 heat pipe labeled “evapotranspiration” in the famous Figure 7 of Kiehl and Trenberth’s 1997 paper. That’s the biggest loss of heat from the Earth’s surface, the second biggest being the net 390 – 324 = 66 watts of difference between direct radiation up and back radiation down.
A different kind of problem is that the claimed effect was simply not believable, which neither the paper nor Miklos Zagoni’s YouTube explanation (http://www.youtube.com/watch?v=Ykgg9m-7FK4) addresses. Usually when you have an unbelievable result you have an obligation to offer a digestible reason to believe it.
Unfortunately climate science is not (in my view) able to raise that kind of objection to Miskolczi’s paper without being accused of the same thing. If the point of hours or months of computer modelling is to increase confidence that global warming is happening, it’s not working for those who look at the last 160 years of crazily fluctuating temperature and say “seeing is believing.” The significant fluctuations in 19th century temperature appear to them inconsistent with the offered explanations of global warming. The only ones not put off by those fluctuations are those already convinced of global warming. This needs to be fixed.
But I digress. Getting back to the main point, after this I did not feel inclined to treat Miskolczi kindly. But I see in retrospect that when I wrote that “putting morons on pedestals makes you a moron” (paraphrasing “arguing with idiots makes you an idiot”) I should have expanded on it with “and putting obstructionists on pedestals makes you an obstructionist” so as to offer a wider selection. I didn’t actually commit to either of these.
Admittedly this is a bit like Dr. Kevorkian loaning out the choice of his Thanatron or a loaded revolver. But then Kevorkian did not go to jail until he was caught injecting someone himself. I did not actually say either Miskolczi or Hagen was a moron, I left it up to them to complete the reasoning as they saw fit.
One reasonable completion would be “putting someone who believes the Clausius virial theorem applies here on a pedestal makes you such a believer.”
This comment (with some background) would make an interesting guest post!
All we need is to find the appropriate blog for that…
B-)
Al – You seem to be combining a misinterpretation with an exaggeration.
Fig. 2 in Myhre and Stordal shows three sets of curves, one for a standard tropical sounding, one for a standard midlatitude sounding, and one for a standard polar sounding. If you want to see how the CO2 irradiance change differs across the globe, you might compare the result from the polar sounding with the result from the tropical sounding. This is the calculation you’ve done, except you’ve exaggerated the difference by comparing the irradiance at one altitude (60 km) for the pole with a different altitude (10 km) for the tropics. In reality, the tropopause in the tropics is higher than the tropopause at the pole, so a more apt comparison would be the irradiance at 20 km in the tropics (0.10 W/m2) with the irradiance at 8 km at the pole (0.072 W/m2), a difference of 30-50%.
But that’s not the issue, and that’s where the misinterpretation comes in. Nobody is going to compute the radiative forcing using the coldest temperature profile possible, and nobody is going to compute the radiative forcing using the warmest temperature profile possible. Instead, you’re going to use the global mean temperature profile (if you’re being crude), or use a range of temperature profiles that together come reasonably close to the actual temperature structure. Freckleton et al. (1998) (their Fig. 2) showed that a weighted average of three profiles gets you to within 1-2% of what you’d get with the full range of atmospheric conditions. So that’s what Myhre et al. (1998) did.
There is indeed an exaggeration, to draw your attention. I was trying to illustrate the possible range of magnitudes of the “forcing” effect. As I already said, you can use a mix of profiles, and the range of forcings can be smaller.
Yet you need to explain your “unphysical” hint (slip?). I assert that the integration must always be done from the surface to infinity, because this is the actual range over which the final balance (steady state) is achieved for the system. There is no “definitional” choice. Granted, you can stop the calculations at a certain height if you can show that the curve reaches some asymptote and does not change anymore. But Figure 2 of Myhre and Stordal (1997) shows that if you continue to calculate well past the tropopause (wherever it may be), the calculated “forcing” gets smaller by about half. The effect of your “selection of tropopause” is an exaggeration by 100%. Yet AGW proponents keep saying that 3.7 W/m2 is “well established”. Figure 2 shows that this is baloney.
Please explain why you stop the calculations at the “tropopause” when Fig. 2 shows that this leads to a 2X inflation of the estimate (even if we assume that the RT calculations were done correctly, which can also be questioned).
Al – Indeed such a seemingly strange choice requires justification. Here’s the scoop:
From the stratosphere on up, atmospheric temperatures are pretty well determined by the combination of radiation absorption/emission and the horizontal and vertical motions of the air. In contrast, the troposphere’s temperature structure is pretty strongly determined by exchange of heat with the ground and oceans in combination with the small-scale and large-scale motions that redistribute heat under the constraints of dry and moist adiabatic lapse rates.
Because of this difference, everything we know, observe, and simulate about the stratosphere shows that it adjusts fairly quickly to a radiative perturbation (on the order of a few months). However, the troposphere takes much longer, in particular because the oceans are very slow to respond to radiative forcing changes.
Myhre and Stordal (1997) in Fig. 2 show the instantaneous calculated changes in downward irradiance due to a CO2 change. As you’ve pointed out, the irradiance change is largest at the tropopause and is smaller higher up in the atmosphere. This means that the stratosphere would have a radiative imbalance, radiating away more energy than it’s absorbing. As a result, it will cool quickly, eventually equilibrating over a few months when the decrease of upward irradiance at the top of the atmosphere has become equal to the decrease of upward irradiance at the tropopause.
So, all in all, the IPCC figured it would be simpler to consider the radiative forcing after the stratosphere equilibrates rather than before the stratosphere equilibrates, since that’s what’s ultimately going to determine what happens in the troposphere. Here’s what they say about it.
Footnote 1: Myhre et al. (1998) refer to the instantaneous radiative change at the tropopause as “instantaneous” and the radiative change after the stratosphere equilibrates as “adjusted”.
Footnote 2: The substantial decrease in instantaneous irradiance change from the tropopause to the top of the atmosphere due to a change in CO2, which is what caused all this discussion, is, as far as I know, shared only by O3 among Tyndall gases, and is thus one of the fingerprint elements for climate change attribution.
Thanks for the reply, I missed it in the noise. I am certainly familiar with IPCC/Hansen’s definition of “radiative forcings”, and I expected AGW proponents to bring it in. This definition begs a few additional questions.
(1) You said that the stratosphere will “cool quickly, eventually equilibrating over a few months”. Given the results of some radiative models I saw, the stratosphere has a “cooling rate” of about one degree C per day. Therefore, without some compensating heat fluxes it would completely cool down to absolute zero in a “few months”. It does not. Therefore, the concept of the stratosphere “cooling quickly” and “readjusting to radiative equilibrium” does not exactly fit observations, would you agree?
(2) When the IPCC says “stratospheric temperatures to readjust to radiative equilibrium”, does it mean literally that they assume no substantial convection up there? This would be very odd, because we know that CO2 is “well mixed” everywhere, more or less. The question would be, how could CO2 ever get into the stratosphere if molecular diffusion would take about 100,000 years to carry it across a 20 km layer of motionless air? Would you agree that the time and mechanism of temperature adjustment would require some substantial account of “stratospheric dynamics”, which has been neglected so far?
(3) You say, “the troposphere’s temperature structure is pretty strongly determined by exchange of heat with the ground and oceans”. While it sounds very reasonable on the surface, experience shows that the surface-atmosphere system has a pretty fast response to direct radiative imbalances, say when clouds come by or the seasons change. Therefore, the concept of an extremely slow response of the surface-troposphere system (“typically decades”) must also be quite a stretch, would you agree?
(4) IPCC defines: “In the context of climate change, the term forcing is restricted to changes in the radiation balance of the surface-troposphere system imposed by external factors, with no changes in stratospheric dynamics, without any surface and tropospheric feedbacks in operation … , and with no dynamically-induced changes in the amount and distribution of atmospheric water.”
So, everything is fixed in the troposphere (including temperature) except the CO2 concentration. The forcing is therefore a discrepancy between the instantaneous change in concentration and the underlying temperature. This forcing is expected to last “typically decades”, correct?
Now, how would you physically bring a CO2 jump into the entire atmosphere? One would assume that turbulent mixing of convectively-stirred air is an essential means of propagating the surface-injected CO2 up to the tropopause and above. Fine. This means that a new state of the system has been created, in which the temperature now deviates from the new equilibrium and must adjust. The temperature at the emission height is now a perturbation, and must last “typically decades” to force and sustain the process of global warming. Is this a correct description?
From Freckleton (1998) abstract :
“By comparison with calculations at a high horizontal resolution, it is shown that the use of a single global mean profile results in global mean radiative forcing errors of several percent for CO2 and chlorofluorocarbon CFC-12 (CCI2F2); the error is reduced by an order of magnitude or more if three profiles are used, one each representing the tropics and the southern and northern extratropics.” (Sorry, the article is behind a paywall)
So, they had ONE “high horizontal resolution” MODEL of the atmosphere, and they calculated the “forcing from 2xCO2”, which is ONE NUMBER. Then they have THREE numbers from three standardized atmospheric MODELS. Then, by mixing the three numbers with three fudge coefficients, they got close to the “high resolution” number within a percent. Fantastic. (Sorry, the article is behind a paywall, so I can’t do a more detailed “review”.)
I think I can do better than that: I could mix the three numbers to match the “high-resolution” forcing number to zero percent, with 20 zeros after the decimal point. [I am sure they tried to fit several GH gases at once, but the whole idea of parametric fudging does not fly in the first place.]
Is this how the entire radiative forcing science operates, and how the confidence was “established” and the AGW foundation was built? Who said that their first model has the “actual temperature structure”? A few radiosondes at a handful of convenient locations launched twice a day? Sorry, this doesn’t sound serious.
Sorry about the paywall. They didn’t mix them arbitrarily, they weighted them by the area of the globe represented by each.
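As an illustration of area weighting for latitude bands (the 30 degree boundary and the forcing numbers below are my own assumptions for the example, not values from Freckleton et al.):

```python
import math

def band_fraction(lat1_deg, lat2_deg):
    """Fraction of the sphere's surface between two latitudes."""
    return (math.sin(math.radians(lat2_deg)) - math.sin(math.radians(lat1_deg))) / 2.0

weights = {
    "southern extratropics": band_fraction(-90.0, -30.0),   # 0.25
    "tropics":               band_fraction(-30.0,  30.0),   # 0.50
    "northern extratropics": band_fraction( 30.0,  90.0),   # 0.25
}
forcings = {"southern extratropics": 3.5, "tropics": 4.0, "northern extratropics": 3.6}  # hypothetical W/m^2
print(weights)
print(sum(weights[k] * forcings[k] for k in weights))       # area-weighted global mean
```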
And which profile was representative for polar atmospheres? Tropical, or else?
I guess the HITRAN database does not handle continuum and far wing absorption particularly well, as it must have problems with weak absorption lines as well. Typical path lengths in the real atmosphere can be as long as several kilometers. On the other hand the important frequency bands from a climatological point of view are the ones where optical depth is close to unity. Absorption at frequencies in these bands (like the so called atmospheric window) is not easily measured in the lab, because cells used in spectroscopy have limited path lengths (several dozens of meters max). Therefore the database of values derived from actual measurements is insufficient for algorithmic determination of atmospheric absorption/emission; one also needs extrapolation based on poorly understood models of continuum and far wing behavior. It is not a straightforward task to verify these models with in-situ atmospheric measurements, as the trace gas content of the atmosphere is highly variable and is neither controlled nor measured with sufficient spatio-temporal resolution.
Even some of the so called “well mixed” gases (like carbon dioxide) are not really well mixed. In the boundary layer, close to the vegetation canopy CO2 concentration can be anywhere between 300 and 600 ppm, depending on season, time of day, insolation, etc. Active plant life continuously recreates this irregularity, which is then carried away by convection and winds. Turbulent mixing and diffusion need considerable time and distance to smooth it out and bring the concentration back to its average value.
In the case of water content, the humidity of an air parcel is even more dependent on its history (time and temperature of last saturation). There is strong indication atmospheric distribution of water is fractal-like over a scale of many orders of magnitude (from meters to thousands of kilometers). Fractal dimension of this distribution along isentropic surfaces tends to decrease with increasing latitude. It is close to 2 in the tropics, but drops well below 1 in polar regions. In other words, it is transformed by advection from an almost space-filling tropical distribution through a stringy one at mid-latitudes to granulous patches at the poles.
As dependence of transmittance on concentration of absorber is highly non-linear, average concentration alone does not determine atmospheric absorptivity at a particular wavelength, finer details of the distribution (like higher moments) have to be given as well. However, these are neither measured nor modeled (because spatial resolution of computational climate models is far too coarse). Even with an absorber of high average atmospheric concentration, if there are see-through holes in its distribution, average optical depth can be rather low (you can see through a wire fence easily, while a thin metal plate made of the same amount of stuff blocks view entirely).
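A small numerical sketch of that non-linearity point (illustrative optical depths only): the average transmittance of a patchy absorber differs from the transmittance of a uniform column with the same mean optical depth, because exp(-tau) is convex.

```python
import numpy as np
tau_patchy  = np.array([0.0, 0.0, 4.0, 4.0])    # "wire fence": clear holes plus thick patches
tau_uniform = np.full(4, tau_patchy.mean())     # same average optical depth, spread evenly
print(np.exp(-tau_patchy).mean())               # ~0.51 of the radiation gets through
print(np.exp(-tau_uniform).mean())              # ~0.14 gets through the uniform "plate"
```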
So no, I do not have much confidence in radiative transfer models. The principles behind them are sound, but the application is lacking.
On the other hand the important frequency bands from a climatological point of view are the ones where optical depth is close to unity. Absorption at frequencies in these bands (like the so called atmospheric window) is not easily measured in the lab,
That’s very interesting, Berényi. What would you estimate as the uncertainty in total CO2 forcing as a function of this uncertainty?
In the boundary layer, close to the vegetation canopy CO2 concentration can be anywhere between 300 and 600 ppm, depending on season, time of day, insolation, etc.
What should we infer from this? That the Keeling curve underestimates the impact of CO2 on global warming? Are you trying to scare us?
There is strong indication atmospheric distribution of water is fractal-like over a scale of many orders of magnitude (from meters to thousands of kilometers). Fractal dimension of this distribution along isentropic surfaces tends to decrease with increasing latitude.
Also very interesting. How much of the impact of this effect on global temperature would you attribute to human influence? Global climate and anthropogenic global climate are not the same thing. Global climate has been going on for, what, 4.5 billion years? Anthropogenic global climate can only be compared with that on a log scale: roughly ten orders of magnitude.
People, please get some perspective here.
You write “On the other hand the important frequency bands from a climatological point of view are the ones where optical depth is close to unity.”
I think you mean “close to zero” or “not close to unity”. Optical depth is unity, when the radiation is fully absorbed or scattered.
(I was waiting for Berényi Péter to answer this and then forgot all about it until just now.)
My own answer would be that unit optical depth is where the OLR is changing most rapidly as a function of number of doublings of the absorber (e.g. CO2)
and is therefore the most important density.
To see this, let n be the number of doublings with n = 0 chosen arbitrarily (e.g. for the CO2 level in 1915 say). Hence for general n, optical depth τ = k*2^n where k is whatever the optical thickness is at 0 doublings. Hence (assuming unit surface radiation) OLR = exp(-τ) (definition of optical depth) = exp(-k*2^n).
We want to know for what value of n (and hence optical depth) the OLR is changing most quickly. So we take the derivative of this twice with respect to n and obtain −k*ln(2)*(ln(2) − k*ln(2)*2^n)*exp(n*ln(2) − k*2^n) = −k*ln(2)^2*2^n*(1 − k*2^n)*exp(−k*2^n). This vanishes when 1 − k*2^n = 0, i.e. k*2^n = 1 (and hence n = lb(1/k)). But τ = k*2^n, so the second derivative vanishes when τ = 1, the desired result.
The same result would have obtained had we counted triplings of absorber instead of doublings: we would instead have 3^n and ln(3) in place of 2^n and ln(2) everywhere, but in the end we would obtain τ = k*3^n = 1.
(The naive thing would have been to do this more directly as a function of optical depth itself, but one would then find that the OLR changes most quickly when the optical depth is zero. This is ok when considering absolute levels of CO2, but not when considering dependence of temperature on optical depth, the “climatological point of view” Berényi referred to, which calls for a second exponential.)
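A quick numerical check of that result (my own sketch, with an arbitrary value of k): the magnitude of d(OLR)/dn is largest where the optical depth τ = k*2^n equals 1.

```python
import numpy as np
k = 0.05                                    # arbitrary optical depth at n = 0 doublings
n = np.linspace(0.0, 10.0, 100001)          # number of doublings
tau = k * 2.0**n
olr = np.exp(-tau)                          # unit surface radiation, as in the comment above
dolr_dn = np.gradient(olr, n)
print(tau[np.argmax(np.abs(dolr_dn))])      # prints a value very close to 1.0
```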
I’m not sure where Pekka is getting his definition of optical depth from, but ordinarily it is synonymous with optical thickness and is defined with natural logs, as distinct from optical density which is the same thing but customarily with decimal logs. Absorbance is yet another name for the concept, preferred over optical density by the IUPAC, and does not commit to a particular base of log, so one says (or at least the IUPAC recommends) decadic or Napierian absorbance when not clear from context.
A material that is fully absorbing or scattering the radiation has infinite optical depth, one that allows it all to pass has zero optical depth. Optical depth is additive, so that radiation passing through depth d1 and then d2 is said to have passed through depth d1 + d2. It is a dimensionless quantity.
For a given wavelength ν of radiation and absorbance of the atmosphere at that wavelength, an optical depth of 1 means that the fraction of photons of that wavelength leaving Earth’s surface vertically and reaching outer space is 1/e. As we saw above this is the depth where the number of escaping photons of that wavelength is most sensitive to changes in the logarithm of absorbance.
Decreasing absorbance drives this fraction from 1/e up to 1, where the optical depth vanishes. The closer optical depth gets to 0, the less the impact of a given percentage change in the logarithm of absorbance (but the more the impact when working directly with absorbance itself).
Conversely increasing absorbance drives this fraction from 1/e down to 0, where the optical depth tends to infinity. The closer optical depth gets to infinity, again the less the impact of a given percentage change in absorptivity, but simply because the change in number of escaping photons is negligible, it makes no difference at this end whether we’re working directly with absorbance or with its logarithm.
Jeff,
You are barking up the wrong tree. Radiative forcings are well defined by both off-line radiative transfer models and by those that are used in climate GCMs. Radiation is computed explicitly, and does not rely on logarithmic formulas. Any logarithmic behavior that you see in model output is the result of what the radiation models produce in response to changing greenhouse gas amounts, not a constraint.
Take a look at some examples that I posted earlier on Roger Pielke Sr’s blog
http://pielkeclimatesci.wordpress.com/2010/11/23/atmospheric-co2-thermostat-continued-dialog-by-andy-lacis/
We also include multiple scattering effects in our GCM radiation modeling. These go beyond what Beer-Lambert exponential extinction is designed to represent.
Dr. Lacis
Not being a scientist but a casual observer, I wonder how you would reconcile, within the CO2 hypothesis:
– 1920-45: a gentle rise in CO2 with the sharpest rise in temperature
– 1945-80: a cooling period at the time of the steepest rise of CO2 in the recorded history.
As far as I can see, it is not possible for the CO2 hypothesis to become an accepted theory if, out of the 150 years of reliable records, the hypothesis is not supported 30% of the time.
http://www.vukcevic.talktalk.net/CO2-Arc.htm
re. the above linked graph: any geomagnetic hypothesis, despite its good correlation, is for the time being not a contender without a viable, quantifiable mechanism.
Define “the CO2 hypothesis”.
Hi Dr. Nielsen-Gammon
That would be something (an axiom possibly, but a hypothesis or even a theory, not so sure). I did say: ‘Not being a scientist but a casual observer’. If I were to do that, I would be compounding one possible error with a much greater one.
I may have naively assumed that Dr. Lacis might be able to deal with my less than precisely articulated question, but your opinion would also be welcome and appreciated.
In comparing the time periods 1920-45 with 1950-2000, you should take a look at Fig. 5 and Fig. 8 of Hansen et al. (2007), “Climate simulations for 1880-2003 with GISS ModelE”. The pdf is available from the GISS webpage http://pubs.giss.nasa.gov/abstracts/2007/
There are other forcings besides the GHG increases that need to be included, especially the strong volcanic eruptions of Agung, El Chichón, and Pinatubo that provided strong negative forcing in the 1960-1990 time period.
Dr. Lacis
Thank you for your prompt reply. I will certainly look into the suggested alternatives again. For a scientific theory to stand the rigorous test of time, when sufficient quantity and quality of data are available, those formulating it must be explicitly precise about every single exception, however irritating these may be.
Otherwise a hypothesis is just that, a hypothesis, and it will be deprived of the respect and acceptance accorded to a theory.
I am also looking forward to possible further clarification by Dr. Nielsen-Gammon.
Thanks again.
p.s. It was 1945-1980 I referred to, which is a very different proposition from the 1950-2000 period you quoted; as a scientist aware of the exactness required for the case you present, I am sure you would agree.
Dr. Lacis
By the time the Agung volcano erupted in 1963, temperature had been falling for 15 years or longer and was already at its trough, while you would agree that El Chichón (1982) and Pinatubo (1991) fall outside the period I referred to (1945-1980), when the temperature was already rising.
The period I referred to is clearly marked on the graph
http://www.vukcevic.talktalk.net/CO2-Arc.htm
which you may not have taken the opportunity to look at.
Dr. Lacis, if we (i.e. our generation) are to build a credible climate science, then its foundations must be solid and indisputable.
Thank you for your time; I can assure you it was not wasted. In my case it has widened my perspective on the soundness of the arguments presented.
By the time the Agung volcano erupted in 1963, temperature had been falling for 15 years or longer and was already at its trough,
And your point?
The Atlantic Multidecadal Oscillation explains this very well, Milivoje.
I suppose my statement of the CO2 hypothesis would be something like:
CO2 and other non-condensing Tyndall gases, whose concentrations are increasing rapidly due to man’s influence, have become one of the strongest forcing agents on the global climate, and further increases in concentration will be large enough to cause a further increase of several degrees C within the century.
CO2 and other non-condensing Tyndall gases,
Here is why I believe “greenhouse gas” is the correct name.
1. It’s the standard name today.
2. Greenhouses perform two functions: retaining the contained air, and trapping outgoing longwave radiation. Earth does the same thing, using gravity in place of walls and greenhouse gases in place of glass. The analogy for the former should be obvious (without gravity Earth would be a lot colder); for the latter, glass is at least triatomic (SiO2 for example), like greenhouse gases (H2O, CO2, O3, CH4, etc.). Salt windows do not trap infrared (shorter than 17 microns), being diatomic (NaCl), like O2 and N2.
The claim that greenhouses exhibit the greenhouse effect was denied by R.W. Wood in the Feb. 1909 issue of Phil. Mag., and responded to in the July issue by Charles Greely Abbot, director of the Smithsonian Observatory, whose ox Wood was goring. 65 years later the same question was debated strenuously in two journals in 1974, with the outcome being more or less consistent with what I wrote above minus my point about gravity. More on this by googling “The atmospheric science community seems to be divided” (must be in quotes) for Craig Bohren’s perspective on this debate.
It is misleading to call solid NaCl diatomic. It is an ionic crystal formed from individual atoms (or ions) without any grouping into diatomic molecules.
In ionic crystals larger scale excitations – phonons – control the interaction with infrared radiation. It is, however, true that ionic crystals are transparent to a part of the IR radiation. NaCl absorbs strongly above 17 um and the lighter LiF above 7 um. The reflection gets strong at even longer wavelengths (>30 um) and is therefore not so important.
Normal glass is also transparent to shorter wavelengths of infrared, but the limiting wavelength is typically 2-4 um and varies depending on the type of glass. This limit is so low that glass is indeed an efficient barrier to IR radiation.
Thanks for clarifying that, Pekka. Sounds like we’re in perfect agreement.
I have a couple of NaCl windows at home that I picked up for fifty bucks each, they’re great fun to play with. If you put a patch of black insulating tape on a saucepan (without which the silvery surface doesn’t radiate much) and boil water in it, a $10 infrared thermometer (way cheaper than the salt windows) will register close to 100 °C. When you put a sheet of glass between the thermometer and the saucepan the temperature plummets 60-70 degrees. But when you put a salt window between them there is hardly any difference.
Hey, I’m just a retired computer scientist having fun doing the stuff I was trained for in college half a century ago before I discovered the joy of computing.
Incidentally, in connection with the molecular structure of NaCl, Arrhenius’s logarithmic dependency of the Earth’s surface temperature on atmospheric CO2 level was not his only “disruptive” contribution. Back when it was assumed that NaCl in solution consisted of diatomic molecules floating around among the water molecules, Arrhenius argued that they dissociated into Na⁺ and Cl⁻ ions, as an explanation of why salt lowers the freezing point of water. British chemist Henry Armstrong disagreed, arguing instead that NaCl associated with water to form more complex molecules. That’s the simplified version, the longer version is more complicated.
As Pekka points out, the NaCl molecules lose their identity as such in the crystalline form. One can still pair them up, but not uniquely: there are six possible (global) pairings, one for each of the six Cl neighbors of each Na atom (or vice versa), since rock salt forms a face-centered cubic lattice. Pairing one Na with one Cl determines all remaining pairings (assuming no dislocations).
A. Lacis wrote: “Radiative forcings are well defined by both off-line radiative transfer models, and by those that are used in climate GCMs. Radiation is being computed explicitly”
It is like saying that since my calculators have very accurate algorithms to calculate exponential and logarithmic functions explicitly with 20-digit accuracy, I can now calculate anything with the same accuracy, be it ocean heat content, or annually-averaged CO2 flux across the ocean surface, etc. Or global OLR. Don’t you see a big lapse in logic in your statements?
David Hagen, 12.6.10 7:29 pm, 7:31 pm
Miskolczi (2010) is about tau_sub_a, the Greenhouse-Gas Optical Thickness. He says,
>> The relevant physical quantity necessary for the computation of the accurate atmospheric absorption is the true greenhouse-gas optical thickness . The definition and the numerical computation of this quantity for a layered spherical refractive atmosphere may be found in Miskolczi [4]. P. 244.
Miskolczi [4] is Miskolczi, F.M. “Greenhouse effect in semi-transparent planetary atmospheres”, J. Hungarian Met. Serv., v. 111, no. 1, 2007, pp. 1-40. Miskolczi (2010) relies on [4] on pp. 244, 248 (2), 253 (2), and 259. He also includes as [11], Miskolczi, F.M. and M.G. Mlynczak, “The greenhouse effect and the spectral decomposition of the clear-sky terrestrial radiation”, J. Hungarian Met. Serv., v. 108, no. 4, 2004, pp. 209-251, but with no citations in the paper.
In response to a reader’s invitation, I recently reviewed [4] and [11] jointly. The review can be read at IPCC’s Fatal Errors in response to a comment on 1/14/10. Google for Miskolczi at http://www.rocketscientistsjournal.com. My conclusions include that the author used a definition of greenhouse effect that was different than IPCC’s, that he tried to fit data from closed-loop real world records in an open-loop model, that he used satellite radiation measurements mistakenly as a transfer function, and that he forced his transfer function arranged inside a control loop to do the work of the entire control loop. He concludes,
>>>> The theoretically predicted greenhouse effect in the clear atmosphere is in perfect agreement with simulation results and measurements. [11], Miskolczi (2004), p. 209.
To which I responded,
>>Just as a matter of science, Miskolczi goes too far. An axiom of science in my schema is that every measurement has an error. A more concrete observation is that his greenhouse effect is for a clear atmosphere, meaning cloudless, but he cannot possibly have had such measurements.
The concluding exchange reads as follows:
>>>>[I]t is difficult to imagine any water vapor feedback mechanism to operate on global scale. [4], Miskolczi 2007, p. 23.
>>>>On global scale, however, there can not be any direct water vapor feedback mechanism, working against the total energy balance requirement of the system. [4], Miskolczi 2007, p. 35.
>>There is precisely a water vapor feedback mechanism in the real climate. Miskolczi’s work has been productive. It has discovered the existence of the powerful, negative, water vapor feedback. Specific humidity is proportional to surface temperature, and cloud cover is proportional to water vapor and CCN (cloud condensation nuclei) density, which has to be in superabundance in a conditionally stable atmosphere, but which is modulated by solar activity. In the end, Miskolczi, and hence Zágoni, share a fatal error with IPCC. The fatal result is independent of the mathematics. One cannot accurately fit an open loop model to closed loop data.
Water vapor feedback works through cloud albedo to be the most powerful feedback in climate. It is positive and fast because of the burn off effect to amplify solar variations. It is negative because warming increases humidity, and slow because of the high heat capacity of surface waters. This negative feedback regulates the global average surface temperature to mitigate warming from any cause. It has not been discovered by IPCC.
I conclude that [4], Miskolczi (2007), is an essential foundation of Miskolczi (2010), so the latter inherits fatal errors from its parent.
Specifically you invited me to examine Figure 10 and sections 3 and 4 (probably pp. 257-260) in Miskolczi (2010). Miskolczi is here testing a model that says greenhouse absorption should be proportional to layer thickness, and that layer thickness should increase optical thickness. He says,
>>To investigate the proposed constancy with time of the true greenhouse gas optical thickness, we now simply compute tau_sub_a every year and check the annual variation for possible trends. In Fig. 10 we present the variation in the optical thickness and in the atmospheric flux absorption coefficient in the last 61 years.
From which he observes,
>>The correlation between tau_sub_a and the top altitude is rather weak.
He leaves to the reader the task of visual correlation. I would not venture a guess about the correlation between the two records in Figure 10. However, they could be made to appear much more correlated by adjusting the vertical scales to emphasize that effect. This is what IPCC does. Correlation is mathematical, and he should compute it.
Miskolczi concludes
>> In other word, GCMs or other climate models, using a no-feedback optical thickness change for their initial CO2 sensitivity estimates, they already start with a minimum of 200% error (overestimate) just in Δtau_sub_a.
Besides visual correlation, a candidate for the most common error in data analysis is to take differences of a noisy signal and then attempt to fit a function to the differences. Correlating one record with another is wholly analogous to fitting a function to a record. This error is found every day in economics, and in fields like drug and food studies where the investigator attempts to find a best-fit probability density. Engineers quickly learn not to design a circuit to differentiate a noisy signal. Taking differences amplifies the noise and attenuates the signal. One mathematical manifestation of the problem is that a sample probability density, the density histogram, is not guaranteed to converge to the true population density as the number of samples or cells increases. Not so the probability distribution! Spectral densities will not converge, but spectra will. A well-behaved spectrum often has an impossible spectral density, as, for example, whenever line spectral components are involved. The better technique, then, is to fit a function to the total signal, to the cumulative probability, or to the spectrum, and then, if needed or for human consumption, to differentiate (take differences of) the best-fit function.
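Here is a minimal numerical sketch of the differencing point (the trend, noise level, and record length are invented for illustration only):

import numpy as np

# Illustrative only: a slow linear trend buried in white noise.
rng = np.random.default_rng(0)
n = 61                                  # e.g. 61 annual values
t = np.arange(n)
signal = 0.02 * t                       # slow trend (the "signal")
noise = rng.normal(0.0, 0.5, n)         # measurement noise
x = signal + noise

# First differences: the trend collapses to a constant 0.02 per step,
# while the noise variance roughly doubles (var(e_t - e_{t-1}) = 2 var(e)).
print("signal-to-noise of the record:     ", round(np.ptp(signal) / np.std(noise), 2))
print("signal-to-noise of the differences:", round(0.02 / np.std(np.diff(noise)), 2))

# Fitting the levels first, then differentiating the fit, preserves the trend.
slope = np.polyfit(t, x, 1)[0]
print("slope recovered from a fit to the levels:", round(slope, 3), "(true value 0.020)")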
My advice to all is always be suspicious of analysis from data that are differences, anomalies, or densities.
By taking differences of the optical thickness and of atmospheric absorption data, Miskolczi is differentiating noisy signals. As discussed above, he should first fit analytical functions to each data record, such as power series or orthogonal series, such as Fourier series. In the best of worlds, these might reveal an analytic relation between the signals. Regardless, he can next detrend the signals with parts of his analytical functions to perform a full correlation analysis, providing a scatter diagram and graphical and numerical demonstrations of linear predictors and of data correlation. After this is done, his paper might be ripe for conclusions.
Jeff Glassman
re: “By taking differences of the optical thickness and of atmospheric absorption data, Miskolczi is differentiating noisy signals. As discussed above, he should first fit analytical functions to each data record,”
Please clarify where you see Miskolczi “taking differences in optical thickness” or “differentiating noisy signals.”
I think you have misinterpreted his papers.
I understood him to actually have “first fit analytical functions to each data record,” – e.g. of the atmospheric profile based on TIGR radiosonde data.
Then he calculates the optical absorption for each of 150 layers.
Next he INTEGRATES this (adds), not differentiates (subtracts) – to get the global optical depth.
His correlations are fitting parameters to the observed radiosonde data processed to give optical absorption. The differences he takes are after taking these parameters, or after finding and rounding the correlations between the fluxes.
Jeff Glassman
re Cloud feedback. You note: “Specific humidity is proportional to surface temperature, and cloud cover is proportional to water vapor and CCN (cloud condensation nuclei) density, which has to be in superabundance in a conditionally stable atmosphere, but which is modulated by solar activity.”
Do you have any way to clearly quantify this? Or papers supporting it?
e.g., Roy Spencer critiques the conventional assumption that clouds dissipate with warming, giving a positive feedback.
A Lacis, 12/8/10, 11:49 am
Wrong tree?
What I said was “Radiative forcing in a limited sense applies radiative transfer, but it is not the same.” How can I parse what you have written to see what the wrong tree is?
The core, the heart, the essence of the AGW model is the existence of a climate sensitivity parameter, in one form or another. It is the Holy Grail of AGW. Sometimes it’s the transient CSP, sometimes the equilibrium CSP, and sometimes just the vanilla CSP. Sometimes it is represented by λ, and sometimes not. IPCC says,
>>The equilibrium climate sensitivity is a measure of the climate system response to sustained radiative forcing. It is not a projection but is defined as the global average surface warming following a doubling of carbon dioxide concentrations. It is likely to be in the range 2ºC to 4.5ºC with a best estimate of about 3ºC, and is very unlikely to be less than 1.5ºC. Values substantially higher than 4.5ºC cannot be excluded, but agreement of models with observations is not as good for those values. Water vapour changes represent the largest feedback affecting climate sensitivity and are now better understood than in the TAR. Cloud feedbacks remain the largest source of uncertainty. {8.6, 9.6, Box 10.2} AR4, SPM, p. 12.
This puts the parameter as a response to an RF. In another expression, IPCC turns the relation around a bit, saying
>>The simple formulae for RF of the LLGHG quoted in Ramaswamy et al. (2001) are still valid. These formulae are based on global RF calculations where clouds, stratospheric adjustment and solar absorption are included, and give an RF of +3.7 W m–2 for a doubling in the CO2 mixing ratio. (The formula used for the CO2 RF calculation in this chapter is the IPCC (1990) expression as revised in the TAR. Note that for CO2, RF increases logarithmically with mixing ratio.) 4AR ¶2.3.1 Atmospheric Carbon Dioxide, p. 140.
Of course RF increases logarithmically with mixing ratio! The concept that RF(2C) = a constant + RF(C) is a functional equation, and its unique solution is the logarithm function, and the base is irrelevant. Generally, the solution to y(kx) = constant + y(x) is the logarithm, and in the AGW world, the standard k is a doubling of x, the concentration of CO2, almost always. The constant is the climate sensitivity parameter when k = 2.
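Since the step from the functional equation to the logarithm is the crux of this argument, here is a minimal derivation sketch; it assumes, beyond what is stated above, that the relation holds for every scaling factor k (not just doubling) and that RF is continuous:

\[ RF(kC) = c(k) + RF(C) \quad \text{for all } k, C > 0. \]
\[ \text{Put } C = e^{u},\ v = \ln k,\ h(u) = RF(e^{u}): \qquad h(u+v) - h(u) = c(e^{v}),\ \text{independent of } u. \]
\[ g(v) := h(u+v) - h(u)\ \text{then satisfies } g(v_1+v_2) = g(v_1) + g(v_2);\ \text{with continuity, } g(v) = a\,v. \]
\[ \text{Hence } h(u) = a\,u + b, \quad RF(C) = a \ln C + b, \quad RF(2C) - RF(C) = a \ln 2. \]

Without the “every k” assumption, the doubling relation alone also admits solutions with an added term periodic in log C, so the uniqueness claim needs that extra condition.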
Safe to say, all the major computer models used by IPCC produce a constant climate sensitivity parameter. A table of the models studied in the Coupled Carbon Cycle-Climate Model Intercomparison Project (C4MIP) with their transient climate sensitivity parameter is AR4, Table 7.4, p. 535. IPCC says,
>>The equilibrium climate sensitivity estimates from the latest model version used by modelling groups have increased (e.g., CCSM3 vs CSM1.0, ECHAM5/MPI-OM vs ECHAM3/LSG, IPSL-CM4 vs IPSL-CM2, MRI-CGCM2.3.2 vs MRI2, UKMO-HadGEM1 vs UKMO-HadCM3), decreased (e.g., CSIRO-MK3.0 vs CSIRO-MK2, GFDL-CM2.0 vs GFDL_ R30_c, GISS-EH and GISS-ER vs GISS2, MIROC3.2(hires) and MIROC3.2(medres) vs CCSR/NIES2) or remained roughly unchanged (e.g., CGCM3.1(T47) vs CGCM1, GFDLCM2.1 vs GFDL_R30_c) compared to the TAR. In some models, changes in climate sensitivity are primarily ascribed to changes in the cloud parametrization or in the representation of cloud-radiative properties (e.g., CCSM3, MRI-CGCM2.3.2, MIROC3.2(medres) and MIROC3.2(hires)). However, in most models the change in climate sensitivity cannot be attributed to a specific change in the model. 4AR, ¶8.6.2.2, p. 630.
Safe to say, every climate model produces a climate sensitivity parameter. You don’t have to read esoteric papers on absorption to find the logarithm dependence. The very notion that such a constant exists is the same as assuming that radiative forcing is proportional to the logarithm of the gas concentration. Further, recognizing that the effect of CO2 is the absorption of infrared lost from the surface, we have the key underlying assumption of all of AGW: that the absorption of IR by CO2 is proportional to the logarithm of the CO2 concentration.
No matter how these models might have been mechanized, whether computing a radiation transfer or not, whether mechanizing the atmosphere as one or many layers, whether making actual computations or parameterizing, they produce the logarithm dependence. Like Captain Kirk, IPCC said, “Make it so.”
The assumption is false. That is the wrong tree up which I am barking.
Jeff,
I think you would get to understand radiative transfer, and radiative transfer effects and issues, a whole lot better if you took the time to read radiative transfer papers from the published literature (e.g., JGR, GRL, J Climate, or IPCC), or checked out the posted background information on blogs like Real Climate, Chris Colose, or Roger Pielke, Sr., instead of spending time perusing such papers as those by Miskolczi dealing with his mistaken interpretation of the greenhouse effect.
There is good reason why Miskolczi’s papers are not getting published in the mainstream scientific literature. These journals try very hard not to publish material that they know to be erroneous.
No matter how these models might have been mechanized, whether computing a radiation transfer or not, whether mechanizing the atmosphere as one or many layers, whether making actual computations or parameterizing, they produce the logarithm dependence.
Jeff, you can forget completely about models. Observation of the temperature after subtracting the 65-year AMO shows an impressively accurate fit to the logarithm dependence when the Hofmann formula 280 + 2^((y-1790)/32.5) is used for CO2.
Not only does CO2 have a measurable effect on climate, that effect is without any shadow of doubt logarithmic. The fit is far too good to have any other explanation.
If you don’t believe this, how do you account for the fact that the HADCRUT temperature record with 10-year smoothing is now 0.65 °C above any temperature attained prior to 1930? This even applies to 1880, the highest temperature between 1850 and 1930. Is God holding a soldering iron over us, has the Sun very suddenly gotten far hotter than in the last 3 million years, has the Devil decided the Apocalypse is nigh and is slowly boiling us like frogs in a pot, or what?
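For concreteness, the quoted Hofmann expression can be evaluated directly (a sketch; only the formula 280 + 2^((y-1790)/32.5) comes from the comment above, the chosen years are arbitrary):

def hofmann_co2(year):
    # quoted fit: 280 ppm baseline plus an anthropogenic part that doubles every 32.5 years
    return 280.0 + 2.0 ** ((year - 1790) / 32.5)

for y in (1900, 1960, 2010, 2060):
    print(y, round(hofmann_co2(y), 1), "ppm")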
Willard 12/8/10 12:07
You need wonder no more! The answer to the purpose of the (true) little parable is in the very next paragraph. Do read on!
Myhre, et al. added labels and symbols to their graph, the one that charmed scienceofdoom, to give the appearance that climate researchers were approximating data and fitting curves to data. They represented the output of a couple of computer models as data. They did this because people with under-developed science literacy put great stock in the output of computers. This bridges from football poolers to IPCC’s audience.
Computer models are grand and essential. What is missing in these examples is the notion that computer models, like all other scientific models, must make significant predictions which then survive the tests of validation. This is the scientific process of advancing from hypothesis to theory.
My contention is that only theories can be used ethically for public policy. IPCC fails this test.
Jeff Glassman,
The answer to what you do with your paragraph lies in the sentence that immediately follows it:
> Myhre, et al. (1998) did the same thing, making the picks look more genuine by graphing some with lines and some as if they were data points for the lines.
So Myhre’s “pick” amounts to using a number generator. As far as I can see, there are two ways to interpret this. Either it’s meant literally, in which case it really looks like a caricature. Or it’s meant as a way to express sarcasm, i.e. Myhre’s pick is no better than a random choice. My personal impression is that you are expressing sarcasm, the figure of speech that Tallbloke was condemning.
I underlined this story of yours (which I liked, btw) to show that caricature or sarcasm is common down here. Complaining that scientists are the ones who indulge in that kind of trick, here and in general, amounts to cherry-picking. The habit is more general than that. Style matters, mostly, as far as I am concerned. As long as one is willing to invest some time to entertain the gallery in a most pleasant way, I don’t mind much.
Hi Jeff,
A fair bit of this analysis is over my head, but I wondered if you could just clarify this statement for me:
“Miskolczi’s work has been productive. It has discovered the existence of the powerful, negative, water vapor feedback. Specific humidity is proportional to surface temperature”
Proportional to surface temperature at which pressure level?
I assume we are talking about Miskolczi’s analysis of the radiosonde data?
Rather than respond individually, I’d like to make a few points relevant to several comments above.
1. One of the striking elements of Judith Curry’s post is the linked references to multiple sources demonstrating the excellent correspondence between radiative transfer calculations and actual observations of IR flux as seen from both ground-based and TOA vantage points. This correspondence is based initially on line-by-line calculations utilizing the HITRAN database. Models based on band-averaging are less accurate but still perform well. Empirically, therefore, the radiative transfer equations have served an important purpose in representing the actual responses of upward and downward IR to real world conditions.
2. It is universally acknowledged that the roughly logarithmic relationship between changes in CO2 concentration and forcing applies only within a range of concentrations, but that range encompasses the concentrations of relevance to past and projected future CO2 scenarios. It does not necessarily apply to other greenhouse gases, although water appears to behave in a roughly similar manner.
As far as I know, that logarithmic relationship can’t be deduced via any simple formula. Rather, it represents the shape of the curve of absorption coefficients as they decline from the center of the 15 um absorption maximum into the wings. As CO2 concentrations rise, the maximum change in absorption moves further and further into the wings, and since the absorption coefficients there are less than at the center, the effect of rising CO2 assumes a roughly logarithmic rather than linear curve.
3. A point was made earlier about the difficulty of laboratory determination of absorption coefficients relevant to atmospheric concentrations where the tau=1 relationship holds. Not being a spectroscopist, I can’t give an informed opinion on this, but I wonder whether this couldn’t be approached by measuring absorption in the laboratory in the relevant frequency as a function of concentration, pressure, and temperature, so as to derive a useful extrapolation. If someone here has spectroscopic expertise, he or she should comment.
Fred,
Concerning your point 2: one example that leads to the logarithmic relationship is a strong absorption line with an exponential tail. For this example it is possible to derive the result analytically.
I do not try to claim that this is a correct model, but the derivation may help in understanding how broadening of a saturating absorption peak leads to the logarithmic relationship.
Pekka Pirilla – Maybe I’m misinterpreting your point, but the main CO2 absorption band centered around 15 um contains hundreds of individual lines, representing the multitude of quantum transitions singly and in combination that CO2 can undergo. The 15 um line represents a vibration/rotation transition. As one moves in either direction away from 15 um, the lines are weaker, because the probability of a match between photon energy and the energy needed for that transition declines. As a result, IR in those wavelengths must encounter more CO2 molecules in order to find a match. Absorption is so efficient at 15 um that more CO2 makes little difference at that wavelength (surface warming is a function of lapse rate, but the lapse rate at the high altitude for 15 um emissions is close to zero). In the wings, however (e.g., 13 um or 17 um), more CO2 means more absorption and greater warming. The logarithmic relationship appears to reflect the fact that increasing CO2 more and more involves absorption wavelengths of lower and lower efficiency – those further and further from 15 um.
Note that we are talking about the breadth of the absorption band with its many lines. The term “broadening” generally refers to the increasing width of individual lines in response to increases in pressure or temperature.
Finally, the absorption within a single line (i.e., monochromatic absorption) follows the Beer-Lambert law of exponential decay as a function of path length, but this is not the source of the logarithmic relationship we are discussing. Indeed, in the atmosphere, absorption is followed by emission (up, down, or sideways), followed by further absorption and so on, which is why the radiative transfer differential equations rather than a simple absorption-only paradigm must be used.
I may not have addressed your point, but I’m hoping to clarify what happens in the atmosphere for individuals unfamiliar with the spectral range of absorption or emission involving greenhouse gas molecules.
Fred Moolten,
The simple mathematical example that I was referring to applies to a situation where the absorption is fully saturated in the center of the band and the tails have an exponential form. For this kind of absorption peak, applying the Beer-Lambert law to the tails gives, as an analytical result, the logarithmic relationship between concentration and transmission through the atmosphere.
The fact that the logarithmic relationship is approximately valid also in the LBL models may be interpreted to mean that weaker and weaker absorption becomes effective at approximately the same relative rate as it does for exponential tails.
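A quick numerical check of this saturated-line picture is easy to do (a sketch only; the line shape, width, and absorber amounts are invented for illustration):

import numpy as np

nu = np.linspace(-50.0, 50.0, 20001)     # arbitrary spectral coordinate (made up)
dnu = nu[1] - nu[0]
k = np.exp(-np.abs(nu) / 2.0)            # line with exponential wings: k0 = 1, gamma = 2

def band_absorption(u):
    # band-integrated absorption for absorber amount u, Beer-Lambert at each frequency
    return np.sum(1.0 - np.exp(-u * k)) * dnu

for u in (10.0, 100.0, 1000.0):
    print("u =", u, " A(u) =", round(band_absorption(u), 2),
          " A(2u) - A(u) =", round(band_absorption(2 * u) - band_absorption(u), 2))
# Once the line centre is saturated, each doubling adds roughly 2*gamma*ln(2), about 2.77 here,
# i.e. the band absorption grows with the logarithm of the absorber amount.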
Pekka,
I believe that the existence of an abundance of lines, distributed widely across a range of strengths spanning many orders of magnitude, would by itself tend to give rise to a logarithmic-type response to increasing concentrations over a wide range. This may be of similar, greater or lesser importance than the side-band effect; I simply do not know.
Alex
Alex,
The result should be the same if the strength distribution of the lines is suitable. My intuition tells me that it should be such that the PDF of the logarithm of the line strengths is flat over the relevant range.
Absorption is so efficient at 15 um that more CO2 makes little difference at that wavelength (surface warming is a function of lapse rate, but the lapse rate at the high altitude for 15 um emissions is close to zero).
If you led the world’s theoretical physicists out to a courtyard and shot them all, I believe you would seriously set physics back.
I cannot say the same for climate science. The ratio of theorizing to observation is totally out of hand.
Admittedly Fred Moolten’s theorizing is bordering on the crackpot. However it seems to me that even highly respected theoretical climate scientists are undermining the credibility of their field with calculations that underestimate the environment’s complexity.
Theoretical economists have a similar problem. It’s a good question whether the economy or the climate is computationally more intractable in that regard. They’re both incredibly complicated systems that theorists love to oversimplify.
Fred Moolten
Appreciate your clarifications. You note above:
“Despite some conflicting results (at times cited selectively), these too indicate that as temperatures rise, atmospheric water vapor content increases, including increases in the upper troposphere where the greenhouse effect of water vapor is most powerful.”
I can see how absorption can vary with altitude as concentration changes, e.g. Essenhigh calculates for 2.5% water vapor vs 0.04%. (I can see how altitude variations would affect the relative absorption heating and the temperature lapse rate – and in turn adjust clouds.)
However, the Beer-Lambert absorption shows the log of Io/I to change as the product of concentration and depth.
Question:
As long as that total concentration x depth remains constant, does the total absorption change depending on how the concentration is distributed?
Actually, for a single absorber and no emission, very little. I published a proof of that once. The total Beer’s Law absorption is pretty much dependent on the mass of absorbing material along the ray, however distributed.
However, it’s different with competing absorbers and thermal emission happening as well.
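Here is a minimal monochromatic sketch of that column-mass point (it assumes a single absorber with a height-independent absorption coefficient and no emission; all numbers are invented):

import numpy as np

z = np.linspace(0.0, 10.0, 2001)    # toy vertical grid, km (made up)
dz = z[1] - z[0]
k = 0.3                             # constant absorption coefficient per unit column (made up)

def transmission(density):
    # monochromatic Beer-Lambert: only the column integral of the density matters
    return np.exp(-k * np.sum(density) * dz)

uniform = np.full_like(z, 0.5)                              # absorber spread evenly
shape = np.exp(-z / 2.0)
concentrated = shape * (np.sum(uniform) / np.sum(shape))    # same column amount, piled low

print(transmission(uniform), transmission(concentrated))    # identical, to rounding
# The equality breaks once k depends on pressure or temperature, or once competing
# absorbers and thermal emission enter, which is the caveat in the comment above.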
David,
Water vapor absorption is line absorption with the water vapor line widths being linearly proportional to atmospheric pressure (P/Po). This makes water vapor a less efficient absorber with decreasing pressure. Thus, the same amount of water vapor near the tropopause will absorb a lot less solar radiation than if that same amount of water vapor was at ground level.
Also, since water vapor absorption is line absorption, it therefore does not follow the Beer-Lambert law, except on a monochromatic basis. To get the atmospheric absorption right in the presence of strongly varying absorption with wavelength, you need to either do line by line calculations, or use a correlated k-distribution approach.
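A toy example (invented line strengths) may help show why a band-averaged transmission does not follow a single Beer-Lambert exponential, which is the problem that line-by-line and correlated k-distribution methods address:

import numpy as np

k_strong, k_weak = 10.0, 0.01                 # made-up absorption coefficients for two sub-intervals
u = np.array([0.1, 1.0, 10.0, 100.0])         # absorber amounts

band_mean = 0.5 * (np.exp(-k_strong * u) + np.exp(-k_weak * u))   # true band-averaged transmission
k_eff = 0.5 * (k_strong + k_weak)
single_exp = np.exp(-k_eff * u)               # what one "effective" coefficient would predict

for ui, tb, ts in zip(u, band_mean, single_exp):
    print("u =", ui, " band average:", round(tb, 3), " single exponential:", ts)
# The band average decays far more slowly than any single exponential, because the
# strong interval saturates quickly while the weak one stays nearly transparent.
# Sorting the coefficients into a cumulative distribution and integrating over it
# is the idea behind the correlated k-distribution approach.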
A. Lacis
“This makes water vapor a less efficient absorber with decreasing pressure.”
Thanks Andy, that is a clear physical reason for the difference.
Re: “To get the atmospheric absorption right . . . you need to either do line by line calculations, or use a correlated k-distribution approach.”
1) I would welcome any comments/references you might have as to the relative accuracy of LBL vs k-distribution calculations, especially any good reviews.
2) I found:
Intercomparison of radiative forcing calculations of stratospheric water vapour and contrails
GUNNAR MYHRE et al. Meteorologische Zeitschrift, Vol. 18, No. 6, 585-596 (December 2009) DOI 10.1127/0941-2948/2009/0411
http://www.igf.fuw.edu.pl/meteo/stacja/publikacje/Myhre2009.pdf
These differences are much larger than the 1% level agreement you noted above. Is this primarily due to trying to evaluate contrails?
Are these reflective of the difficulty in modeling clouds absorption/ reflection vs water vapor?
3) By contrast I found:
An improved treatment of overlapping absorption bands based on the correlated k distribution model for thermal infrared radiative transfer calculations, Shi et al. Journal of Quantitative Spectroscopy and Radiative Transfer
Volume 110, Issue 8, May 2009, Pages 435-451
Compare: Chou, Ming-Dah, Kyu-Tae Lee, Si-Chee Tsay, Qiang Fu, 1999: Parameterization for Cloud Longwave Scattering for Use in Atmospheric Models. J. Climate, 12, 159–169.
That appears much better than a 1% error.
4) Would you consider Miskolczi’s LBL using 3459 spectral ranges to provide sufficient resolution “to get atmospheric absorption right” for water vapor absorption in each of his 150 vertical segments, assuming the HITRAN data base etc.?
>but I wonder whether this couldn’t be approached by measuring absorption in the laboratory in the relevant frequency as a function of concentration, pressure, and temperature, so as to derive a useful extrapolation. If someone here has spectroscopic expertise, he or she should comment.<
Are you suggesting this has NOT been done ? If it hasn't, which I really doubt, then I am dumbfounded – another one of my assumptions shot to pieces
tallbloke 12/8/10 1:58 pm
Miskolczi (2007) said in his abstract,
>>Simulation results show that the Earth maintains a controlled greenhouse effect with a global average optical depth kept close to this critical value.
For this, and without validating his analysis, I applaud him. He claims to support his results with radiosonde data.
As a matter of philosophy, climatologists should assume that the climate is in a conditionally stable state, because the probability is zero that what we observe is a transient path between stable states. Then they should set about to estimate what controls that state, and the depth or dynamic range of the controls. From this modeling, they could determine how Earth might experience a significant state change, and it would help them distinguish between trivial and important variations.
Instead, the GCMs model Earth as balanced on a knife edge, ready to topple into oblivion with the slightest disturbance. This is analogous to finding round boulders perched on the sides of hills, and cones balanced on their apexes. This is Hansen’s Tipping Points. This modeling is intended to achieve notoriety and to frighten governments.
Of course Earth’s greenhouse effect is controlled. Earth has two stable states, a warm state like the present, and a cold or snowball state. Temperature is determined by the Sun, but controlled in the warm state by cloud albedo, the strongest feedback in climate, positive to amplify solar variations, and negative to mitigate warming. In the cold state, surface albedo takes over as the sky goes dry and cloudless, the greenhouse effect is miniscule, and white covers the surface. The cold state is more locked-in than controlled.
The regulating negative feedback is proportional to humidity, which is proportional to surface temperature. IPCC admits the humidity effect, but doesn’t make cloud cover proportional to it. Remember, proportional only means global average cloud cover increases with global average surface temperature, not that they occur in some neat linear relationship. And to answer your question, this all occurs at a pressure of one atmosphere.
Thanks Jeff.
The reason I asked is that I noticed this curious apparent relationship between specific humidity at the 300mb level, and solar activity, and I wondered how this might fit with Miskolczi’s scheme:
http://tallbloke.files.wordpress.com/2010/08/shumidity-ssn96.png
This is up around the altitude the Earth mainly radiates to space from, and I was wondering if it might indicate that the temperature there is proportional both to temperature at that altitude and solar irradiance received at that altitude. The interface…
Oops. Upwelling longwave at that altitude and solar irradiance received at that altitude. Maybe.
All ideas welcome.
Jeff – I find numerous errors in your statement, which I believe could be rectified if you reviewed the two threads in this blog addressing the greenhouse effect, as well as other sources (you could start with Hartmann’s climatology text and then graduate to Pierrehumbert’s “Principles of Planetary Climate”, due out very shortly).
The reason I don’t address them here is that I find myself unable to provide an adequate response without consuming many pages, and so I would simply end up listing the errors without explaining what the correct answers are. There are probably others who can be more succinct, and I hope they may respond.
I would be glad to try to respond to individual points, however, if you bring them up singly (e.g., the “knife-edge” fallacy).
There are probably others who can be more succinct, and I hope they may respond.
The “big gun” is just making things up.
For this, and without validating his analysis, I applaud him.
Join the crowd. All we need now is a reputable validator of his analysis.
But even if his analysis survives this validation, what good is that if it doesn’t refute global warming? Reducing flow of water vapor into the atmosphere could well increase global warming instead of cooling it as FM claims.
In any event this is simply yet another model. Some of us out there, on both sides of the debate, don’t trust complex models that we have no way of verifying or validating ourselves. Until easily understandable specifications are written for these models, and the models have been shown to meet those specifications, they can’t be trusted.
In the meantime simply looking at the temperature and CO2 records is a lot more convincing.
Which side of that graph would you say is the “present day”?
Yes it’s frustrating when graphs are inadequately labelled.
The x axis is missing the BP acronym
I expected most people would recognise the Younger Dryas on the right of the graph.
As a matter of philosophy, climatologists should assume that the climate is in a conditionally stable state, because the probability is zero that what we observe is a transient path between stable states.
Yeah, right.
As soon as we stop doubling the CO2 we pump into the atmosphere every third of a century we can say something like this.
I have the same issue as Al Thekasski.
There are no dynamics in the line-by-line radiation transfer models, as far as I know.
Given a ground temperature, it needs a fixed atmospheric profile (temperature, concentrations and pressure) to run.
As the units used in the posts are W/m² (i.e. ONE number!), there is clearly some averaging going on.
I am not a specialist in radiative transfer, but it seems impossible to run, at every time step of the model, as many individual radiative transfer calculations as there are horizontal cells.
Especially when convection is involved which massively changes the temperature and humidity profiles at every instant.
If it is true that only a few “standard” profiles are considered, without spatial coupling, I do not believe the 1% accuracy of the radiation flows which has been thrown around here.
So the question is: how many profiles (temperature, pressure, concentration) are considered for every time step of the model?
Besides, it is also not true that for any column with a base of 1 m², “radiation in = radiation out”, as the temperature variations readily show.
Tomas – the modelers are the ones who should probably be addressing your comment. However, you may be confusing climate modeling with the radiative transfer equations as a means of assessing climate forcing from changes in CO2 or other moieties. Forcing is calculated by assuming everything remains constant in the troposphere and on the surface except for a change in radiative balance (typically at the tropopause). That means an assumption of unchanging temperature, humidity, pressure, etc.. Convection is a response to changes induced by forcing and is excluded from the forcing calculations themselves.
The models then attempt to incorporate the other variables over the course of time and grid space (certainly including convection), but that is a separate issue from the radiative transfer equations as a means of determining the effects of forcing.
I should add that the models don’t assume, even for forcing calculations, that pressure, temperature, humidity, etc., are the same all over the globe or at different seasons. This is one reason why their estimate of the temperature change from CO2 doubling (without additional feedbacks) is 1.2 °C instead of the 1 °C change estimated simply by differentiating the Stefan-Boltzmann equation and assuming a single mean radiating altitude and lapse rate.
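For readers wondering where that roughly 1 °C figure comes from, here is the back-of-envelope version (a sketch; it assumes the canonical 3.7 W/m² forcing for doubled CO2 and an effective emission temperature near 255 K):

# No-feedback estimate by differentiating F = sigma * T^4 at the effective emission
# temperature: dF/dT = 4 * sigma * T^3, so dT = dF / (4 * sigma * T^3).
sigma = 5.67e-8     # W m^-2 K^-4
T_eff = 255.0       # K, effective emission temperature (assumed)
dF = 3.7            # W m^-2, canonical forcing for doubled CO2 (assumed)

dT = dF / (4.0 * sigma * T_eff ** 3)
print(round(dT, 2), "K")   # about 1 K; models with realistic structure get about 1.2 K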
With regard to convection, I was referring to a change in convection. Lapse rates themselves reflect the effects of convection, but forcing calculations assume no change in these, but only a radiative imbalance, and it is the latter that provides the basis for applying the radiative transfer equations.
Convection is a response to changes induced by forcing and is excluded from the forcing calculations themselves.
Well, as I said, I am not a specialist in radiative transfer, but this is certainly not true, or does not mean what it appears to say.
Convection or more generally fluid flows are in no way an “answer” to radiation and even less to some “forcing”.
One could as well say that the radiation is the answer to the particular properties of fluid flows (like temperature and pressure fields).
The right expression is that both are coupled so that neither can be considered independently of the other.
So considering a radiative transfer uncoupled from the fluid dynamics is a nice theoretical exercise but has nothing to do with the reality.
Hence my question.
We may not disagree as much as it seems. Forcing (at least on planet Earth) is a hypothetical concept based on the assumption of unchanging conditions outside of the radiative imbalance. It has utility, however, as the basis for adding feedbacks.
On a planet without water, it might be possible to measure forcing directly.
Thomas
A single value is the usual outcome of integration over the spectrum, as with line-by-line integrators like HARTCODE and FASCODE for example, and since the output is a value for energy, an extensive property, it’s perfectly valid to average it over time and space.
The line-by-line integrators mentioned work with atmospheric profiles taken from radiosonde ascents from all over the world, though in the TIGR 1 data set the tropics are under-represented.
Actually Tomas, I am not yet into the question of dynamics and the associated possibility of substantial errors due to the order of averaging of fluctuating functions. My concern was about the validity of static calculations under realistic atmospheric conditions, when the vertical gradient of air temperature changes its sign in the middle.
Consider the following example. Assume that we have a standard profile of atmosphere — temperature falls for the first 11 km, then comes a 1-2 km tropopause, and then the stratosphere, where temperature increases with height. Assume that the absorption spectrum consists of only two bands. Let a narrow band (say, 14-16 um) have quite strong absorption, while another, wider area (say, 3x wider than the strong band) has very weak absorption (aka a “transparency window”). Assume the average “emission height” of the whole range to be at 6 km. According to the standard approach of averages, the “average emission height” will go up with an increase of CO2, where “higher = colder”. A colder layer emits less, and therefore a global imbalance of OLR would occur and force the climate to warm up. This is the standard AGW concept.
However, under more careful consideration, the average (“effective emission height”) of our hypothetical spectrum, 6 km, is made of 0 km for 3/4 of the IR range and of 24 km for the remaining 1/4 of the band. If we add more CO2, the increase in the 0 km band gives you zero change in OLR, while the increase in the 24 km emission height will give you MORE OLR, because the temperature gradient in the stratosphere is opposite to the one in the troposphere, so “higher = warmer”. As a result, the warmer layer would emit more, and the overall energy imbalance would be POSITIVE. This implies climate COOLING, just exactly the opposite of what the standard “averaging” theory says.
In reality the spectrum is more complex and the edges of absorption bands are not abrupt, so many different trends would coexist. But the above example suggests that it seems very likely that warming and cooling effects may cancel each other in the first approximation. Therefore, the sensitivity of OLR to a CO2 increase is a second-order effect, and must be much harder to calculate accurately. Hence my question to one of the fathers of the “forcing” concept.
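Al’s two-band thought experiment can be put into rough numbers with a toy sketch (the 0 km / 24 km split and the 3:1 band widths are his hypothetical values; the temperature profile, emission-height rise, and blackbody treatment are my own crude assumptions, not a radiative transfer calculation):

# Toy version of the two-band example: each band emits like a blackbody from its
# "emission height", and adding CO2 lifts the absorbing band's height by dz.
sigma = 5.67e-8

def T_at(z_km):
    # crude assumed profile: -6.5 K/km to 11 km, isothermal to 13 km, +2 K/km above
    if z_km <= 11.0:
        return 288.0 - 6.5 * z_km
    if z_km <= 13.0:
        return 288.0 - 6.5 * 11.0
    return 288.0 - 6.5 * 11.0 + 2.0 * (z_km - 13.0)

def olr(z_window, z_band):
    # 3/4 of the spectrum emits from z_window, 1/4 from z_band (the split above)
    return 0.75 * sigma * T_at(z_window) ** 4 + 0.25 * sigma * T_at(z_band) ** 4

dz = 0.5   # assumed rise, in km, of the absorbing band's emission height
print("OLR change:", round(olr(0.0, 24.0 + dz) - olr(0.0, 24.0), 2), "W/m^2")
# Positive here, because a 24 km emission level sits where temperature increases with
# height -- the sign Al's example points at. The real CO2 band emits mostly from
# tropospheric wing levels, which is why full line-by-line calculations weigh both effects.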
Earth has two stable states, a warm state like the present, and a cold or snowball state.
Earth can’t have any stable states, because if it could, it would have already found one in the last 4.5 billion years and stayed there forever.
At least if the word “stable” is to be understood as “a fixed point in the phase space”.
Earth as a dissipative system has never been in any kind of equilibrium and will never be – its trajectory in the phase space is dynamical and wandering between an infinity of possible regions in the parameter space.
It is precisely because the energy supplied and the energy dissipated are not stationary at any time scale that there can’t be any stable point.
The Earth can only be understood in terms of its dynamical orbits in the parameter space and not in terms of some nonexistent “equilibrium” or “stable” states.
Tomas,
The radiation is indeed being calculated at every gridbox of the model for every physics or radiation time step (more than 3300 times for each time step for a moderately coarse spatial resolution of 4 x 5 lat-lon degrees). At each grid box, the radiative heating and cooling rates are calculated for the temperature, water vapor, aerosol and cloud distributions that happen to exist at that grid box at that specific time. The (instantaneous) radiative heating and cooling information is then passed on to the hydrodynamic and thermodynamic parts of the climate model to calculate changes in ground and atmospheric temperature, changes in water vapor, clouds, and wind, as well as changes in latent heat, sensible heat, and geopotential energy transports. All these changes in atmospheric structure are then in place for the next time step to calculate a new set of radiative transfer inputs to keep the time marching cycle going.
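In outline, the time-marching cycle Andy describes looks something like the following (a schematic sketch only; the stand-in functions and numbers are hypothetical placeholders, not code from any actual GCM):

# Schematic only: trivial stand-ins for the radiation and dynamics steps.
def radiative_heating(column):
    # placeholder "radiation": relax the column temperature toward 255 K
    return 0.01 * (255.0 - column["T"])

def dynamics_and_thermodynamics(column, heating):
    # placeholder "everything else": apply the heating, leave the rest untouched
    column["T"] += heating
    return column

grid = [{"T": 288.0} for _ in range(3312)]   # roughly 46 x 72 boxes for a 4 x 5 degree grid
for step in range(24):                       # time-marching loop
    for column in grid:                      # radiation evaluated per box, per step
        heating = radiative_heating(column)
        column = dynamics_and_thermodynamics(column, heating)
print(round(grid[0]["T"], 2))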
Thomas
I do believe there is sufficient evidence that the earth has been in a snowball state (an example), and also for the current state it’s in.
My theory about snowball states of the Earth is that life adapts to it and fills every niche until the snow is black (blue, green, brown, anything but white) with life. This reverses the albedo and the snow then melts.
The obvious objection is that snow should still be black today, teeming with the same life, instead of white.
I have the following suggestions.
1. The snow/ice species were completely killed off by global warming. There was no snow at all in the Cambrian.
Counter-objection: we’ve had 49 million years since the Azolla event for those species to regroup.
Counter-counter-objections:
(a) Snow species evolve so slowly (because of the temperature) that we’re not there yet.
(b) Snowball Earth was a much more stable place for snow species than today’s ice caps, which are subject to storms too violent for species to evolve comfortably on account of the huge difference between tropical and polar temperatures. When the equator was icebound it would have been a far calmer place than today’s ice caps.
That sounds plausible enough for me to accept it until something better comes along.
“Life” is very resilient, as we have seen over and over.
That sounds plausible enough for me to accept it until something better comes along.
“Life” is very resilient, as we have seen over and over.
So what do you think of JoNova?
So what do you think of JoNova?
Less plausible than supernova
http://www.nature.com/nature/journal/v275/n5680/abs/275489a0.html
Maksimovich
Could the rapid growth in industrial activity since the 1920’s have changed (increased) the ionisation of the stratosphere?
The reason we have a comprehensive test ban is obvious. Industrial activity is less than the effects of the decrease in modulation of the earth’s geomagnetic field over the last 100 years, e.g. Svalgaard.
As a thought experiment, i.e. as an inverse model: are the temperature excursions in the latter part of the last century due to an increase in forcing, say GHG etc., or to a decrease in the efficiency of dissipation, such as heat transport to the poles?
I don’t understand the segue but I’m not surprised.
I don’t know Jo Nova personally. I do however like what she does and how she does it on her blog. I participate often and help out as much as I can.
Anything else you’d like to know Vaughan?
Anything else you’d like to know Vaughan?
Sure. I’d been trying to decide between JoNova, Ann Coulter, and Michelle Malkin as to who was the most vicious. Currently I’m shooting for JoNova. If you disagree I’d love to know why.
Another dimension is intelligence. I put Ann Coulter miles ahead of the other two. Love to hear your arguments on that one too. Reckon JoNova is a genius?
I’d also be interested to know whether Michelle Malkin wins in any category. Feel free to be creative in making up categories.
Should I start a pool on these three?
I left out Leona Helmsley because no one likes the Queen of Mean, and also Martha Stewart because everyone likes her after what the system put her through and she took it all with her usual consummate grace. Not unlike Bill Gates, who also came out smelling like a rose after the wringer we technocrats put him through.
I say this as the designer of the Sun logo, which ought to incline me to a more cynical view of Gates but it doesn’t, perhaps because my logo has been appearing on the back cover of the Economist about every third week since we were acquired by Oracle and Scott McNealy was fired. The Wikipedia article attributes the Sun name to my Ph.D. student Andy Bechtolsheim but it originated with Forest Baskett, Andy’s previous advisor and my predecessor on the Sun project.
Like God, capitalism works in mysterious ways.
Glad to see you’ve opened yourself up nice and wide for all to see.
I won’t be getting down into this trash but feel free to wallow all by yourself.
I won’t be getting down into this trash
Looks like we understand each other’s position.
Staying above the fray is a good idea for those working in environmental science. Since I don’t do the latter I don’t find a need to do the former.
“I say this as the designer of the Sun logo”
I’m impressed. :-)
I got a SPARCstation IPC in 1995 with a Sony 1152×900 monitor.
My PC-owning friends were very jealous.
The logo was a stroke of brilliance, apropos of nothing much.
@Fred Moolten
I asked this question as a reply to your post upthread a bit, but it’s a long thread and you may miss the question
>but I wonder whether this couldn’t be approached by measuring absorption in the laboratory in the relevant frequency as a function of concentration, pressure, and temperature, so as to derive a useful extrapolation. If someone here has spectroscopic expertise, he or she should comment.<
Are you suggesting that this has NOT been done ? If it hasn't, and I really doubt that, then I am dumbfounded – another of my assumptions shot to pieces
I do assume it has been done. My uncertainty relates to how extrapolable the results are to atmospheric conditions involving far lower gas concentrations and far longer path lengths. In any case, I’m encouraged by the fact that observational and modeled radiative transfer data seem to correlate well, at least in the troposphere under clear sky conditions.
Absorption, or actually spectral transmission has been measured in the laboratory by spectroscopists for practically every atmospheric gas that matters. The line by line absorption line coefficients (for more than 2.7 million lines) are tabulated in the HITRAN data base.
Actually, it is not measured laboratory spectra that are being tabulated, but a combination of theoretical quantum mechanical calculations and analyses (which they can do very precisely and very effectively), normalized and validated by the laboratory spectra, that enables modern-day spectroscopists to define the absorption line spectral positions, line strengths, line widths, and line energy levels along with all their quantum numbers – everything that is needed to perform accurate line-by-line radiative transfer calculations for any combination, amount, and vertical distribution of atmospheric greenhouse gases.
Thank you for your reply. I am slowly garnering the hard, basic data against which to test my doubts on the significance of AGW. This thread has been very useful, primarily because you and Chris Colose have been honest in your replies to honest questions
The to-and-fro of actual technical debate on this thread has been the most comprehensive I have seen for 4-5 years. Treating people such as myself as suitable fodder for press releases is guaranteed to aggravate polarisation of the debate. You and Colose have slowly changed from that also – perhaps Judith C was right to try this blog experiment :)
David Hagen, 12.8.10 8:14 pm
You inquired about Figure 10 in Miskolczi (2010). He says,
>>we now simply compute τ_sub_a every year and check the annual variation for possible trends. In Fig. 10 we present the variation in the optical thickness and in the atmospheric flux absorption coefficient in the last 61 years.
The two ordinates in Figure 10 are both in percent, indicating the relative change year by year, confirming what he said in the text. The text and chart are clear that these are differences, the discrete analog of differentiating.
David Hagen, 12.8.10 8:35 pm
You asked about my statement,
>> Specific humidity is proportional to surface temperature, and cloud cover is proportional to water vapor and CCN (cloud condensation nuclei) density, which has to be in superabundance in a conditionally stable atmosphere, but which is modulated by solar activity.
And then refer to an article by Roy Spencer questioning the nature of cloud feedback, and especially questioning whether “clouds dissipate with warming giving a positive feedback”.
The argument for my position is qualitative, not quantitative. It is not complex, and it involves physical phenomena admitted by IPCC or which are everyday occurrences.
Spencer discusses a regional phenomenon since the 1950s to say,
>> These problems have to do with (1) the regional character of the study, and (2) the issue of causation when analyzing cloud and temperature changes.
And later,
>>I am now convinced that the causation issue is at the heart of most misunderstandings over feedbacks.
I agree.
Spencer concludes,
>> The bottom line is that it is very difficult to infer positive cloud feedback from observations of warming accompanying a decrease in clouds, because a decrease in clouds causing warming will always “look like” positive feedback.
Spencer’s inference is a posteriori modeling, developing a model to fit the data. A more satisfying method is to create an a priori model, one which relies on physical reasoning first. The a priori model must contain a cause & effect. The a posteriori model may, depending on modeler skill. The main difference is that the a priori model is rational in the physics, while the a posteriori model is a rationalization of physics to fit data.
I would not make the inference Spencer finds objectionable. On this topic of cloud feedbacks, I argue from causation first, based on physics, leading to a model that can be validated against data.
First I note that cloud albedo is a powerful feedback. It gates solar radiation, so has the greatest potential among Earthly parameters to be a feedback. It is a quick, positive feedback because of the burn-off effect witnessed by everyone. Cloud cover dissipates upon exposure to the Sun, so when the Sun output is temporarily stronger, the effects on Earth are increased proportional to the TSI increase, but more, magnified because burn-off occurs sooner. The reverse holds as well. The effect is to cause solar variations to be a predictor of Earth global average surface temperature, as shown in my paper SGW. http://www.rocketscientistsjournal.com . IPCC denied this relationship exists.
At the same time, cloud albedo is a slow, negative feedback with respect to climate warming. It is slow because ocean heat capacity makes ocean temperature changes slow. Next, I assume that the climate throughout recorded history has been in a conditionally stable state. Hansen’s tipping points never occur. The cause of this negative feedback I attribute to increased warming causing increased humidity, resulting in increased cloud cover. A little calculation shows that this effect could be as large as reducing the instant climate sensitivity parameter by a factor of 10 without being detectable within the state-of-the-art for measuring cloud albedo. Recent work by Lindzen (“On the determination of climate feedbacks from ERBE data”, Geophysical Res. Ltrs., 7/14/09) shows climate sensitivity is about 0.5ºC instead of a nominal figure of about 3.5ºC. That’s a reduction by a factor of 7, an empirical validation.
Now cloud cover is the result of humidity condensing around CCNs. The probability that humidity and the concentration of CCNs are exactly in balance has to be zero. So one or the other must be in superabundance, leaving cloud cover dependent on the other parameter. In order for cloud albedo to be the regulator stabilizing Earth in the warm state, it must be able to respond directly to changes in humidity. That means that the CCN must be in superabundance. The alternative is that cloud cover could not respond to warming or cooling, meaning we would have to find an alternative, powerful mechanism. The candidate set seems to have one member: cloud cover.
Thanks Jeff for expanding on your cloud perspective.
You may find interesting the work by Willis Eschenbach on clouds acting as a global thermostat. See WUWT: Willis publishes his thermostat hypothesis paper
See also Spencer at WUWT Dec. 9, 2010
The Dessler Cloud Feedback Paper in Science: A Step Backward for Climate Research
Spencer’s phase space approach appears key to distinguishing cause and effect.
*****
First I note that cloud albedo is a powerful feedback. It gates solar radiation, so has the greatest potential among Earthly parameters to be a feedback. It is a quick, positive feedback because of the burn-off effect witnessed by everyone. Cloud cover dissipates upon exposure to the Sun, so when the Sun output is temporarily stronger, the effects on Earth are increased proportional to the TSI increase, but more, magnified because burn-off occurs sooner.
*****
So this is an observation rather than a derivation from first principles? Do you have studies that would lend a lot of confidence to this assertion?
I picked out one element of Jeff Glassman’s original claims (from December 7, 2010 at 9:31 am).
He originally stated this:
I challenged these claims on December 7, 2010 at 9:31 am.
In his long response of December 8, 2010 at 10:40 am he says:
Of course!
“Of course” seems to mean here “they didn’t use it, they made stuff up instead”.
Many early papers from the 60s and 70s do include all of the equations and the derivations – and the simplifications – necessary to solve the RTE (radiative transfer equations).
It might seem incredible that 100’s of papers that follow – which include results from the RTE – don’t show the equations or re-derive them.
Of course, these authors probably also made up all their results and didn’t use the Beer Lambert law..
Well, I might seem like a naive starry-eyed optimist here, but I’ll go out on a limb and say that if someone uses the RTE they do use the physics of absorption – the Beer-Lambert law. And they do use the physics of emission – the Planck law modified by the wavelength dependent emissivity.
Then in his followup claims, Glassman says:
What’s the claim?
Is Glassman’s problem that Myhre doesn’t use the Beer-Lambert law, OR that Myhre hasn’t got a pyrgeometer out to measure the DLR? And how would Myhre measure the radiative effect of 1000ppm CO2 in the real atmosphere?
I believe Glassman’s real issue is not understanding the solution to the RTE in the real atmosphere.
The RTE include emission as well as absorption. The absorption characteristics of CO2 and water vapor change with pressure and temperature. Pressure varies by a factor of 5 through the troposphere. Water vapor concentrations change strongly with altitude. These factors result in significant non-linearities.
The results for the RTE through the complete atmosphere vs concentration changes are not going to look like the Beer Lambert law against concentration changes.
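To make the contrast concrete, here is a minimal clear-sky, no-scattering sketch of the RTE for a single column (the layer temperatures and optical depths are invented; real calculations do this wavelength by wavelength with the HITRAN line data):

import numpy as np

# Schwarzschild equation for upwelling intensity, no scattering: dI/dtau = B(T) - I,
# integrated upward layer by layer. Each layer attenuates what enters it AND emits at
# its own temperature, so the column does not behave like simple Beer-Lambert
# extinction of the surface radiance.
sigma = 5.67e-8
T_layers = np.linspace(288.0, 220.0, 20)   # assumed layer temperatures, surface upward
dtau = np.full(20, 0.1)                    # assumed per-layer optical depths (gray)

def B(T):
    return sigma * T ** 4 / np.pi          # gray-body stand-in for the Planck function

I = B(T_layers[0])                         # upwelling intensity leaving the surface
for T, dt in zip(T_layers, dtau):
    t = np.exp(-dt)
    I = I * t + B(T) * (1.0 - t)           # attenuation plus re-emission

print("intensity at the top of the column:", round(I, 1))
print("pure Beer-Lambert attenuation only:", round(B(T_layers[0]) * np.exp(-dtau.sum()), 1))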
Glassman says:
Perhaps this is Glassman’s issue – he doesn’t believe that the results presented are correct because he thinks that doubling CO2 should result in a change in proportion to the Beer Lambert absorption change? That would only happen in an atmosphere of constant pressure, constant temperature and a constant concentration vs altitude.
When he doesn’t see this he imagines that these climate scientists have been making it up to fit their pre-conceived agendas?
Well, despite many claims and accusations about climate scientists it is still true that when Myhre et al did their work they used the Beer Lambert law but not just the Beer Lambert law. And it is still true that the IPCC relied on Myhre’s work and therefore the IPCC also “used” the Beer Lambert law.
Glassman’s original claim is still not true.
But for the many who know that we can’t trust these climate scientists who just “make stuff up” – anyone can calculate “the real solution” to the RTE vs increasing concentrations of CO2.
The RTE are not secret. Anyone with a powerful computer and the HITRAN database can do the calculations and publish the results on their blog.
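In that spirit, here is the skeleton of such a calculation reduced to a toy: a handful of made-up spectral lines standing in for the HITRAN line list, Lorentz line shapes summed into an absorption coefficient, and a band-averaged transmittance. Doubling the absorber amount does not double the band absorption, because the line centres saturate while the wings keep absorbing – the non-linearity described above.

```python
import numpy as np

# Toy "line-by-line" calculation.  Each line has a centre (cm^-1), a
# strength and a Lorentz half-width; real values would come from the
# HITRAN database -- these three lines are entirely made up.
lines = [
    (665.0, 1.0, 0.07),
    (667.4, 3.0, 0.07),
    (669.8, 1.2, 0.07),
]

wavenumber = np.linspace(660.0, 675.0, 3000)    # spectral grid, cm^-1

def absorption_coefficient(nu):
    """Sum the Lorentz profiles of all lines on the wavenumber grid."""
    k = np.zeros_like(nu)
    for centre, strength, gamma in lines:
        k += strength * (gamma / np.pi) / ((nu - centre) ** 2 + gamma ** 2)
    return k

def band_transmittance(absorber_amount):
    """Spectrally averaged transmittance over the band for a given
    absorber amount (arbitrary units)."""
    return float(np.mean(np.exp(-absorption_coefficient(wavenumber) * absorber_amount)))

for u in (0.5, 1.0, 2.0, 4.0, 8.0):
    print(u, round(band_transmittance(u), 4))
```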
A Lacis, 12/9/10, 12:31 am
If Miskolczi’s papers had been published in the mainstream scientific literature, we could be sure of just one thing: they conformed to the AGW dogma. I critiqued them only on request, partly in the hope of finding a gem, but always to hone the science.
I disagree with you that my education in radiative transfer is in need of augmentation. As I said before, no matter how the climate models might calculate radiation, or how you might think they do, in the end they produce a radiative forcing dependent on the logarithm of CO2 concentration. That might be valid in a narrow region, but for climate projections over a doubling of CO2 concentration, the models violate physics. That is something you might want to study.
You suggested IPCC as a source for my education on radiative transfer. The Fourth Assessment Report contains this revelation:
>> The results from RTMIP imply that the spread in climate response discussed in this chapter is due in part to the diverse representations of radiative transfer among the members of the multi-model ensemble. Even if the concentrations of LLGHGs were identical across the ensemble, differences in radiative transfer parametrizations among the ensemble members would lead to different estimates of radiative forcing by these species. Many of the climate responses (e.g., global mean temperature) scale linearly with the radiative forcing to first approximation. AR4, §10.2.1.5, p. 759.
RTMIP was the Radiative-Transfer Model Intercomparison Project, a response to a chronic problem with radiative transfer modeling. The models didn’t agree, and still don’t. Furthermore, the modelers reduce the problem to parametrization, putting in a statistical estimate for a process too complex or too poorly understood to emulate. So, regardless of what you perceive as my needs in the theory of radiative transfer, in the last analysis radiative transfer is pretty much a failure and irrelevant in IPCC climate models.
It is a failure because, in the end, the modeled climate responses scale linearly with the radiative forcing to a first approximation. And if climate models could get the climate right in the first order, we would have a scientific breakthrough. As I wrote to you above re barking up the wrong tree, the fact that global mean temperature turns out to a first approximation to be proportional to radiative forcing, means that the models to a first approximation are producing a dependence on the logarithm of CO2 concentration. I have no doubt that that could be true and valid, all in the first order, over the domain of CO2 concentration seen at Mauna Loa. I also have no doubt that it is not valid for a doubling of CO2.
Radiative forcing will follow a form like F0 + ΔF*(1-exp(-kC)), where C is the gas concentration. That is an S-shaped curve in the logarithm of C, showing a saturation effect. It is not a straight line. You may apply this function in bulk, a first order approximation, or in spectral regions as you have the time and patience to do. However, knowing the greenhouse effect of CO2 requires knowing where you are on the S-shaped curve. This is just one of many fatal errors in IPCC modeling.
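The two functional forms under discussion can be put side by side numerically. The sketch below compares the saturating expression above with the widely cited logarithmic fit ΔF ≈ 5.35 ln(C/C0) W/m2 (Myhre et al. 1998); since no values of ΔF and k are given above, the ones used here are purely illustrative.

```python
import math

C0 = 280.0   # reference CO2 concentration, ppm

def forcing_log(c):
    """Widely cited simplified fit (Myhre et al. 1998): 5.35 * ln(C/C0), W/m2."""
    return 5.35 * math.log(c / C0)

def forcing_saturating(c, delta_f=8.0, k=1.0 / 300.0):
    """The saturating form F0 + dF*(1 - exp(-kC)) described above,
    referenced to C0 so both curves start at zero; delta_f and k are
    illustrative guesses, not values given in the comment."""
    return delta_f * (math.exp(-k * C0) - math.exp(-k * c))

for c in (280.0, 390.0, 560.0, 1120.0):
    print(c, round(forcing_log(c), 2), round(forcing_saturating(c), 2))
```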
Why would anyone be motivated to master radiative transfer for arbitrary failed models? They are initialized in the 18th Century by zeroing out on-going warming from the last several cold epochs, causing modelers to attribute normal warming to man. The models place the surface layer of the ocean in thermodynamic equilibrium. They assume CO2 is well-mixed and long-lived in the atmosphere. They make the solubility of natural and anthropogenic CO2 different. These IPCC models are open loop with respect to the positive feedback of CO2, and open loop with respect to the positive and negative feedback of cloud albedo.
Perfecting radiative transfer will have no effect on this sorry excuse for science.
I am dumbfounded. Since I put up my contribution at December 7, 2010 at 9:12 am, there have been numerous posts with claims and counter-claims. This reminds me of the ancient philosophers arguing over how many angels can dance on the head of a pin. I cannot see how these differences can be easily reconciled, and I come back to my main point.
It is impossible with current technology to MEASURE the change in radiative forcing for a doubling of CO2. So it will never be possible to find out who is right and who is wrong. And the IPCC can never establish that CAGW is correct using the “scientific method”. We can never know what the true value is for the change in radiative forcing for a doubling of CO2, nor whether such a number actually means anything.
Jim,
Radiative transfer is based directly on laboratory measurement results, and as such, has been tested, verified, and validated countless times. John Tyndall in 1863 was one of the first to measure quantitatively the ability of CO2 to absorb thermal radiation. Since then spectroscopists have identified literally thousands of absorption lines in the CO2 spectra, and have tabulated the radiative properties of these lines in the HITRAN data base. They have full understanding of why each of the CO2 absorption lines is there, and why it has the spectral position, line strength, and line shape that is measured by high resolution spectrometers for a sample of CO2 in an absorption cell for any pressure, temperature, and absorber amount conditions.
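As a concrete example of what "tabulated for any pressure and temperature" means in practice, here is the standard scaling applied to an air-broadened Lorentz half-width (a sketch with an illustrative reference width and temperature exponent, not the HITRAN values for any particular CO2 line):

```python
def lorentz_halfwidth(gamma_ref, pressure_hpa, temperature,
                      p_ref=1013.25, t_ref=296.0, n_air=0.7):
    """Standard pressure/temperature scaling of an air-broadened Lorentz
    half-width: gamma = gamma_ref * (p/p_ref) * (T_ref/T)^n.  The
    reference width (cm^-1) and temperature exponent here are
    illustrative, not values for any particular CO2 line."""
    return gamma_ref * (pressure_hpa / p_ref) * (t_ref / temperature) ** n_air

# The same line is several times narrower near the tropopause than at
# the surface -- one reason a layer-by-layer treatment matters.
print(lorentz_halfwidth(0.07, 1013.25, 288.0))   # near the surface
print(lorentz_halfwidth(0.07, 200.0, 220.0))     # near the tropopause
```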
It is more a matter of engineering than science to calculate by radiative transfer methodology how much radiation a given amount of CO2 will absorb. Just as there is no need to throw someone off a ten story building to verify how hard they will hit the pavement, there is no real need to measure directly how much radiative forcing doubled CO2 will produce. Nevertheless, the actual experiment to do this (doubling of CO2) is well underway. In the mid 1800s, atmospheric CO2 was at the 280 ppm level. Today it is close to 390 ppm, and increasing at the rate of about 2 ppm per year. At that rate, before the end of this century we will have surpassed the doubling of CO2 since pre-industrial levels.
The radiative forcing for doubled CO2 is about 4 W/m2. The precise value depends on the atmospheric temperature profile, the amount of water vapor in the atmosphere, and also on the details of the cloud cover. The 4 W/m2 is a reasonable global average for current climate conditions. The direct warming of the global temperature in response to the 4 W/m2 radiative forcing is about 1.2 C.
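The ~1.2 C figure can be checked on the back of an envelope by linearizing the Stefan-Boltzmann law about the effective emission temperature; a minimal sketch (using ~255 K and the 4 W/m2 from above) gives roughly 1.1 C, in the same ballpark.

```python
SIGMA = 5.67e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
T_EFF = 255.0       # effective emission temperature of the Earth, K
DELTA_F = 4.0       # radiative forcing for doubled CO2, W m^-2 (from above)

# Linearize OLR = sigma*T^4 about T_EFF: dOLR/dT = 4*sigma*T^3,
# so the no-feedback (Planck) response is dT = dF / (4*sigma*T^3).
planck_response = 4.0 * SIGMA * T_EFF ** 3       # ~3.8 W m^-2 K^-1
print(DELTA_F / planck_response)                 # ~1.1 K
```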
Best estimates for water vapor, cloud, and surface albedo feedbacks increase the global mean temperature response to about 3 C for doubled CO2. Because of the large heat capacity of the ocean, the global temperature response takes time to materialize, but that is the global equilibrium temperature that the global climate system is being driven toward.
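The step from ~1.2 C to ~3 C is conventionally expressed through a net feedback factor f, with ΔT = ΔT0 / (1 − f). The sketch below only illustrates that bookkeeping; the f values are illustrative, since the comment gives only the two end points.

```python
DELTA_T0 = 1.2      # no-feedback warming for doubled CO2, C (from the comment)

def equilibrium_warming(f):
    """Standard feedback bookkeeping: dT = dT0 / (1 - f)."""
    return DELTA_T0 / (1.0 - f)

# A net feedback factor of about 0.6 turns ~1.2 C into ~3 C.
for f in (0.0, 0.4, 0.6, 0.7):
    print(f, round(equilibrium_warming(f), 2))
```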
Current technology measurement capabilities are not adequate to measure directly the radiative forcing of the GHG increases. But current measurement techniques do measure very precisely the ongoing increases in atmospheric greenhouse gases, and radiative transfer calculations (using the laboratory measured properties of these GHGs) provide an accurate accounting of the radiative forcing that is driving climate change.
I fail to see how this discussion can advance the case for CO2-induced GW if a simple question related to reconciling:
– 1920-45 modest rise in CO2 with sharpest rise in temperature
– 1945-80 cooling period with the steepest rise of CO2 in the recorded history
http://www.vukcevic.talktalk.net/CO2-Arc.htm
has no clear answer.
Andy Lacis has stated that it’s something to do with oceanic cycles, but has declined to tell us if or how these oceanic cycles have been incorporated into his model, or what the magnitude of their contribution to the recent warming was.
This is unsurprising, because if he did, there would have to be a reassessment of the modeled climate sensitivity to co2, which would inevitably drop.
Let’s assume Dr. Lacis is correct.
If there had been no increase in CO2 (assuming CO2 does what is suggested), then the inevitable conclusion is that 1960s temperatures would have been much lower, by about 0.8 C, roughly the same as in the 1810s (Dalton minimum).
I think IPCC needs to do some work on that one.
That’s the only bit that’s relevant, and also the only bit which can’t be determined by your RTMs.
The radiative forcing of 4W/m2 may be correct and measurable (I have no argument with it), but it’s largely irrelevant. The relevant bit is the sensitivity, as expressed as degrees per CO2 doubling. And that, as I see it, is little more than guesswork.
A. Lacis writes “It is more a matter of engineering than science to calculate by radiative transfer methodology how much radiation a given amount of CO2 will absorb. Just as there is no need to throw someone off a ten story building to verify how hard they will hit the pavement, there is no real need to measure directly how much radiative forcing doubled CO2 will produce.”
This is complete and utter garbage. There is lots of experimental data showing that if someone falls off a 10-story building they will hit the pavement hard. However, it is still IMPOSSIBLE to measure radiative forcing directly. If it was possible to measure radiative forcing directly, then there would be no need to rely on estimates from radiative transfer models.
Can you describe how radiative forcing would actually be measured?
A. Lacis writes “Current technology measurement capabilities are not adequate to measure directly the radiative forcing of the GHG increases. But current measurement techniques do measure very precisely the ongoing increases in atmospheric greenhouse gases, and radiative transfer calculations (using the laboratory measured properties of these GHGs) provide an accurate accounting of the radiative forcing that is driving climate change.”
You seem to agree that radiative forcing cannot be measured directly. That is all that matters. I agree that when you add CO2 to the atmosphere, it disturbs the radiative balance of the atmosphere. CO2 is a GHG. But this still does not alter the fact that radiative forcing cannot be MEASURED.
This will be a much more important issue when Judith introduces how much a change in radiative forcing affects global temperatures.
Radiative forcing can never be measured. No improvement in empirical capabilities can make it possible, because it refers to a hypothetical modification of the atmosphere that cannot actually occur. It is defined as the change in net energy flow when the radiative transfer changes without any change in the temperature profile, but in all real changes the temperature profile will also change.
Thus the radiative forcing will always remain a parameter calculated using some theoretical model.
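To make that definition concrete: in practice the forcing is diagnosed by running the same radiation calculation twice with an identical temperature (and humidity) profile, changing only the CO2, and differencing the fluxes. The toy below is deliberately crude – a one-layer grey model with made-up numbers, not any real radiation code – but it shows the bookkeeping, and why the result is a calculated construct rather than a measurement.

```python
import math

SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def olr_toy(co2_ppm, t_surface=288.0, t_atm=250.0):
    """Deliberately crude one-layer grey model, used only to show the
    bookkeeping: an effective emissivity that creeps up (roughly
    logarithmically) with CO2, blending surface and atmospheric
    emission.  All numbers are made up for illustration."""
    eps = min(1.0, 0.70 + 0.035 * math.log(co2_ppm / 280.0))
    return (1.0 - eps) * SIGMA * t_surface**4 + eps * SIGMA * t_atm**4

def radiative_forcing(co2_before=280.0, co2_after=560.0):
    """Forcing as defined above: the flux change when only the CO2
    changes and the temperature profile is held fixed -- something no
    real atmosphere ever does, hence a calculated quantity."""
    return olr_toy(co2_before) - olr_toy(co2_after)

print(radiative_forcing())   # about 4 W/m2 with these made-up numbers
```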
The related concept of climate sensitivity (including feedbacks) can be measured at some future time, as it refers to a change that can occur on the real Earth. The accuracy of this measurement may remain low, but some direct empirical measurements will be possible.
Pekka you write “The related concept of climate sensitivity (including feedbacks) can be measured at some future time as it refers a change that can occur on the real earth.”
Absolutely correct. HOWEVER, and it is a big however, the rise in temperature for a doubling of CO2 WITHOUT feedbacks CANNOT be measured. That will be the issue as we discuss this in detail, when Judith introduces the subject.