Confidence in radiative transfer models

by Judith Curry

The calculation of atmospheric radiative fluxes is central to any argument related to the atmospheric greenhouse/Tyndall gas effect.  Atmospheric radiative transfer models rank among the most robust components of climate models, in terms of having a rigorous theoretical foundation and extensive experimental validation both in the laboratory and from field measurements.   However, I have not found much in the way of actually explaining how atmospheric radiative transfer models work and why we should have confidence in them (at the level of technical blogospheric discourse).  In this post, I lay out some of the topics that I think need to be addressed in such an explanation regarding infrared radiative transfer.  Given my limited time this week, I mainly frame the problem here and provide some information to start a dialogue on this topic; I hope that other participating experts can fill in (and I will update the main post).

Atmospheric radiative transfer models

Wikipedia provides a succinct description of radiative transfer models:

An atmospheric radiative transfer model calculates radiative transfer of electromagnetic radiation through a planetary atmosphere, such as the Earth’s.  At the core of a radiative transfer model lies the radiative transfer equation that is numerically solved using a solver such as a discrete ordinate method or a Monte Carlo method.  The radiative transfer equation is a monochromatic equation to calculate radiance in a single layer of the Earth’s atmosphere. To calculate the radiance for a spectral region with a finite width (e.g., to estimate the Earth’s energy budget or simulate an instrument response), one has to integrate this over a band of frequencies (or wavelengths). The most exact way to do this is to loop through the frequencies of interest, and for each frequency, calculate the radiance at this frequency. For this, one needs to calculate the contribution of each spectral line for all molecules in the atmospheric layer; this is called a line-by-line calculation.  A faster but more approximate method is a band transmission. Here, the transmission in a region in a band is characterised by a set of coefficients (depending on temperature and other parameters). In addition, models may consider scattering from molecules or particles, as well as polarisation.
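
To make the line-by-line idea concrete, here is a minimal single-layer sketch in Python (the line positions, strengths, widths, and absorber amount are invented for illustration, not taken from HITRAN): the monochromatic absorption coefficient is built by summing Lorentz line shapes, converted to an optical depth for a given absorber amount, and the resulting transmittance is then averaged over the band.

import numpy as np

# Minimal single-layer "line-by-line" sketch; line parameters are invented, not HITRAN.
nu = np.arange(600.0, 700.0, 0.01)                       # wavenumber grid [cm^-1]
line_centers = np.array([620.0, 648.0, 667.0, 691.0])    # hypothetical line positions
line_strengths = np.array([2.0, 1.0, 5.0, 0.5])          # hypothetical line strengths
gamma = 0.07                                             # Lorentz half-width [cm^-1]

# Monochromatic absorption coefficient: sum the Lorentz profile of every line.
k = np.zeros_like(nu)
for nu0, S in zip(line_centers, line_strengths):
    k += S * (gamma / np.pi) / ((nu - nu0) ** 2 + gamma ** 2)

u = 1.0                       # absorber amount in the layer (illustrative units)
tau_nu = k * u                # monochromatic optical depth
trans_nu = np.exp(-tau_nu)    # monochromatic transmittance

# The band-average transmittance is the integral of the monochromatic transmittance
# over the spectral interval, divided by the interval width.
T_band = np.trapz(trans_nu, nu) / (nu[-1] - nu[0])
print(T_band)

A band model replaces the loop over thousands of spectral points with a few fitted coefficients per band, which is why it is so much faster and why it has to be checked against line-by-line results.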

If you don’t already have a pretty good understanding of this, the Wikipedia article is not going to help much.  There are a few good blog posts that I’ve spotted that explain aspects of this (notably scienceofdoom):

You find scienceofdoom’s treatments to be beyond your capability to understand?   Let’s try more of a verification and validation approach to assessing whether we should have confidence in the radiation transfer codes used in climate models.

History of atmospheric (infrared) radiative transfer modeling

I don’t recall ever coming across a history of this subject.  Here are a few pieces of that history that I know of (I hope that others can fill in the holes in this informal history).

Focusing on infrared radiative transfer,  there is some historical background in the famous Manabe and Wetherald 1967 paper on early calculations of infrared radiative transfer in the atmosphere.  As a graduate student in the 1970’s, I recall using the Elsasser radiation chart.

The first attempt to put a sophisticated radiative transfer model into a climate model was made by Fels and Kaplan 1975, who used a model that divided the infrared spectrum into 19 bands.  I lived a little piece of this history, when I joined Kaplan’s research group in 1975 as a graduate student.

In the 1980’s, band models began to be incorporated routinely in climate models.  An international program of Intercomparison of Radiation Codes  in Climate Models (ICRCCM) was inaugurated for clear sky infrared radiative transfer, with results described by Ellingson et al. 1991 and Fels et al. 1991 (note Andy Lacis is a coauthor):

During the past 6 years, results of calculations from such radiation codes have been compared with each other, with results from line-by-line models and with observations from within the atmosphere.  Line by line models tend to agree with each other to within 1%; however, the intercomparison shows a spread of 10-20% in the calculations by less detailed climate model codes.  When outliers are removed, the agreement between narrow band models and the line-by-line models is about 2% for fluxes.

Validation and improvement of atmospheric radiative transfer models

In 1990, the U.S. Department of Energy initiated the Atmospheric Radiation Measurement (ARM) Program, targeted at improving the understanding of the role and representation of atmospheric radiative processes and clouds in models of the earth’s climate (see here for a history).

A recent summary of the objectives and accomplishments is provided in the 2004 ARM Science Plan.  The list of measurements (and instruments) made by ARM at its sites in the tropics, midlatitudes and the arctic is very comprehensive.  Of particular relevance to evaluating infrared radiative transfer codes is the Atmospheric Emitted Radiance Interferometer.  For those of you who want empirical validation, the ARM program provides this in spades.

The ARM measurements have become the gold standard for validating radiative transfer models.  For line-by-line models, see the closure experiment described by Turner et al. 2004 (press release version here).   More recently, see this evaluation of the far infrared part of the spectrum by Kratz et al. (note: Miskolczi is a coauthor).

For a band model (used in various climate models), see this evaluation of the RRTM code:

Mlawer, E.J., S.J. Taubman, P.D. Brown,  M.J. Iacono and S.A. Clough: RRTM, a validated correlated-k model for the longwave. J. Geophys. Res., 102, 16,663-16,682, 1997 link

This paper is unfortunately behind a paywall, but it provides an excellent example of the validation methodology.

The most recent intercomparison of climate model radiative transfer codes against line-by-line calculations is described by Collins et al. in the context of radiative forcing.

There is a new international program (the successor to ICRCCM) called the Continual Intercomparison of Radiation Codes (CIRC), which has established benchmark observational case studies and coordinates intercomparisons of models.

Conclusions

The problem of infrared atmospheric radiative transfer (clear sky, no clouds or aerosols) is regarded as a solved problem (with minimal uncertainties), in terms of the benchmark line-by-line calculations.   Deficiencies in some of the radiation codes used in certain climate models have been identified, and these should be addressed if these models are to be included in the multi-model ensemble analysis.

The greater challenges lie in modeling radiative transfer in an atmosphere with clouds and aerosols, although these challenges are greater for modeling solar radiation fluxes than for infrared fluxes.   The infrared radiative properties of liquid clouds are well known; some complexities are introduced for ice crystals owing to their irregular shapes (this issue is much more of a problem for solar radiative transfer than for infrared radiative transfer).  Aerosols are a relatively minor factor in infrared radiative transfer owing to the typically small size of aerosol particles.

However, if  you can specify the relevant conditions in the atmosphere that provide inputs to the radiative transfer model, you should be able to make accurate calculations using state-of-the-art models.  The challenge for climate models is in correctly simulating the  variations in atmospheric profiles of temperature, water vapor, ozone (and other variable trace gases), clouds and aerosols.

And finally, for calculations of the direct radiative forcing associated with doubling CO2, atmospheric radiative transfer models are more than capable of  addressing this issue (this will be the topic of the next greenhouse post).

Note: this is a technical thread, and comments will be moderated for relevance.




1,207 responses to “Confidence in radiative transfer models”

  1. David L. Hagen

    Ferenc Miskolczi posts papers and comments detailing the development of his quantitative Line By Line (LBL) HARTCODE program, testing it against data, and comparing it against other LBL models. He then used it to quantitatively evaluate the Earth’s Global Planck-weighted Optical depth.

    For the detailed discussion of the methodology and code, see:
    F.M. Miskolczi et al.: High-resolution atmospheric radiance-transmittance code (HARTCODE). In: Meteorology and Environmental Sciences Proc. of the Course on Physical Climatology and Meteorology for Environmental Application. World Scientific Publishing Co. Inc., Singapore, 1990. 220 pg.
    He provides extensive theoretical basis and experimental foundations. He printed the complete code in appendix C (for A. Lacis and others who might care to learn how LBL models calculate.)

    For comparative performance, see: Kratz-Mlynczak-Mertens-Brindley-Gordley-Torres-Miskolczi-Turner: An inter-comparison of far-infrared line-by-line radiative transfer models. Journal of Quantitative Spectroscopy & Radiative Transfer No. 90, 2005.

    Miskolczi published the first quantitative evaluation of the Global Optical Depth. He found it to be effectively stable over the last 61 years. See:
    The Stable Stationary Value of the Earth’s Global Average Atmospheric Planck-weighted Greenhouse-Gas Optical Thickness, Energy & Environment, Special issue: Paradigms in Climate Research, Vol. 21 No. 4 2010, August. See his Context Discussion:
    Using a quantitative Planck-weighted Optical Depth, Miskolczi found:

    (1) A theoretically predicted GHG-invariant constant, tau = 1.86756….. ;
    (2) The global average calculated on the TIGR2 radiosonde data archive (GAT), tau = 1.8693;
    (3) One derived from the TFK2009 global energy budget, tau = 1.8779; and
    (4) The average of 61 NOAA NCAR 1948-2008 reanalysis annual means, tau = 1.8688.

    In cases (1), (2), and (4), tau was calculated as tau = −ln(TA), with TA = ST/SU; while in case (3), tau = −ln(1 − ED/(e·SU)), with e the emissivity.
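
    As a rough sanity check on these numbers (using round figures quoted elsewhere in this thread, not Miskolczi’s exact inputs): with a surface-transmitted flux ST of roughly 60 W m^-2 and a surface upward LW flux SU of roughly 390 W m^-2, tau = −ln(60/390) ≈ 1.87, i.e., a flux transmittance TA ≈ 0.15 and an atmospheric flux absorption 1 − TA ≈ 0.85, consistent with the values listed above.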

    • I recall Nullius’s milk analogy, about the way a small increase in the concentration of milk in water can affect transparency given enough depth (at least that’s how I remember it), which illustrated the effect of small increases in the concentration of CO2 in the atmosphere. Does optical depth refer to this? If it does, and the optical depth has remained constant over 60 years, what does that say about the analogy?

    • David L. Hagen

      See Ferenc Miskolczi’s latest April 2011 results:
      The stable stationary value of the Earth’s global average atmospheric infrared optical thickness, presented by Miklos Zagoni, EGU2011 Vienna

      From quantitative Line By Line evaluations of the global optical depth using the best available data from 1948-2008, Miskolczi finds:

      The dynamics of the greenhouse effect depend on the dynamics of the absorbed solar radiation and the space-time distribution of the atmospheric humidity. The global distribution of the IR optical thickness is fundamentally stochastic. The instantaneous effective values are governed by the turbulent mixing of H2O in the air and the global (meridional ) redistribution of the thermal energy resulted from the general (atmospheric and oceanic) circulation. . . .

      Global mean IR absorption does not follow the CO2 increase (from 1948 to 2008). Greenhouse effect and the 21.6% increase of CO2 in the last 61 years are unrelated. Atmospheric H2O does but CO2 does not correlate with the IR optical depth. . . .Atmospheric CO2 increase can not be the reason of global warming. . . .IR Optical Depth has no correlation with time. The strong CO2 signal in any time series is not present in the IR optical depth data.

      Thus Miskolczi finds there is NO correlation of global optical depth with CO2, only with H2O.
      Furthermore, the global average is about constant – with very little trend.
      So would “stationary” or “static” be better words than “saturated”?

      Some will argue that the TIGR2 data is flawed. What better is there?
      Miskolczi has also adjusted to match satellite data.

      Has anyone else taken the effort to actually quantitatively evaluate the global optical depth and how it changes – or explain why it does not?

      • Christopher Game

        The term Planck-weighted greenhouse-gas optical thickness (PWGGOT) means that the optical thickness is evaluated for diffuse thermal radiation from a black surface (Planck weighting) at the bottom of the atmosphere for transmission to space. Only the greenhouse-gas effects are considered, and the calculation does not include explicit effects from clouds.
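
        To illustrate what the Planck weighting does, here is a toy sketch only (not HARTCODE, and with an invented spectral transmittance): the spectral flux transmittance from the surface to space is averaged with the surface Planck function as the weight, and the optical thickness is minus the logarithm of that weighted transmittance.

        import numpy as np

        # Toy Planck-weighted transmittance / optical thickness (illustrative numbers only).
        h, c, kB = 6.626e-34, 2.998e8, 1.381e-23
        Ts = 288.0                                  # surface temperature [K]
        nu = np.linspace(3e4, 2.5e5, 5000)          # wavenumber [m^-1], roughly 300-2500 cm^-1

        B = 2 * h * c**2 * nu**3 / (np.exp(h * c * nu / (kB * Ts)) - 1.0)   # Planck function at Ts

        # Hypothetical spectral transmittance: a partly open "window" near 800-1200 cm^-1,
        # strong absorption elsewhere (purely illustrative, not a real atmosphere).
        T_nu = np.where((nu > 8.0e4) & (nu < 1.2e5), 0.8, 0.02)

        TA = np.trapz(B * T_nu, nu) / np.trapz(B, nu)    # Planck-weighted flux transmittance
        tau_pw = -np.log(TA)                             # Planck-weighted optical thickness
        print(TA, tau_pw)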

        David Hagen’s sentence “Thus Miskolczi finds there is NO correlation of global optical depth with CO2, only with H2O” means that there was no linear trend of the global year-round average PWGGOT over the 61-years. It would misread David’s sentence to suppose it meant that immediate CO2 effects do not affect the PWGGOT; of course they do. It is the 61-year linear trend that he refers to.

        David’s sentence says that Miskolczi found a trend effect only from H2O. Not so. Miskolczi’s Figure 11 shows also a trend effect from temperature. Miklos Zagoni’s presentation, to which David refers, also does not mention this. There is also a trend effect from CO2. These two trend effects (CO2 and temperature) must be making real contributions to the calculated values of the PWGGOT (in the sense that those values are determined partly by the quantities of CO2 and temperature and the method of calculation of the PWGGOT), but the magnitudes of those contributions are not to be regarded as showing a statistically significant linear trend when one has to judge only from the 61-year time series of values of the PWGGOT and does not have this a priori information. It is the overall value of the PWGGOT that shows no significant linear trend; this trend cannot be predicted, from just the method of calculation of the PWGGOT alone, because it depends essentially also on the trends of the time series of radiosonde data, which contain the information of the CO2, temperature, and H2O profile time series.

        Miskolczi entitled his 2010 paper ‘The stable stationary value of the earth’s global average atmospheric Planck-weighted greenhouse-gas optical thickness’ and his paper does not use the term ‘saturated’. Christopher Game

    • David L. Hagen

      Miskolczi provides further details, comparing linear trends in the NOAA time series for the first and last 50 years, i.e., 1948-1997 versus 1959-2008. The first shows a small decline, while the latter shows a small rise, in the global optical depth.

  2. I will just summarize some areas to fill in on the post.
    The radiative transfer models in GCMs are band or “integral-line” type models that are calibrated from line-by-line models that themselves are calibrated on theory and direct measurement of spectral lines from radiatively active gases. In this way, this part of the GCM has a direct grounding in physics, and is very easy to verify with spectroscopic measurements. These models are crucial for quantifying the forcing effects of increased CO2 and H2O, and cloud-forcing effects in the atmosphere. The radiative transfer codes get their clouds from other parts of the GCM physics, and are not responsible for clouds per se, but can affect clouds through the processes of radiative heating and cooling in and on the surfaces of clouds that may impact the cloud lifetimes and development. More sophisticated GCMs also carry dust and other aerosols and the interaction of radiation with those directly or via their cloud effects. Obviously another place that radiative forcing is important, apart from the atmosphere, is the ground where longwave and shortwave fluxes interact with the land or ocean energy budgets. Radiation also helps to define the tropospheric and stratospheric temperature profiles, and how well GCMs reproduce these is an important metric. For climate models, GCMs have to obey a realistic global energy budget, as viewed from space, and this is mainly down to their radiative transfer model and cloud distribution.
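
    As a rough illustration of what that calibration step amounts to (a toy sketch with an invented absorption spectrum, not an actual GCM band scheme): band-mean transmittances are computed from the detailed spectrum for a range of absorber amounts, and a small set of band coefficients is then fitted so the cheap parameterization reproduces them.

    import numpy as np
    from scipy.optimize import curve_fit

    # Toy "line-by-line" band-mean transmittance as a function of absorber amount u.
    rng = np.random.default_rng(0)
    k_spectrum = 10.0 ** rng.uniform(-2, 1, 5000)     # invented spectral absorption coefficients
    u = np.linspace(0.01, 5.0, 40)                    # absorber amounts (illustrative units)
    T_lbl = np.array([np.mean(np.exp(-k_spectrum * ui)) for ui in u])

    # Band parameterization: a two-term exponential-sum fit,
    # T(u) ~ w*exp(-k1*u) + (1-w)*exp(-k2*u), whose coefficients are "calibrated" to the LBL data.
    def band_model(u, w, k1, k2):
        return w * np.exp(-k1 * u) + (1.0 - w) * np.exp(-k2 * u)

    coeffs, _ = curve_fit(band_model, u, T_lbl, p0=[0.5, 0.1, 2.0])
    print(coeffs)                                             # fitted band coefficients
    print(np.max(np.abs(band_model(u, *coeffs) - T_lbl)))     # worst-case fit error vs the "LBL" data

    Real schemes use more terms (or the correlated-k method) and tabulate the coefficients against temperature and pressure, but the principle is the same: the fast model is anchored to the line-by-line benchmark, not tuned to give a desired climate answer.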

    • Here’s a good site for the ocean’s absorption of SW vs. LW for non-scientists. Unfortunately, I don’t see any reference to heating of the ocean from down-welling LWR. From the site:

      “Note that only 20% of insolation reaching Earth is absorbed directly by the atmosphere while 49% is absorbed by the ocean and land. What then warms the atmosphere and drives the atmospheric circulation shown in Figure 4.3? The answer is rain and infrared radiation from the ocean absorbed by the moist tropical atmosphere. Here’s what happens. Sunlight warms the tropical oceans which must evaporate water to keep from warming up. The ocean also radiates heat to the atmosphere, but the net radiation term is smaller than the evaporative term. Trade-winds carry the heat in the form of water vapor to the tropical convergence zone where it falls as rain. Rain releases the latent heat evaporated from the sea, and it heats the air in cumulus rain clouds by as much as 500 W/m2 averaged over a year (See Figure 14.1).

      At first it may seem strange that rain heats the air. After all, we are familiar with summertime thunderstorms cooling the air at ground level. The cool air from thunderstorms is due to downdrafts. Higher in the cumulus cloud, heat released by rain warms the mid-levels of the atmosphere causing air to rise rapidly in the storm. Thunderstorms are large heat engines converting the energy of latent heat into kinetic energy of winds.

      The zonal average of the oceanic heat-budget terms (Figure 5.7) shows that insolation is greatest in the tropics, that evaporation balances insolation, and that sensible heat flux is small. Zonal average is an average along lines of constant latitude. Note that the terms in Figure 5.7 don’t sum to zero. The areal-weighted integral of the curve for total heat flux is not zero. Because the net heat flux into the oceans averaged over several years must be less than a few watts per square meter, the non-zero value must be due to errors in the various terms in the heat budget.”

      http://oceanworld.tamu.edu/resources/ocng_textbook/chapter05/chapter05_06.htm

    • Jim

      you said:
      “………….The radiative transfer models in GCMs are band or “integral-line” type models that are calibrated from line-by-line models that themselves are calibrated on theory and direct measurement of spectral lines from radiatively active gases……..”

      Please could you explain (in layman’s terms) a little bit more about this calibration process. What is it? What is done to ensure that errors are not carried over from one type of model into the next and then magnified iteratively? Or have I misunderstood…in which case sorry in advance:)

    • Richard S Courtney

      Jim D:

      You say;
      “For climate models, GCMs have to obey a realistic global energy budget, as viewed from space, and this is mainly down to their radiative transfer model and cloud distribution.”

      Hmmm.
      Yes, that is literally true, but it is misleading.

      As I have explained on two other threads of this blog, the “radiative transfer model” in each GCM is significantly affected by the climate sensitivity to CO2 in each model, and agreement with the “global energy budget” is obtained by adopting an appropriate degree of aerosol forcing (i.e. “cloud distribution”) in each model.

      The values of climate sensitivity and aerosol forcing differ by ~250% between the models. Hence, the GCMs emulate very different global climate systems.

      The Earth has only one global climate system.

      Richard

      • Yes, a part of the radiative transfer model is how it handles aerosols, both in clear air and in clouds. This includes pollution and volcanic emissions. Global observations of aerosol effects haven’t been enough to constrain this very well, and it is a complex effect, especially when clouds are involved. Given this lack of observational constraints the aerosol part of the radiative model and aerosol amounts have some wide error bars (as we see honestly portrayed in the IPCC report). As long as models do something within these error bars, they are plausible, but this is an area where more observations are needed and are being actively carried out (e.g. in the DOE ASR program) to do it better. It is closely tied to the cloud-albedo variation.

  3. David L. Hagen

    Regarding the “confidence” in radiation models, Ferenc Miskolczi addresses radiation errors in his comments on Kiehl-Trenberth 1997/IPCC and in Trenberth-Fasullo-Kiehl (TFK) 2009:

    The longwave section of the Kiehl-Trenberth 1997 (= IPCC 2007 AR4 WG1 Chapter1 FAQ1.1 Figure1) is wrong, both in the concept and in the numbers. The definition of the “Atmospheric Window” in the text does not match the physical quantity shown in their chart (see Miskolczi’s Comments on KT97). The value of it, given as all-sky top-of-the-atmosphere window flux, should be about 66 W m^-2, instead of 40 W m^-2, given in both the original KT97 and in the updated Trenberth-Fasullo-Kiehl (TFK) 2009 BAMS publication.

    A further serious problem is that KT97 used the U.S. Standard Atmosphere 1976 (USST76), with appendix B of Liou 1992, for vertical profiles of temperature and water vapor. But that atmosphere contains only about half of the real global average precipitable water. Calculating the greenhouse effect on that reduced GHG content, the KT97 distribution should have an atmospheric window radiation of about 99 W m^-2. This is even more unacceptable for the real global average than the given 40 W m^-2 value.
    For the correctly defined physical quantity (surface transmitted radiation, ST, over the whole spectrum) in his collected data set, Miskolczi’s computations give a global average value of about 60 W m^-2 (see Table 2 of his recent publication and his Table 1 and Figure 1 for definitions of the quantities). This number is close to what NASA measured with their instruments and methods, and fits well with the required value of the equilibrium distribution.
    Correcting the TFK2009 Atmospheric Window with the real global average, the resultant greenhouse effect again sits very close to the stable stationary value. In this sense, these quantities support the idea that the Earth’s greenhouse effect maintains a kind of constancy.

    For earlier color graphs, see Zagoni’s 2008 presentation on Miskloczi.

    • Miskolczi here is just quibbling with how the arrows are partitioned on these global energy budget summary diagrams. It is nothing fundamental about the radiative transfer models themselves.

      • David L. Hagen

        Jim D
        That difference is “only” a 30% error in attribution of upward emission St (to the top of atmosphere) that is not absorbed and re-emitted — identifying a major error in atmospheric radiation absorption/emission! Is that typical of GCM/energy balance accuracy expected? (Most other parameters were fairly close.) I thought Curry noted above:

        What is accurate enough in terms of fitness for purpose? I would say calculation of the (clear sky, no clouds or aerosol) broadband IR flux to within 1-2 W m^-2 (given perfect input variables, e.g. temperature profile, trace profiles, etc.)

        I seem to recall a climategate email bewailing: “The fact is that we can’t account for the lack of warming at the moment and it is a travesty that we can’t”, with some clarification.

        See the paper by Kevin E. Trenberth, An imperative for climate change planning: tracking Earth’s global energy, Current Opinion in Environmental Sustainability 2009, 1:19–27.
        Trenberth notes the observed energy flow is 145 × 10^20 J/year, while the “residual” (error) is 30–100 × 10^20 J/year (i.e., a 21-29% unaccounted-for “residual”).

        Radiative forcing [2] from increased greenhouse gases (Figure 4) is estimated to be 3.0 W m^-2 or 1.3% of the flow of energy, and the total net anthropogenic radiative forcing once aerosol cooling is factored in is estimated to be 1.6 W m^-2 (0.7%), close to that from carbon dioxide alone (Figure 4). The imbalance at the top-of-the-atmosphere (TOA) would increase to be 1.5% (3.6 W m^-2) once water vapor and ice-albedo feedbacks are included. . . . To better understand and predict regional climate change, it is vital to be able to distinguish between short-lived climate anomalies, such as caused by El Niño or La Niña events, and those that are intrinsically part of climate change, whether a slow adjustment or trend, such as the warming of land surface temperatures relative to the ocean and changes in precipitation characteristics.

        Perhaps this 30% radiation error that Miskolczi identified together with the factor of 2 error in precipitable water column could help refine/find Trenberth’s missing energy?

      • He redefined what the window region was, and says the previous paper was wrong because they defined it differently from him. Seems like an opinion piece.
        The other part about not accounting for interannual variability with existing data sources is well known. Is it random error or bias? It would be good to know because random error decreases with the length of the time series. Bias implies we need more or better instrumentation.

    • David L. Hagen

      Judith
      Re: Global Optical Depth & Precipitable Water

      Planck-weighted Global Optical Depth
      Ferenc Miskolczi has evaluated:
      1) The Planck-weighted Global Optical Depth (tau = −ln(TA) = 1.874)
      2) The sensitivity of Optical Depth versus water vapor
      3) The sensitivity of Optical Depth versus CO2
      4) The sensitivity of Optical Depth versus temperature, and
      5) The trend in Optical Depth for 61 years – very low.
      Each of these parameters is testable against each of the Global Climate Models, and vice versa. These would provide independent objective tests of the “confidence” in the radiative code and atmospheric properties in each of the GCMs, and in Miskolczi’s evaluations and 1D model.

      Precipitable Water
      In his work, Miskolczi reevaluated the atmospheric profile, obtaining water vapor, CO2, and temperature vs depth (pressure). In doing so he found:
      6) Precipitable water u = 2.533 prcm
      7) This precipitable water is a factor of two higher than in the standard atmospheric profile USST-76 (u = 1.76 prcm).
      e.g. See Fig 5 in Miskolczi, Greenhouse effect in semi-transparent planetary atmospheres, Quarterly Journal of the Hungarian Meteorological Service, Vol. 111, No. 1, January–March 2007, pp. 1–40

      A. Lacis (below) noted that:

      In particular, given the nature of atmospheric turbulence, a ‘first principles’ formulation for water vapor and cloud processes is not possible. Because of this, there are a number of adjustable coefficients that have to be ‘tuned’ to ensure that the formulation of evaporation, transport, and condensation of water vapor into clouds, and its dependence on wind speed, temperature, relative humidity, etc., will be in close agreement with current climate distributions. However, once these coefficients have been set, they become part of the model physics, and are not subject to further change.

      Global climate models have been found to perform poorly, especially in predicting precipitation. Anagnostopoulos, G. G., Koutsoyiannis, D., Christofides, A., Efstratiadis, A. and Mamassis, N., ‘A comparison of local and aggregated climate model outputs with observed data’, Hydrological Sciences Journal, 55:7, 1094–1110
      GCMs appear to markedly overpredict warming compared with observed global temperature changes. See:
      McKitrick, Ross R., Stephen McIntyre and Chad Herman (2010) “Panel and Multivariate Methods for Tests of Trend Equivalence in Climate Data Sets”. Atmospheric Science Letters, DOI: 10.1002/asl.290.
      Global Climate Models apparently do not predict the significant correlation between precipitation / runoff and the ~21 year (Hale) solar cycle. See:
      WJR Alexander et al. Linkages between solar activity, climate predictability and water resource development, Journal of the South African Institution of Civil Engineering, Volume 49 Number 2, June 2007

      Miskolczi’s reported water vapor results, together with the evidence of poor GCM performance, raise key questions:
      8) What is the actual global precipitable water vapor: Miskolczi’s atmospheric column (u = 2.53 prcm) or the USST-76 standard atmospheric column (u = 1.76 prcm)?
      9) Which atmospheric profiles were used to tune the GCMs?
      10) Could tuning to USST-76 etc., with its low water vapor, cause GCMs to overpredict climate feedback and poorly predict precipitation?

      I look forward to learning more on these issues.

      Recommend running another thread just on the uncertainties in atmospheric profiles.

  4. David,

    Your unmitigated faith in Ferenc nonsense does not speak well of your own understanding of radiative transfer.

    I include below an excerpt from my earlier post on Roger Pielke Sr’s blog http://pielkeclimatesci.wordpress.com/2010/11/23/atmospheric-co2-thermostat-continued-dialog-by-andy-lacis/

    The basic point being that of all the physical processes in a climate GCM, radiation is the one physical process that can be modeled most rigorously and accurately.

    The GISS ModelE is specifically designed to be a ‘physical’ model, so that Roy Spencer’s water vapor and cloud feedback ‘assumptions’ never actually need to be made. There is of course no guarantee that the model physics actually operate without flaw or bias. In particular, given the nature of atmospheric turbulence, a ‘first principles’ formulation for water vapor and cloud processes is not possible. Because of this, there are a number of adjustable coefficients that have to be ‘tuned’ to ensure that the formulation of evaporation, transport, and condensation of water vapor into clouds, and its dependence on wind speed, temperature, relative humidity, etc., will be in close agreement with current climate distributions. However, once these coefficients have been set, they become part of the model physics, and are not subject to further change. As a result, the model clouds and water vapor are free to change in response to local meteorological conditions. Cloud and water vapor feedbacks are the result of model physics and are thus in no way “assumed”, or arbitrarily prescribed. A basic description of ModelE physics and of ModelE performance is given by Schmidt et al. (2006, J. Climate, 19, 153–192).

    Of the different physical processes in ModelE, radiation is the closest to being ‘first principles’ based. This is the part of model physics that I am most familiar with, having worked for many years to design and develop the GISS GCM radiation modeling capability. The only significant assumption being made for radiation modeling is that the GCM cloud and absorber distributions are defined in terms of plane parallel geometry. We use the correlated k-distribution approach (Lacis and Oinas, 1991, J. Geophys. Res., 96, 9027–9063) to transform the HITRAN database of atmospheric line information into absorption coefficient tables, and we use the vector doubling adding method as the basis and standard of reference for GCM multiple scattering treatment.
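
    For readers unfamiliar with the k-distribution idea, a minimal sketch follows (with an invented toy spectrum, nothing like the actual HITRAN-derived tables): the spectral absorption coefficients within a band are sorted into a cumulative distribution, and the band transmission is then integrated over that smooth distribution with a handful of quadrature points instead of thousands of spectral points.

    import numpy as np

    # Toy band with spiky "lines" (invented, not HITRAN).
    nu = np.linspace(600.0, 700.0, 20000)                          # wavenumber [cm^-1]
    k = 0.01 + 5.0 * np.exp(-(((nu - 650.0) % 2.0) - 1.0) ** 2 / 0.01)

    u = 0.5                                                        # absorber amount, illustrative

    # Reference: band-mean transmittance computed point by point across the spectrum.
    T_spectral = np.mean(np.exp(-k * u))

    # k-distribution: sort k into its cumulative distribution g, then integrate over g
    # with an 8-point Gauss-Legendre rule.
    k_sorted = np.sort(k)
    g = (np.arange(k.size) + 0.5) / k.size
    x, w = np.polynomial.legendre.leggauss(8)
    g_quad, w_quad = 0.5 * (x + 1.0), 0.5 * w                      # map nodes and weights to [0, 1]
    k_quad = np.interp(g_quad, g, k_sorted)
    T_ck = np.sum(w_quad * np.exp(-k_quad * u))

    print(T_spectral, T_ck)    # for a single homogeneous layer the two agree closely

    The "correlated" part of correlated-k is the additional assumption that the spectral ordering stays essentially the same from one atmospheric layer to the next, which is what allows the same few quadrature points to be used through an inhomogeneous column.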

    Direct comparison of the upwelling and downwelling LW radiative fluxes, cooling rates, and flux differences between line-by-line calculations and the GISS ModelE radiation model results for the Standard Mid-latitude atmosphere is shown in Figure 1 below. (available on Roger Pielke Sr’s blog)

    As you can see, the GCM radiation model can reproduce the line-by-line calculated fluxes to better than 1 W/m2. This level of accuracy is representative for the full range of temperature and water vapor profiles that are encountered in the atmosphere for current climate as well as for excursions to substantially colder and warmer climate conditions. The radiation model also accounts in full for the overlapping absorption by the different atmospheric gases, including absorption by aerosols and clouds. In my early days of climate modeling when computer speed and memory were strong constraints, the objective was to develop simple parameterizations for weather GCM applications (e.g., Lacis and Hansen, 1974, J. Atmos. Sci., 31, 118–133). Soon after, when the science focus shifted to real climate modeling, it became clear that an explicit radiative model responds accurately to any and all changes that might take place in ground surface properties, atmospheric structure, and solar illumination. Thus the logarithmic behavior of radiative forcings for CO2 and for other GHGs is behavior that has been derived from the GCM radiation model’s radiative response (e.g., the radiative forcing formulas in Hansen et al., 1988, J. Geophys. Res., 93, 9341–9364) rather than being some kind of a constraint that is placed on the GCM radiation model.
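
    (For reference, a widely used fit to such detailed calculations is the Myhre et al. 1998 expression delta-F ≈ 5.35 ln(C/C0) W m^-2, which gives about 5.35 × ln 2 ≈ 3.7 W m^-2 for a CO2 doubling; that is an independent published fit, cited here only for orientation, not the GISS formula itself.)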

    Climate is primarily a boundary value problem in physics, and the key boundary value is at the top of the atmosphere being defined entirely by the incoming (absorbed) solar radiation and the outgoing LW thermal radiation. The global mean upwelling LW flux at the ground surface is about 390 W/m2 (for 288 K), and the outgoing LW flux at TOA is about 240 W/m2 (or 255 K equivalent). The LW flux difference that exists between the ground and TOA of 150 W/m2 (or 33 K equivalent) is a measure of the terrestrial greenhouse effect strength. We should note that the transformation of the LW flux that is emitted upward by the ground, to the LW flux that eventually leaves the top of the atmosphere, is entirely by radiative transfer means. Atmospheric dynamical processes participate in this LW flux transformation only to the extent of helping define the atmospheric temperature profile, and in establishing the local atmospheric profiles of water vapor and cloud distributions that are used in the radiative calculations.
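
    (These round numbers follow from the Stefan-Boltzmann law: sigma*T^4 with sigma = 5.67×10^-8 W m^-2 K^-4 gives about 390 W m^-2 at 288 K and about 240 W m^-2 at 255 K, so the 150 W m^-2 flux difference corresponds to the quoted 33 K greenhouse increment.)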

    Armed with a capable radiative transfer model, it is then straightforward to take apart and reconstruct the entire atmospheric structure, constituent by constituent, or in any particular grouping, to attribute what fraction of the total terrestrial greenhouse effect each atmospheric constituent is responsible for. That is where the 50% water vapor, 25% cloud, and 20% CO2 attribution in the Science paper (for the atmosphere as a whole) came from. “Follow the money!” is the recommended strategy to get to the bottom of murky political innuendos. A similar approach, using “Follow the energy!” as the guideline, is an effective means for fathoming the working behavior of the terrestrial climate system. By using globally averaged radiative fluxes in the analysis, the complexities of advective energy transports get averaged out. The climate energy problem is thereby reduced to a more straightforward global energy balance problem between incoming (absorbed) SW solar energy and outgoing LW thermal energy, which is fully amenable to radiative transfer modeling analysis. The working pieces in the analysis are the absorbed solar energy input, the atmospheric temperature profile, surface temperature, including the atmospheric distribution of water vapor, clouds, aerosols, and the minor greenhouse gases, all of which can be taken apart and re-assembled at will in order to quantitatively characterize and attribute the relative importance of each radiative contributor.

    Validation of the GCM climate modeling performance is in terms of how well the model generated temperature, water vapor, and cloud fields resemble observational data of these quantities, including their spatial and seasonal variability. It would appear that ModelE does a generally credible job in reproducing most aspects of the terrestrial climate system. However, direct observational validation of the GCM radiation model performance to a useful precision is not really feasible since the atmospheric temperature profile and absorber distributions cannot all be measured simultaneously with available instrumentation to the required precision that would lead to a meaningful closure experiment. As a result, validation of the GCM radiation model performance must necessarily rely on the established theoretical foundation of radiative transfer, and on comparisons to more precise radiative transfer bench marks such as line-by-line and vector doubling calculations that utilize laboratory measurements for cloud and aerosol refractive indices and absorption line radiative property information.

    • Thanks Andy. I agree that validation is a mess owing to the difficulty of specifying the atmospheric characteristics, but clear sky validation has been done quite successfully by the ARM program.

    • Hi A Lacis,
      Given your background as a GISS climate modeler, I am interested in your view of Willis Eschenbach’s analysis of the performance of various models at http://wattsupwiththat.com/2010/12/02/testing-testing-is-this-model-powered-up/#more-28755.

      Thanks,
      Chip

    • Andy,
      Does the GISS ModelE cover all wavelengths between 200 nm (UV) and 50000 nm (longer IR)? Or are there gaps? More specifically, does it cover the 2400-3600 nm wavelengths?

    • David L. Hagen

      A. Lacis
      Judith has called for “Raising the level of the game”
      Will you rise to the level of professional scientific conduct?
      Or need we treat your comments as partisan gutter diatribe?
      Your diatribe on “unmitigated faith in Ferenc nonsense does not speak well of your” professional scientific abilities or demeanor. I have a physics degree and work with combustion where most heat transfer is from radiation. (“Noise” can distort a round combustor into a triangular shape!) Though not a climatologist, I think I am sufficiently “literate” to follow the arguments – and challenge unprofessional conduct when I see it.

      Per professional science, I see Miskolczi has created a world class software program to professionally calculate radiative exchange in his Line By Line HARTCODE. He then validates his HARTCODE LBL code and compares it, in round-robin fashion, against other LBL codes in peer reviewed published papers.
      Miskolczi’s HARTCODE was used at NASA to correct/interpolate satellite data: AIRS – CERES Window Radiance Comparison, AIRS-to-CERES Radiance Conversion

      He then applies this radiative code using published data to evaluate atmospheric radiative fluxes, analyze them, and form a 1D planetary climate model.
      He takes the best/longest available 61 year TIGR radiosonde data and the NOAA reconstruction data. Miskolczi then calculates a “Planck-weighted spectral hemispheric transmittance using M=3490 spectral intervals, K=9 emission angle streams, N=11 molecular species, and L=150 homogeneous atmospheric layers.” That appears to be the first quantitative evaluation of the Global Optical Depth and absorption.

      He has posted preliminary results showing that NASA satellite data show supporting trends parallel to his analysis. See Independent satellite proof for Miskolczi’s Aa=Ed equation. Note that in the TIGR data Ed + St tracks the surface upward flux Su linearly, and the NASA data is parallel to that.

      From the scientific method, I understand that professionally you could challenge:
      1) The data
      2) The code
      3) The uncertainty, and bias, or
      4) Missing/misunderstood physics.

      You say: radiation “is the part of model physics that I am most familiar with, having worked for many years to design and develop the GISS GCM radiation modeling capability.” You could have evaluated Miskolczi’s definition of the “greenhouse effect”, his Planck Weighting, or calculation of the optical depth or atmospheric absorption. As you didn’t, I presume they are correct.

      Having developed them, you presumably have the tools to conduct an alternative evaluation to check on the accuracy of Miskolczi’s method and results. Judith cites your paper: “Line by line models tend to agree with each other to within 1%; however, the intercomparison shows a spread of 10-20% in the calculations by less detailed climate model codes. When outliers are removed, the agreement between narrow band models and the line-by-line models is about 2% for fluxes.” Presumably your radiation model has no more than double the error of Miskolczi’s HARTCODE (<2% vs <1%).
      Why then do you not try to replicate Miskolczi’s method in a professional scientific manner? Are you afraid you might confirm his results?

      You observe: “The reason this “Miskolczi equality” is close to being true is that in an optically dense atmosphere (the atmospheric temperature being continuous) there will be but a small difference in the thermal flux going upward from the top of a layer compared to the flux emitted downward from the bottom of the layer above.” Spencer makes a similar criticism. Miskolczi (2010) quantitatively measures and shows a small difference between upward and downward flux. For the purpose of Miskolczi’s 1D planetary greenhouse model, that difference between upward and downward flux appears to be a second order effect that does not strongly bear on the primary results of his global optical depth or absorption. He still accounts for the atmospheric variation in temperature, water vapor, and CO2 in the empirical data.

      The only serious issue you implied in your posts is how representative the TIGR and NOAA atmospheric profiles are of the global atmosphere: “. . .the atmospheric temperature profile and absorber distributions cannot all be measured simultaneously with available instrumentation to the required precision that would lead to a meaningful closure experiment.”

      Regarding Miskolczi’s Planck weighted Global Optical Depth and Absorption, other issues that have been raised are how well he treats clouds and the accuracy of the TIGR/NOAA data.
      You could quantitatively show contrary evidence that
      1) there are systematic spurious trends due to experimental error in the data over the last 61 years;
      2) there are major errors due to how clouds and convection are treated in this 1D model; or
      3) poor data distribution causes major errors in his results.

      Instead I see you respond with scientific slander, asserting that Miskolczi imposed the results of his subsequent simplified climate model on his earlier calculations.

      In your comments on Ferenc Miskolczi’s greenhouse analysis, you said that “There is no recourse then but to rely on statistical analysis and correlations to extract meaningful information.” You claim that “radiative analyses performed in the context of climate GCM modeling, have the capability of being self-consistent in that the entire atmospheric structure (temperature, water vapor, ozone, etc. distributions) is fully known and defined.”
      You go on to state: A Lacis | December 5, 2010 at 12:12 pm
      “We also analyze an awful lot of climate related observational data. Data analysis probably takes up most of our research time. Observational data is often incomplete, poorly calibrated, and may contain spurious artifacts of one form or another. This is where statistics is the only way to extract information.” Consequently you claim: “And this climate model does a damn good job in reproducing the behavior of the terrestrial climate system, . . .”

      However, when another scientist, Miskolczi, conducts such “statistical analysis and correlations to extract meaningful information” you scientifically slander him, asserting that he did not conduct his analysis as you believe it should be done. Yet he conducted a much more detailed analysis along similar lines to your proposed method.

      You assert: “Instead of calculating these atmospheric fluxes, Miskolczi instead assumes that the downwelling atmospheric flux is simply equal to the flux (from the ground) that is absorbed by the atmosphere.”
      You claim that he imposed the results of a consequent simplified model on his detailed calculations. I challenged you that you were asserting the exact opposite of his actual published method. When confronted, you refused to check his work or to correct your statement, or show where I was wrong. I find your polemical diatribes to sound like the Aristotelians criticizing Galileo. Your words border on professional malpractice.

      You state: “Because of this, there are a number of adjustable coefficients that have to be ‘tuned’ to ensure that the formulation of evaporation, transport, and condensation of water vapor into clouds, and its dependence on wind speed, temperature, relative humidity, etc., will be in close agreement with current climate distributions. . . . However, once these coefficients have been set, they become part of the model physics, and are not subject to further change.”
      Yet when Miskolczi does the same “tuning” of the atmospheric profiles with the available TIGR and NOAA data to obtain empirical composition, temperature, pressure and humidity, you accuse him of imposing his simple model on the calculations.

      You have not shown ANY error in the radiative physics he built on, nor in the coding of his HARTCODE software, nor in the atmospheric profiles he generated.
      You say: “The working pieces in the analysis are the absorbed solar energy input, the atmospheric temperature profile, surface temperature, including the atmospheric distribution of water vapor, clouds, aerosols, and the minor greenhouse gases”. I understand Miskolczi to include these with the effect of “clouds and aerosols” in the atmospheric profiles.

      You state: “validation of the GCM radiation model performance must necessarily rely on the established theoretical foundation of radiative transfer, and on comparisons to more precise radiative transfer bench marks such as line-by-line and vector doubling calculations that utilize laboratory measurements for cloud and aerosol refractive indices and absorption line radiative property information.”

      It appears that Miskolczi has done exactly that, with independent data. It appears that when he comes up with results opposite to your paradigm, you conduct a vicious ad hominem attack, scientifically slandering him, instead of responding in a scientifically professional, objective way.

      You again accuse: “Roy Spencer” of making “water vapor and cloud feedback ‘assumptions’”. I understood Spencer to have done the opposite. In his recent papers, he actually measures dynamic phase/feedback magnitudes from satellite data and phase angle.
      In your cited Pielke post you state: “there is really nothing that is being assumed about cloud and water vapor feedbacks, other than clouds and water vapor behave according to established physics. Climate feedbacks are simply the end result of model physics.” However, you make assumptions on the stability of Total Solar Insolation, on the variability of clouds with cosmic rays, and on the cause and magnitude of ocean oscillations. The cause and magnitude of natural CO2 changes vary strongly with each of those assumptions. The strong difference between your results and those of Miskolczi and Spencer raises serious questions on your results and your (low) estimates of the uncertainties involved.
      At the International Conference on Climate Change
      Ferenc Miskolczi presented: Physics of the Planetary Greenhouse Effect
      http://www.heartland.org/bin/media/newyork08/PowerPoint/Tuesday/miskolczi.pdf
      And
      Miklos Zagoni, presented: Paleoclimatic Consequences of Dr. Miskolczi’s Greenhouse Theory
      http://www.heartland.org/bin/media/newyork08/PowerPoint/Tuesday/zagoni.pdf

      Physicist Dr. Ir. E. van Andel, has addressed The new climate theory of Dr. Ferenc Miskolczi.
      Physicist Miklos Zagoni has presented on: The Saturated Greenhouse Theory of Ferenc Miskolczi.
      There are numerous technical posts on Miskolczi at Niche Modeling.

      From preliminary reading, Miskolczi’s work appears professional, and it has been peer reviewed. I would have preferred further refinement of its language. I do not have the time or tools to professionally review, reproduce or test Miskolczi’s work. I may be wrong in my “lay scientific” perspective. However, antiscientific diatribes don’t persuade.

      You have the radiation modeling tools. Use them. You refused to read, or follow Miskolczi’s methods and calculations. Your actions speak much louder than your words. Until you provide credible qualitative or quantitative scientific rebuttal, I will stick to believing in the scientific method. I continue to hold that peer reviewed published papers like Miskolczi’s and Spencer’s have more weight than alarmist polemical posts.

      Will you step up to the challenge of “Raising the level of your game” with a professional scientific response?

      • David,

        In my remarks about Miskolczi’s paper, I never claimed that his line-by-line HARTCODE results were erroneous. I am not familiar with HARTCODE, so I said, let’s assume that Miskolczi is doing his line-by-line calculations correctly. I am more familiar with the line-by-line results of radiation codes like FASCODE and LBLRTM, which agree with our line-by-line model to better than 1%. Our GCM ModelE radiation model agrees with our line-by-line model also to better than 1%.

        The real problem with Miskolczi’s results is not HARTCODE, but what he uses it for. Why on Earth would anybody want to calculate all of atmospheric absorption in the form of a useless “greenhouse gas optical thickness” parameter? You should know that atmospheric gaseous absorption is anything but uniform. You need to preserve as much of the spectral absorption coefficient variability as possible, and to treat this spectral variability properly in order to calculate the radiative transfer correctly. This kind of approach is something that might have been used a hundred years ago when there were no practical means to do numerical calculations.

        Can Miskolczi calculate the well established 4 W/m2 radiative forcing for doubled CO2, and its equivalent 1.2 C global warming, with his modeling approach?
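
        (For orientation: the no-feedback Planck response near the 255 K effective emission temperature is dF/dT = 4*sigma*T^3 ≈ 3.8 W m^-2 K^-1, so a forcing of about 4 W m^-2 corresponds to roughly 1 C of warming before feedbacks, the same order as the 1.2 C figure quoted here.)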

        And you are misinterpreting Roy Spencer’s remarks about the results of our Science paper where we zeroed out the non-condensing greenhouse gases. Roy thought that the subsequent collapse of the greenhouse effect with water vapor condensing and raining out was because we had “assumed” that water vapor was a feedback effect. I just pointed out that we had made no such assumption. Water vapor in the GISS ModelE is calculated from basic physical relationships. The fact that water vapor condensed and precipitated from the atmosphere was the result of the thermodynamic properties of water vapor, and not assumptions of whether we wanted the water vapor to stay or to precipitate.

      • David L. Hagen

        A. Lacis
        Thanks for your response and queries. It is encouraging to hear that: “Our GCM ModelE radiation model agrees with our line-by-line model also to better than 1%.” That is very respectable compared to the 2004 intercomparison of LBL models using HITRAN 2000. I believe performance and resolution have improved since then. E.g. Miskolczi uses: “wavenumber integration [] performed numerically by 5th order Gaussian quadrature over a wavenumber mesh structure of variable length. At least Δnj ≈ 1 cm−1 spectral resolution is required for the accurate Planck weighting.”
        AL: “Why on Earth would anybody want to calculate all of atmospheric absorption in the form of a useless “greenhouse gas optical thickness” parameter.”
        DH: One foundational reason is to uphold the very integrity of science against authoritarianism. A second foundational reason is to provide an independent check on the validity of predictions of Catastrophic Anthropogenic Global Warming (CAGW) compared to natural causes for climate change. What are the relative magnitudes of these competing factors?
        The orthodox model dominated by “anthropogenic CO2 warming + H2O Feedback” implies an increasing optical absorption or optical depth with increasing fossil combustion CO2. Alternative theories seek to quantify natural causes of climate change including the 21 year (“double”) magnetic Hale solar cycle, perturbations of Jovian planets on the sun and in turn the earth, solar modulation of galactic cosmic rays impacting clouds, rotation of the Milky Way galaxy affecting galactic cosmic rays, etc. One or more of these are being tested to explain the strong correlation between variations in the Earth’s Length Of Day (LOD) and climate, etc. Miskolczi (2004) (Fig 18, 19) demonstrates greater accuracy of LBL evaluations of atmospheric emission compared to conventional bulk models. That helps evaluation of global energy budgets. Quantitative LBL based 1D models can also provide an independent test for the “runaway greenhouse” effect.
        Both NASA and Russia lost rockets and spacecraft due to programming errors. We the People are now being asked for $65 trillion for “climate mitigation”. Many scientists, engineers, and concerned citizens are asking for “a second opinion” and for exhaustive “kicking the tires” tests.
        AL:

        validation of the GCM radiation model performance must necessarily rely on the established theoretical foundation of radiative transfer, and on comparisons to more precise radiative transfer bench marks such as line-by-line and vector doubling calculations

        DH: In light of this validation difficulty you noted, Miskolczi’s comprehensive detailed quantitative calculation of the Planck weighted global optical depth directly from original TIGR radiosonde or summary NOAA data provides an independent test of this primary CO2 radiation climate sensitivity with the best available data extending back 61 years. If the global optical depth does NOT vary as expected, then that suggests other parameters have greater impact than conventionally modeled. To my knowledge, these Planck-weighted global optical transmittance and absorption parameters have never been calculated before. Nor have they been used as an independent test of GCM results. 1) Please let us know of any other publications that have done so.

        AL: “You should know that atmospheric gaseous absorption is anything but uniform.”
        DH: I agree. Miskolczi (2004), Miskolczi (2008) and Miskolczi (2010) show detailed nonlinear results of the absorption for water vapor vs CO2, for the surface, as a function of altitude, at Top of Atmosphere, as a function of latitude, grouped for atmospheric WINdow, MID Infra Red, Far Infra Red, for atmospheric down emission and up emission.
        2) Can you refer us to any other paper(s) that provide equal or higher detail on this non-uniformity? – especially with full LBL quantitative calculations?

        AL: You need to preserve as much of the spectral absorption coefficient variability as possible, and to treat this spectral variability properly in order to calculate the radiative transfer correctly.
        DH: I agree. Miskolczi retains full “spectral absorption coefficient variability” for each absorbing gas species, across 3459 spectroscopic ranges; as a function of atmospheric column including altitude calculated over 150 layers, as a function of latitude, and as a function of radiant angle.
        AL: This kind of approach is something that might have been used a hundred years ago when there were no practical means to do numerical calculations.”
        DH: It is precisely because high power computational resources are now available that Miskolczi is able to conduct these extremely detailed quantitative computations (compared to prior highly simplified bulk calculations; I would never have dreamed of doing this on my slide rule). Miskolczi calculates absorption for individual cells as a function of altitude and latitude, incorporating all the variations in temperature, pressure, and water vapor, as a function of wavelength including direct short wave (visible) absorption, and reflected absorption (when the surface is not black); as well as Long Wave (broken into sub groups of the atmospheric WINdow, Mid and Far IR). Each cell views IR emissions from other cells along 11 directions. Miskolczi calculates Planck weighted absorption to give a true thermal (Planck) weighting. All this is then integrated to obtain a Planck-weighted global transmittance and the corresponding Planck-weighted Global Optical Depth tau and Optical Absorption Aa. See Miskolczi 2010, Fig 10. This is then repeated for each of the 61 years of available TIGR/NOAA data.
        From your description, I understand GCMs to only calculate simplified absorption and approximate this quantitative level of detail to reduce computational effort.
        3) Do you know of any other publications calculating transmission and absorption to this high LBL level of detail? Have any others reported this 61-year mean, giving atmospheric transmission tau_A = 1.868754 and atmospheric absorption Aa = 0.84568?
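
        For readers wondering what a “Planck-weighted” average involves, here is a minimal sketch in Python. It is emphatically not Miskolczi’s HARTCODE: the spectral grid and the monochromatic transmittances below are invented for illustration, and only the weighting step itself follows the idea described above (weight each spectral interval by the Planck function before averaging).

```python
import numpy as np

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck(nu_cm, temp):
    """Planck spectral radiance (per Hz) at wavenumber nu_cm (cm^-1), temperature temp (K)."""
    nu = nu_cm * 100.0 * C                       # convert cm^-1 to Hz
    return 2 * H * nu**3 / C**2 / (np.exp(H * nu / (KB * temp)) - 1.0)

# Hypothetical spectral grid and made-up monochromatic transmittances
wavenumbers = np.linspace(100.0, 2500.0, 2401)   # cm^-1
transmittance = np.clip(0.2 + 0.8 * np.abs(np.sin(wavenumbers / 300.0)), 0.0, 1.0)

weights = planck(wavenumbers, 288.0)             # weight by a 288 K Planck curve
mean_t = np.trapz(weights * transmittance, wavenumbers) / np.trapz(weights, wavenumbers)
print(f"Planck-weighted mean transmittance: {mean_t:.3f}")
```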

        AL: Can Miskolczi calculate the well established 4 W/m2 radiative forcing for doubled CO2, and its equivalent 1.2 C global warming, with his modeling approach?
        DH: Yes, Miskolczi (2010) Sect. 2, pp. 256-257, addresses that and calculates detailed sensitivities for other parameters – only possible by high-resolution LBL calculations:

        “In other words, CO2 doubling would virtually, with no feedback, increase the optical thickness by 0.0246. Calculations here show that an equivalent amount of increase can be caused by 2.77 per cent increase in H2O. . . the dependence of the optical thickness . . . is not feasible to express this by a summary explicit analytical function . . . the spectral overlapping of the absorption bands of the individual absorbers. The dependence of the optical depth on the temperature is also extremely complex and again cannot feasibly be described by an explicit analytical expression. The above dependences can only be diagnosed by using the LBL method for the transmittance computation in conjunction with a realistic properly stratified spherical refractive real (or model) atmosphere which is subjected to temperature and absorber amount perturbations.”

        See also his detailed discussion of sensitivities in Sect. 4 p 259-260.
        4) Do GCMs accurately calculate the full Beer-Lambert law with increasing saturation? See Jeff Glassman below.

        AL: the results of our Science paper where we zeroed out the non-condensing greenhouse gases.
        DH: Thanks for the clarification. How have you addressed the enormous buffer capacity of the ocean with numerous related salts? E.g. see Tom V. Segalstad. Segalstad highlights the anorthite-kaolinite buffer, clay mineral buffers, and the calcium silicate-calcium buffer, which are at least three orders of magnitude above the ocean’s carbonate solution buffers. CO2 variation is thus highly constrained within “small” dynamic variations over geologically “short” periods.

        Miskolczi finds the atmospheric long wave (LW) transmission and Planck-weighted Global Optical Depth (absorption) are highly stable. This suggests other factors such as solar & cosmic modulation of clouds and ocean oscillations may have much higher primary impact than currently modeled. There may be major systemic trend errors in the available TIGR data that likely contaminate both GCM and Miskolczi’s models, and combinations thereof.
        5) We look forward to seeing how well GCMs can reproduce and/or disprove Miskolczi’s results. It will be fascinating to discover the causes of these dramatic differences in these 61-year long-term sensitivities based on the best available data.

    • Andy Lacis:
      Validation of the GCM climate modeling performance is in terms of how well the model generated temperature, water vapor, and cloud fields resemble observational data of these quantities, including their spatial and seasonal variability. It would appear that ModelE does a generally credible job in reproducing most aspects of the terrestrial climate system.

      Hi Andy. Does the GISS ModelE GCM successfully reproduce the fall in tropospheric relative humidity since 1948 empirically measured by radiosonde balloons?

      If so, does this help explain the non-appearance of the tropospheric hotspot predicted by earlier, unrealistic models?

      I’m very pleased to see your comment on the ‘best of greenhouse’ thread that your models now take account of oceanic oscillations.

      How much of the warming from 1975 to 2003 is now being attributed to them?

      Thanks.

    • A. Lacis

      “Because of this, there are a number of adjustable coefficients that have to be ‘tuned’ to ensure that the formulation of evaporation, transport, and condensation of water vapor into clouds, and its dependence on wind speed, temperature, relative humidity, etc., will be in close agreement with current climate distributions.”

      This is the major problem I have with climate models. I count 6 inter-related parameters that have to be calibrated, plus an unknown number of unknown parameters (cosmic rays, electromagnetic effects such as lightning, turbulence, droplet size, and ???). That would require n factorial (I’m not sure) interaction matrices that have to be produced by measurements (which I am sure no one has done, unlike the radiation transfer models). If at any time during the model solution a parameter ends up outside its measured range, the whole calculation falls apart. You can’t project fitted data outside its measured range. I don’t see how such a model can be made reliable and testable. The fact that it can be tuned to mimic a series of measurements doesn’t prove a thing about its ability to produce accurate results in a long term prediction.

      The radiative transfer through the atmosphere is only one part of how energy is transferred and at what rate. Just compare the rate of energy transfer from radiation, a few hundred watts per m^2, to that moved by evaporation and convection. Yes, the clear sky problem is likely OK. Unfortunately that is only a small part of the problem.

  5. Very interesting thread.

    For line-by-line calculations, absorption is computed on a prescribed spectral grid (at every model pressure and temperature level), and the equations of radiative transfer are then used to calculate upwelling/downwelling radiative fluxes. Most of the absorption arises from molecular transitions, which quantum physics tells us are discrete, giving rise to absorption lines. Despite the discrete nature of molecular absorption and emission, the process is not monochromatic (confined to a single wavelength). Rather, absorption for a given transition is strongest at the line center and decays away from the center due to various ‘broadening’ mechanisms, which arise from the Heisenberg uncertainty principle, pressure effects (dominant especially in the low atmosphere), or the motions of the molecules (where absorption is Doppler shifted relative to an observer). The convolution of pressure and Doppler effects gives rise to the so-called Voigt line-shape.
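
    As a small illustration of that last point, here is a sketch of a Voigt profile computed by the standard Faddeeva-function route (scipy.special.wofz). The Gaussian and Lorentzian widths are made-up numbers rather than HITRAN parameters; the point is only that the profile is the convolution of a Doppler (Gaussian) core with pressure-broadened (Lorentzian) wings.

```python
import numpy as np
from scipy.special import wofz  # Faddeeva function

def voigt(x, sigma, gamma):
    """Voigt line shape: Gaussian of std dev sigma convolved with a
    Lorentzian of half-width gamma; x is the offset from line centre."""
    z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
    return np.real(wofz(z)) / (sigma * np.sqrt(2.0 * np.pi))

x = np.linspace(-1.0, 1.0, 2001)           # offsets in cm^-1 (illustrative)
profile = voigt(x, sigma=0.02, gamma=0.07)
# Area is close to 1; the Lorentzian wings extend a little beyond this window.
print("area under profile:", np.trapz(profile, x))
```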

    There is, however, some absorption that is much more continuous and stronger than can be explained by the local absorption lines. This is the ‘continuum absorption’, and some mechanisms for this feature, depending on the gas or region of the EM spectrum, are still a matter of uncertainty. It is important in some areas of the H2O and CO2 regimes, however, and becomes important especially in dense atmospheres where collisions between molecules can allow transitions to occur that are otherwise forbidden. In fact, in some atmospheres (such as Titan or the outer planets), even diatomic molecules like H2 can become very good greenhouse gases due to this process.

    There are a few different approaches to representing the continuum absorption in practical modelling. Even for LBL calculations, several parameter choices must be made concerning the formulation of the continuum, or the sub-Lorentzian (or super-Lorentzian, depending on the gas) absorption features in the wings of the spectral lines. With such choices, good fits can be made between LBL calculations and observational spectra. One problem is that for people interested in climates on, say, early Earth or Mars, the radiative transfer issue even in clear skies is far from resolved.

    Another approximate method involves band models, which are basically fits to transmission spectra generated by LBL calculations. Band averaging groups together many molecular transitions, and there are also other methods such as the correlated k-distribution (which I’m sure Andy Lacis will talk about, given his authorship of the 1991 paper with Oinas). One of the objects of the famous ‘Myhre et al. 1998’ study, which gives the ‘5.35*ln(C/Co)’ radiative forcing equation for CO2, was to compare LBL results with previous narrow-band and broad-band models, which have different treatments of solar absorption or the vertical structure of the gas.
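
    For what it is worth, the simplified Myhre et al. expression quoted above is trivial to evaluate. A short sketch (the 280 ppmv reference concentration below is assumed here as an illustrative pre-industrial baseline, not something specified in the comment) reproduces the familiar ~3.7 W/m2 for a doubling:

```python
import math

def co2_forcing(c_ppmv, c0_ppmv=280.0):
    """Simplified CO2 radiative forcing (W/m^2), dF = 5.35 * ln(C/C0)."""
    return 5.35 * math.log(c_ppmv / c0_ppmv)

print(co2_forcing(560.0))  # doubling from 280 ppmv gives ~3.71 W/m^2
```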

    For global warming applications, the strength of an individual greenhouse gas depends upon the distribution of the gas and the temperature, its absorption characteristics, and the concentration itself. For very low concentrations of greenhouse gases, the absorption is approximately linear with mixing ratio in the air, weakening to a square-root and then eventually to a logarithmic dependence. This is why methane is often quoted as being a ‘stronger greenhouse gas than CO2.’ It’s not an intrinsic property of the gas, but rather a consequence of the fact that methane exists in much lower concentrations and so can provide a greater warming potential, molecule for molecule, in the current atmosphere.
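
    A toy ‘curve of growth’ for a single pressure-broadened line shows the first two of those regimes numerically. The line strength and half-width below are arbitrary numbers, not HITRAN parameters, and the logarithmic regime quoted for CO2 only emerges once many overlapping lines and bands are summed, which this sketch does not attempt.

```python
import numpy as np

# Equivalent width W(u) = integral over wavenumber of (1 - exp(-k(v) * u)),
# where u is the absorber amount. For one Lorentz line, W grows linearly in u
# while the line is optically thin and like sqrt(u) once the centre saturates.
S, gamma = 1.0, 0.1                              # illustrative strength and half-width
v = np.linspace(-50.0, 50.0, 20001)              # wavenumber offset from line centre
k = (S / np.pi) * gamma / (v**2 + gamma**2)      # Lorentz absorption coefficient

for u in [0.01, 0.1, 1.0, 10.0, 100.0, 1000.0]:
    W = np.trapz(1.0 - np.exp(-k * u), v)
    print(f"absorber amount u = {u:8.2f}   equivalent width W = {W:7.3f}")
```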

    Without considering feedbacks, the temperature response to a given forcing (such as a doubling of CO2) can be almost purely determined by the radiative forcing for CO2, since the no-feedback sensitivity is merely given by the derivative of the Planck function with respect to the temperature. The ‘forcing’ obviously depends on how forcing is defined; authors use varying definitions, and these differences must be kept in mind when comparing literature sources. The IPCC AR4 defines ‘radiative forcing’ as the change in net irradiance *at the tropopause* while allowing the stratosphere to re-acquire radiative equilibrium (which occurs on timescales of months). Once this is done, it is found that the radiative forcing for a doubling of CO2 is ~3.7 W/m2. A comparison of 9 of the 20 GCMs used in the IPCC AR4 shows differences in the net all-sky radiative forcing between 3.39 and 4.06 W/m2. See e.g.,
    http://journals.ametsoc.org/doi/pdf/10.1175/JCLI3974.1

    In contrast, Table 1 of http://www.gfdl.noaa.gov/bibliography/related_files/bjs0601.pdf shows that the Planck feedback response has relatively very little spread amongst models (and agrees well with simple back-of-the-envelope calculations), indicating a rather robust response for a no-feedback atmosphere. It follows from all of this that most of the uncertainty associated with a doubling of CO2 comes from the various feedback effects, especially low cloud distribution, and these feedbacks involve not just radiative transfer but also dynamics and microphysics.
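
    The ‘back of the envelope’ version of that Planck response can be sketched in a few lines of Python. Treating the planet as a grey body emitting at an effective temperature of roughly 255 K is an assumption made here purely for illustration; it gives a restoring term of about 3.8 W m^-2 K^-1 and hence roughly 1 K of no-feedback warming for a 3.7 W/m2 forcing, in the same ballpark as the ~1.1-1.2 C usually quoted from more careful spectral calculations.

```python
import math

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

# Crude grey-body estimate (illustration only): the planet emits sigma*T^4
# at an effective temperature near 255 K, so the Planck "restoring" term is
# dF/dT = 4*sigma*T^3 and the no-feedback sensitivity is its reciprocal.
T_eff = 255.0
planck_response = 4.0 * SIGMA * T_eff**3        # ~3.76 W m^-2 K^-1
lambda_0 = 1.0 / planck_response                # ~0.27 K per (W m^-2)

forcing_2xco2 = 3.7                             # W/m^2, from the comparison above
print(f"No-feedback warming for 2xCO2: {lambda_0 * forcing_2xco2:.2f} K")
```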

  6. steven mosher

    Thanks Judith,

    I think it needs to be dumbed down even further at least for a start.

    RTE, I would hope, would be the one aspect of AGW science that all sides can come to agree upon. One thing I’ve tried (and failed, of course) to convey to my friends on the skeptical side is that RTE is really not up for serious debate. The angle I take is that this physics is used to design things that actually work. That might be an angle some skeptics would appreciate: how RTE get used in engineering. We used to not be able to talk about it (or get shot), but it would make a nice entry point for your readers to invite somebody who uses RTE in their current engineering job.

    So, I would start with observations (measurements and experiments), then practical applications, then theory. That fits the particular bent of thinking I see in the skeptical circles.

    • Maybe willis can have a go at this :)

    • randomengineer

      How RTE get used in engineering.

      Happens every day in semiconductor fabs. Automated critical dimension measurement and control equipment is always housed in temp controlled microenvironments.

      It’s also easy to demonstrate. Take any industrial microscope at high magnification and look at something (e.g. a feature of say 40 – 100 nanometers) and place your hand within 4 or so inches from the stage assembly. Your image feature will warp out of focus solely from the heat of your hand. Remove your hand, and focus eventually returns. The heat from your hand affects the microscope stage via RTE.

      Designing automation with this type of equipment requires tight control of the thermal environment for this reason (and probably spells out why it’s automated in the first place.)

      Yeah I know that by “engineering” you probably meant something a little fancier or more esoteric as opposed to everyday stuff… but this is, if nothing else, a simple yet practical demo of RTE knowledge being used in every day engineering.

      I hope I’ve contributed here as opposed to being intrusive.

      • thanks, this is the kind of example that people can relate to

      • steven mosher

        That’s exactly the kind of thing I am referring to.

        Communications engineers, engineers in defense (I worked in stealth, broadband stealth), sensor specialists. We all know that RTE are accurate, they work, we rely on them day in and day out. We are not taking crazy pills, and neither have we been subverted by some global watermelon conspiracy. For me it’s a threshold question in having discussions with skeptics. If they won’t learn enough to accept this, then we really can’t have a meaningful dialogue. So I just ask them: can you at least accept physics that has worked for years and protected your freedom of speech?

        When a skeptic uses a chip (his computer chips) to deny a science that makes chips possible, then he’s got a tangible example of the problem with his position.

        So, Judith How many industrial examples can we come up with.

      • randomengineer

        Since what I described is the human version of a basic apartment radiator I doubt you really need engineering examples. This is why I’d wondered if I was submitting TOO simple an example. Surely nobody really can question RTE physics ?!?

        I’m thinking you can use the temp measurement record to prove the exact same concept *and* prove that GW is real. In fact you and I discussed this (very very briefly) some time back elsewhere. All you have to do is look at the tMin from a handful of desert (no humidity) stations. If tMin rises over time and rh doesn’t, it ain’t water vapour doing it, it’s GHG. There aren’t any other factors. IIRC you said you had the data (I don’t.)

        If I’m wrong about this I’d appreciate knowing why.

        If not, I’d like you to consider this because everyone here seems to be in agreement that it’s the simple irrefutable stuff, like you’re wanting here, that gets the message across. We don’t need juvenile pictures, really, that’s too dumbed down.

      • I should actually go back and revisit that desert study with a full dataset and my improved R skills!

        Thanks.

        On the RTE thing I would just start with the basics of radiative transfer. I got my introduction on the job working with radar cross sections and then moved on to IR work. Just getting people to understand transmission windows and absorption and scattering and reflection would be a good start.
        We also did cool things to enable short-range covert communications because of windows, or the lack of windows, to be more precise.

      • Yes Steve, you do need to understand reflection.

      • I have absolutely no problem with understanding how this works planet wide.
        It is the actual physical changes the planet is displaying, which are blamed on global warming, that I have a problem with.
        The evidence is far closer to pressure build-up than planetary warming. But again, that is not CO2 analysis, which is tied totally to temperature and not to physical changes.
        Ocean water does have defences it has built to prevent overheating and mass evaporation.

      • >Surely nobody really can question RTE physics ?!?<

        Well, no actually, but if it pushes the website closer to:

        >It follows from all of this that most of the uncertainty associated with a doubling of CO2 comes from the various feedback effects, especially low cloud distribution, and these feedbacks involve not just radiative transfer but also dynamics and microphysics.< [Colose, December 5th above]

        then it's worth its space here. I vaguely remember Judith promising a thread on feedbacks sometime

        I'm still interested in Dessler's use of the equation:

        DTf = 1.2C/(1-f)

        where f = sum of feedbacks (both signs)

      • ian, what are you interested in? I could probably help you

      • Thanks Chris

        I put up a post on Dessler’s use of that equation some threads ago, but no takers then so I’m most happy for you to answer here

        So, DTf (sensitivity) = 1.2C/(1-f)

        1) 1.2C is Dessler’s preferred temperature rise from purely CO2 x 2 – I’ve found the range reported as 0.8C to 1.2C – how correct is this range?

        2) Dessler included four (4) feedback elements to sum to “f” –
        f(wv) H2O vapour = +0.6
        f(lr) = -0.3
        f(ia) = +0.1
        f(cl) clouds = +0.15
        (are there more than 4 ?)

        so “f” (summed) = +0.55

        so DTf = 1.2/(1-0.55) = 2.7

        but if 0.8C is used, DTf = 1.8, a 67% increase in sensitivity is required to reach 2.7

        and, what bothers me most:

        why cannot “f” (summed) = 1.0, in which case DTf would be infinite (nonsensical)? Is there something wrong with the equation as laid out by Dessler?

        Last point: Dessler claimed (October 2010, MIT) that the various f values above are observed data. By nature and very long experience, I am more inclined to observed data than theory, no matter how elegant, so my obvious questions are on the reliability, range and methods for these observed data

        If you could kindly work through these questions, I am very interested

      • Ian,

        1) The temperature value of “1.2 C” is determined by the product of two things. First, it is dependent on the radiative forcing (RF) for CO2 (or the net change in irradiance at the TOA, or tropopause, depending on definition). It’s also dependent on the magnitude of the no-feedback sensitivity parameter (call it λo to be consistent with several literature sources). Therefore, dTo = RF*λo. So we can re-write the Dessler equation (although this goes back a while; Hansen, 1984 is one of the earliest papers I know of to talk about things in this context, and Lindzen has some other early ones) as dT = (RF*λo)/(1-f). (I will use the “o” when we are referring to no feedbacks.) It turns out virtually all uncertainty in dTo is due to the radiative forcing for CO2 (~+/- 10%; the best way to calculate this is with line-by-line radiative transfer codes, which give ~3.7 W/m2). The uncertainty in the no-feedback sensitivity is considerably less. The 1/(1-f) factor is really at the heart of the sensitivity problem (a small numerical sketch of this factor follows at the end of this comment).

        2) I’m not aware of any other radiative feedbacks than these

        3) The f –> 1 limit is a good question, and a large point of confusion, even for people who work on feedbacks. You are right that it is nonsensical for dT to go to infinity (or negative infinity if the forcing is causing a cooling), but there are a couple of things to keep in mind. First, the equation you cite is strictly an approximation. It neglects higher-order derivative terms (think Taylor series), and thus one can add second-order and third-order powers to the equation, and solving for the new dT with these new terms will leave, say, a quadratic expression that needn’t blow up to infinity.

        Physically, f=1 does not necessitate a runaway snowball or runaway greenhouse, nor is ‘f’ a constant that stays put across all climate regimes. Rather, it corresponds to a local bifurcation point, and it is fully possible to equilibrate to a new position past the bifurcation (although you need further information than the linear Dessler equation to determine where). Think of a situation where you are transitioning into a snowball Earth, and the ice-line reaches a critical (low) latitude that makes the ice-albedo feedback self-sustaining. In this case one needn’t create any new ‘forcing’ to complete the snowball transition. Rather, the previous state (with semi-covered ice) was unstable with respect to the perturbation, and the snowball Earth is a stable solution for the current solar and greenhouse parameters. Thus, just a little nudge in one of these forcings can tip the system past a bifurcation point, to end up in a new stable (say, ice-covered) regime. But once the planet is ice-covered, or ice-free, you can’t get any more ice-albedo feedback, so the temperature won’t change forever.

        4) On the observational side, the water vapor and lapse rate feedbacks are well diagnosed to be positive and negative, respectively. They are usually combined by experts in this area to be a WV+LR feedback, since they interplay with each other. Brian Soden or Isaac Held (along with Dessler, and others if you follow references around…the AR4 is a good start) have many publications on this. I’m not personally familiar with how well quantified the ice-albedo feedback at the surface is observationally (maybe an ice expert here can chime in). The sign is robust, but in any case as Dessler noted, it’s a pretty small feedback. It is important for the Arctic locally, but not very big on a global scale. There’s a lot of papers talking about the ice-albedo feedback though and its role especially in sea ice loss.

        Clouds are the big uncertainty, especially low clouds. The longwave component of cloud feedbacks is pretty much always positive in models, and pretty good theories have been put out to explain this (Dennis Hartmann especially), so the shortwave (albedo) component is the big player here. I’m not in a great position to talk about every new update to cloud observations, but I don’t really know how much value they have right now to diagnose climate sensitivity. Even more so, we don’t know the 20th century radiative forcing with high confidence, mostly because of aerosols.
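
        (As promised under point 1, here is a minimal numerical sketch of the 1/(1-f) factor, using the 1.2 C no-feedback value quoted in this thread. It is only the linear approximation, so it diverges as f approaches 1, which is exactly where the higher-order terms and the bifurcation discussion above take over.)

```python
# Linear feedback relation discussed above: dT = dT0 / (1 - f)
dT0 = 1.2   # K, no-feedback warming for doubled CO2 (value used in the thread)

for f in [-0.5, 0.0, 0.3, 0.55, 0.7, 0.9]:
    print(f"f = {f:5.2f}   dT = {dT0 / (1.0 - f):5.2f} K")
# The formula blows up as f -> 1; beyond that the linear approximation fails.
```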

      • Thank you for your reply – it’s sufficient for me to plough through for now

        As for f >= 1, I had suspected

        >First, the equation you cite is strictly an approximation<

        in any case. I rather thought the "=" in the equation should have been "proportional to" and we were missing some other factor(s)

      • David L. Hagen

        Chris Colose
        Thanks for the comments. Re: “Physically, f=1 does not necessitate a runaway snowball or runaway greenhouse, nor is ‘f’ a constant that stays put across all climate regimes. Rather, it corresponds to a local bifurcation point, and it is fully possible to equilibrate to a new position past the bifurcation (although you need further information than the linear Dessler equation to determine where).”

        Essenhigh explored the potential for bifurcation point with increasing CO2. He concluded:

        More specifically, the outcome of the analysis does not support the concept of “forcing” or precipitation of bifurcation behavior because of increased CO2. Rather, although the evidence is clear that global warming is currently occurring as discussed elsewhere,37 it would appear, nevertheless, that it is not the rising carbon dioxide concentration that is driving up the temperature but, evidently, that it is the rising temperature that is driving up the carbon dioxide by reducing the temperature-dependent saturation level in the sea and thus increasing the natural emissions

        Energy Fuels, 2006, 20 (3), 1057-1067 • DOI: 10.1021/ef050276y

        Can you refer to any studies that do find a bifurcation point from increasing CO2?

      • By nature and very long experience, I am more inclined to observed data than theory, no matter how elegant,

        Funnily enough I’m completely that way myself when it comes to climate science, unlike computer science where I’m the total theorist.

        I think it must be because in a computer we can account for every last electron, whereas the climate is subject to many effects completely beyond our ability to see, such as magma sloshing around in the mantle, inscrutable ocean isotherms, rate of onset of permafrost melting radically changing the feedbacks, etc.

        Digital climate models have something to offer, but so do analog models. The mother of all analog models of climate is mother nature herself.

        We can learn a lot from closer studies of that ultimate analog model!

        (Apologies to all the pessimists here for my being so upbeat about that.)

      • Spacing went goofy on that post. I’d wrote:

        >>Surely nobody really can question RTE physics ?!?<

        Well, no actually, but if it pushes the website closer to:
        [Colose quote]

        then it's worth its space here … etc

      • Hi RandomE
        Is the following the type of study you’re looking for?

        “The arid environment of New Mexico is examined in an attempt to correlate increases in atmospheric CO2 with an increase in greenhouse effect. Changes in the greenhouse effect are estimated by using the ratio of the recorded annual high temperatures to the recorded annual low temperatures as a measure of heat retained (i.e. thermal inertia, TI). It is shown that the metric TI increases if a rise in mean temperature is due to heat retention (greenhouse) and decreases if due to heat gain (solar flux). Essentially no correlation was found between the assumed CO2 atmospheric concentrations and the observed greenhouse changes, whereas there was a strong correlation between TI and precipitation. Further it is shown that periods of increase in the mean temperature correspond to heat gain, not heat retention. It is concluded that either the assumed CO2 concentrations are incorrect or that they have no measurable greenhouse effect in these data.”

        The above is a paper by Slade Barker (24 July 2002)
        “A New Metric to Detect CO2 Greenhouse Effect
        Applied To Some New Mexico Weather Data”

        Here is the link http://www.john-daly.com/barker/index.htm

        hope this helps

        p.s. You say…”I’m thinking you can use the temp measurement record to prove the exact same concept *and* prove that GW is real.”

        Have I misunderstood that sentence or is that what’s called confirmation bias?

        regards

      • randomengineer

        Is the following the type of study you’re looking for?

        No, I don’t think so, but an interesting find regardless. Barker appears to be doing something else entirely.

        However… thank you, anyway.

        In what I spoke to Mosher about, only the min temps are needed, and these need to be looked at from a variety of ultra-low-humidity stations around the world. Essentially the idea is to deal with only the coldest temps in super arid conditions, which would hopefully minimize the effect of water vapor. Rising min temps (which are expected, and from what I gather, what’s observed) would be attributable to GHG — I think, anyway. Maybe one of the experts here can say yea or nay before Mosh starts mucking about with data.

        Have I misunderstood that sentence or is that whats called confirmation bias?

        You understood it and no, it’s not confirmation bias. AGW is already proven. The goal is to help prove it to deniers.

        You’re well aware climate stuff is a political battle. Bad guys (and no, they’re not climate scientists) do exist, largely in the form of political interests, and the goal is to keep them from silly things like ruining the economy. Whether you presume ruination to be in the form of (left) socialist creep or (right) corporate sellout is none of my business, but regardless of which direction ruination approaches from, they’re going to be wielding knowledge as their weapon of choice. Read your Sun Tzu: you can choose to embrace the weapon yourself, or you can get clubbed with it. Denying the existence of the weapon guarantees the latter outcome.

      • Richard S Courtney

        Randomengineer:

        You say:
        “AGW is already proven.”

        Really? By whom, where and how?

        Or do you mean effects of UHI and land use changes?

        I and many others (including the IPCC) would be very grateful for your proof of AGW.

        Richard

      • randomengineer

        Or do you mean effects of UHI and land use changes?

        All of the above. Soot, pollution, land use, and yes, CO2 all play a role. Call it climate change if you prefer. 6 Billion souls with fire and technology and agriculture would be hard pressed to NOT change the environment.

        There’s plenty of room to be skeptical of magnitude and/or whether or not there’s a *problem* while still accepting the reality of the basic physics. For all we know the low feedback guys are correct and the effect of Co2 is minimal. That’s a great deal different however than claiming physics doesn’t work or that CO2 has no effect at all.

        It’s like Mosher says; let’s get past the silly unwinnable physics arguments and move on to what’s important.

      • Richard S Courtney

        Randomengineer:

        Please accept my sincere thanks for your good and clear answer to my question. Your response invites useful discussion, and it is a stark contrast to the typical response from AGW-supporters to such questions.

        As you say;
        “There’s plenty of room to be skeptical of magnitude and/or whether or not there’s a *problem* while still accepting the reality of the basic physics. For all we know the low feedback guys are correct and the effect of Co2 is minimal. That’s a great deal different however than claiming physics doesn’t work or that CO2 has no effect at all.”

        Absolutely right!
        If only more people would adopt this rational position then most of the unproductive ‘climate war’ could be avoided.

        The important issue to be resolved is whether or not AGW is likely to be a serious problem. You, Judith Curry and some others think that is likely, while I and a different group of others think it is very unlikely. Time will reveal which of us is right and to what degree because GHG emissions will continue to rise if only as a result of inevitable industrialisation in China, India, Brazil, etc..

        Without getting side-tracked into the importance of the land-use issues that interest Pielke, the real matters for discussion in the context of AGW are
        (a) to what degree anthropogenic GHG emissions contribute to atmospheric CO2 concentration
        and
        (b) how the total climate system responds to increased radiative forcing from e.g. a doubling of atmospheric CO2 concentration equivalent.

        So, in my opinion, the discussion in this thread needs to assess the possible effects on the climate system of changes to atmospheric GHG concentrations. And, as you say, this leads directly to the issue of climate sensitivity magnitude.

        I think most people would agree that doubling atmospheric CO2 concentration must have a direct warming effect that – of itself – implies a maximum of about 1.2 deg.C increase to mean global temperature (MGT) for a doubling of atmospheric CO2 concentration. The matter to be resolved is how the climate responds to that warming effect; i.e. is the net resultant feedback positive or negative?

        If the net feedback is negative (as implied by Lindzen&Choi, Idso, Douglas, etc.) then those on my ‘side’ (please forgive the shorthand) of the debate are right because a maximum climate sensitivity of 1.2 deg.C for a doubling of CO2 would not provide a problem. But if your ‘side’ of the debate is right that the net feedback is positive then there could be a significant future problem.

        So, the behaviour of the climate system (in terms of changes to lapse rates, clouds and the hydrological cycle, etc.) needs to be debated with a view to discerning how it can be understood. And that is the debate I think we should be having.

        Again, thank you for your answer that I genuinely appreciate.

        Richard

      • randomengineer

        I’m glad we can talk, Richard. It beats name calling and such.

        So, in my opinion, the discussion in this thread needs to assess the possible effects on the climate system of changes to atmospheric GHG concentrations.

        You’re getting ahead of things just a bit. Let’s continue this on the next thread or so when our host starts discussing that part of things; I think this is planned already.

        For now, let’s concentrate on the topic du jour: we pretty much agree that humans change their environment in myriad ways. We know CO2 is a GHG and we know why. We also know that we put a lot of CO2 in the air, which according to basic equations ought to result in some warming. Whether this is a fraction of a degree or more isn’t relevant at this stage. What’s relevant is just knowing that the understanding is basically correct.

        Now, don’t tell your friends, but you’re in the “AGW is real” camp, same as me. That doesn’t mean we’re going to start campaigning for polar bear salvation or writing letters to the editor about imminent catastrophe. Dealing with reality and succumbing to fanaticism are different things.

        Is this a fair summary?

        If so, welcome to the dark side. We have cookies.

      • Richard S Courtney

        Randomengineer:

        I, too, am glad that we can talk. Indeed, I fail to understand why so many of your compatriots think name calling is appropriate interaction, especially when I have always found dialogue is useful to mutual understanding (although it usually fails to achieve agreement).

        You ask me if you have correctly summarised my position, and my answer is yes and no.

        My position is – and always has been – that it is self-evident that humans affect climate (e.g. it is warmer in cities than the surrounding countryside), but it seems extremely unlikely that GHG-induced AGW could have a discernible effect (e.g. the Earth has had a bi-stable climate for 2.5 billion years despite radiative forcing from the Sun having increased ~30%, but the AGW conjecture is that a 0.4% increase of radiative forcing from a doubling of CO2 would overcome this stability).

        So, I agree with you when you say,
        “we pretty much agree that humans change their environment in myriad ways. We know CO2 is a GHG and we know why. We also know that we put a lot of CO2 in the air, which according to basic equations ought to result in some warming.”

        But we part company when you say,
        “Whether this is a fraction of a degree or more isn’t relevant at this stage. What’s relevant is just knowing that the understanding is basically correct.”

        I think not. As I see it, at this stage what we need to know – but do not – is why the Earth’s climate seems to have been constrained to two narrow temperature bands despite immense tectonic changes and over the geological ages since the Earth’s atmosphere became oxygen-rich. This has to have been a result of the global climate system’s response to altered radiative forcing. So, as I said, we need to debate the response of the climate system to altered radiative forcing in terms of changes to lapse rates, clouds, the hydrological cycle, etc..

        Please note that I am on record as having repeatedly stated this view for decades.

        Hence, I am not and never have been on “the dark side”. But if adequate evidence were presented to me then I would alter my view. As the saying goes, if the evidence changes then I change my view.

        To date I have seen no evidence that warrants a change to my view. All I have seen are climate models that are so wrong I have published a refereed refutation of them, assertions of dead polar bears etc., ‘projections’ of future catastrophe that are as credible as astrology (I have published a refereed critique of the SRES), and personal lies and insults posted about me over the web because I do not buy into the catastrophism. And the fact of those attacks of me convinces me that everything said by the catastrophists should be distrusted.

        So, give me evidence that climate sensitivity is governed by feedbacks that are positive and not so strongly negative that they have maintained the observed bi-stability over geological ages despite a ~30% increase in solar radiative forcing. At present I can see no reason of any kind to dispute the null hypothesis; viz. nothing has yet been observed which indicates recent climate changes have any cause(s) other than the cause(s) of similar previous climate changes in the Holocene.

        Richard

      • randomengineer

        Richard, I think I missed something.

        I’d said : “We also know that we put a lot of CO2 in the air, which according to basic equations ought to result in some warming. Whether this is a fraction of a degree or more isn’t relevant at this stage. “

        This isn’t a conclusion of rampant runaway warming, but a statement that merely adding GHG with all things being equal ought to raise the temp.

        Your position then switches to discussion of paleoclimate, and as I read things I think what you’re saying ultimately is that the GHGs are naturally suppressed such that the increase of GHG doesn’t cause warming due to damping.

        I don’t see a problem with asserting a natural state in real life working against a temp rise, but I find it somewhat unconvincing that the natural state that damps a temp rise is damping a temp rise that doesn’t happen in the first place.

        It seems to me that the logical position is that yes there SHOULD be a temp rise, BUT the temp rise isn’t happening due to [unspecified factors.]

        So I’m a bit confused regarding the position. Let me ask this then: should there be a temp rise with adding CO2 that doesn’t happen for [some reasons] or is adding CO2 something that never results in a temp rise?

      • Richard S Courtney

        Randomengineer:

        I apologise for my lack of clarity. It was not deliberate.

        I attempted – and I clearly failed – to state what I think to be where we agree and where we disagree. It seems that I gave an impression that I was trying to change the subject, and if I gave that erroneous impression then my language was a serious mistake.

        So, I will try to get us back to where we were.

        Please understand that I completely and unreservedly agree with you when you assert:

        “I’d said : “We also know that we put a lot of CO2 in the air, which according to basic equations ought to result in some warming. Whether this is a fraction of a degree or more isn’t relevant at this stage. “

        This isn’t a conclusion of rampant runaway warming, but a statement that merely adding GHG with all things being equal ought to raise the temp.”

        Yes, I agree with that. Indeed, I thought I had made this agreement clear, but it is now obvious that I was mistaken in that thought.

        And I was not trying to change the subject when I then explained why I think this matter we agree is of no consequence. The point of our departure is your statement – that I stress I agree – saying,
        “merely adding GHG with all things being equal ought to raise the temp.”

        But, importantly, I do not think “all things being equal” is a valid consideration. When the temperature rises nothing remains “equal” because the temperature rise induces everything else to change. And it is the net result of all those changes that matters.

        As I said, a ~30% increase to radiative forcing from the Sun (over the 2.5 billion years since the Earth has had an oxygen-rich atmosphere) has had no discernible effect. The Earth has had liquid water on its surface throughout that time, but if radiative forcing had a direct effect on temperature the oceans would have boiled to steam long ago.

        So, it is an empirical fact that “merely adding GHG with all things being equal ought to raise the temp.” is meaningless because we know nothing will be equal: the climate system is seen to have adjusted to maintain global temperature within two narrow bands of temperature while radiative forcing increased by ~30%.

        Doubling atmospheric CO2 concentration will increase radiative forcing by ~0.4%. Knowing that 30% increase has had no discernible effect, I fail to understand why ~0.4% increase will have a discernible effect.

        I hope the above clarifies my view. But in an attempt to show I am genuinely trying to have a dialogue of the hearing, I will provide a specific answer to your concluding question, which was:

        “So I’m a bit confused regarding the position. Let me ask this then: should there be a temp rise with adding CO2 that doesn’t happen for [some reasons] or is adding CO2 something that never results in a temp rise?”

        Anything that increases radiative forcing (including additional atmospheric CO2 concentration) will induce a global temperature rise. But the empirical evidence indicates that the climate system responds to negate that rise. However, we have no method to determine the response time. Observation of temperature changes following a volcanic cooling event suggests that the response time is likely to be less than two years. If the response time is that short then we will never obtain a discernible (n.b. discernible) temperature rise from elevated atmospheric CO2. But if the response time is much longer than that then we would see a temperature rise before the climate system reacts to negate the temperature rise. And this is why I keep saying we need to determine the alterations to clouds, the hydrological cycle, lapse rates, etc. in response to increased radiative forcing. We need to know how the system responds and at what rate.

        Alternatively, of course, I may be completely wrong. The truth of that will become clear in future if atmospheric CO2 concentration continues to rise. (All scientists should remember Cromwell’s plea, “I beg ye in the bowels of Christ to consider that ye may be wrong”.)

        In conclusion, I repeat my gratitude for this conversation. Disagreement without being disagreeable is both pleasurable and productive.

        Richard

      • randomengineer

        All clear now, thanks. Summary: your position seems to be that RTE and GHE works as advertised (in a lab condition anyway) but where it concerns the real world, there are factors changing the general rule:

        But,importantly, I do not think “all things being equal” is a valid consideration.

        …which I think is fair enough.

        I can further condense this.

        Napoleon was fond of the aphorism (stolen from a Roman general) — “no battle plan ever survives contact with the enemy.” There’s nothing wrong with noting that history is replete with examples of being utterly wrong and basing one’s starting position on this.

        I’m happy to see that where we part ways is where things get murky rather than where things are clearly visible via lab experiment and the like. I’ll see you on the next GHG forcing thread. Bring your A game. :-)

        Cheers.

      • This is in reply to your comment below.

        Napoleon? Unnamed Roman General? Build railroads son! Try von Moltke.

      • randomengineer, I’m one of those sticklers for protocol and, expressing the sentiment of Australian poet C. J. Dennis’s Sentimental Bloke, worry when people are “not intrajuiced” properly. I therefore wanted to introduce Mr Courtney to you, but I regret to say that googling your pseudonym turned up nothing useful.

        So I’m sorry but I can only help with the other direction. If you google “Richard S. Courtney” you can best meet the good gentleman by skipping over the first few pages and going straight to page 5.

        I hasten to point out the risk of confusing him with another Richard S. Courtney, of the Kutztown University Department of Geography in Pennsylvania. The former appears as the 20th name on the list of more than 100 scientists rebuking Obama as ‘simply incorrect’ on global warming. Unlike the latter Courtney, the former is listed among those 100 distinguished scientists as

        “Richard S. Courtney, Ph.D, Reviewer, Intergovernmental Panel On Climate Change”

        So you should realize you are dealing here with someone who knows what he’s talking about. To my knowledge no one has invited the Pennsylvania professor to serve as a reviewer for the IPCC.

        One way to keep these two gentlemen straight is that the former has a Diploma in Philosophy (presumably what the Ph.D. stands for) from some (thus far unnamed) institution in the UK city of Cambridge. In the normal scheme of things a Diploma in Philosophy would constitute progress towards a degree in divinity, while the Pennsylvania professor is a Doctor of Philosophy from Ohio State University, whose 1993 dissertation is mysteriously titled, “A Spatio-temporo-hierarchical Model of Urban System Population Dynamics: A Case Study of the Upper Midwest.” In his dissertation he employs Casetti’s Expansion Method to redefine a rank-size model into a 27 parameter equation capable of identifying spatial, temporal, and hierarchical dimensions of population redistribution that I use to study the urban system of the Upper Midwest.

        All sounds like complete gibberish to me so I suggest you stick with the divine Mr Courtney, distinguished reviewer for the IPCC with a Diploma in Philosophy, and not let his namesake distract you.

        As I’ve never met either in person I’m afraid I have no other way of distinguishing them so you’re on your own there.

      • Willis Eschenbach

        randomengineer, you say:

        All you have to do is look at the tMin from a handful of desert (no humidity) stations. If tMin rises over time and rh doesn’t, it ain’t water vapour doing it, it’s GHG.

        You should turn in your engineer’s badge for this whopper. You are telling us that there is only one thing on the entire planet that affects minimum desert temperatures — GHGs. I don’t think even you believe that.

        You say “If I’m wrong about this I’d appreciate knowing why.” You are wrong because there isn’t a single place on earth where the climate is only affected by one single factor. Every part of the planet is affected by a variety of feedbacks and forcings and teleconnections. The desert may warm, for example, from an El Niño … and that fact alone is enough to destroy your theory that the desert temperatures are only affected by one isolated factor, GHGs.

      • randomengineer

        Rising over time = 120 years. An el nino isn’t going to change this. It certainly isn’t going to change this from a variety of stations from around the planet. Desert = low humidity. Antarctica counts.

        Yes there are always things that will affect temps, but factor out LOCAL humidity effects (over time in a desert) and you will see a better picture of the local temps unaffected by humidity.

        Land stations otherwise suffer from land use change; are they really reflecting a global temp or are they a proxy for land use? Rural stations will show temp increase once irrigation is used.

        Engineers do their best to control for the one variable they are interested in seeing. In my case, looking at temp without local land use interference. Factor out local humidity.

        If temp rises over time sure maybe there’s more water vapour (potent GHG) globally but this can be factored in/out via plenty of other studies. If temp rises and GLOBAL water vapour is factored out then what’s left is mostly effects from remaining GHG.

        I’m a “believer” and skeptic. A lukewarmer. I get physics. I’m skeptical that the alarmists are correct. My guess is what the results from the desert/low humidity study will show is ~0.2 degrees over the 120 years attributable to CO2. Others may expect higher. I don’t.

      • Rising over time = 120 years. An el nino isn’t going to change this.

        I was going to point out the same thing but you beat me to it.

        It’s even true of the Atlantic Multidecadal Oscillation, which is 10-20 times slower than El Nino events and episodes (with the variability being entirely in the latter) and therefore sneaks up on you, unlike ENNO and ENSO, but in a major way. So Willis could ding you on that one too except that I found a way to iron out the AMO as well.

        What you do is use an exactly 65.5-year moving average, and that minimizes the r² when fitting the composite Arrhenius-Hofmann Law, AHL, to the HADCRUT record.

        One might expect the r² to keep shrinking with yet wider smoothing, but instead it bottoms out at 65.5 years and starts climbing again.

        (Caveat: that level of smoothing impacts the CO2 rise itself a small amount, easily corrected for by applying it to both the HADCRUT data and the AHL when fitting the latter to the former to get the climate sensitivity. It doesn’t make the AHL any smoother, just distorts it in the same way it distorts the record.)

      • Stephen:

        Atmospheric RTEs based on MODTRAN deal with relatively low levels of CO2 (if I’m not mistaken, on the order of 100 to 200 bar.cm for CO2). Combustion engineering deals with levels that can get much higher. The graph at Google Books here:

        http://tinyurl.com/2cgg6p6

        Page 618,

        does not fully reconcile with the MODTRAN reconstruction. Leckner’s curves for emissivity peak at a level of CO2 and the MODTRAN work seems to increase forever.

        So I’ll throw it back to you. Where is Leckner’s mistake?

        Cheers

        JE

      • Leckner’s curves for emissivity peak at a level of CO2 and the MODTRAN work seems to increase forever.

        So I’ll throw it back to you. Where is Leckner’s mistake?

        MODTRAN doesn’t increase forever. The upper limit is the emissivity of a black body at the same temperature, when absorptivity = emissivity = 1. You can see that at the center of the CO2 band in atmospheric spectra, which has an effective brightness temperature of about 220 K. I’ve calculated total emissivity for CO2 using SpectralCalc (http://www.spectralcalc.com) and the results agree quite well with both Leckner and MODTRAN. You just have to make sure you’re using the same mass path. The CO2 mass path for the atmosphere is ~300 bar.cm. The main difference between atmospheric emission using MODTRAN and using the Leckner model is that the atmosphere isn’t isothermal.
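
        A grey-band toy calculation shows the saturation DeWitt describes. The band-averaged absorption coefficient below is an invented number, not a MODTRAN or Leckner value; the point is only that emissivity = 1 - exp(-tau) levels off at the blackbody limit rather than increasing forever, which is also why Leckner’s curves flatten.

```python
import numpy as np

mass_path = np.array([10.0, 30.0, 100.0, 300.0, 1000.0, 3000.0])  # bar.cm, illustrative
k_band = 0.01   # made-up band-averaged absorption coefficient per bar.cm

emissivity = 1.0 - np.exp(-k_band * mass_path)   # saturates towards 1 (blackbody limit)
for u, e in zip(mass_path, emissivity):
    print(f"mass path {u:7.1f} bar.cm   band emissivity {e:.3f}")
```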

  7. Michael Larkin

    Sounds good to me, Steven, even though I’m not wholly sceptical in this area. I just wanna understand the basics so that I can start keying in better to the kinds of things being said earlier in the thread. If Willis felt tempted to have a bash, so much the better!

  8. This is a fascinating thread, including its historical elements. Among the various informative links, one that caught my eye was the first of the Science of Doom links provided by Dr. Curry, which references the following article by Dessler et al and reproduces a figure from that article showing the tight correlation between modeled and observed OLR values as a function of atmospheric temperature and water vapor –
    Dessler et al

    Because the radiative properties of water vapor are critical to an understanding of both greenhouse effects per se and positive feedbacks estimated from warming-mediated increases in water vapor overall and in critical regions of the upper troposphere, the concordance between predicted and observed relationships linking water vapor to OLR struck me as worthy of special notice.

  9. Four points:

    1: For an excellent layman’s introduction to radiative heat transfer in the atmosphere, I recommend the scienceofdoom blog.
    2: A question for Steve Mosher or Willis E et al. If one increases the partial pressure of CO2 in the atmosphere (300 ppm, 400ppm, etc), will the absorbance of EMR continue to increase up to the point where there is so much CO2 that dry ice forms?
    3: What is the difference between HITRAN and MODTRAN? (may not be relevant).
    4: In blast furnace calcs, etc., we use Leckner’s curves. These provide a graph of delta q versus [CO2] that is fairly similar to the IPCC version (5.35ln[CO2]), but these do not continue exponentially beyond 200 ppm CO2. Why does climate atmospheric absorbance differ from engineering atmospheric absorbance? I realize there is a temperature and pressure gradient, but the explanations I’ve seen do not fully explain this disconnect. See “the path length approximation” post on my blog. Please feel free to tear it apart.

    • steven mosher

      I’ll take #3.

      HITRAN is the database.
      http://www.cfa.harvard.edu/hitran/

      “HITRAN is a database, not a “simulation” code. It provides a suite of parameters that are used as input to various modeling codes, either very high-resolution line-by-line codes, or moderate spectral resolution band-model codes. These codes are based on the Lambert-Beers law of attenuation and may include many more features of modeling, for example line shape, scattering, continuum absorption, atmospheric constituent profiles, etc. ”

      MODTRAN (and LOWTRAN) is a simulation code. It’s quick and dirty. You might use it to estimate, for example, the IR signature a plane would present to a sensor on the ground. Like this
      http://en.wikipedia.org/wiki/Northrop_YF-23 which we optimized for stealthiness using RTE.
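
      To make the quoted “Lambert-Beers law of attenuation” concrete: in a layered code the monochromatic optical depths of homogeneous layers add, so their transmittances multiply. A toy sketch (the per-layer optical depths are made up, and the emission terms a real code also carries are ignored):

```python
import numpy as np

layer_tau = np.array([0.30, 0.22, 0.15, 0.08, 0.03])  # made-up per-layer optical depths
total_tau = layer_tau.sum()

print("total transmittance:       ", np.exp(-total_tau))
print("same thing, layer by layer:", np.prod(np.exp(-layer_tau)))
```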

  10. Michael Larkin

    Going back to my sixth-form physics over forty years ago (just scraped a bare pass at A-level! :-)), I remember the two types of spectra: absorption and emission.

    WRT absorption spectra, IIRC, then when you shine full-frequency light through a gas, and at the other side analyse the emerging light using a prism, you find that, depending on what the gas was, there are black lines in the spectrum. What has happened is that certain photons of a particular frequency/wavelength have been absorbed by the gas, kicking electrons of its constituent atoms into higher-energy states. So those photons don’t get through the gas, accounting for the black lines. Where you get the lines is characteristic for a given gas. Hope I’m right so far.

    WRT emission spectra, you get these when you heat elements up. The extra energy input causes emission of photons of specific frequencies/wavelengths, and so what you get in the spectrum is mostly black, with bright lines due to these extra photons. The pattern you get is characteristic for a given element. I think I recall this is the way that one can determine the elements in distant stars. I also remember Doppler effects when an emitting source is in motion, which shifts the location of the lines and enables estimates of velocity to be made.

    When people talk about CO2 “trapping” energy, I have this idea of it being selectively absorptive of specific frequencies of electromagnetic radiation, and that that can be observed spectroscopically. I’m assuming absorption spectra apply here. But it’s a bit confused because an excited electron won’t stay excited forever and may drop back to a lower quantum energy, re-emitting the photon it received (at the same frequency/wavelength?).

    And that radiation may in its turn excite some other CO2 molecule, and so on. We can’t talk of a single photon fighting its way through countless CO2 molecules before it hits the ground or alternatively finds its way into space. To me, it seems like what makes it through is the energy that that single photon represents; there might have been millions of re-emissions of (same-energy?) photons in the interim.

    I get the impression that this relates to the term “optical density”, and intuitively, I’d guess that the greater the optical density, the more CO2 molecules per unit volume, and the more delay there is in the system. Moreover, it seems to be the delay which accounts for the rise in temperature of the atmosphere. So the chain of logic seems to be: more CO2/unit volume => greater optical density => greater delay in photons escaping the atmosphere => increase in temperature of the atmosphere.

    I know there are other things involved, too. Such as conduction (apparently not a big factor?), convection, reflection and refraction, and so on.

    Looking at the Wikipedia article, it is talking about spectral lines which I’m kind of guessing are for absorption spectra, and it’s also talking about “monochromatic” and “line-by-line”. So I’m getting this idea of cycling through different frequencies (each one being a “chrome” or colour, I suppose, though I realise we may not be talking about visible light, e.g. infra-red or ultra-violet) and in some way picking up the lines for all the different constituents of the atmosphere.

    I’m laying all this bare even if I might have it hopelessly wrong just so some kind soul can perceive how I’m thinking and intervene where it’s necessary, at the right sort of level for me, which I think will be about the same level for anyone with (somewhat sub-par) A-level knowledge of physics (for Americans, A-levels are qualifications you need to get into university; so substitute whatever qualifications you guys need).

    I’m trying to focus on a level that isn’t too high, but then again, not too low. All this talk of panes of glass or blankets is too low, and hopefully I will have indicated what is too high, although I definitely want to go higher if possible :-)

    • Long question, so I’ll just answer a bite-sized part. Absorption and emission as applied to the atmosphere.
      When you look up in clear sky in the IR you see emission lines of CO2. This is because the CO2 is warmer than the background, which is cold space (simplified a bit), so the CO2 emission is larger (brighter) than the background.
      If you look down from a satellite in the IR you see absorption lines of CO2. This is because the background black radiation from the ground is warmer than the atmosphere, but in the CO2 bands you see only the last emission, which comes from higher in the atmosphere where it is colder, so it appears as a dark line in the spectrum.

      • Michael Larkin

        Thank you, Jim D.

        I found that a valuable comment; I hadn’t realised that we would be talking about absorption and emission spectra according to whether we were looking from the ground or from space.

        If I might ask a question relating to that, would I be correct to assume that the absorption/emission lines would be in the same location?

      • Michael Larkin

        As you were, Jim D. I infer from a reply I got from A Lacis that the two spectra, if overlapped, would eliminate any dark lines, so that they are presumably in the same location.

      • As you were, Jim D. I infer from a reply I got from A Lacis that the two spectra, if overlapped, would eliminate any dark lines, so that they are presumably in the same location.

        Did Andy take Stokes shift into account?

      • Yes, they are the same wavelengths which are an intrinsic property of the molecules.

      • I have yet to see any consistency between these discussions of emission by CO2 and Kasha’s rule that the only emitting transitions are those from the lowest excited state of any given multiplicity (transitions preserve multiplicity). Corollaries of the rule are the independence of emission from absorption, and Stokes shift quantifying this. This has been known for 60 years, why does it never come up in these radiation discussions?

        Using directly observed absorption spectra to indirectly infer emission spectra seems a bad idea. Emission spectra should either be observed directly or calculated, whichever is more convenient provided it yields reasonable answers.


      • Kasha’s rule has a large effect when the excitation is done by high-energy radiation (high compared to the temperature). I do not think it is important when the gas is in thermal equilibrium.

      • Ok, but is there any correlation between absorbed and emitted photons in CO2?

        And if not, can one assume that the emission probabilities are in proportion to the line strengths?

        I’d like to calculate the mean free path of a photon emitted from a CO2 molecule when the level is 390 ppmv, at STP. For photons emitted from the strongest line I’m getting a mean free path of around 4 cm based on the strengths and line widths at STP.

        For weaker lines the mfp will be longer, but the probability of such photons will be smaller, so it’s not obvious whether the sum over all 100,000 lines or so converges quickly or slowly.

        Kasha’s rule is particularly relevant to this since if it were applicable the series would converge faster.

        Has anyone done this calculation before?
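
        For what it is worth, a back-of-the-envelope sketch of that line-centre estimate. The line strength and half-width below are illustrative placeholders rather than actual HITRAN entries, so treat the result as order-of-magnitude only:

        ```python
        import math

        # Placeholder values (NOT actual HITRAN entries) for one strong CO2 line
        # near 667 cm^-1; substitute real line parameters to redo the estimate.
        S = 3.5e-18        # line strength, cm^-1 / (molecule cm^-2)
        gamma = 0.07       # air-broadened half-width near 1 atm, cm^-1

        # Peak cross-section of a Lorentzian line at line centre: sigma0 = S / (pi * gamma)
        sigma0 = S / (math.pi * gamma)            # cm^2 per molecule

        n_air = 2.687e19                          # molecules per cm^3 at STP (Loschmidt number)
        n_co2 = 390e-6 * n_air                    # CO2 number density at 390 ppmv

        mfp_cm = 1.0 / (n_co2 * sigma0)           # mean free path at line centre, cm
        print(f"peak cross-section ~ {sigma0:.1e} cm^2, mean free path ~ {mfp_cm:.0f} cm")
        ```

        With these placeholder numbers the line-centre mean free path comes out at a few centimetres, in the same range as the ~4 cm quoted above; away from line centre and for weaker lines it grows rapidly, which is exactly the convergence question being asked.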

      • I do not think, it is important when the gas is in thermal equilibrium.

        So are you claiming that the absorption lines work equally well as emission lines below 300 K?

        Kasha himself in his 1950 paper says nothing that would imply this. Do you have any source for your claim?

    • Michael
      Here is a nice graph from the top of the tropopause looking down.
      It looks very like an absorption graph to me.
      The large bite out of the black body envelope is caused by thermalisation of the 15um radiation.
      http://wattsupwiththat.files.wordpress.com/2010/12/ir_window_anesthetics.png

  11. David L. Hagen

    Radiative heat transfer is very important in modeling combustion.
    e.g., in modeling fires and developing fire-fighting techniques; for boilers in electric power plants; and in modeling internal combustion in engines and gas turbines. Modeling “gray” atmospheres with combustion is particularly challenging and computationally intensive; e.g., see CFD-Based Compartment Fire Modeling.

    In gas turbines, design errors on temperature and noise can result in a combustor being “liberated”, with a few million dollars of damage per turbine blade set of downstream damage in the turbine.

  12. John Eggert said on December 5, 2010 at 11:44 pm:

    Atmospheric RTE’s based on MODTRAN deal with relatively low levels of CO2 (if I’m not mistaken, in the order of 100 to 200 bar.cm for CO2). Combustion engineering deals with levels that can get much higher. The graph at google books here:

    http://tinyurl.com/2cgg6p6 – Page 618,

    does not fully reconcile with the MODTRAN reconstruction. Leckner’s curves for emissivity peak at a certain level of CO2, while the MODTRAN work seems to increase forever.

    So I’ll throw it back to you. Where is Leckner’s mistake?

    Well, the vertical axis of his graph is annotated wrongly. But on a serious note it’s important to understand the basics.

    What is emissivity?

    Emissivity is a value between 0 and 1 which describes how well a body (or a gas) emits compared with a “blackbody”. Emissivity is a material property. If emissivity = 1, it is a “blackbody”.

    The Planck law shows how spectral intensity (which is a continuously varying function of wavelength) of a blackbody changes with temperature.

    When you know the emissivity it allows you to calculate the actual value of spectral intensity for the body under consideration. Or the actual value of total flux.

    Emissivity is sometimes shown as a wavelength-dependent graph. In the Leckner curves the value is averaged across the relevant wavelengths for various temperatures. (This makes it easier to do the flux calculation).

    Now some examples:
    -A surface at 300K with an emissivity of 0.1 will radiate 46W/m^2.
    -A surface at 1000K with an emissivity of 0.1 will radiate 5,670 W/m^2.

    Same emissivity and yet the flux is much higher for increased temperatures.

    Leckner wasn’t wrong. The question is confused.

    How come the government income tax rate reaches a maximum and yet the more I earn, the more the government takes from me in tax?

    I believe the question in your mind is about “saturation”. Maybe try CO2 – An Insignificant Trace Gas? – Part Eight – Saturation.

    • Michael Larkin

      Thanks for this, Scienceofdoom. I am hoping to graduate from this thread so that I can launch into your site!

      You say:

      Now some examples:
      -A surface at 300K with an emissivity of 0.1 will radiate 46W/m^2.
      -A surface at 1000K with an emissivity of 0.1 will radiate 5,670 W/m^2.

      Okay. So does this relate to the Stefan-Boltzmann equation: j = eσT^4, where e is the emissivity, σ the proportionality constant, and T is in kelvin?

      Anything less than perfect emissivity (where e = 1, so that we have a black body): would this be the “grey body” that I hear about so often? Is the effectiveness of a grey body quantified according to the value of e?

      Elementary questions, I know, but that is what this thread is about for me and so I hope you will indulge me.

      • You are correct, these are calculated by using the Stefan Boltzmann equation. Plug the numbers in and you will get the answers I did.

        You are mostly correct about “grey body” – although generally it is used for the special case where the body (or gas) is not radiating as a blackbody, yet the emissivity is constant across wavelength.

        This doesn’t really happen in practice but can be useful to calculate the results of simple examples.

        For a graph of how emissivity/absorptivity varies with wavelength see the comments in The Dull Case of Emissivity and Average Temperatures.
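
        To “plug the numbers in”, a minimal sketch of that Stefan-Boltzmann arithmetic (Python; the constant is rounded):

        ```python
        SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

        def radiated_flux(emissivity, T_kelvin):
            """Total flux from a grey surface: j = epsilon * sigma * T^4."""
            return emissivity * SIGMA * T_kelvin**4

        print(radiated_flux(0.1, 300.0))    # ~46 W/m^2
        print(radiated_flux(0.1, 1000.0))   # ~5,670 W/m^2
        ```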

      • Michael Larkin

        Thanks once again, SoD. A valuable point about emissivity not necessarily being constant across wavelength. I wouldn’t have thought about that had you not mentioned it.

    • SOD:

      I’ve read your section a number of times over the last number of months. It is a good reference when I’m talking to people about these things. The fact remains. If you are measuring how much hotter a gas will get in a blast furnace off gas, there comes a point when increasing CO2 no longer increases the heat of the gas. What you are saying is this doesn’t happen. But it does. And there is no confusion in the question. Either the calculation of atmospheric absorbance in combustion engineering is fundamentally the same or it is fundamentally different from the calculation of atmospheric absorbance in climate. One curve has an asymptote and the other doesn’t. Otherwise, there is very little difference between the two.

      • I’ve read your section a number of times over the last number of months. It is a good reference when I’m talking to people about these things. The fact remains. If you are measuring how much hotter a gas will get in a blast furnace off gas, there comes a point when increasing CO2 no longer increases the heat of the gas. What you are saying is this doesn’t happen. But it does. And there is no confusion in the question. Either the calculation of atmospheric absorbance in combustion engineering is fundamentally the same or it is fundamentally different from the calculation of atmospheric absorbance in climate.

        I’ll take a stab at this one – I don’t think there is any fundamental difference here. If I understand that figure correctly (the earlier linked image from google books), I think those “effective emissivity” curves are integrated over all wavelengths (Eq. 8.76 in that book). This will weight the spectral emissivity of the gas by the blackbody (Planck) curve. So, what happens is when you get very high temperatures, the curve is peaking at shorter wavelengths and the CO2 absorption bands at 15 and 4 microns become less important. I don’t think there are any CO2 absorption bands at wavelengths shorter than 4 microns so it would just keep dropping for larger temperatures.

        I guess in climate applications those effects aren’t usually considered, since even for “huge” temperature increases (say 10K), there is no significant shift in the peak of the blackbody curve.
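
        A toy numerical illustration of that Planck-weighting argument (the band edges below are invented for illustration, and the calculation ignores path length and pressure, so it is not a reconstruction of Leckner’s curves):

        ```python
        import numpy as np

        def eps_spectral(lam_um):
            """Toy spectral emissivity: 1 inside two idealized CO2 bands, 0 elsewhere."""
            in_15um = (lam_um > 13.0) & (lam_um < 17.0)
            in_4um = (lam_um > 4.2) & (lam_um < 4.5)
            return np.where(in_15um | in_4um, 1.0, 0.0)

        def planck_lambda(lam_um, T):
            """Planck spectral radiance B(lambda, T); absolute units cancel in the ratio below."""
            h, c, k = 6.626e-34, 2.998e8, 1.381e-23
            lam = lam_um * 1e-6
            return (2.0 * h * c**2 / lam**5) / np.expm1(h * c / (lam * k * T))

        lam = np.linspace(1.0, 100.0, 20000)   # wavelength grid, micrometres (uniform spacing)
        for T in (300.0, 600.0, 1000.0, 2000.0):
            B = planck_lambda(lam, T)
            eps_eff = np.sum(eps_spectral(lam) * B) / np.sum(B)   # Planck-weighted mean emissivity
            print(f"T = {T:6.0f} K   Planck-weighted emissivity ~ {eps_eff:.3f}")
        ```

        The weighted emissivity falls as the temperature rises, because the Planck peak moves to wavelengths where this toy gas has no bands; that is the qualitative behaviour described above, not Leckner’s actual numbers.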

  13. Michael,
    You pretty well have the basics of absorption line formation and line emission. The detailed mechanics of how a molecule emits or absorbs a photon of a particular frequency can be a bit complicated. Any single (say CO2) molecule will be in some specific vibration-rotation energy state. If a photon comes by with just the right frequency to raise it to an allowed higher vibration-rotation energy state, there is some specified probability that that photon will be absorbed, raising that molecule to its new vibration-rotation energy state. The molecule will sit in that state for an exceedingly brief time before a photon (of the same wavelength) is spontaneously emitted, and the molecule returns to its original energy level. But before it got a chance to radiate, that molecule might have undergone a collision with another molecule that might knock it into a different vibration-rotation energy state.
    Fortunately, all these complications do not directly factor into calculating radiative transfer. A single cubic inch of air contains close to a billion billion CO2 molecules, so a statistical approach can be taken to doing practical radiative transfer modeling.

    I like to use the example of an isothermal cavity (with a small pinhole to view the emerging radiation) to illustrate some basic principles of radiation. As you might expect, the radiation emerging from the pinhole will be continuous Planck radiation at temperature T (emitted by the back wall of the cavity). If we now place CO2 gas (also at temperature T) into the cavity, Kirchhoff’s radiative law states that the radiation emerging from the pinhole will still be continuous Planck radiation at temperature T. This is because in a strictly isothermal cavity, everything is in thermodynamic equilibrium, meaning that every emission-absorption transition and every collisional interaction must be in equilibrium (otherwise the temperature of some part of the cavity will change).

    If this parcel of CO2 gas is pulled from the cavity, it will continue to emit radiation representative of temperature T, which, if viewed against a very cold background, will appear as emission lines. If the background is heated to temperature T, the emission lines will still be there, but there will be superimposed absorption lines at the same spectral positions and of the same strength, yielding a featureless continuous Planck spectrum of temperature T just as in the isothermal cavity. If the background is now heated to a hotter temperature, absorption will win out over emission, and the resulting spectrum will be a pure absorption spectrum.

    The line spectrum that CO2 exhibits will depend on the local pressure and temperature of the gas. Pressure generally only broadens the spectral lines, without shifting their spectral position. Temperature, on the other hand, changes the equilibrium collision vibrational-rotation energy state distribution, which can make some spectral lines be stronger, others weaker. Thus a flame spectrum of CO2 will be quite different from the ‘cold atmosphere’ spectrum that is relevant to current climate applications.

    The basic atmospheric spectral line compilation is the HITRAN data base that contains line spectral position, line strength, line width, and line energy level information for more than 2.7 million spectral lines for 39 molecular species. This is the information that goes into a line-by-line model such as LBLRTM or FASCODE, together with the pressure, temperature, and absorber amount profile information that describes the atmospheric structure for which the line-by-line spectrum is to be computed. The line-by-line codes require significant computer resources to operate, but they are the ones that give the most precise radiative transfer results.

    MODTRAN is a commercially available radiative transfer program that calculates atmospheric radiation with moderate spectral resolution (one wavenumber) and with somewhat lesser accuracy. To assure maximum precision and accuracy, we use line-by-line modeling to test the performance of the radiation model used in climate modeling applications.

    • Michael Larkin

      A Lacis,

      Thank you for your extensive post. It is so valuable to know I have the basics approximately right – that means I can build further on that.

      I understood your first two paras very well.

      Para 3:

      I had to look up “isothermal” and Kirchhoff’s law so I could catch your drift. It occurred to me that maybe others at my level are lurking and learning too, so:

      (from Wikipedia):

      “An isothermal process is a change of a system, in which the temperature remains constant: ΔT = 0. This typically occurs when a system is in contact with an outside thermal reservoir (heat bath), and the change occurs slowly enough to allow the system to continually adjust to the temperature of the reservoir through heat exchange. In contrast, an adiabatic process is where a system exchanges no heat with its surroundings (Q = 0). In other words, in an isothermal process, the value ΔT = 0 but Q ≠ 0, while in an adiabatic process, ΔT ≠ 0 but Q = 0.”

      Right. So it looks like we are talking about thermal equilibrium in your example of an isothermal cavity.

      (Kirchhoff’s law – from Wikipedia):

      At thermal equilibrium, the emissivity of a body (or surface) equals its absorptivity.

      Right. So as much energy is coming in as is going out; Delta T = 0. The inner surface of your isothermal cavity seems to be acting as a black body (“Planck radiation”).

      Para 4:

      I’m assuming that “pulling CO2” from the cavity isn’t meant literally. It’s a thought experiment, right?

      You seem to have answered my earlier question to Jim D about whether absorption and emission spectra for the same gas would be complementary WRT the position of their lines. At least, that’s what I thought, but…

      Para 5:

      Hmm. Pressure relates to density of CO2, i.e. locally, the number of molecules per unit volume. The broadening of the lines where the pressure is greater – is that an intensity rather than frequency change?

      I can see that a flame (presumably emission?) spectrum would be quite different from a cold atmosphere spectrum, but what I am not sure about is whether you’re saying some lines might disappear and new ones appear according to circumstances. In view of what SoD told me above, I understand that emissivity isn’t necessarily constant across wavelength. Overall, I’m a little confused about this point (probably my fault more than anyone else’s).

      Paras 6 and 7: Thanks. Removes some of the mystery from what the heck HITRAN and MODTRAN are all about!

      • “The broadening of the lines where the pressure is greater – is that an intensity rather than frequency change?”

        Normally, according to quantum theory, for a molecule to absorb a photon, the photon’s energy must exactly match the energy involved in the transition from one energy level to another within the molecule – e.g., the excitation of a vibration mode in CO2. However, if an encounter of a CO2 molecule with a neighboring molecule (N2, O2, etc.) adds or subtracts a small amount of energy, that amount can make up for a difference between the energy of the incoming photon and the energy needed for a quantum transition. This permits the total incoming energy to match what is needed.

        The higher the density of molecules (i.e., the higher the atmospheric pressure), the greater will be the likelihood of an encounter that creates the needed energy adjustment. This means that at high pressure, photons slightly different in energy from the “exact match” energy will be more likely to be absorbed, so that an absorption line at a particular energy level will broaden to encompass these additional photons whose energy doesn’t quite match the line center.
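
        A small sketch of what pressure broadening does to a single line, assuming the usual Lorentzian line shape with a half-width that scales with pressure (all numbers are illustrative only):

        ```python
        import numpy as np

        def lorentz_profile(nu, nu0, gamma):
            """Normalized Lorentzian line shape (unit area), half-width gamma, in cm^-1 units."""
            return (gamma / np.pi) / ((nu - nu0) ** 2 + gamma ** 2)

        S = 1.0            # line strength (arbitrary units): the area under the line
        nu0 = 667.0        # line centre, cm^-1 (illustrative)
        gamma_ref = 0.07   # half-width at the reference pressure (illustrative)

        nu = np.linspace(666.0, 668.0, 4001)
        dnu = nu[1] - nu[0]
        for p_ratio in (1.0, 0.5, 0.1):         # pressure relative to the reference level
            gamma = gamma_ref * p_ratio          # broadening scales roughly with pressure
            sigma = S * lorentz_profile(nu, nu0, gamma)
            print(f"p/p_ref = {p_ratio:3.1f}   peak = {sigma.max():6.2f}   "
                  f"half-width = {gamma:.3f} cm^-1   area = {sigma.sum() * dnu:.2f}")
        ```

        The area (the line strength) stays essentially fixed while the peak drops and the wings spread, so broadening redistributes absorption in frequency rather than adding or removing intensity, which answers the question above.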

      • Michael Larkin

        Thanks for this, Fred – you put it beautifully and I can understand it very well. One more small piece of the jigsaw! :-)

  14. Miklos Zagoni

    Andy,
    May I ask: According to your data and calculations, how much is the annual global mean atmospheric longwave absorbed radiation?
    Thank you in advance

  15. Judith, you write “And finally, for calculations of the direct radiative forcing associated with doubling CO2, atmospheric radiative transfer models are more than capable of addressing this issue (this will be the topic of the next greenhouse post).”

    When I first read Myhre et al, it states that 3 radiative transfer models were used. I have read the definition of radiative forcing in the TAR, and I was surprised that Myhre did not seem to discuss WHY radiative transfer models were suitable to estimate change in radiative forcing. It has never been obvious to me that radiative transfer models ARE suitable to estimate change in radiative forcing. Can anyone direct me to a published discussion as to why radiative transfer models are suitable to estimate change in radiative forcing?

    • Jim, for clear sky radiative forcing, this thread just provided tons of documentation that the best radiation codes do a good job of simulating the spectral distribution and broad band radiative fluxes. In terms of forcing, the models have been validated from the tropics to the arctic, with over an order of magnitude in difference in total water vapor content. While the models have not been validated observationally for a doubling of CO2, we infer from the above two tests that they should perform fine. The Collins et al. paper referenced here directly addresses this issue (points out that some of the radiation transfer codes used in climate models do not perform well in this regard), but the best ones do.

      • Thank you Judith, but that is not my problem. My problem relates to the definition of radiative forcing in the TAR Chapter 6; viz

        “The radiative forcing of the surface-troposphere system due to the perturbation in or the introduction of an agent (say, a change in greenhouse gas concentrations) is the change in net (down minus up) irradiance (solar plus long-wave; in Wm-2) at the tropopause AFTER allowing for stratospheric temperatures to readjust to radiative equilibrium, but with surface and tropospheric temperatures and state held fixed at the unperturbed values”.

        As I see this, the atmosphere is in two different states; one part has adjusted to radiative equilibrium and one has not. I assume that radiative transfer models reproduce this difference, but I have not been able to find out how. Can you direct me to a publication which explains this please?

      • David L. Hagen

        Jim Cripwell
        For a discussion of various sensitivities a 1D Line By Line radiation model based on radiosonde data, see Miskolczi (2010) Sect 2 pg 256-247 http://www.friendsofscience.org/assets/documents/E&E_21_4_2010_08-miskolczi.pdf

  16. Alexander Harvey

    I think there needs to be a few words of caution regarding visually interpreting spectra.

    I am not the best person to do this so I welcome correction.

    Check the coordinates:

    Are you looking at wavenumbers or wavelengths?

    Wavenumber is the reciprocal of wavelength, so it varies in proportion to frequency.

    More importantly are you looking at

    Transmission (%)
    Absorption (%)
    Cross-sections (cm^2/molecule)
    Line Strengths (cm/molecule)?

    It is the last one that can give rise to the most misleading of visual interpretations. Line Strength (Integrated Intensity) lacks the line shape component. It is a useful abstraction as it gives a measure of the total “strength” (the area under the curve of the line shape), which is a measure of the dipole strength of the transition. It can be misleading, as such a spectrum has pin-sharp (zero width) lines, which makes it look like there are big non-absorbing gaps between the lines, which is not the case.

    The actual units vary but line strengths (as in HITRAN line lists) should boil down to cm/molecule after manipulation and scaling.

    Alex

  17. The failure of catastrophic climate change as an idea is not in the basic physics.
    Just as the failure of eugenics was not in the theory of evolution.

  18. Dr. Curry said:
    “However, if you can specify the relevant conditions in the atmosphere that provide inputs to the radiative transfer model, you should be able to make accurate calculations using state-of-the art models. The challenge for climate models is in correctly simulating the variations in atmospheric profiles of temperature, water vapor, ozone (and other variable trace gases), clouds and aerosols.”

    What about ocean cycles?

    • Ocean cycles may influence what the clouds and temperatures actually are, but once you can specify (or predict) the clouds etc., the radiative transfer models are up to the job of predicting the radiative fluxes. Actually predicting the clouds, water vapor, etc. is at the heart of the problem (the radiation codes themselves are not the problem, which is what this thread is about).

      • Discussing those topics further will be very interesting. You are moving through this issue in an effective, fascinating way.
        Your students are very fortunate.
        Well, I guess many of us here are your students, in effect….

      • Dr. Curry
        Ocean cycle models using the past record are inherently flawed. This is applicable to both the Pacific and the Atlantic oceans’ currents.
        According to my findings there are short-term semi-periodic oscillations with an uncertain frequency (decades) and long-term (centuries) components which may or may not be reversible. None of these appear to be predictable.
        The North Pacific has a number of critical points; here I show what I think are the most important.
        http://www.vukcevic.talktalk.net/NPG.htm
        All of them may to a certain extent contribute to the PDO, with an unspecified time factor and weighting. You may indeed notice that one (red coloured) is possibly the major contributor to the PDO, some 10-12 years later.

      • I’m preparing a thread next week on decadal scale prediction; I will be referencing your site.

      • Thanks; that’s fine as long as you think it merits mention. I am still working on the SSW; some interesting results there, and I may have a possible answer to Antarctica’s 2002 SSW riddle
        http://www.knmi.nl/~eskes/papers/srt2005/png/o3col20020926_sp.png
        that the papers on the subject missed.

  19. Anyone,
    Can I conclude from Dr. Curry’s post that the rise in temperature from radiative forcing is 1.2 C when the concentration of CO2 is doubled?
    I’m not asking about what the others have posted, just whether, if what Dr. Curry says is correct, this is the result.

    Thanks

  20. Dr. Curry, you make a very strong statement:
    “Atmospheric radiative transfer models rank among the most robust components of climate model, in terms of having a rigorous theoretical foundation”

    I am not sure that we already have such a theory. At least it is not used in GCMs. The “theoretical” models of spectral line shapes and their behaviour are just fitted semiempirical analogy models. Closest to a theoretical model are M. Chrysos et al.

    http://blogs.physicstoday.org/update/2008/07/collisions_between_carbon_diox.html
    http://prl.aps.org/abstract/PRL/v100/i13/e133007
    http://pra.aps.org/abstract/PRA/v80/i4/e042703

    Your reference gives a good agreement of 2% between models. The NBM and other simplified methods that I have seen have been satisfied with 10% accuracy compared to LBL. When we take into account that the HITRAN database claims 5% accuracy, is that accurate enough?

  21. This is really interesting. I am not an expert on the nuances of HITRAN or line-by-line codes, so I would like to learn more about how accurate you think these are. My statement was relative to other climate model components. What is accurate enough in terms of fitness for purpose? I would say calculation of the (clear sky, no clouds or aerosol) broadband IR flux to within 1-2 W m-2 (given perfect input variables, e.g. temperature profile, trace gas profiles, etc.). Also calculation of flux sensitivity to the range of CO2 and H2O variations of relevance, e.g. water vapor ranging from tropical to polar atmospheres, and doubling of CO2, within 1-2 W m-2.

    This is a very good topic for discussion, thank you for bringing it up.

  22. Michael,

    The isothermal cavity is basically an idealized thought experiment. In application to radiative transfer, it is not so much about heat transfer as it is about establishing the statistical population distribution of the molecular vibrational-rotational energy states under conditions of full thermodynamic equilibrium. When the absorption spectrum of a gas is measured in the laboratory, it is done under carefully controlled pressure and temperature conditions so that both the amount of gas and its thermodynamic state are accurately known.

    A similar gas parcel in the free atmosphere is said to be in local thermodynamic equilibrium (LTE) because conditions are not isothermal, there being typically a small temperature gradient. But the population of its energy states will be close enough to those under thermodynamic equilibrium conditions that the spectral absorption by the gas will be essentially what was measured in the laboratory. It is only at high altitudes (higher than 60 km) that molecular collisions become too infrequent to maintain LTE, then corrections have to be made for a different population of energy states under non-LTE conditions. Also, water vapor condensed into a cloud droplet, or CO2 in the form of dry ice, will have very different absorption characteristics compared to the gas under LTE conditions.

    The isothermal cavity along with Kirchhoff’s radiation law is a useful concept to demonstrate that emissivity must be equal to absorptivity, that only a totally absorbing (black) surface can emit at the full Planck function rate, and that the emissivity of a non-black surface will be one minus its albedo, in order to conserve energy.

    • Michael Larkin

      A Lacis,

      Thank you for this refinement of what you said earlier. It helps a lot in transitioning conceptually from “the ideal” to the real world. I really am most grateful for your help in improving my understanding.

  23. The stratosphere is predicted to cool with increased CO2 concentrations in the troposphere.

    Is this because less IR leaves the troposphere?

    I think this is wrong. Does anyone have a conceptual explanation for this?

    Thanks

    • CO2 will increase equally in the stratosphere, so it is a local effect there where it radiates heat more efficiently with more CO2. Heat there comes from ozone absorption of solar radiation, not surface convection.

    • Just to elaborate slightly on Jim D’s explanation, an increase in a particular greenhouse gas molecule such as CO2 will increase the ability of a given layer of the atmosphere to absorb infrared (IR) radiation – the layer’s “absorptivity” – and equally increase its ability to emit IR – its “emissivity”. If that type of molecule is the only factor operating, absorptivity and emissivity will increase commensurately, and the net effect turns out to be a slight warming. On the other hand, most of the absorptivity in the stratosphere derives from the ability of ozone to absorb solar radiation in the UV range, where CO2 does not absorb. This is responsible for most of the stratospheric heating, and so CO2 contributes little, because its absorption of IR from below is a lesser source of heating. In other words, additional CO2 does not increase stratospheric absorptivity substantially. On the other hand, most radiation from the stratosphere at the temperatures there is in IR wavelengths, where CO2 is a strong emitter. As a result, CO2 increases stratospheric emissivity more than absorptivity, with a resultant cooling effect.

  24. Miklos Zagoni

    Dr. Curry,

    May I address my question also to you: According to your data and calculations, how much is the annual global mean atmospheric longwave absorbed radiation?

    Thanks in advance

  25. I think we should welcome Miklos Zagoni to this website and I hope he gets a reply to his question from Andy Lacis and Judith Curry.

    Welcome Miklos, and thank you for joining this debate.

  26. “how much is the annual global mean atmospheric longwave absorbed radiation?” ? ?

    What exactly is this question really supposed to be asking? If we are talking about the outgoing LW radiation at the top of the atmosphere – all of that radiation is emitted radiation, some of it having been emitted by the ground surface, most of it having been emitted from some point in the atmosphere, depending on the wavelength and atmospheric temperature and absorber distribution.

    If we are talking about the LW radiation emitted by the ground surface – some of that radiation will escape directly to space without being absorbed by the atmosphere, depending on the atmospheric absorber distribution. Under cloudy sky conditions, virtually all of the radiation emitted by the ground surface will be absorbed by the atmosphere, unless the clouds are optically thin cirrus clouds.

    LW radiation gets emitted and absorbed multiple times within the atmosphere. That is what radiative transfer is all about. What is important for climate applications is calculating the heating/cooling that takes place at the ground and within each atmospheric layer, and the total energy that is radiated out to space – all required to keep a detailed energy balance/budget at the ground surface, in each atmospheric layer, and for the atmospheric column as a whole. One can in addition keep some spectral information in the process of doing the atmospheric radiative transfer, which might be useful for diagnostic comparisons with observational data.

    Otherwise, the question by Miklos Zagoni makes no sense.

  27. chriscolose:

    Dear Chris,

    Thank you, but you must be referring to another quantity. LW atmospheric absorption, according to KT97 (= the IPCC 2007 WG1 energy budget), is about 350 W/m2, while the updated distribution (TFK2009) gives 356 W/m2.

    My question is: are these values generally accepted here, amongst radiative transfer specialists?

    Thanks

  28. Dear Andy,

    We are talking about the greenhouse effect here (or, at least, we have it in mind in the background), so I think my question is about the quantification of the general (~global annual mean) effect of the presence of IR absorbers (=GHG’s) in the air…

    Thanks, Miklos

    • Miklos,
      I am sorry that I totally misunderstood what your question was about.

      With respect to the IPCC2007 KT97 figure with 350 W/m2 of atmospheric absorption versus 356 W/m2 in an updated TFK2009 version, I would say that both figures are there primarily for illustrative purposes, rather than presenting technical results.

      Note that the KT97 figure implies a planetary albedo of 0.313 = 107/342, as the ratio of reflected to incident solar energy, with 235 W/m2 as the corresponding LW outgoing flux. This figure also illustrates a somewhat stronger greenhouse effect of 390 – 235 = 155 W/m2. This compares to the often cited nominal greenhouse effect value of 390 – 240 = 150 W/m2, which corresponds to a planetary albedo of 0.3, with absorbed solar radiation of 240 W/m2. Both cases imply a global mean surface temperature of 288 K (390 W/m2).
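
      A quick check of the arithmetic in the preceding paragraph (a minimal sketch, using only the numbers quoted there):

      ```python
      SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

      def budget(incident, reflected, Ts):
          """Absorbed solar, surface LW emission, and greenhouse effect G = sigma*Ts^4 - OLR,
          assuming absorbed solar equals outgoing LW in global balance."""
          absorbed = incident - reflected          # = outgoing LW flux in equilibrium
          surface_lw = SIGMA * Ts**4
          return absorbed, surface_lw, surface_lw - absorbed

      print(budget(342.0, 107.0, 288.0))           # KT97-style: ~ (235, 390, 155) W/m^2
      print(budget(342.0, 0.30 * 342.0, 288.0))    # nominal albedo 0.30: ~ (239, 390, 150) W/m^2
      ```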

      In our recent Science paper using the GISS ModelE, we reported 152.6 W/m2 for the total atmospheric greenhouse effect. In the Schmidt et al. (2006) paper describing ModelE, three different model versions are shown with planetary albedos of 0.293, 0.296, and 0.297. These will produce slightly different outgoing LW fluxes. Observational evidence puts the likely planetary albedo of Earth between 0.29 and 0.31. This uncertainty exists because it is very difficult to make more precise measurements from satellite orbit with existing instrumentation.

      However, this uncertainty does not adversely affect climate modeling results and conclusions. But this is one reason why climate modeling studies are conducted by running a control run and an experiment run simultaneously, to subtract out biases and offsets that will be common to both runs.

      Similarly, these potential biases and uncertainties in planetary albedo will affect the values of LW fluxes. So, the absolute value of model fluxes may differ. Accordingly, it does not make sense to compare the “accuracy” of atmospheric fluxes in an absolute sense between different models, since the reasons for the differences may be complex and do not really have an impact on the conclusions drawn, as the effect of these differences will be largely subtracted out by differencing the experiment and control runs.

      Instead, the accuracy of atmospheric fluxes is better assessed by comparing model flux results with line-by-line calculations for the same atmospheric temperature-absorber structure.

      • David L. Hagen

        A. Lacis
        You note:

        All energy transports are properly being included in climate modeling calculations.

        Yet Kevin Trenberth notes 23%-69% discrepancy in energy budgets between observed and accounted for. See above.
        Is that level of discrepancy what is considered “properly” included?

        An increasing number of evaluations are finding climate models projecting temperatures substantially above observed global temperatures. e.g.
        McKitrick, Ross R., Stephen McIntyre and Chad Herman (2010) “Panel and Multivariate Methods for Tests of Trend Equivalence in Climate Data Sets”. Atmospheric Science Letters, DOI: 10.1002/asl.290.
        How are we to understand/explain these substantial divergences?
        Errors in modeling? In data? In statistics?

  29. I posted a comment on Dec 6 at 7.10 am. I have replied to my own second comment. Let me put it here at the end of the comments for emphasis.

    Let me put this a little more strongly. Radiative transfer models deal with a real atmosphere. According to the IPCC definition of radiative forcing, one deals with a hypothetical atmosphere. Therefore whatever Myhre et al in 1998 estimated, it was NOT radiative forcing. And the same applies to all the other estimates that have been done ever since. What the numbers are, I have no idea. All I know is that they are NOT radiative forcing.

  30. Radiative transfer is a standard consideration in design of microwave, millimeter wave, and infrared band systems. Fire control, missile, and communication systems’ performance hinges on the electromagnetic absorption along the path from transmitter to reflectors to receiver. Clear sky spectral absorption data from published and other sources has proven up to the task. Sometimes low absorption is needed, as in long range operations, and sometimes high absorption is sought, as when designing systems to operate concealed behind an atmospheric absorption curtain. Regardless, an accurate prediction is important.

    Curry writes, “The challenge for climate models is in correctly simulating the variations in atmospheric profiles of temperature, water vapor, ozone (and other variable trace gases), clouds and aerosols.” These confounding effects beyond a standard dry atmosphere were too much for military and communication system design and analysis. Perfection was hopeless, and more complicated situations utterly unpredictable.

    Curry then says, “And finally, for calculations of the direct radiative forcing associated with doubling CO2, atmospheric radiative transfer models are more than capable of addressing this issue”. Radiative forcing in a limited sense applies radiative transfer, but it is not the same. Radiative Forcing is the paradigm IPCC selected for its climate modeling, and staunchly defended against critics. In radiative transfer, a long path in the atmosphere, as in hundreds of kilometers horizontally, or from the surface to the top of the atmosphere, is modeled end-to-end by an absorption spectrum. It represents the entire path, one assumedly tolerant of the temperature lapse rate. It also avoids microscopic modeling of molecular absorption and radiation. IPCC models the atmosphere in multiple layers, each with its own peculiar temperature, and hence temperature lapse rate, with radiative forcing characteristics, and in some considerations, molecular physics.

    Radiative Forcing has severe limitations, some of which may be fatal to ever showing success at predicting climate. It doesn’t account for heat capacity, and therefore can provide no transient effects. A prime alternative to an RF model is a Heat Flow Model (a redundant term, though universally accepted — heat is already a flow). In a Heat Flow model, the environment is represented by nodes, each with its own heat capacity, and with flow resistance to every other node. A heat flow model can represent transient effects, and variable attenuation for cyclic phenomena, such as seasons, the solar cycle, and ocean currents.

    Radiative Forcing has no flow variable, no sinks, and no sources. A heat flow model does. Feedback is a sample of energy, displacement, mass, rate, or information from within a system that flows back to the driving signal to add to, or subtract from, it. Without a flow variable, the RF model must account for feedbacks without representing them. Consequently, IPCC redefined feedback, and produced a most confused explanation in TAR, Chapter 7. To IPCC, feedback loops are imaginary links between correlated variables. This is a severe restriction for the RF paradigm, especially because IPCC has yet to account for major feedbacks in the climate system, including the largest feedback in all of climate, the positive and negative feedback of cloud albedo, and the positive feedback of CO2 from the ocean that frustrates IPCC’s model for accumulating ACO2. It doesn’t have the carbon cycle or the hydrological cycle right. The RF model looks quite unrepairable.

    Curry talks about “doubling CO2″. This is an assumption by IPCC that greatly simplifies its modeling task, while simultaneously exalting the greenhouse effect. IPCC declares that infrared absorption is proportional to the logarithm of GHG concentration. It is not. A logarithm might be fit to the actual curve over a small region, but it is not valid for calculations much beyond that region like IPCC’s projections. The physics governing gas absorption is the Beer-Lambert Law, which IPCC never mentions nor uses. The Beer-Lambert Law provides saturation as the gas concentration increases. IPCC’s logarithmic relation never saturates, but quickly gets silly, going out of bounds as it begins its growth to infinity.

    Under the logarithm absorption model, an additional, constant amount of radiative forcing would occur for every doubling (or any other ratio) of CO2 or water vapor or any other GHG. Because the logarithm increases to infinity, the absorption never saturates. This is most beneficial to the scare tactic behind AGW. Secondly, the additional radiative forcing using the Beer-Lambert Law requires one to know where the atmosphere is on an absorption curve. This is an additional, major complexity IPCC doesn’t face.

    Judging from published spectral absorption data, CO2 appears to be in saturation in the atmosphere. These data are at the core of radiation transfer, and that the “doubling CO2″ error slipped through is surprising.

    • The big guns are riding into town.

      Welcome Dr Jeff Glassman!

      Let the serious debate begin. :)

    • David L. Hagen

      Jeff Glassman
      Thanks for the physics/chemistry perspective:
      “The Beer-Lambert Law provides saturation as the gas concentration increases. . . . the additional radiative forcing using the Beer-Lambert Law requires one to know where the atmosphere is on an absorption curve. . . . CO2 appears to be in saturation in the atmosphere.”
      The quantitative Line By Line Planck weighted Global Optical Depth calculations by Miskolczi (2010) show remarkably low sensitivity to CO2, and even lower combined variability to both CO2 and H2O given the available 61 year TIGR radiosonde data and NOAA. See Fig. 10 sections 3, 4. I would welcome your evaluation of Miskolczi’s method and results.

    • Jeff – The images from Channels 1-7 in my Tyndall gas effect post illustrate directly that CO2 is not in saturation in the atmosphere.

    • I recommend the replies by Lacis and Moolten to you below. I will talk about radiative fluxes here, since your concern appears to be a lack of a flow variable. In fact radiation schemes do computations over multiple atmospheric layers, as you say, and what they compute for each level are upwards and downwards radiative fluxes (W/m2). It is the convergence or divergence of these fluxes that results in radiative heating or cooling in a layer, which also depends on the heat capacity of that layer. So in fact fluxes are central to these schemes, and their impact on the atmosphere.
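
      As a sketch of that flux-divergence bookkeeping (the sign convention and the example numbers below are illustrative, not taken from any particular radiation scheme):

      ```python
      G = 9.81       # gravitational acceleration, m s^-2
      CP = 1004.0    # specific heat of dry air, J kg^-1 K^-1

      def layer_heating_rate(F_net_top, F_net_bottom, p_top, p_bottom):
          """Radiative heating rate from net (up minus down) flux convergence, in
          pressure coordinates:  dT/dt = (g / cp) * dF_net / dp   [K/s].
          Fluxes in W/m^2, pressures in Pa (p_bottom > p_top)."""
          return (G / CP) * (F_net_top - F_net_bottom) / (p_top - p_bottom)

      # Hypothetical layer: net upward flux grows by 5 W/m^2 across a 100 hPa layer
      rate = layer_heating_rate(F_net_top=205.0, F_net_bottom=200.0,
                                p_top=50000.0, p_bottom=60000.0)
      print(f"{rate * 86400:+.2f} K/day")   # negative here, i.e. the layer cools radiatively
      ```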

    • Jeffrey,
      Why do you say “Radiative Forcing” doesn’t account for heat capacity? There’s an energy equation which enforces energy balance in each cell, including that which comes and goes via radiative transfer, and the internal energy is calculated via specific heat.

      In your proposed “Heat Flow Model”, do you really have flow resistance to every other node? Even between nodes far apart? With then a dense matrix to solve? What about transmission through the atmospheric window, say?

    • @glassman: Judging from published spectral absorption data, CO2 appears to be in saturation in the atmosphere. These data are at the core of radiation transfer, and that the “doubling CO2″ error slipped through is surprising.

      When the same person posts ten paragraphs each eminently refutable, where do you begin? My theory, yet to be proved, is that the other nine paragraphs are best shot down by shooting down the tenth, which is the one quoted above.

      Since this paragraph is stated simply as a fact, that I know from the data to be blatantly false, let me simply ask Mr. Glassman to support his statement, which in the interests of decorum in this thread I’ve refrained from attaching any other epithet to than “false.”

  31. “Judging from published spectral absorption data, CO2 appears to be in saturation in the atmosphere. “

    Jeffrey – the misconception inherent in your comment dominated thinking about the role of CO2 until about 60 years ago, when geophysicists realized that the atmosphere could not be represented as a simple slab wherein a “saturating” concentration of CO2 precluded any further absorption and warming, but rather had a vertical structure. Within that structure, absorbed photon energy is subsequently re-emitted (up and down) until a level is reached permitting escape to space. For CO2, this is a high altitude in the center of the 15 um absorption band, but much lower as one moves into the wings, which are essentially unsaturable.

    This blog has a couple of informative threads on the greenhouse effect that address this phenomenon in some detail, and the links in the present thread are also valuable. I can see from your comment that you are well informed in some areas of energy transfer and radiation, but I suspect you have not had an opportunity to reconcile your knowledge with the principles of radiative transfer within the vertical profile of the atmosphere, and the resources I suggest may help. Others may be able to offer further suggestions.

  32. Jeff,

    I am sure that you will agree that Beer’s Law exponential absorption only applies to monochromatic radiation. When spectral absorption by atmospheric gases varies by many orders of magnitude at nearby wavelengths, you specifically have to take that fact into account. Line-by-line calculations do that. So do correlated k-distribution calculations (which is what is being used in many climate models). Calculating “greenhouse optical depths” averaged over the entire spectrum like Miskolczi does makes absolutely no sense at all.

    You should take the time to become better informed on how climate models handle energy transports – radiative, convective, advective, etc. There is no heat capacity, and there are no sinks, sources, flow variables, or feedbacks in radiative transfer calculations. It is only the temperature profile, surface temperature, atmospheric gas, aerosol, and cloud distributions (and their associated absorption and scattering parameters) that enter into radiative transfer calculations. Radiative energy transfer is incomparably faster than energy transported by the other physical processes. Radiative transfer calculations provide the instantaneous heating and cooling rates that the climate GCM takes into account as it models all of the hydrodynamic and thermodynamic energy transports in a time marching fashion. All energy transports are properly being included in climate modeling calculations.

    IPCC does not assume that infrared absorption is proportional to the logarithm of GHG concentration. Radiative transfer is being calculated properly for all atmospheric gases. The absorption behavior by some of the gases, for example CO2, happens to be close to logarithmic under current climate conditions, but the absorption for GHGs is nowhere close to being saturated except for the central portions of the strongest absorption lines. There are many more weak than strong lines.
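
    A toy numerical illustration of that last point, with an invented distribution of line strengths spanning several orders of magnitude (the numbers mean nothing physically; only the shape of the behaviour matters):

    ```python
    import numpy as np

    strengths = np.logspace(-4, 2, 200)    # invented line strengths, 6 orders of magnitude
    for u in np.logspace(-1, 3, 5):        # absorber amount u (arbitrary units)
        one_line = 1.0 - np.exp(-strengths[-1] * u)        # strongest line alone: saturates
        band_mean = np.mean(1.0 - np.exp(-strengths * u))  # average over all the lines
        print(f"u = {u:8.1f}   strongest-line A = {one_line:.3f}   band-mean A = {band_mean:.3f}")
    ```

    The monochromatic (single-line) absorption obeys Beer-Lambert and pins at 1, while the band-mean keeps climbing by a roughly constant increment per decade of absorber amount, i.e. approximately logarithmically, because successively weaker lines are still far from saturation.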

    Radiative forcings need to be understood for what they are, and what they aren’t. A radiative forcing is simply the flux difference between two different states of the atmosphere. It helps if the atmospheric state that is used as the reference is taken to be an atmosphere that is in radiative/convective equilibrium. The second atmospheric state may be the same atmosphere but with the CO2 profile amount doubled. A comparison of radiative fluxes between the two atmosphere states will show flux differences from the ground on up to the top of the atmosphere. The flux difference at the tropopause level is typically identified as the “instantaneous radiative forcing” (which for doubled CO2, happens to be about 4 W/m2). Since doubled CO2 decreased the outgoing LW flux to space, this is deemed to be a positive radiative forcing since the global surface temperature will need to increase to re-establish global energy balance. If no feedback effects were allowed, an increase of the global-mean surface temperature by about 1.2 C would re-establish global energy balance. In the presence of full atmospheric feedback effects, the global-mean surface temperature would need to increase by about 3 C before global energy balance was re-established. And by the way, the climate feedbacks are not prescribed, they are the result of the physical properties of water vapor, as they are modeled explicitly in a climate GCM.
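
    And a one-line check on the zero-feedback number quoted above, assuming the response can be estimated by linearizing Stefan-Boltzmann emission at an effective emission temperature of about 255 K (the canonical ~1.2 C comes from a fuller calculation, so this is only a ballpark):

    ```python
    SIGMA = 5.67e-8   # W m^-2 K^-4

    def no_feedback_warming(forcing_wm2, T_emit=255.0):
        """Linearized zero-feedback response: dT = dF / (4 * sigma * T_emit^3)."""
        return forcing_wm2 / (4.0 * SIGMA * T_emit**3)

    print(f"{no_feedback_warming(4.0):.2f} K")   # ~1.1 K for a 4 W/m^2 forcing
    ```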

    • David L. Hagen

      A. Lacis Re:

      Calculating “greenhouse optical depths” averaged over the entire spectrum like Miskolczi does, makes absolutely no sense at all.

      Please read and understand Miskolczi before again mischaracterizing him.
      Miskolczi actually does the detailed LBL calculations that you advocate:

      When you have spectral absorption by atmospheric gases vary by many orders of magnitude at nearby wavelengths, you specifically have to take that fact into account. Line-by-line calculations do that.

      Miskolczi also performs the LBL calculations with much finer spatial and frequency resolution, and in much greater detail than you have described in posts here. After he calculates the detail, he then performs a Planck-weighted global integration to evaluate the global optical depth. That now gives a manageable parameter to quantitatively track global absorption over time.

      Did I understand your objections, or what am I missing in what you so strongly disagree with/advocate?

  33. David,

    The detail of Miskolczi’s line-by-line results is not the issue. It is how he uses his line-by-line modeling results to come to the erroneous conclusions about the greenhouse effect and the global warming due to increasing CO2. That is where the problem is.

    Here is a simple test to see how really useful Miskolczi’s greenhouse method is. Can Miskolczi reproduce the well established 4 W/m2 radiative forcing for doubled CO2 with his methodology, and its equivalent 1.2 C global warming?

    • Years spent on case studies in the History of Science taught me that “well established” doesn’t provide a logical guarantee of being correct.

      That a community of scientists doesn’t see it as a possibility that it could be otherwise is the whole reason it is “well established”.

      Yet the wailing and the gnashing of teeth over Trenberth’s “missing heat” could be an indication that the atmospheric window may be wider open than has been previously thought. Even that it might also vary, a la the Iris theory of Lindzen.

      Anyone looking up at the ever varying cloudscape would conclude that variation might be the rule rather than the exception.

      • Yet the wailing and the gnashing of teeth over Trenberth’s “missing heat” could be an indication that the atmospheric window may be wider open than has been previously thought.

        This is an excellent point.

        Even that it might also vary, a la Iris theory of Lindzen.

        I like the way the Wikipedia article on the iris hypothesis says “However, there has been some relatively recent evidence potentially supporting the hypothesis” and then cites papers by Roy Spencer and Richard Lindzen.

        Very noble of the foxes to volunteer for sentry duty at the hen house.

      • Heh, given the presence of the gatekeepers’ guard dogs on Wikipedia’s global warming section I’m surprised the foxes managed to sneak it in. ;)

      • Right, I reckon Connolley either dozed off or neglected to put that article on his watch list. Even alarmists like me can’t get past Connolley and Arritt.

    • David L. Hagen

      Thanks for clarifying. As I understand your objection to Miskolczi, it is with the methodology and conclusions of his greenhouse planetary theory, not with the LBL evaluation of atmospheric profiles leading to the Planck-weighted Global Optical Depth.
      As I understand Miskolczi, his method of evaluating the Global Optical Depth can be applied to any atmospheric profile, including doubled CO2 etc., from which you can prescribe insolation and other parameters to evaluate conventional forcing methodology.

      See my detailed response to you above. Please see Miskolczi’s (2010) detailed evaluation of numerous sensitivities. Of particular interest is his evaluation of actual CO2 and H2O sensitivities derived from the available 61-year empirical TIGR radiosonde data.

      As to his overall model, how do you evaluate how well he has fit the actual optical absorption measurements to various atmospheric fluxes?

      Are his simplified correlations between those fluxes reasonable approximations to the actual ratios of those flux curves?

      How do you evaluate Bejan’s constructal law approach to modeling climate with thermodynamic optimization? See:
      Thermodynamic optimization of global circulation and climate
      INTERNATIONAL JOURNAL OF ENERGY RESEARCH
      Int. J. Energy Res. 2005; 29:303–316 DOI: 10.1002/er.1058

  34. Andy,

    As I can see, your approach to the greenhouse problem is through Ramanathan’s G (= Su – OLR), or g (= G/Su) greenhouse functions. Empirically, it gives you the 396-239 = 157 (g=0.4) all-sky and 396-264=132 W/m2 (g=1/3) clear-sky factors, with about 33 (and 27) K greenhouse temperatures.

    The question is how you get these numbers for G, or g (with OLR given) from the measured amounts and distributions of GHG’s and temperature. This is the task of radiative transfer calculations. The result will depend on the global average atmospheric absorbed LW, or on the surface transmitted (St, “Atmospheric Window”) radiation. According to the (monochromatic) Beer law, the global average frequency-integrated tau is a given (logarithmic) function of St/Su.

    As we all want to have exact numbers for the greenhouse effect, we must calculate precisely the global average infrared absorption, or the “window”. Having this, one can establish a theoretically sound G(tau), or, if you like, a G(St) function.

    That’s why approximate flux estimations are not acceptable, and that’s why I ask the radiative experts here to present their most accepted numbers for LW absorption, window and downward radiation.

    When we agree on the actual fluxes, we can step forward to the possible effects of future composition changes.
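
    For reference, the G and g bookkeeping used in this comment, as a minimal sketch (the flux values are the ones quoted above):

    ```python
    def greenhouse_functions(Su, OLR):
        """Ramanathan-style greenhouse functions: G = Su - OLR and g = G / Su."""
        G = Su - OLR
        return G, G / Su

    print(greenhouse_functions(396.0, 239.0))   # all-sky:   G ~ 157 W/m^2, g ~ 0.40
    print(greenhouse_functions(396.0, 264.0))   # clear-sky: G ~ 132 W/m^2, g ~ 1/3
    ```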

  35. A. Lacis wrote: “The line-by-line codes require significant computer resources to operate, but they are the ones that give the most precise radiative transfer results.”

    This sounds very reassuring. A few questions are, however, in order. From what I learned, if the atmosphere were hypothetically isothermal, what would be the “radiative forcing” from CO2 doubling? Zero, right?

    Now, if the atmosphere went with the same lapse rate for all 45 km up, what would be the radiative forcing from 2xCO2? Probably a lot; M. Allen says 80-98 W/m2 :-) Or 50 W/m2.

    Further, if a certain strongly-absorbing band emits from the stratosphere, wouldn’t the forcing be negative?

    What we see is that the forcing from CO2 doubling can be anything from negative to about 50 W/m2, which fundamentally depends on the shape of the vertical temperature profile. Therefore, the whole reassuring precision of RT codes comes down to how accurately we know (or represent) the “global temperature profile”. So, my question is: how accurately do you know it?

    Another question: is the resulting OLR linear with respect to T(z)?

    There are more questions…
    Cheers,
    – Al Tekhasski

  36. A. Lacis wrote: “… reproduce the well established 4 W/m2 radiative forcing for doubled CO2″

    That’s another good question. Do you mean it is well established by Myhre et al. (1998), where “three vertical profiles (a tropical profile and northern and southern hemisphere extratropical profiles) can represent global calculations sufficiently”?

  37. I gave up on Miskolczi when I saw him using the factor of 2 for the ratio of average potential energy to average kinetic energy of the molecules of a planetary atmosphere. That’s valid for gravitationally bound bodies that collide very infrequently. For air molecules that assumption is so far from true as to be a joke. The mean free path of air molecules is around 70 microns near the Earth’s surface, and with that value the ratio is not 2 but 0.4.

    Miskolczi is simply pulling the wool over people’s eyes by writing impressive-sounding rubbish.

  38. You accuse Ferenc Miskolczi of deliberately trying to deceive us by “pulling the wool over people’s eyes”. That’s a very serious charge to level against a theoretical physicist. I hope you have good strong evidence, or you are going to look petty and vindictive in the eyes of many.

    If you think the ratio is 2 then you’re the one looking stupid.

    If I complained about your use of F = 3ma for Newton’s second law would you call me vindictive? You’re out to lunch, guy.

  39. I don’t know, which is why you see the question marks in my comment. This blog is a wonderful opportunity for nonviolent interaction between scientists on both sides of the debate. Why not ask Miklos Zagoni, who is intimately acquainted with Miskolczi’s work, if he can help explain the apparent problem, rather than jumping in with both feet making unwarranted accusations?

  40. Why not ask Miklos Zagoni, who is intimately acquainted with Miskolczi’s work, if he can help explain the apparent problem, rather than jumping in with both feet making unwarranted accusations?

    What’s your basis for “unwarranted?” I watched Zagoni’s video a couple of months ago. He was simply parroting Miskolczi. You seem to be doing the same. Stop parroting and start thinking. This is junk physics.

  41. Oh dear Vaughan.
    I’m not “parroting” anyone. As I pointed out above, I ask questions and think about replies. I recommended you ask Miklos how Ferenc arrived at the ratio of energy you had an issue with. But instead, you seem content to make unsupported assertions about the quality of his work. You fear the consequences of his theory, so you attack details without exploring how the whole fits together.

    Ah well. Be happy with whatever you believe.

    • I recommended you ask Miklos how Ferenc arrived at the ratio of energy you had an issue with. But instead, you seem content to make unsupported assertions about the quality of his work.

      There’s nothing “unsupported” about it; see e.g. this paper, which should confirm what I said (and there are even shorter proofs, applicable moreover to more general situations than the one considered by Toth).

      Putting morons on pedestals like this only makes you a moron. Miskolczi and Zagoni are heroes only to climate deniers.

      • I’d be interested in a discussion of the virial theorem component to this. I’ve encountered another (unpublished) paper that addresses the virial theorem in the context of the earth’s atmosphere that I found intriguing. Don’t ask me to defend this (I’m not up on this at all), but I would be interested in a discussion on the relevance of the virial theorem.

      • David L. Hagen

        Judy
        On the virial theorem relative to Miskolczi & planetary greenhouse theories, see:
        Viktor T. Toth, The virial theorem and planetary atmospheres
        arXiv:1002.2980v2 [physics.gen-ph] 6 Mar 2010
        http://arxiv.org/PS_cache/arxiv/pdf/1002/1002.2980v2.pdf

        He derives the atmospheric virial theorem for diatomic molecules in a homogeneous gravitational field, valid for varying temperature, in which the ratio of potential energy U to kinetic energy K equals the gas constant R divided by the product of the specific heat capacity cV and the molar mass Mn:
        U/K = R / (cV · Mn)  (his Equation 34).

        Hence we were able to demonstrate without having to invoke concepts such as “hard core” potentials or intramolecular forces that the virial theorem is indeed applicable to the case of an atmosphere in hydrostatic equilibrium. However, it must be “handled with care”: the nature of the atmosphere and the fact that the horizontal (translational) and internal (rotational) degrees of freedom of the gas molecules are unrelated to the gravitational potential cannot be ignored.
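
        As a quick numerical check of Toth’s Eq. (34) — using standard dry-air values that I am supplying here, not values taken from his paper:

            R_gas = 8.314   # J mol-1 K-1, universal gas constant
            cV    = 718.0   # J kg-1 K-1, dry air at constant volume (assumed)
            Mn    = 0.029   # kg mol-1, mean molar mass of dry air (assumed)

            print(R_gas / (cV * Mn))   # ~0.40, i.e. U (potential) is about 2/5 of K (kinetic)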

        See also:
        Pacheco, A. F.; Sañudo, J., The virial theorem and the atmosphere, Geophysics and Space Physics (ISSN 1124-1896), 2003, vol. 26, no. 3, pp. 311–316.

        In our atmosphere, most of the energy resides as internal energy, U, and gravitational energy P, and the proportionality U/P = cv/R = 5/2 is maintained in an air column provided there is hydrostatic equilibrium. In this paper we show that this result is a consequence of the virial theorem.

        The most detailed extension of the virial theorem to a full thermodynamic model of a planetary atmosphere column with gas absorption that I know of is by:

        Robert H. Essenhigh, Prediction of the Standard Atmosphere Profiles of Temperature, Pressure, and Density with Height for the Lower Atmosphere by Solution of the (S−S) Integral Equations of Transfer and Evaluation of the Potential for Profile Perturbation by Combustion Emissions, Energy Fuels, 2006, 20 (3), 1057-1067 • DOI: 10.1021/ef050276y
        http://pubs.acs.org/doi/abs/10.1021/ef050276y

        These results provide a platform for future numerical determination of the influence on the T, P, and F profiles of perturbations in the gas concentrations of the two primary species, carbon dioxide and water, and it provides, specifically, the analytical basis needed for future analysis of the impact potential from increases in atmospheric carbon dioxide concentration, because of fossil fuel combustion, in relation to climate change.

        Essenhigh also addresses the water and CO2 absorption bands.

        Miskolczi (2008) applied the classical virial theorem:

        Applying the virial theorem to the radiative balance equation we present a coherent picture of the planetary greenhouse effect. . . .
        (g) — The atmosphere is a gravitationally bounded system and constrained by the virial theorem: the total kinetic energy of the system must be half of the total gravitational potential energy.

        (Part of the difficulty some readers have with Miskolczi 2008 is his use of astronomical language etc. from other applications of the virial theorem. His statement that “the radiation pressure of the thermalized photons is the real cause of the greenhouse effect” got bloggers off onto the force of photons and satellites, rather than recognizing Miskolczi’s effort to explain atmospheric pressure by application of the virial theorem, together with the absorption of solar radiation, to surface temperature. We need to check whether there is a small difference in the virial coefficient, depending on the gas, between Miskolczi, Toth, Pacheco, and Essenhigh.)

      • David L. Hagen

        Victor Toth’s paper has been published:
        The virial theorem and planetary atmospheres
        Időjárás – Quarterly Journal of the Hungarian Meteorological Service (HMS), Vol. 114, No. 3, pp. 229-234
        http://www.met.hu/download.php?id=2&vol=114&no=3&a=6

      • David L. Hagen

        For virial connoisseurs see:
        Lambert M. Surhone, Miriam T. Timpledon, Susan F. Marseken, Virial Theorem,
        206 pages, Betascript Publishing (August 4, 2010) ISBN-10: 6131111472; ISBN-13: 978-6131111471

        In mechanics, the virial theorem provides a general equation relating the average over time of the total kinetic energy of a bound system to that of its total potential energy. The significance of the virial theorem is that it allows the average total kinetic energy to be calculated even for very complicated systems that defy an exact solution, such as those considered in statistical mechanics; this average total kinetic energy is related to the temperature of the system by the equipartition theorem. However, the virial theorem does not depend on the notion of temperature and holds even for systems that are not in thermal equilibrium. The virial theorem has been generalized in various ways, most notably to a tensor form.

        It would help if some reader could check this out and review what it has to say on the application to a planetary atmosphere with diatomic and multiatomic gases, per the issues on Toth, Essenhigh & Miskolczi.

        For history buffs:
        Henry T Eddy, An extension of the theorem of the virial and its application to the kinetic theory of gases, 1883

        See a typical lecture on the virial theorem:
        http://burro.cwru.edu/Academics/Astr221/LifeCycle/jeans.html

      • David L. Hagen

        For the astronomic applications of the virial theorem, see:
        James Lequeux, The Interstellar Medium
        ISBN 3-540-21326-0 Springer Berlin Heidelberg NewYork
        http://astronomy.nju.edu.cn/~ygchen/courses/ISM/Lequeux2003.pdf

        14.1.1 A Simple Form of the Virial Theorem
        with No Magnetic Field nor External Pressure p 330 – p 323
        Equations (14.1) to (14.11)

        14.1.3 The General Form of the Virial Theorem
        (This includes bulk velocity, density, pressure and gravitational potential, as pertinent to a planetary atmosphere, as well as the magnetic field, which may not be significant for planets.)

        14.1.4 The Stability of the Virial Equilibrium
        (Note the use of a polytropic equation of state.)

        14.1.5 The Density Distribution in a Spherical Cloud
        at Equilibrium
        (Adapt for a gas around a planet with a given radius and mass.)

      • @curryja: I’d be interested in a discussion of the virial theorem component to this. I’ve encountered another (unpublished) paper that addresses the virial theorem in context of the earth’s atmosphere that i found intriguing. Don’t ask me to defend this (I’m not up on this at all), but would be interested in a discussion on the relevance of the virial theorem.

        Would it be interesting enough to start up a separate thread on the virial theorem on your blog? Although I’ve been reluctant to be a guest on any other topics, that’s because the ratio of my time required for a guest post, divided by the expertise of others on that topic, has not been large enough so far.

        In the case of the virial theorem, from what I’ve read so far in the literature my impression is that no one alive on the planet really understands it. A guest post in which I pretend to explain it might be the most effective way of pulling the real experts on the virial theorem out of the woodwork, if there are any. Viktor Toth could do this ok, but I imagine I could do it at least as well.

        I would be delighted to be let off that hook if Viktor volunteered for that duty (I love nothing more than being let off hooks).

      • YES!!! Please send me an email, let’s start a thread on the virial theorem. I will send you an email also.

      • David L. Hagen

        Judy, Nick Stokes, & Vaughan Pratt
        Regarding the virial coefficient for the atmosphere
        (2 vs 3/2 vs 5/2 for Kinetic Energy/Potential Energy) See the following stating another coefficient of 6/5 for hydrogen as a diatomic gas:
        “The virial theorem which applies to a self-gravitating gas sphere in hydrostatic equilibrium, relates the thermal energy of a planet (or star) to its gravitational energy as follows:
        alpha * Ei + Eg = 0
        with alpha = 2 for a monoatomic ideal gas or a fully non-relativistic degenerate gas, and alpha = 6/5 for an ideal diatomic gas. Contributions arising from interactions between particles yield corrections to the ideal EOS (see Guillot 2005). The case of alpha = 6/5 applies to the molecular hydrogen outer region of a giant planet.”
        Jonathan J. Fortney et al., Giant Planet Interior Structure and Thermal Evolution, invited chapter, in press for the Arizona Space Science Series book “Exoplanets”, ed. S. Seager
        arXiv:0911.3154v1
        http://arxiv.org/PS_cache/arxiv/pdf/0911/0911.3154v1.pdf

        Ref Guillot 2005 Annual Review of Earth and Planetary Sciences, 33, 493

        How did they calculate their 6/5 for a diatomic gas? What steps and assumptions did they use?

      • David,
        The first thing to note there is that the factor 2 carries the opposite sign. That is significant, because the gravitational energy referred to is the energy relative to infinity, not ground.

        The factor 6/5 arises through the same logic as in Toth’s paper. Monatomic gases have just translational KE, with 3 degrees of freedom. Diatomic gases have two extra degrees of freedom of rotational KE, making 5. The ratio of PE to translational KE is still −2, but with equipartitioning there is thus 5/3 times as much KE in total, and the ratio is −2 × 3/5 = −6/5.
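
        A minimal sketch of that degrees-of-freedom bookkeeping (my own illustration of the argument above, nothing more):

            def alpha(dof):
                """|Eg|/Ei for an ideal gas whose thermal energy has `dof` degrees of freedom."""
                return 2.0 * 3.0 / dof   # PE is -2 times the *translational* KE (3 of the dof)

            def ground_referenced_ratio(dof):
                """PE/KE with PE measured from the ground, as in Toth's convention."""
                return 2.0 / dof

            print(alpha(3), alpha(5))                                       # 2.0 and 1.2 (= 6/5)
            print(ground_referenced_ratio(3), ground_referenced_ratio(5))   # 0.667 and 0.4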

      • Ferenc Miskolczi

        Poor Pratt, it seems you do not know much about the virial concept. For the global average TIGR2 atmosphere the P and K totals (sums over about 2000 layers) are P = 75.4×10^7 J/m2 and K = 37.6×10^7 J/m2; the ratio is close to two (2.00). You may compute it yourself if you are able to, or you may ask Viktor Toth about the outcome of our extended discussions on the topic (after his paper).

      • Welcome Ferenc – I very much look forward to Mr Pratt engaging you directly on the substance of his remarks.

      • Um, I’ve just re-read this. Vaughan, I apologise if I used the wrong term to address you.

      • Welcome Ferenc – I very much look forward to Mr Pratt engaging you directly on the substance of his remarks.

        Um, I’ve just re-read this. Vaughan, I apologise if I used the wrong term to address you.

        Not a problem, at Harvard they would call me “Mr Pratt,” at MIT and Stanford “Vaughan.” So your first address would be fine for Harvard, while on your second you’ve inadvertently used the correct form of address for the only two institutions I’ve taught at for more than a decade each.

        However I recently took a CPR course as part of the autonomous vehicle project I’m the Stanford faculty member on (Stanford has liability worries about the car lifts and heavy machinery we use), so feel free to call me “Dr Pratt” in case you need the Heimlich maneuver or cardiopulmonary resuscitation. (Both my parents were medical doctors. If you’re lucky that’s hereditary, if not then the Good Samaritan law kicks in to render me innocent of your premature demise, so either way I’m safe even if you aren’t.)

      • Ferenc, welcome to the blog. Please could you answer a question for me?
        Do you think that the convergence of the results on your stable value for Tau confirms the validity of the empirical radiosonde data? If your theory is correct, would it enable you to correct or assign error bars to the empirical data?

        Thanks

      • Ferenc, thank you very much for stopping by to discuss your work.

      • My goodness Judith, you are attracting some top people here.

        Long may it continue.

      • seems you do not know much about the virial concept.

        Clearly one of us doesn’t, since we get such wildly different results.

        For the global average TIGR2 atmosphere the P and K totals (sums over about 2000 layers) are P = 75.4×10^7 J/m2 and K = 37.6×10^7 J/m2; the ratio is close to two (2.00)

        Neglecting mountains (which increases K very slightly by postulating atmosphere in place of the mountains), your value for P is easily computed from the center of mass of a column of atmosphere, which on any planet is at the scale height of that planet’s atmosphere, suitably averaged as a function of temperature. If the centroid is at height h then P for that column is mgh where m = 10130 kg/m2, g = 9.81 m/s2, and h starts at 8500 m at sea level and drops to 7000 m or less near the tropopause depending on the temperature. A good average figure for h is around 7600 m, and so P = mgh = 10130 × 9.81 × 7600 = 755.3 megajoules per square meter. This is essentially what you got, so we’re in excellent agreement there.

        Now if Toth and I are getting 0.4 to your 2 then your figure for K must be about 20% of what we’d imagine it to be. Now the specific heat of air at constant pressure is 0.718 kJ/K/kg. Our column has mass m = 10130 kg/m2 as noted above so we have .718*10130 = 7.27 MJ/K/m2. In order to reach your figure of 376 MJ/m2 you would need the temperature of the atmosphere (suitably averaged) to be 376/7.27 = 51.7 K (°K, not Kinetic energy of course).

        I don’t know how you calculated the KE of the Earth’s atmosphere, but at that temperature every constituent of it would be solid. Check your math. I would be more comfortable (literally!) with a KE of 1.885 GJ/m^2 corresponding to a typical atmospheric temperature of 250 K and then you’d get the 0.4 that Toth and I got.
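
        (For anyone who wants to redo this arithmetic, here is the whole back-of-envelope calculation in a few lines of Python, using the same illustrative numbers as above, per square metre of surface:)

            m_col = 10130.0  # kg/m2, mass of an atmospheric column
            g     = 9.81     # m/s2
            h     = 7600.0   # m, assumed height of the column's centre of mass
            cv    = 718.0    # J kg-1 K-1, specific heat of air at constant volume
            T_avg = 250.0    # K, "typical" atmospheric temperature used above

            PE = m_col * g * h        # ~7.55e8 J/m2, the P figure quoted above
            KE = cv * m_col * T_avg   # ~1.8e9 J/m2
            print(PE / 1e6, KE / 1e9, PE / KE)   # ~755 MJ/m2, ~1.8 GJ/m2, ratio ~0.4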

        The virial theorem’s ratio of 2 is exact for any collision-free gravitationally bound system of point particles. When collisions are frequent, as in the atmosphere where the typical mean free path at sea level is 70 microns, or when the particles are large, as with a satellite orbiting Earth, the ratio changes significantly. With frequent collisions, air molecules quickly lose track of whatever orbit they were briefly on and their dynamics is completely different from that of a solitary air molecule orbiting an otherwise airless planet. And with a big particle like Earth one must define potential energy with respect to the Earth’s surface, otherwise particles in a ground energy state have absurd PEs, but in that case the KE of a satellite in orbit is a great many times its PE (imagine it in an orbit 1 meter above the surface of an airless spherical planet).

        You now have me wondering what you think the virial theorem means.

      • David L. Hagen

        Vaughan Pratt
        Kindly do us all a favor by reading the actual papers detailing the virial theorem by Toth etc. in the references cited.
        Please apply the Virial Theorem as derived by Toth.

      • No, I think you should read Toth’s paper. Vaughan’s arithmetic agrees with it, and not with Ferenc. The point of Toth’s paper is that the ratio PE/KE, where KE includes rotational energy, is 0.4, not 2 as Ferenc claims. If you restricted to translational KE, the ratio would be 2/3 (still not 2).

        The paper you quote by Pacheco gives the same ratio as Toth. I’m surprised that you quote these results without noticing that they contradict FM’s claim.

        You might also like to note Toth’s tactful aside
        “whether or not the postulate was correctly applied in [1] is a question beyond the scope of the present paper”.
        Indeed the biggest mystery is what the PE/KE ratio has to do with IR fluxes. This has never been explained.

      • Indeed the biggest mystery is what the PE/KE ratio has to do with IR fluxes. This has never been explained.

        I’d formed the impression that FM was trying to gradually back away from that mystery. His problem is how to do so in a no-fault way. He’s not handling this very well on this blog.

      • Stop ascribing motive and insinuating unscientific behaviour!

        We’ve seen more than enough of it over the last 20 years. Pack it in!

      • Stop ascribing motive and insinuating unscientific behaviour!

        I was ascribing motive to FM? All I said was that he was trying to back away from his claimed applicability of the virial theorem, which even Zagoni has not been able to apply.

        We’ve seen more than enough of it over the last 20 years. Pack it in!

        What happened 20 years ago? I can only think of the George C. Marshall Institute starting up. Did you have something else in mind?

      • David L. Hagen

        Nick Stokes & Vaughan Pratt
        Thanks Nick for clarifying the issue. Mea culpa. I was reacting to language, not checking the substance.

        In my post above giving references in response to Curry’s query on the virial theorem, I noted:
        “Need to check if there is a small difference in the virial coefficient versus the gas between Miskolczi, Toth, Pacheco, and Essenhigh.”

        Thanks for raising the issue of the PE/KE coefficient: “The point of Toth’s paper is that the ratio PE/KE, where KE includes rotational energy, is 0.4, not 2 as Ferenc claims.”

        Re: “biggest mystery is what the PE/KE ratio has to do with IR fluxes. ”
        I assume that may affect the atmospheric profile of pressure, temperature, density and composition. See Essenhigh above where he shows differences between relative pressure and relative density with elevation.

        A full analysis to <0.01% variation would need to account for variations of heat capacity with temperature, with corresponding variations in composition, pressure, temperature, and gravity with elevation.

      • David L. Hagen

        Nick Stokes & Vaughn Pratt
        See my note above regarding a coefficient of 6/5 stated for diatomic hydrogen. http://judithcurry.com/2010/12/05/confidence-in-radiative-transfer-models/#comment-20326

      • Now the specific heat of air at constant pressure is 0.718 kJ/K/kg.

        Sorry, I meant constant volume (the number I gave is correct). Constant volume measures just kinetic energy; constant pressure measures kinetic and potential energy. This is because work is done in the constant pressure case, which ends up as potential energy, whereas in the constant volume case all of the added heat stays in the system as kinetic energy. In the case of the atmosphere the work done at constant pressure is used to raise the air above, which then becomes potential energy mgh again.

      • Christopher Game

        Christopher Game posting about the virial theorem. It seems I am a bit late to post the following, but here it is.

        We are here interested in Miskolczi’s ratio between the time averages of potential energy and of kinetic energy, namely
        2 ⟨kinetic energy⟩ = ⟨potential energy⟩ .

        The principle of equipartition of energy can be stated
        The average kinetic energy to be associated with each degree of freedom for a system in thermodynamic equilibrium is kT / 2 per molecule.

        The virial theorem of Clausius (1870, “On a mechanical theorem applicable to heat”, translated in Phil. Mag. Series 4, volume 40: 122-127) states on page 124 that
        The mean vis viva of the system is equal to its virial.

        The virial theorem is about a spatially bounded system, that is to say, a system for which all particles of matter will stay forever within some specified finite region of space.

        Clausius allows a diversity of definitions of kinetic energy. For him, it was allowable to define a kinetic energy for any specified set of degrees of freedom of the system. We are here acutely aware that various writers use the permitted diversity of definitions of kinetic energy, and get a diversity of statements as a result.

        The virial theorem of Clausius makes no mention of potential energy. Potential energy is about forces. Under certain conditions, the virial of Clausius turns out to be very simply related to a potential energy.

        Clausius (1870) makes it clear that the terms of his proof may refer to all or to selected degrees of freedom of the system, as defined by Cartesian coordinates.

        Because of its generality, the virial theorem can relate to a theoretical atmosphere of fixed constitution sitting on the surface of a planet, and how this is so is indicated in the original paper of Clausius (1870).

        Remembering to be careful about appropriately specifying the degree of freedom of interest, and the potential energy of interest, the reader of this blog will find that Miskolczi’s formula for the atmosphere,
        2 ⟨kinetic energy⟩ = ⟨potential energy⟩ ,
        is correctly derivable from the virial theorem of Clausius. Much of the physical content of the formula can be seen in a simple model, to be found in various books, papers, and on the internet, as follows.

        An elastic ball of mass m is dropped from rest at an altitude h, with g being constant over the altitude up to h (near enough). It will bounce on a perfectly elastic floor at altitude 0 at time T.
        The ball’s velocity (positive upwards) at time t in [0,T] is – gt.
        Its kinetic energy at time t is mg^2t^2 / 2 .
        Its average kinetic energy over the time [0,T], the time it takes to fall from altitude h to altitude 0, is
        ⟨KE⟩ = Integral(0,T) (m g^2 t^2 / 2) dt / T
        = m g^2 T^2 / 6 .
        We recall that T = √(2h / g), so T^2 = 2h / g.
        Thus the average kinetic energy over time [0,T] is
        ⟨KE⟩ = m g^2 (2h / g) / 6 = mgh / 3 .
        Referred to altitude zero for the zero of potential energy,
        the ball’s potential energy at time t is mgh − m g^2 t^2 / 2 .
        Its average potential energy over time [0,T] is
        ⟨PE⟩ = mgh − Integral(0,T) (m g^2 t^2 / 2) dt / T
        = mgh − mgh / 3 = 2 mgh / 3 .
        Thus we have 2 ⟨KE⟩ = ⟨PE⟩ .
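
        (A reader who prefers to check this numerically can do so in a few lines; the mass, drop height and time grid below are arbitrary choices of mine:)

            import numpy as np

            m, g, h = 1.0, 9.81, 100.0      # arbitrary illustrative values
            T = np.sqrt(2 * h / g)          # time to fall from altitude h to the floor
            t = np.linspace(0.0, T, 100001)

            KE = 0.5 * m * (g * t) ** 2     # kinetic energy vs time
            PE = m * g * h - KE             # potential energy, zero at the floor

            print(np.mean(PE) / np.mean(KE))  # -> 2.0, i.e. <PE> = 2 <KE>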

      • Christopher Game

        Christopher Game about the virial theorem again. Typographical problem. Please excuse.

        I didn’t use the angle brackets safely, and the formula was lost.
        I meant to write

        2 average( kinetic energy) = average(potential energy)

        I think it will be obvious how to put in the proper terms.

        Christopher Game

      • Christopher,
        Toth, in the paper cited frequently here, deals with exactly this example. But as he says, in a real gas, there are three degrees of translational motion, and the KE is equipartitioned. Your argument still works for the vertical component, but now the total KE is three times larger, and the ratio is 2:3, not 2:1.

        He goes on to argue that for diatomic molecules, rotational KE should be counted, so the ratio is 2:5.

      • Christopher Game

        Christopher Game replying to Nick Stokes’ post of December 8, 2010 at 7:34 am, about the virial theorem.

        Dear Nick,

        Thank you for your comment.

        With all due respect, I am referring to the virial theorem of Clausius.

        Clausius is quite clear, and one can check his proof, that for the purposes of his definition of vis viva, one can work with one degree of freedom at a time, in Cartesian coordinates. What we are here calling ‘the potential energy’ is dealt with by Clausius in terms of components of force, and that is to be matched against the degree(s) of freedom chosen for the vis viva. I think the proof used by Clausius is appropriate for the present problem. Clausius does not require the use of the total kinetic energy.

        For his version of “the virial theorem”, Victor Toth cites, not Clausius, but Landau and Lifshitz. They give only a very brief account of the theorem and their proof is less general than the one that Clausius offered.

        As I noted in my previous post, Clausius allows diverse choices for the definition of the vis viva, appropriate to the problem. And various choices of definition lead to various results. I think the choice made by Miskolczi, though different from the one you are considering in your comment, is appropriate for the present problem. Your choice might perhaps be relevant to a different problem.

        Here we are interested only in the gravitational degree of freedom, and the appropriate component of vis viva has also just one degree of freedom. As allowed by Clausius, we are not interested in the other degrees of freedom. The vertically falling bouncing ball really does tell the main physics here.

        I think you will agree with me about this when you check the method used by Clausius.

        Yours sincerely,

        Christopher Game

      • Christopher,

        Before starting to discuss which form is appropriate “to this problem”, somebody should give a reason why some form of the virial theorem is appropriate at all. As far as I can see nobody has ever presented any reason for that, including Miskolczi.

      • David L. Hagen

        To evaluate the radiation fluxes, we need to know the atmospheric profile of temperature, pressure and composition. The virial theorem gives a basis for modeling temperature and pressure vs elevation with gravity. Anyone have a better explanation?

        No, but I don’t like that one. You’d have to explain how “The virial theorem gives a basis for modeling temperature and pressure vs elevation with gravity”. I can’t see it. And as for “the atmospheric profile of temperature, pressure and composition”, that’s what this radiosonde database is supposed to tell you.

        But the 2007 FM paper just plucks numbers out of the virial theory and puts them into IR flux equations. There’s nothing about the results being mediated through atmospheric variables. But there is no other explanation either.

      • David is correct about the relevance of the virial theorem. Here’s the explanation he asked for.

        In the search for Trenberth’s missing heat, one wants to know how much of the total energy absorbed by the planet goes into heating the ocean, the land, and the atmosphere.

        For the atmosphere, if you assume that all the heating energy is converted into kinetic energy of the molecules, which are moving at some 450 m/sec, and calculate this energy from the elevation in temperature of the atmosphere, you will get an answer that is only 5/7 of the correct answer. This is because you neglected the increase in potential energy resulting from raising the center of mass of the atmosphere when it expands due to the warming.

        Since this is a significant error, one might wonder why everyone neglects this increase in potential energy. The answer is that they don’t, it is simply hidden in the difference between the specific heat capacity of air at constant volume, vs. that at constant pressure. The first line of this table gives the former as 20.7643 joules per mole per degree K, and the latter as 29.07. Notice that 29.07*5/7 = 20.7643 in the table. (It is most likely that the latter was simply computed in this way from the former; one would hardly expect agreement with theory to six decimal places from observation, particularly since the former is given only to four places, and the composition of air is more variable than that of any particular constituent such as nitrogen or argon.)

        Heating any region of the atmosphere, whether a cubic meter or a cubic kilometer, is done at constant pressure because the pressure is determined by the mass of air above the region, which is unchanged by heating, whether or not the heating is applied to the whole atmosphere or just the region in question. Hence the applicable heat capacity is the one for constant pressure.

        But this choice automatically factors in the elevation in altitude of the air, since a gas heated at constant pressure must increase its volume. The work done in moving the boundary against the equal and opposite pressure outside the heated region is in this case converted to the potential energy of the raised atmosphere.

        So a virial theorem is at work here, namely the one in Toth’s paper that gives the ratio of PE to KE as 2/5. But this is ordinarily buried in the distinction between constant pressure and constant volume, and so passes unnoticed. Toth makes this connection at the end of his paper.
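
        (The arithmetic hidden in that cp/cv distinction is short enough to spell out; the two molar heat capacities are the ones quoted above:)

            cv = 20.7643   # J mol-1 K-1, constant volume: kinetic energy only
            cp = 29.07     # J mol-1 K-1, constant pressure: kinetic plus potential

            print((cp - cv) / cv)   # ~0.4: for each joule of KE, ~0.4 J goes into raising the air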

        But even if one were to neglect the potential energy and only account for the increase in kinetic energy of the atmosphere, the heat capacity of the atmosphere is equivalent to only 3 meters of ocean, whence the error from omitting PE is tiny and cannot possibly make a dent in the amount of missing heat.

        I know of no other relevance of Toth’s virial theorem to the climate. In particular I cannot imagine any role for it in Miskolczi’s claimed effect that CO2-induced warming is offset by water-vapor-induced cooling.

      • Christopher Game

        Dear Pekka Pirilä,
        I am addressing the problem of calculating the ratio of kinetic to potential energy.
        Yours sincerely, Christopher

      • Hi Christopher,

        It has been a while. It seems to me that according to some of these analyses (Toth’s included), a 5 kg parcel moving along a certain trajectory at a given velocity will have a different kinetic energy than a cannon ball of the same mass, trajectory and temperature.

        Of course it’s harder to convince oneself that the kinetic energy in lattice vibrations of the cannon ball is in any way relevant to the kinetic energy that we are interested in.

      • @CG: Of course it’s harder to convince oneself that the kinetic energy in lattice vibrations of the cannon ball is in any way relevant to the kinetic energy that we are interested in.

        The joules coming from the Sun wind up in the cannon ball. However the “cannon ball” is really an N-atom molecule. Each atom has 3 DOF (degrees of freedom), whence the molecule in principle has 3N DOF. However quantum mechanics freezes out certain DOFs whose excitation energy is too high, so for example the diatomic O2, with 6 DOFs in principle, only has 5 DOFs below around 500 K.

        The relevance of the non-translational DOFs is that n watts of heat from the Sun distributes itself equally between all non-forbidden DOFs, whence the specific heat capacity of any given gas allows it to absorb more watts than one would expect from the translational DOFs alone.

        In particular if it has 2 non-translational or bound DOFs, then 5 watts from the Sun will distribute themselves 1 watt to each of those 5 DOFs.

        If the applicable virial theorem promises p joules of PE to every 1 joule of KE then you need to supply 1+p joules of energy to the system in order to raise its time-averaged kinetic energy by 1 joule. In the case of the atmosphere Toth has shown p = 0.4 (which I noticed independently of Toth but several months later so there is no question as to his priority).

        The significance of this for global warming is that if the atmosphere gains K joules of kinetic energy when its temperature is raised 1 °C, 1.4K joules must have been supplied to achieve that effect since potential energy must consume the other 0.4K joules, namely by raising the average altitude of the atmosphere as a consequence of expanding due to the temperature rise.

      • Christopher Game

        Christopher Game replying to Vaughan Pratt’s post of December 9, 2010 at 3:31 am.
        Dear Vaughan Pratt,
        Thank you for your comment. I was referring specifically to the virial theorem of Clausius (1870). It is a rather general theorem, and can be used for many purposes. It makes no mention of potential energy, and uses a particular definition of vis viva for a specified degree of freedom, not quite the same as the modern definition of total kinetic energy. You are interested in its “significance for global warming”. I was referring to its use for the gravitational degree of freedom as allowed by Clausius.
        Yours sincerely,
        Christopher Game

      • I understand that the virial theorem relates one component of kinetic energy to the same component of potential energy. Thus one can e.g. relate the height (or density profile) of the atmosphere to its temperature profile.

        I fail, however, to see any such connection between radiation and other variables that would justify the equations that Miskolczi has presented. For me those equations are just some random formulae without any given justification. (Years ago I was doing research in theoretical physics and I have also taught some aspects of thermodynamics. Thus a good justification should be understandable to me.)

      • Christopher Game

        Christopher Game replying to the post of Pekka Pirilä of December 9, 2010 at 2:41 pm.
        Dear Pekka Pirilä,
        Thank you for your comment.
        Again I say that it would be a pity if the observation of empirical fact were ignored for lack of an accepted theoretical explanation.
        Yours sincerely, Christopher Game

      • Christopher Game

        Christopher Game replying to Pekka Pirilä’s post of December 9, 2010 at 8:29am.
        Dear Pekka Pirilä,
        Thank you for your comment. I am glad it seems that we agree that one may consider one degree of freedom at a time. We are not alone in this. Chandrasekhar (1939) in “An Introduction to the Study of Stellar Structure” on page 51 writes as his equation (92)

        2 T + Ω = 0 ,

        where T denotes the kinetic energy of the particles and Ω denotes their potential energy. He is referring the potential energy to a zero at infinite dispersion, and is using Newton’s law of gravity, and this accounts for the sign difference of the ratio. He writes: “Equation (92) expresses what is generally called the ‘virial theorem’ “.

        This is far as my post went.

        You and Nick Stokes, and I think others, raise also the further question of the connection between this and Miskolczi’s empirical observation that Su = 2 Eu. This observation was made in figure 20 of Miskolczi and Mlynczak 2004. The fit between the curve y(tG) = σ tG^4 / 2 = Su(tG) / 2 and the data points plotted as y(tG) = Eu(tG) is not too bad, as you may see by looking at the figure. The data points are each from a single radiosonde ascent, from 228 sites scattered around the earth. Perhaps a little surprisingly, Miskolczi was not the first to notice a relationship like this. Balfour Stewart’s 1866 steel shell atmosphere gave the same result. It is simple to explain Balfour Stewart’s result, but not so simple to explain Miskolczi’s observation. I think that Miskolczi noticed the likeness of Chandrasekhar’s theoretical result (92) to the phenomenological formula Su = 2 Eu that was empirically indicated by the radiosonde data that he had analyzed.

        One may distinguish between enquiry as to the factual question of the goodness of fit between the data and Miskolczi’s empirical phenomenological formula, and enquiry as to the theoretical question of its physical explanation. It is not clear to me how far people distinguish these two questions. I have not seen an explicit attempt, based on other data, or on another analysis of the same dataset, to challenge the empirical fact of goodness of fit. Perhaps you can enlighten me on that point?

        The earth has only about half cloud cover, so Balfour Stewart’s theoretical explanation will hardly work for the earth. Perhaps you know of others who have done better? Perhaps your expertise in theoretical physics will enable you to do better? It would be a pity to see the observation of empirical fact ignored for lack of an accepted theoretical explanation.

        Yours sincerely, Christopher Game

        There exist all kinds of empirical regularities. Some of them are precise and some approximate. For me it is incomprehensible that somebody picks one formula from a completely different physical context and proposes, without good justification, that it provides the reason for the observation. All the present discussion on the virial theorem has been related to kinetic energy and potential energy in a gravitational field. It is completely obvious to me that these theories have absolutely no bearing on the behaviour of electromagnetic radiation or radiative energy transfer. These are obviously very different physical processes that follow their own physical laws. The fact that both have their connections to the thermal energy of the atmosphere does not change this.

        Until somebody presents me a valid reason to reconsider the issue, my judgment is that all these formulae of Miskolczi are complete nonsense and lack all physical basis. The fact that Miskolczi’s theory has been used in deriving results that contradict well known physics does not help.

      • Andy Lacis, Pekka Pirilä and I all view Miskolczi’s paper along the lines of Pekka’s succinct summary:

        all these formulae of Miskolczi are complete nonsense and lack all physical basis. The fact that Miskolczi’s theory has been used in deriving results that contradict well known physics does not help.

        But we’re obviously biased by virtue of being fans of the global warming team. Judging by those commenting here on Judith’s somewhat nicely organized website, the other team has at least as many fans.

        Now when attending a football game or ice hockey match or cricket tournament, it has never occurred to me to question the enthusiasm of the two teams’ respective supporters in terms of their understanding of the principles of the sport in question. The only thing that matters is how strongly they feel about their adoptive teams.

        By the same reasoning I see no need to do so here. We should judge the merits of the respective sides on the basis of their enthusiasm, not on whether they have even half a clue about the science, which is nothing more than a feeble attempt at social climbing by those with Asperger’s syndrome.

        Henceforth anyone bringing up scientific rationales for anything should be judged as simply not getting it.

        I may do so myself, which will make it clear I don’t get it.

      • David L. Hagen

        Vaughan Pratt
        “Putting morons on pedestals like this only makes you a moron.”
        Please desist from ad hominem attacks and raise your performance to professional scientific conduct. You demean both yourself and science by such diatribe. Address the substance; don’t attack the messenger. Otherwise you are not welcome, for demeaning the discussion and wasting our time.

      • Happy to oblige. (I was trying to adapt the adage “arguing with idiots makes you an idiot” by replacing “arguing with” with “put on pedestal” but misremembered the epithet. Dern crickets.)

        In less inflammatory language, my point in case you missed it was that whoever you put on a pedestal says something about you regardless of their ability. If they’re capable then you deserve some credit for recognizing and acknowledging that. If they’re not then you’ve shown yourself to be a poor judge of ability.

        I’ll leave it to others to judge whether my point constitutes an ad hominem attack (Baa Humbug seemed to think so). If Miskolczi has indeed come up with an impossibly low kinetic energy for the atmosphere, by a factor of 5 as I claim, my point would then seem to apply, and I would then be disinclined to take seriously anyone who takes Miskolczi seriously. One could predict remarkable properties of any atmosphere with that little energy. If that counts as an ad hominem attack then that’s what I’ve just gone and done.

      • Vaughan,
        As I said to Fred Moolten, I keep an open mind on all the competing theories, because they all have their assumptions, uncertainties, lacunae and insights. Also, because I’m not an expert in the field, I’m cautious about throwing in my lot with anyone’s particular theory, including my own.

        My higher academic qualification and training is in assessing these theories by looking at their conceptual basis, methodology, data gathering practice, logical consistency and a number of other factors. That is why I’m interested in delving into the work of Ferenc Miskolczi, along with a specific interest in trying to find out whether the radiosonde data might be better than previously thought. It could be, for example, that something valuable can come from Ferenc’s work, even if he turns out to be wrong in some specific detail.

      • My higher academic qualification and training is in assessing these theories by looking at their conceptual basis, methodology, data gathering practice, logical consistency and a number of other factors

        Oh, well then we’re on level ground here since that’s my area too. Physics is just something I used to do many decades ago before I found these other things more interesting.

        It could be for example, that something valuable can come from ferenc’s work, even if he turns out to be wrong in some specific detail.

        Lots of luck with that. Usually this only happens when the worker can keep the details straight. As Flaubert said long ago, “Le bon Dieu est dans le détail” (the good God is in the detail).

      • True, but nonetheless, even if a fatal flaw is found which falsifies a theory, and the jury is still hearing evidence in Miskolczi’s case as far as I can tell, it is always possible that a new technique or method used in some supporting aspect of a paper can contribute something which may be valuable elsewhere.

        Which is why I wasn’t impressed by your;
        “I gave up once I spotted what I thought was an error”

        By the way, my previous vocation was engineering in the fluid dynamics field, so I’m sure we’ll be able to argue well into the future. ;)

      • Which is why I wasn’t impressed by your;
        “I gave up once I spotted what I thought was an error”

        I wasn’t either, which is why I returned to Miskolczi’s paper to see whether that was the only problem and whether there might be something more fundamentally wrong with his conclusion, that more CO2 is offset by less water vapor, whence global warming couldn’t happen on account of rising CO2. The problem with this line of reasoning is that if there is also less flow of water vapor into the atmosphere, that reduces the 80 W/m² of evaporative cooling, raising the temperature. Although Miskolczi mentions this he does nothing to show the flow is not reduced. That seems to me a more substantive flaw in the paper than quibbling over potential energy.

      • Of Flaubert’s work, I’m afraid I have missed an enormous part. Moreover, my skills in the field of radiative physics are so low that I’m ashamed to send a public letter here (not to mention Judith’s demand for technical relevance…). Yet I’m quite sure Dr. Vaughan Pratt made at least one basic mistake here, for we Frenchies say:

        “Le diable est dans les détails” (the devil is in the details).

        So let us leave the good God where he belongs. ;)

      • The quote is only attributed to him. (Mises said it for sure.)

        What Flaubert is sure to have said (to George Sand) is that

        > Est-ce que le bon Dieu l’a jamais dite, son opinion ? (“Has the good God ever stated it, his opinion?”)

        which means, roughly, what Sam just said.

        Interestingly, Flaubert said that to justify his realism, i.e. his urge to describe without judging.

        May we all have Flaubert’s wisdom.

      • For all I know the French may have gotten this from the English, who in a moment of devout atheism may have paraphrased “God is in the details,” which came from von Mises, who may have gotten the idea from Flaubert.

        Or it may have been “in the air” at the time, which is not unheard of.

        What goes around comes back with someone else’s lipstick on its collar.

      • Vaughan,
        Do you really mean the Austrian economist von Mises or the German born architect Mies van der Rohe mentioned in your link?

      • Interesting sequence there: willard writes “Mises” and without thinking my subconscious puts “von” in front instead of correcting it to “Mies” and putting “van” after. In my original post I was going to attribute it to Ludwig Mies (Mies van der Rohe) but then it occurred to me to ask the Oracle at Google whether it was original with him, and I learned for the first time about Flaubert’s alleged priority.

        Sam’s four-word conclusion, “probably we’ll never know” follows the old style of giving the proof before enunciating that which is to be proved. The new style, theorem before proof, saves the bother of reading the proof when you feel confident you could easily prove it yourself should the need ever arise, and more succinctly to boot.

        The question of the number of gods seems uppermost on many people’s minds. From what I’ve read it appears to be generally believed to be a very small perfect square, but more than that seems hard to say. In future I’ll punt on the question and say “Sam is in the details,” of which there can be no doubt.

      • Sam is in the details

        I love that one! Thanks a lot.
        Sounds like a welcome. :) Isn’t it a nice compliment to get from a scientist? By the way, I just had a quick look at your CV; really impressed. Particularly the logical stuff. Wow.

        Now, if you please, the conclusion was mainly a kind of humour, despite my wish to stay coherent with cool logic here… You’re right, though: apart from that gentle story about an imaginary quotation, the object partly remains to be made explicit. That’s why you can consider the four words a foreword.

      • Dr. Pratt, you wrote:

        “God is in the details,” which came from von Mises, who may have gotten the idea from Flaubert

        Seems a remarkable piece of historical research… Rarely have I heard of such rigorous work.

        For a start, talking about details, I would say that, in your formula, both commas are in excess.

        Secondly, which von Mises are you talking about? Richard Edler von Mises (1883–1953), scientist (fluid mechanics, aerodynamics, statistics)? Ludwig von Mises (1881–1973), economist? Another one? Anyway, it’s a shame I couldn’t find any von Mises who happened to write down “God is in the details”, “the Devil is in the details” or anything of the kind, nor even one reported to have said so.
        Besides, the very source you quoted only tells us about a certain Ludwig Mies van der Rohe (1886–1969). Is it worth noting that the difference also makes a pair of details?…

        As for Flaubert’s words, you too have noticed that nobody has ever been able to find the sentence in his writings. The hell… (or the version supposedly inspired by Heaven).

        Moreover, had Flaubert ever pronounced the mysterious formula, it seems to me highly doubtful that he’d have chosen the word “God” instead of “the Devil”. As for Flaubert using the “good God” term in such a sentence, that would have been even more amazing, knowing what his views were. Flaubert’s opinions regarding religions, “God” and “the Devil” were expressed at length in his writings, and have been widely commented on since. You’ll quickly find a lot of delectable ones on the web.
        At first, I thought it better not to disturb this scientific debate any longer with that – we all know it could last forever (“Les voies du seigneur sont impénétrables” – the ways of the Lord are unfathomable… aren’t they?).
        But then I thought it was probably worth drawing at least a coarse snapshot here.

        I’d say the very idea of a “good God” was simply absurd to Flaubert. And so was that of a God being up to no good.
        Any possible God would have no imperfection Himself, the simple fact of being swayed by desire being one, of course.
        Nor is God playing with men. See what Flaubert wrote when he was 17 (my own translations, very sorry): “One often spoke about Providence and celestial kindness; I hardly see any reason to believe in it. A God who would amuse Himself by tempting men to see how far they can suffer, wouldn’t He be as cruelly stupid as a child who, knowing that the cockchafer will die some day, tears off its wings first, then the legs, then the head?”

        So you can be sure Flaubert’s thought is ironical, even at the first degree, whenever he used the “good God” term (as in the sentence Willard quoted, which I indeed think is the most relevant, all things considered). In other words: in Flaubert’s mind, “good God” precisely indicates… a non-existent god – “This word which one calls God, what a highly silly and cruelly buffoonish thing!” (1838).
        It could be one of those many idols that the detested religions intend to serve with one of those hateful dogmas: “I’ll soon have finished with my readings on magnetism, philosophy and religion. What a heap of silly things! […] Such pride in any dogma!” (1879); “Priests: castration recommended. Sleep with their housemaid and have children with them, whom they call their nephews. Now, some of them are good, all the same.” (Dictionary of Accepted Ideas). See also the many scenes of horror prevailing in Salammbô (crucifixion of men and animals, terrible diseases, carnages, and in particular the atrocious scene Flaubert wryly called “the grilling of the children”, where the Carthaginians throw their small children into the burning belly of Moloch)…

        As for whatever might deserve the name of God in Flaubert’s mind, it would precisely be invisible in the details… to men.
        The following is a larger quotation (for precision’s sake) than the one usually made of a famous passage (correspondence with George Sand): “As for my ‘lack of convictions’, alas! the convictions choke me. I burst with anger and suppressed indignation. But in the ideal view I have of Art, I believe that the artist should show nothing of his own person, and appear no more in his work than God does in nature.”
        Anyway, wouldn’t it seem amazing for God to be said to be in the details, without a context putting God in the whole in the first place? I’d add: one needn’t be Flaubert to avoid such a strange position.

        That leaves the Devil (who will of course finish the story with a great laugh…). And of him, contrary to God, I’m quite sure Flaubert would have expected him to be in the details. Yet we’re still waiting for the evidence… so here we could finish by saying: probably we’ll never know. ;)

  42. Can I interest anyone in a knock-down argument that global warming is happening?

    The reason we can’t see it happening is that there are many contributors to global temperature besides CO2. Obviously there are the seasons on their highly predictable 12-month basis, but there are also the much less predictable El Nino events and episodes sporadically spaced anywhere from 2 to 7 years apart. Then there are the solar cycles on more like a 10-year period, also quite variable though not as much as El Nino.

    There are furthermore the completely unpredictable major volcanoes of the past couple of centuries, a couple of dozen at least, each of which tends to depress the temperature by up to 0.4 °C for 2 to 10 years depending on its VEI number.

    Last but not least, there is the Atlantic Multidecadal Oscillation or AMO. This appears to be running on a relatively regular period of around 65 years, and can be seen centuries back by looking for example at tree rings.

    Now anthropogenic CO2 supposedly began in earnest back in the 18th century. Today we’re putting around 30 Gt of CO2 into the atmosphere each year, around 40% of which remains there, with nature sucking back the remaining 60%, a fraction that has not changed in over a century. This amount has been increasing in proportion to the population, which as Malthus pointed out a while back is growing exponentially.

    But so is our technology. Hence even if it takes 60-80 years to double our population, it takes something like half that to double our CO2 output, or around 30-40 years.

    OK, so let’s look at all these time constants. The 65 years of the AMO dwarfs everything except the CO2 growth, which has been going on for centuries.

    Let’s now digress into mathematics land for a bit. If you smooth a sine wave of period p with a moving average of exactly width p, you flatten it absolutely perfectly (try it). If it’s not exact then traces remain.

    Furthermore, all frequency components of period less than about p/4 also vanish almost entirely, whatever their exact period.

    So: here’s how to look at the climate. Smooth it with a moving average as close to the period of the AMO as you can get. This will kill the AMO as argued above. And it will also kill all those other contributors to variations in global temperature, none of which have a time constant more than a quarter of the AMO. (Solar cycles are about 1/6, ENSO events and episodes even less, and of course the seasonal variations are 1/65 of that.)
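
    (If you want to convince yourself of the smoothing claim before clicking anything, a few lines of Python will do it; the 780-month period and 48-month “ENSO” below are just round illustrative numbers of mine:)

        import numpy as np

        n, p = 12000, 780                  # months of synthetic data, period of the "AMO"
        t = np.arange(n)

        def boxcar(x, width):
            return np.convolve(x, np.ones(width) / width, mode="valid")

        amo  = np.sin(2 * np.pi * t / p)   # 65-year cycle
        enso = np.sin(2 * np.pi * t / 48)  # ~4-year cycle

        print(np.max(np.abs(boxcar(amo, p))))   # ~0: a moving average of width p kills period p
        print(np.max(np.abs(boxcar(enso, p))))  # small: much shorter periods are heavily damped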

    This would be heavy lifting were it not for UK programmer Paul Clark’s wonderful website Wood for Trees. When you click on this link, the 785-month period of the AMO will have been filled in for you, and you can see for yourself what happens to the global climate when smoothed in a way calculated to kill not only the AMO but everything else below it, by the above reasoning. (The phrase “killAMO” is all you need to remember for this URL, which can be reached as http://tinyurl.com/killAMO .)

    Now I claim that this curve is very closely following Arrhenius’s logarithmic dependency of Earth’s surface temperature on CO2 level, under the assumption that nature is contributing a fixed 280 ppmv and we’re adding an amount that was 1 ppmv back in 1790 and has been doubling every 32.5 years. (These numbers are not mine but those of Hofmann et al in a recent paper but if this is behind a paywall for you, you can more easily access Hofmann’s earlier poster session presentation.)

    All that remains is to specify the climate sensitivity, and I’m willing to go out on a limb and say that for the instantaneous observed flavor of climate sensitivity (out of the very wide range of possibilities here), it is 1.95 °C per doubling of CO2. (The IPCC has other definitions involving waiting 20 years in the case of transient climate response, or infinity years in the case of equilibrium climate sensitivity, etc. Here we’re not even waiting one nanosecond. The IPCC definitions get you closer to the magic number of 3 degrees per doubling of CO2 depending on which definition you go for.)
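
    (To make that concrete, here is a minimal Python sketch of the Arrhenius-Hofmann curve as described above: Hofmann’s CO2, 280 ppmv natural plus an anthropogenic part of 1 ppmv in 1790 doubling every 32.5 years, fed into Arrhenius’s logarithm at 1.95 °C per doubling. The choice of 1883 as the zero point for the anomaly is merely an illustrative assumption.)

        import numpy as np

        SENS = 1.95            # deg C per doubling, the "instantaneous observed" flavor
        NATURAL = 280.0        # natural CO2 background, ppmv

        def hofmann_co2(year):
            # 280 ppmv natural + anthropogenic CO2 of 1 ppmv in 1790 doubling every 32.5 yr
            return NATURAL + 2.0 ** ((year - 1790.0) / 32.5)

        def arrhenius_anomaly(year, ref_year=1883.0):
            # Arrhenius log law, expressed as an anomaly relative to ref_year
            return SENS * np.log2(hofmann_co2(year) / hofmann_co2(ref_year))

        for y in (1900, 1950, 2000, 2010):
            print(y, round(hofmann_co2(y), 1), round(arrhenius_anomaly(y), 2))
        # 2010 comes out near 389 ppmv and roughly 0.85 deg C above 1883 with these choices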

    Unfortunately Paul Clark’s website is not flexible enough to allow you to enter a formula. This is unfortunate because my own website shows that theory and practice are incredibly close. This is a far more accurate fit than one is accustomed to seeing in climate science, but this is how science makes progress.

    The key to this knockdown argument is the regularity of the 65-year-period AMO cycle and the much shorter time constants of all other relevant factors. Any doubt about this can be dispelled by using some number other than 785 for the WoodForTrees smoothing at http://tinyurl.com/killamo .

    • From IPCC AR4

      “For example, in a model comparison study of six climate models of intermediate complexity, Brovkin et al. (2006) concluded that land-use changes contributed to a decrease in global mean annual temperature in the range of 0.13–0.25°C, mainly during the 19th century and the first half of the 20th century, which is in line with conclusions from other studies, such as Matthews et al. (2003). ”

      Deforestation in tropical zones, the most common form of deforestation recently, does not have a cooling effect but rather a warming effect. https://e-reports-ext.llnl.gov/pdf/324200.pdf

      There is also urbanization to take into account.

      Altogether, it is quite possible that by properly accounting for both early and later land use changes you could change the temperature difference from the 19th century to the present by up to 0.3 °C. Chronology matters.

      If the proper adjustments have already been made for the known land use changes, this argument would be invalid. But I am not aware of a study showing, from the balance of upward versus downward adjustments, that they have been properly accounted for.

      I understand this is off topic and so leave it to the moderator whether she would like to remove the comment or not.

      • I should clarify that this is only true regarding hypotheses involving a very long time period to reach equilibrium. The equilibrium temperature would be above the current temperature so you affect the rates of warming much more than the actual temperature achieved.

    • Vaughan, I assume that ‘lb’ is a typo for ‘ln’?

      At first glance the fit looks impressive but you have to be careful about fitting.
      The smoothed Hadcrut curve is only changing slowly, so in terms of its Taylor expansion it can be described by a quadratic (it is easy to check that it can be fitted just as well by a quadratic as by your function), so it has 3 ‘degrees of freedom’. One of these you have chosen with your choice of 1.95, which as you admit comes from a wide range of sensitivity values. Also you have fitted the additive constant in your graph to get the curves to match up (fair enough, it is a plot of anomalies). So of the three parameters in the smoothed data, you have chosen two of them to fit, which makes the fit not quite so impressive.

      • I assume that ‘lb’ is a typo for ‘ln’?

        Two lines before the formula I wrote “using binary rather than natural logs, lb rather than ln,”. This has been a standard abbreviation for log base 2 for I think at least a couple of decades. If I used ln I would then have to multiply the result by ln(2) to convert to units of degrees per doubling instead of degrees per multiplication by e, which is what ln gives. The latter is of course more natural mathematically but it’s not what people use when talking about climate sensitivity.

        The smoothed Hadcrut curve is only changing slowly so in terms of its Taylor expansion it can be described by a quadratic term (it is easy to check that it can be fitted just as well by a quadratic as by your function) so it has 3 ‘degrees of freedom’

        Ah, thanks for that. I realize now that I should have drawn more attention to what happens with even more aggressive smoothing. If you replace 65-year smoothing with 80-year smoothing you get a curve that would require a fourth-degree polynomial to get as good a fit as I got with 60 years.

        So of the three parameters in the smoothed data, you have chosen two of them to fit, which makes the fit not quite so impressive.

        Not true. First, I had no control over the choice of curve, which composes the Arrhenius logarithmic dependency of temperature on CO2 with the Hofmann dependency of CO2 on year, call this the Arrhenius-Hofmann law. (My syntax for Hofmann’s function slightly simplifies how Hofmann wrote it, but semantically, i.e. numerically, it is the same function.)

        If I’d had the freedom to pick a polynomial or a Bessel function or something then your point about having 3 degrees of freedom would be a good one. However both these functions are in the literature for exactly this purpose, namely modeling CO2 and temperature. I did not choose them because they gave a good fit, I chose them because theory says that’s what these dependencies should be.

        Since we agree about anomalies the one remaining parameter I had was the multiplicative constant corresponding to “instantaneous observed climate sensitivity” which can be expected to be on the low side compared to either equilibrium climate sensitivity or transient climate response as defined by the IPCC.

        Now I had previously determined that 1.8 gave the best fit of the Arrhenius-Hofmann law to the unsmoothed HADCRUT data, with of course a horrible fit because the latter fluctuates so crazily, but it is the best fit and so I’ve been going with it.

        65-year smoothing completely obliterates all the other contributors to climate change (though 80-year smoothing puts back a chunk of the AMO as I said earlier, though nothing else), but it also transfers some of the high second-order-derivative on the right of the temperature curve over to the left, which the slight increase from 1.8 to 1.95 was to compensate for.

        So I really didn’t have any free parameters, other than the exact amount needed to compensate for the smoothing that artificially makes the left of the curve seem to increase faster than it actually does.

        If I’d had not only two free parameters but also the freedom to pick any type of curve I wanted, such as a polynomial, then as you say it wouldn’t be so impressive. However that would miss the further point that with 80-year smoothing you don’t get anywhere near as good a match to the Arrhenius-Hofmann law. That there exists any window size yielding a log-of-raised-exponential curve is something of a surprise when you consider that neither 80-year nor 50-year smoothing does so.

    • I stupidly wrote: major volcanoes of the past couple of centuries, a couple of dozen at least, each of which tends to depress the temperature by up to 0.4 °C

      Another darned cricket behind the chair keeping me awake all night, causing me to slip a decimal point. Should have been 0.04 °C. (Krakatoa and Mt Pelee seemed to be closer to 0.06 but plenty of volcanoes can easily cool the climate by one or two hundredths of a degree, easily observable in the HADCRUT temperature record for most significant volcanoes after subtracting the AMO, global warming, solar cycles, and everything shorter.)

    • randomengineer

      Hi Professor Pratt, I’m putting on my skeptic hat for this, mainly due to being unconvinced that +2C is necessarily a *bad* thing. Please be kind.

      Your graph would seem to show that GHE works as theorized. What it doesn’t show is the human footprint.

      If you were to correlate your graph with human population and/or acreage under plow or some other *reliable* historical data then it could or might or maybe show anthropogenic cause. The idea that there’s a smooth upswing when human technology runs in fits and starts is interesting, especially when the count of anthros at the left side of the graph is N and exponentially higher at the other. How exactly DOES one impute anthropogenic cause again?

      Moreover, your graph doesn’t say much about technology, which is always the bogeyman. Data on coal fires? Trains? Anything? One could just as easily claim human population exhalation and every other guy having a fire, and the numbers would look the same. To prove that this is a clean anthropogenic signal, wouldn’t we need to see the corresponding tech and outputs in gigatonnes, etc.?

      Would you mind please explaining how the human footprint part works?

      Thanks

      • Would you mind please explaining how the human footprint part works?

        That’s in Hofmann’s papers (the singly-authored poster session version and the journal version with two coauthors). Hofmann explains that his formula for CO2, which replaces the quadratics and cubics that NOAA had previously used, was based on considerations of exponentially exploding population deploying exponentially improving technology. His doubling period of 32.5 years is a reasonable match to a doubling period of 65 years for each of population growth and technology growth.

        We are currently adding 30 gigatonnes of CO2 to the atmosphere each year, of which nature is removing some 17-18 gigatonnes, kind of a leaky-bucket effect. The rest stays up there, which we know because we have monitoring stations like the one at Mauna Loa that keep tabs on the CO2 level. All these numbers are known fairly accurately.
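
        (A quick sanity check on those round numbers, as a minimal Python sketch: converting the retained tonnage into a concentration rise, using the standard conversion of roughly 7.8 Gt of CO2 per ppmv, i.e. about 2.13 GtC per ppmv. The 30 Gt and 40% figures are the ones quoted above.)

            EMISSIONS_GT_CO2 = 30.0     # Gt CO2 per year, the figure quoted above
            AIRBORNE_FRACTION = 0.40    # fraction that stays in the atmosphere
            GT_CO2_PER_PPMV = 7.8       # ~2.13 GtC per ppmv of atmospheric CO2

            retained = EMISSIONS_GT_CO2 * AIRBORNE_FRACTION     # ~12 Gt CO2 per year
            print(retained / GT_CO2_PER_PPMV)                   # ~1.5 ppmv per year

        That back-of-envelope ~1.5 ppmv/yr is the right order of magnitude for the roughly 2 ppmv/yr rise Mauna Loa has been recording lately; the gap narrows once land-use emissions are counted on top of the 30 Gt fossil figure.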

        The logarithmic dependence of temperature on CO2 has been known since Arrhenius first derived it in 1896.

        One could just as easily claim human population exhalation

        Good point. Around 1700 the human population had reached 1/10 of what it is today, a level at which human exhalation was breaking even with volcanic CO2. Human exhalation is now an order of magnitude more than volcanic CO2. However that’s less than ten percent of all human CO2 production from power stations, vehicles, etc.

        mainly due to being unconvinced that +2C is necessarily a *bad* thing.

        I’m not objecting to the rise, merely interested in improving our ability to predict it. I think +2C would be a neat experiment, the last time temperatures were that high was many millions of years ago. If it does really bad things to the environment we (that is, our great-grandchildren) might need to get really creative really fast, but meanwhile let’s do the experiment. Also drop ocean pH to remove Antarctic krill and copepods, who ordered them?

        Capitalism and communism were neat experiments; today we know capitalism works better than communism, something that was rather less obvious in the 19th century. At least Russian-style communism: China-style communism doesn’t look like it has been disproven yet, but then is it really communism?

      • Posted by TRC Curtin on another thread:

        “What is missing from all statements on growth rates of CO2 emissions (by the IEA et al. and the IPCC) is any statistical analysis of the uptakes of those large gross increases in CO2 emissions (other than the misleading and error strewn section in AR4 WG1 Ch.7:516). These provide a large positive social externality by fuelling the concomitant increases in world food production over the period since 1960 when the growth in CO2 emissions began to rise.
        Thus although gross emissions have grown by over 3% pa since 2000, the growth of the atmospheric concentration of CO2 has been only 0.29% p.a. (1959 to 2009) and still only 0.29% p.a. between October 1958 and October 2010 (these growth rates are absent from WG1 Ch.7). The growth rates of [CO2] from October 1990 to now and from October 2000 to now are 0.2961 and 0.2966 respectively, not a very terrifying increase when one has to go to the 4th decimal point to find it, but to acknowledge this would not have got one an air fare to Cancun.
        These are hardly runaway growth rates, and fall well (by a factor of 3) below the IPCC projections of 1% p.a. for this century. The fortunate truth is that higher growth of emissions is matched almost exactly by higher growth of uptakes of [CO2] emissions, because of the benevolent effect of the rising partial pressure of [CO2] on biospheric uptakes thereof (never mentioned by AR4).
        You will of course NEVER see these LN growth rates of [CO2] in any IPCC report or in any other work by climate scientists.”

      • These are hardly runaway growth rates, and fall well (by a factor of 3) below the IPCC projections of 1% p.a. for this century. The fortunate truth is that higher growth of emissions is matched almost exactly by higher growth of uptakes of [CO2] emissions, because of the benevolent effect of the rising partial pressure of [CO2] on biospheric uptakes thereof (never mentioned by AR4).

        Actually nature was taking back 60% a century ago and is still doing so today. The remaining 40% continues to accumulate in the atmosphere as it has been doing all along, easily confirmed by the Keeling curve data, which shows an exponentially growing rise superimposed on the natural level of 280 ppmv. 60% is not “almost exactly” in my book, and the estimate of 40% remaining in the atmosphere is well borne out by the data.

      • randomengineer

        Thanks, Professor. I have a few comments to make, after which I may have a followup question.

        Hofmann explains that his formula for CO2, which replaces the quadratics and cubics that NOAA had previously used, was based on considerations of exponentially exploding population deploying exponentially improving technology.

        This is the interesting part, which is essentially an equation based on correlation and an assumption, not necessarily something determined via historical data, i.e. hard data evidence, etc. It would seem to also have some presumptions about how long CO2 hangs in the atmosphere.

        During the early 1900s there really wasn’t much in the way of carbon emitting tech in the world. The Occident was using trains and burning coal, sure. In the Orient though… not so much. Man didn’t really have widespread worldwide adoption of major carbon tech (read: enough cars to even cause a blip) until nearly 1930, and this was still mostly a western world phenomenon. (And I’d argue that you’d at least need 1950s levels of cars to be able to have any effect at all, if only because of the numbers.)

        And yet temps rose, as did presumably CO2. Now what’s interesting here is that this rise was already occurring BEFORE the adoption of automobiles as we see today, and so on. If you look at this particular hockey stick —

        http://en.wikipedia.org/wiki/File:Extrapolated_world_population_history.png

        It shows what I’m referring to. There’s a correlation of human population to temp rise to CO2. And yet… the technology we have to emit carbon didn’t really start ratcheting exponentially until after WWII. A tech explosion, if you will. There’s no “knee” that one would expect to see on the graphical representation of CO2 as the result of all of this rapid carbon release. It just keeps rising, nice and slow.

        What I get from looking at the big picture here is that the correlation of CO2 and temp pre-1930 doesn’t imply anthropogenic cause based solely on CO2 release given the lack of a knee when human CO2 emissions from (energy and vehicle) technology adoption really kicked in. It could be argued that pre-1930 anthropogenic influence includes land use and deforestation, i.e. changing the nature of CO2 sinks, but it doesn’t seem that correlating hypothetical formula derived CO2 emissions and temp records is worth much prior to the modern era. In fact the data by itself is ambiguous; you could just as easily use it to say that pre-modern era CO2 rise was reaction to temp rise.

        I’m very much interested in the notion of the A in AGW being solely tied to CO2 emission when clearly CO2 and temp were rising before CO2 emissions were at a detectable level. I find this doubly interesting given the data showing that MWP temps were close to those of today (or above, depending on whose work you trust.) Clearly there’s a lot that isn’t understood.

        Yes, professor, I know this is very un-PC for one who gets GHE and thinks that we’re adding to GHG’s. I get the physics part. And I’m aware that we’re running an open ended experiment re emission. Yes I agree that we need to e.g. adopt nuclear energy etc and decommission coal plants.

        So, my followup query is as follows: it seems that Hofmann’s formulae are incorrect and imply causation unsupported by fact: there’s no slow adoption of technology as implied. There ought to be a modern era CO2 knee *if* the A in AGW is based on the modern era explosion in emissions, and there’s no knee. Why?

      • randomengineer

        Quick followup note:

        Just to be ridiculously clear, the assertion that the climate was sensitive enough in 1890 to show temp rise following CO2 also says there was no sink of the extra CO2 (otherwise, why would it rise at all?)

        At the time of massive CO2 belching starting in the 1940s and ’50s, this same “sensitive” climate that was unable to sink CO2 in 1900 would still be unable to sink it. CO2 should have gone right through the roof, as would the temp.

        Didn’t happen. Tres confusing. The climate is sensitive or it is not. But there was no linear rise in human tech and emission of CO2.

      • Just to be ridiculously clear, the assertion that the climate was sensitive enough in 1890 to show temp rise following CO2 also says there was no sink of the extra CO2 (otherwise, why would it rise at all?)

        (Can’t remember if I replied to this.) Who’s asserting that? Although CO2 was raising the temperature in 1890, by an amount computable from Hofmann’s formula, it was not doing so discernibly because the swings due to natural causes such as AMO, solar cycles, and volcanoes were much larger. Today CO2 has outpaced all these natural causes.

      • During the early 1900s there really wasn’t much in the way of carbon emitting tech in the world. The Occident was using trains and burning coal, sure. In the Orient though… not so much. Man didn’t really have widespread worldwide adoption of major carbon tech (read: enough cars to even cause a blip) until nearly 1930, and this was still mostly a western world phenomenon. (And I’d argue that you’d at least need 1950s levels of cars to be able to have any effect at all, if only because of the numbers.)

        I fully agree. How much less CO2 did you have in mind for 1900? 10% of what it is today? That’s what Hofmann’s formula gives.

        If you think it was even less than 10% then you may be underestimating the impact of coal-burning steam locomotives, which were popular world-wide: the first locomotive in India was in 1853, in Brazil 1881 or earlier, Australia 1854, South Africa 1858, etc. etc.. Everyone used them: without automobiles, rail was king. The transcontinental railroad was the Internet of 1869, connecting the two coasts of the US and paying for the school I teach at. My wife’s book club is reading Michael Crichton’s The Great Train Robbery about the theft in 1855 of £12,000 worth of gold bars from a train operated by an English train company founded in 1836.

        You also didn’t mention electric power stations, which were largely coal powered in those days and were introduced by Edison and Johnson in 1882. By the early 20th century electricity had caught on big time around the world. Today electricity accounts for some 5 times the CO2 contributed by automobiles. In 1900 it was obviously much more than automobiles.

        And you didn’t mention the transition from sail to steam, which happened early in the 19th century. Ships today produce twice the CO2 of airplanes, and obviously far more than that ratio in 1920 and infinitely more in 1902.

        There is also human-exhaled CO2, which worldwide today is only about 60% of automobile-exhaled CO2 but in 1900 was obviously a far bigger percentage.

        And there is one cow for every five people, and cows belch a lot of methane, which has many times the global warming potential of CO2.

        Rice paddies produce even more methane, and predate even steam ships.

        So I don’t think 10% of today’s greenhouse-induced warming is an unreasonable figure for what humans were responsible for in 1900.

        And yet temps rose, as did presumably CO2. Now what’s interesting here is that this rise was already occurring BEFORE the adoption of automobiles as we see today, and so on. If you look at this particular hockey stick –

        What you’re seeing there is a correlate of the Atlantic Multidecadal Oscillation, which dwarfed global warming prior to mid-20th-century. In 1860 it was raising global temperature 8 times as fast as CO2. In 1892 CO2 had grown very slightly but the AMO meanwhile was on a downward swing that the CO2 reduced by only about 20%. In 1925 CO2 warming was at twice the rate it had been in 1860, which therefore added 25% to the AMO rise.

        Not until 1995 did CO2 raise the temperature at the same rate as the AMO. From now on CO2 is going to dominate the AMO swings assuming business as usual.

        It is interesting to consider what would happen if we continued to add 30 gigatonnes of CO2 a year to the atmosphere. The CO2-induced rise in temperature would slow down and eventually become stationary, with the CO2 stopping at perhaps 500 ppmv. That’s because the system would then be in equilibrium. Well before then the AMO would have regained its title as world champion multidecadal temperature changer.

        Fortunately for those of us interested in seeing the outcome of this very interesting experiment, this isn’t going to happen. With business as usual CO2 will continue to be added to the atmosphere at the same exponentially increasing rate as over the last two centuries, pushing it to 60 gigatonnes of CO2 a year by 2045. (In 1975 we were only adding around 15 gigatonnes of CO2 a year.)

      • Would add that by 1900, two of the largest fossil-fuel-based fortunes, Carnegie’s and Rockefeller’s, were solidly in place. Those homes that were not being illuminated with natural gas lighting (made from coal) were being illuminated with Rockefeller’s kerosene. He had a booming global lighting business by the 1870s. As for Carnegie’s steel, how did he make it? So there was a pretty significant bloom in fossil-fuel CO2 before 1900.

      • According to Encyclopedia Britannica (link in Wikipedia) (based on 1911 data) the world coal production was very close to 1000 million tons in 1905. In 2009 the world coal production was 6940 million tons, oil production 3820 million tons, and natural gas production 2700 mtoe (million tons oil equivalent).

        In 1905 oil and gas were negligible compared to coal. Thus the annual CO2 emissions from fossil fuels were in 1905 8-9% of their present level.

      • In 1905 oil and gas were negligible compared to coal. Thus the annual CO2 emissions from fossil fuels were in 1905 8-9% of their present level.

        Add some for slash and burn, exhalation from humans and their livestock, methane from rice paddies which degrades to CO2, and that should get us reasonably close to Hofmann’s formula, which gives total anthropogenic CO2 as being 10.7% of its level today.

      • ” At least communism Russian-style, it doesn’t look like China-style communism has been disproven yet, but then is it really communism”

        Who controls and uses the guns?

    • Vaughan, firstly I am on the side of AGW and certainly have also long supported the idea of the log CO2 temperature rise. My only thought about this knock-down AMO argument is that you give the AMO too much credit. My own sense of things is that 1910-1940 rises faster than this log because of a solar increase at that time, and the 1950-1975 flattening is due to global dimming (aka aerosol haze expansion near industrial/urbanized areas due to increasing oil/fossil burning). I don’t think the AMO amplitude is much compared to these other factors that give the illusion of a 60-year cycle in the last century. The last 30 years is behaving free of these influences and is parallel to a growth given by 2.5 degrees per doubling.

      • Very much appreciate your feedback, Jim, as it will steer me towards what needs more emphasis or more clarification if and when I come to write up some of these ideas.

        My own sense of things is that 1910-1940 rises faster than this log because of a solar increase at that time

        Can you estimate the contribution of this increase to global warming over that period? That would be interesting to see. If it’s large enough I need to take it into account. It does seem to be sufficiently sustained that 12-year smoothing isn’t enough to erase it.

        and the 1950-1975 flattening is due to global dimming (aka aerosol haze expansion near industrial/urbanized areas due to increasing oil/fossil burning).

        This question of whether it’s aerosols or the AMO downswing would make a fascinating panel session. I would enthusiastically promote the latter. (But I’m always enthusiastic so one would have to apply the applicable discount.) I just recently bought J. Robert Mondt’s “The History and Technology of Emission Control Since the 1960s” to get better calibrated on this.

        I estimate that the RAF’s putting Dresden, Pforzheim, and Hamburg into the tropopause, plus those interminable air sorties by all sides during WW2, had the cooling power of three Krakatoas. One Krakatoa per German city perhaps.

        I don’t think the AMO amplitude is much compared to these other factors that give the illusion of a 60-year cycle in the last century.

        I don’t put much trust in estimates based on 30-year windows of the temperature record. I much prefer every parameter to be estimated from the full 160 year HADCRUT global history. The NH record goes back a couple of centuries further and it would be interesting to coordinate that with the 160 year global record for an even more robust estimate.

        Currently I estimate the AMO amplitude in 1925 at around 0.125 °C, meaning a range of 0.25 °C, and rolling off gradually on either side, being around .08 in 1870 and 1980. The r^2 for this fit is a compelling 0.02, rising to 0.024 if you don’t let it roll off suggesting the roll-off is significant. Not only is the roll-off a better fit but it’s consistent with the disappearance of the AMO signal in the 17th century inferred from tree ring data.

        The last 30 years is behaving free of these influences

        That’s the CO2 speaking. ;)

        You have to remove the CO2 to see it because the CO2 is so steep by then.

        and is parallel to a growth given by 2.5 degrees per doubling.

        Using Hofmann’s doubling time of 32.5 years for anthropogenic CO2, from a base of 280, the current 390 ppmv level should be over 1000 ppmv by 2100, which is about 1.4 doublings above the current level (lb(1000/390) ≈ 1.36). At 2.5 °C per doubling that is a rise of some 3.4–3.5 degrees over this century. Is that what you’re expecting, or do you expect less CO2 than that in 2100?

        I’m projecting +2C in 2100 but considerably more if this warming releases a significant amount of methane before then. I’m neither a pessimist nor an optimist when it comes to global warming, I’m just an uncertaintist.

        I’m not wedded to any of this, if my perspectives shift so may my estimates of these parameters.

        I’m not enthusiastic about introducing more parameters, but methane considerations will certainly force at least one more. Anyone here with a model that has a clue about likely methane emissions in 2030? (Just asking, I’d love it if there were.)

      • If you gents are interested in the solar contribution, you might consider the cumulative nature of the retention of solar energy in and dissipation of energy from the oceans (which controls atmospheric temperature in the main), and have a look at this post on my blog.

        http://tallbloke.wordpress.com/2010/07/21/nailing-the-solar-activity-global-temperature-divergence-lie/

      • have a look at this post on my blog.

        On your indicated blog, tallbloke, you write “what a load of rubbish.”

        Had I written that, global warming deniers would be all over it in a flash and Willard would be agonizing about how I’ll never live that down.

        Something should be done about the hypocrisy in this debate.

        At the very least you could have written “what a load of rubbish (pardon my French)” as an exculpatory measure.

      • > Something should be done about the hypocrisy in this debate.

        Speaking of which:

        > I wonder why tallbloke is commenting on this blog, after accusing me of dishonesty on his own.

        Source: http://scienceofdoom.com/2010/12/04/do-trenberth-and-kiehl-understand-the-first-law-of-thermodynamics-part-three-and-a-half-%E2%80%93-the-creation-of-energy/#comment-8015

      • Vaughan, I agree with your CO2 formula. Mine goes 280+90*exp[(year-2000)/48] which also gets to 1000 ppm at 2100. Using 2.5 C per doubling this gives 2100 warmer than 2000 by 3.5 C. Like I said, the gradient fits the last 30 years very well.
        Regarding solar effects in 1910-1940, I estimated this is +0.2 C, and for aerosols 1950-75 -0.4 C. This gives 20th century attribution as 0.7 C total = 0.9 C due to CO2 + 0.2 C due to solar – 0.4 due to aerosols.
        The solar estimate comes from the TSI estimate on climate4you, but has to assume a reasonable positive feedback to get from 0.2 W/m2 to 0.2 C, but somewhat similar to what is required to explain the Little Ice Age with the same TSI estimate.

      • Vaughan, I agree with your CO2 formula. Mine goes 280+90*exp[(year-2000)/48]

        Ah, excellent, thanks for that! (Actually it’s not mine, it’s David Hofmann’s, at NOAA ESRL in Boulder, who came up with it I think a couple of years ago.) Your formula is extremely close to his; here’s yours minus his at the quarter-century marks.

        1950 1.235
        1975 1.446
        2000 1.355
        2025 0.4444
        2050 -2.38
        2075 -9.35
        2100 -24.8 (Hofmann is 1027.65 then)

        Those differences are insignificant for predictive purposes, and moreover are a fine fit to past history. But as a purely academic question the differences disappear essentially completely when you change your 90 to 89 and 48 to 47.

        As it happens I do have a formula for CO2, namely 260 + exp((y - 1718.5)/60). I came up with this before seeing Hofmann’s and switching to his. Mine fits the Keeling curve more exactly especially in the middle. Its derivative is also a better fit to the slope; in particular the derivative of your and Hofmann’s formula at 2010 predicts a rise of 2.3 ppmv between July 2009 and July 2010 while mine predicts only 2.1 ppmv. The latter is much closer to what we actually observed.

        I have no evidence for CO2 being 260 in July 1718 (the meaning of those numbers) other than the goodness of fit to the Keeling curve, which when extrapolated backwards as an exponential seems to asymptote more to 260 than 280. Absent any more compelling reason for 260 I figured I’d just switch to Hofmann’s formula, since I didn’t want to undermine my uses for the formula with questionable choices of parameters.

        Mine incidentally predicts only 840 ppmv for 2100, which I suppose makes me an AGW denier in the same sense that reducing the ocean’s pH from 8.1 to 8.0 makes it acidic.
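
        (For anyone who wants to reproduce this comparison, here is a minimal Python sketch of the three formulas as written in this exchange, using the round constants quoted here; those are evidently not exactly Hofmann’s published constants, so the printed differences land close to, but not exactly on, the quarter-century table above.)

            import numpy as np

            def hofmann(y):
                # 280 ppmv natural + 1 ppmv anthropogenic in 1790 doubling every 32.5 years
                return 280.0 + 2.0 ** ((y - 1790.0) / 32.5)

            def jim_d(y):
                # 280 + 90*exp[(year - 2000)/48]
                return 280.0 + 90.0 * np.exp((y - 2000.0) / 48.0)

            def variant_260(y):
                # 260 + exp((y - 1718.5)/60)
                return 260.0 + np.exp((y - 1718.5) / 60.0)

            for y in (1950, 1975, 2000, 2025, 2050, 2075, 2100):
                print(y, round(jim_d(y) - hofmann(y), 2),
                      round(hofmann(y), 1), round(variant_260(y), 1))
            # with these round constants, 2100 comes out near 1024, 1003 and 837 ppmv
            # respectively, i.e. close to the ~1000 and ~840 figures discussed above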

      • I think 840 is close to the A2 scenario used by the IPCC, which is their most extreme one. Our other estimates are pessimistic compared to this.

      • I think 840 is close to the A2 scenario used by the IPCC, which is their most extreme one. Our other estimates are pessimistic compared to this.

        The IPCC is in an unenviable position. The slightest error brings a hail of protest. The science and politics of global warming live on opposite sides of a widening ice crevasse, while the IPCC stands awkwardly with one foot on each side. The scientists can afford to err on the side of pessimism, the politicians optimism.

        One wants to throw the IPCC a skyhook. They cope by hedging their bets. Don’t expect the IPCC to pick the scientifically most probable number when there’s a wide selection, they will prefer the most expedient for the circumstances.

        The only way they can make everyone happy is to make no one happy.

      • We can only hope that the running down of oil burning due to reducing resources is not compensated by an increase in coal, gas, shale oil, etc., fossil-fuel burning, otherwise we are headed for 1000 ppm by 2100.

    • Allow me to throw one small fly in the ointment. If we had data and could do a similar temp series as killAMO from 1780-1870 I suspect we would get a very similar slope. This is simply based on historical, geological, and archeological evidence that NH glaciers and polar ice were retreating faster during the 1800s (ref: John Muir) than they are today. This period was well before significant influence from anthropogenic CO2.

      Here is my question: Given similar warming trends, What caused the rapid warming of the 1800s? If not CO2, then what?

        If we had data and could do a similar temp series as killAMO from 1780-1870 I suspect we would get a very similar slope. This is simply based on historical, geological, and archeological evidence that NH glaciers and polar ice were retreating faster during the 1800s (ref: John Muir) than they are today

        (killAMO is what to append to tinyurl.com/ to see the graph in question.)

        But the curve that the smoothed data fits so well is not merely a slope, it bends up, and moreover in a way consistent with it having the form log(1+exp(t)) for suitable scales of time and temperature. This curve asymptotes to a straight line of slope 1, which in more familiar units corresponds, a few centuries hence, to a steady rise of 1 °C every 18 years (assuming business as usual, meaning unlimited oil and continued population growth).

        The odds of the 60-year-smoothed record for 1780-1870 fitting log(1+exp(t)) that well are zip. If it did it would strongly imply an exponential mechanism of comparable magnitude to global warming, which would be extraordinary.
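
        (The asymptote is easy to verify numerically. A minimal Python sketch, using the 1.8 °C/doubling figure mentioned upthread, which is what makes the asymptotic slope come out at one degree every ~18 years; 1.95 would give closer to one degree every ~17 years.)

            import numpy as np

            SENS = 1.8             # deg C per doubling (the upthread best-fit figure)
            DOUBLING_TIME = 32.5   # years for the anthropogenic CO2 term to double

            def temp(year):
                # Arrhenius log of the Hofmann curve, relative to the 280 ppmv background
                co2 = 280.0 + 2.0 ** ((year - 1790.0) / DOUBLING_TIME)
                return SENS * np.log2(co2 / 280.0)

            # far enough out, log(1 + exp) is effectively a straight line whose slope is
            # SENS / DOUBLING_TIME degrees per year
            slope = (temp(2320) - temp(2300)) / 20.0
            print(slope, 1.0 / slope)   # ~0.055 deg C/yr, i.e. ~1 deg C every ~18 years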

      • Nice dance around the question. A key question it is when attempting to determine if our current warming is “unprecedented” and if indeed atmospheric CO2 is the primary culprit. Allow me to restate for clarification:

        So if the observed NH ice meltdown was indeed faster in the 1800s than current NH ice meltdown both in terms of volume and extent, as indicated by historical record, why the rapid meltdown then? Do those same unknown climate forcings exist today? If not CO2, then what?

      • A key question it is when attempting to determine if our current warming is “unprecedented”

        By “unprecedented” I mean hotter. The 10-year-smoothed HADCRUT record shows no date prior to 1930 that was warmer than any date after 1930. Furthermore the temperature today is 0.65 °C hotter than in 1880, which was the hottest temperature reached at any time prior to 1930.

        So if the observed NH ice meltdown was indeed faster in the 1800s than current NH ice meltdown both in terms of volume and extent, as indicated by historical record, why the rapid meltdown then? Do those same unknown climate forcings exist today? If not CO2, then what?

        What are you talking about? The Northwest Passage has been famously impassable for 500 years, ever since Henry VII sent John Cabot in search of a way through it in 1497.

        Here’s some relevant reading. If you have equally reputable sources that contradict these stories as flatly as you claim I’d be very interested in seeing them.


        European Space Agency News 2007

        BBC News 2007

        Kathryn Westcott, BBC News, 2007

        Joshua Keating, Foreign Policy (need to register but the account is free)

      • David L. Hagen

        Vaughan Pratt
        1) Re: “By “unprecedented” I mean hotter.”
        Yet the caveat: “hotter than any prior temperature in the 10-year-smoothed HADCRUT record”
        See definitions for: unprecedented
        e.g. Webster’s 1913:

        Un*prec”e*dent*ed (?), a. Having no precedent or example; not preceded by a like case; not having the authority of prior example; novel; new; unexampled. — Un*prec”e*dent*ed*ly, adv.

        Macmillan

        “never having happened or existed before”

        Unprecedented does not have the same scientific meaning as your caveat.

        The global Medieval Warm Period would qualify as a precedent.
        See: The Medieval Warm Period – a global phenomenon, unprecedented warming, or unprecedented data manipulation?

        See also the Vikings settling in Greenland.
        “By the year 1300 more than 3,000 colonists lived on 300 farms scattered along the west coast of Greenland (Schaefer, 1997.)”

        2) Re:”The Northwest Passage has been famously impassable for 500 years, ever since Henry VII sent John Cabot in search of a way through it in 1497.”
        False!
        e.g., see articles on “Northwest passage” at WUWT
        It was first traversed in 1853 by Robert McClure with the HMS Investigator, partly by sledge over the ice.

        Norwegian explorer Roald Amundsen traversed the NW passage between 1903 and 1906.
        etc.

        Regards

      • Unprecedented does not have the same scientific meaning as your caveat.

        Quite right, David, I freely admit that I was merely following the example set for us by the president two weeks ago. Mea culpa, henceforth I vow to faithfully serve the dictionary instead of the president. You have your priorities right.

        But with the scientific definition why should you and I have to quarrel over whether Easterbrook was telling the truth when we can go further back to a time where we can amicably agree over a beer that the present level is far from unprecedented?

        Rejoice, we are on our way back to the ice-free temperatures that obtained before the Azolla event 49 million years ago. During the 800,000 years of that event, CO2 plummeted from a tropical 3.5‰ (3500 ppmv) to a bone-chilling 0.65‰, a decline of 3.6 ppmv per millennium.

        To the best of our understanding of geology, that sustained rate of decline was unprecedented in your—sorry, the dictionary’s—scientific sense.

        Today CO2 is rising at a rate of 2100 ppmv per millennium. Comparing that with the scientifically unprecedented 3.6 ppmv per millennium of 49 million year ago, I call for scientific recognition of a new world record for planet Earth, of the fastest rate of climb of CO2 in the planet’s 4.5 billion year history.

        Ferns were the previous record holder. Humans have proved themselves superior to ferns. And it only took us 49 million years.

        My wife the botanist has been using the Internet to monitor the fern traffic. She suspects they’re plotting a rematch. If they can break our record she’s figured that any such whiplash event will break our collective necks.

      • During the 800,000 years of that event, CO2 plummeted from a tropical 3.5‰ (3500 ppmv) to a bone-chilling 0.65‰, a decline of 3.6 ppmv per millennium.

        Those numbers don’t look right; I think you might have dropped a zero somewhere

      • Those numbers don’t look right; I think you might have dropped a zero somewhere

        1% = .01, 1‰ = .001, i.e. parts per thousand. I prefer ‰ to % because it groups digits in threes. I’ve often seen people convert 389 ppmv to .389%; that mistake is harder to make when using ‰.

        The decline was indeed from 3500 ppmv to 650 ppmv, look it up.

      • I didn’t notice you were using parts/thousand

      • Still dancing I see. Perhaps we should clarify a few terms. Warming implies rate of temp increase. Hotter implies higher measured temperature. Rapid meltdown implies rate of ice mass loss.
        Clearly we have been in a step warming trend since emerging from the LIA circa 1800. One does not need advanced degrees in climatology to understand that in a step warming trend over a period of 200 years it will likely be “hotter” towards the end of the warming period. Very little of this trend has been attributed to AGW. What exactly did cause that first 150 years of warming if not CO2? We also expect there to be much less NH ice mass after this 200 year warming trend. As the earth warms, ice melts. No surprises.

        Re: RATE of ice mass loss… Your links regarding the current cryosphere simply do not address the question of the 19th century’s rapid rate of ice mass loss, or its causes, at all. It was greater between 1850-1890, and briefly 1940-1950, than it is today. Relevant reading as to 1800s ice mass loss and historical temp record? Why yes I believe we do:

        Historical evidence:
        http://www.nps.gov/glba/historyculture/people.htm

        Paleo evidence:
        http://westinstenv.org/wp-content/postimage/Lappi_Greenland_ice_core_10000yrs.jpg

        Re: The Northwest passage… It has been choked with ice from the LIA for the last 600 years. Of course no one has been sailing through there. Perhaps this year, after 200 years of melting ice, we will discover additional archeological evidence of Viking explorers who were navigating these high arctic waters 1000 years ago during the MWP.

        Yes indeed. I do understand and agree with the physics of radiative transfer but there are still many flies in the AGW ointment.

      • Still dancing I see. […] Clearly we have been in a step warming trend since emerging from the LIA circa 1800. One does not need advanced degrees in climatology to understand that in a step warming trend over a period of 200 years it will likely be “hotter” towards the end of the warming period. Very little of this trend has been attributed to AGW. What exactly did cause that first 150 years of warming if not CO2?

        What are you talking about, ivpo? One glance at the gold standard for global land-sea temperature, the HADCRUT3 record, for the 45-year period 1875-1920 with 16-year smoothing, shows that the temperature was plummeting during the period CO2 was having no influence.

        Seems to me you’re the one who’s dancing fancy numbers in front of us that don’t hold up under examination.

        (Only those who’ve been following my killAMO stuff will see the trick I’m pulling here. Fight fire with fire.)

      • So at the end of all that dancing, all those scientific links, you still have no explanation for the extraordinary NH ice melt off during the 1800s. Nor can you differentiate the cause of the 1800s melt off from our current ice melt off. We don’t really know why. And there is no sound evidence that those same forcings are not in effect today. I think you made my point, Vaughan.

        I actually believe your killAMO smoothing has merit but it is still very much open to misinterpretation. It does demonstrate long term warming. It does not tie that warming to CO2 until we can isolate and quantify all other causes for long term warming (such as the rapid NH ice melt off during the 1800s).

      • you still have no explanation for the extraordinary NH ice melt off during the 1800s.

        You may have missed my answer the other day to this. I cannot explain what did not happen.

    • It was the quality of the hindcast that got me over the line on this one :)

  43. Hi Vaughan,
    fresh start for us?

    A few observations about your ‘knockdown argument’, in no particular order:

    1) Human produced emissions of co2 didn’t make much difference to atmospheric levels before 1870.

    2) The recovery of global temperature from the little ice age started around 1700

    3) Even if the match between co2 and temperature were good (it isn’t), correlation is not causation.

    4) Changes in co2 level lag behind changes in temperature at all timescales. You can prove this to yourself on woodfortrees too.

    5) Because gases follow the Beer Lambert law not a logarithmic scale, co2 does not keep adding the same amount of extra warming per doubling.

    6) My own solar model matches the temperature data better, without the need for heavy smoothing of data.

    7) You haven’t considered longer term cycles such as the ones producing the Holocene climatic optimum, the Mycean, Roman, Medieval and Modern warm periods. These give you the real reason for the rise in temperature from the low point of the little ice age to now, though we don’t yet fully understand the mechanism, but we’re working on it.

    8) I’ll stop here, because I reached the smiley number.

    • Hi Vaughan,
      fresh start for us?

      Deal.

      1) Human produced emissions of co2 didn’t make much difference to atmospheric levels before 1870.

      Since 1870 is 13 years before my graph begins, how is this relevant here?

      3) Even if the match between co2 and temperature were good (it isn’t).

      Define “good.” Are you looking for an accuracy of one part in a thousand, or what? I would have thought any normal person would have seen my match as fantastic. I could hardly believe it myself when I saw it.

      Correlation is not causation.

      I never claimed otherwise. Maybe the temperature is driving up the CO2. Or maybe Leibniz’s monads are at work here. (Remember them?)

      4) Changes in co2 level lag behind changes in temperature at all timescales. You can prove this to yourself on woodfortrees too.

      What are you talking about? You seem wedded to the concept that CO2 cannot raise temperature. Do you imagine either Miskolczi or Zagoni believes that?

      5) Because gases follow the Beer Lambert law not a logarithmic scale, co2 does not keep adding the same amount of extra warming per doubling.

      I have two problems with this. You can fix the first one by correcting the Wikipedia article, which says that the law “states that there is a logarithmic dependence between the transmission (or transmissivity), T, of light through a substance and the product of the absorption coefficient of the substance, α, and the distance the light travels through the material (i.e. the path length), ℓ.”

      For the second one, gases don’t follow Beer Lambert, radiation does. Beer Lambert is applicable to any material through which radiation might pass, whether solid, liquid, gas, or plasma.
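
      (For reference, the law itself is one line: transmissivity falls off exponentially with the product αℓ, so it is -ln T, the absorbance, that is linear in path length, not T that is “logarithmic”. A minimal Python sketch with arbitrary illustrative values:)

          import numpy as np

          def transmissivity(alpha, length):
              # Beer-Lambert: fraction of radiation transmitted through a uniform absorber
              return np.exp(-alpha * length)

          alpha = 0.5                       # absorption coefficient, arbitrary units
          for length in (1.0, 2.0, 4.0):
              T = transmissivity(alpha, length)
              print(length, round(T, 4), round(-np.log(T), 4))   # -ln T = alpha * length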

      6) My own solar model matches the temperature data better, without the need for heavy smoothing of data.

      Fantastic. Your point?

      7) You haven’t considered longer term cycles such as the ones producing the Holocene climatic optimum, the Mycean, Roman, Medieval and Modern warm periods.

      Excellent point. Which of these are hotter than today?

      These give you the real reason for the rise in temperature from the low point of the little ice age to now, though we don’t yet fully understand the mechanism, but we’re working on it.

      Good to know you’re working on it. Let me know how it turns out. (I’m not holding my breath.)

      • Ferenc Miskolczi

        Pratt, you did not answer tallbloke’s question 4. Why do not you try to come up with some scientific explanation? By the way, I do not believe, but I know, and I can prove that under the conditions on the Earth the atmospheric absorption of the IR radiation is not increasing. The CO2 greenhouse effect does not operate the way you (or the IPCC) thinks.

      • Pleasure meeting you here on JC’s blog, Ferenc. Hopefully you’re sufficiently used to unkind words from others as not to mind mine, for which I can offer condolences, though the only apology I can offer is that we Australians can be disconcertingly blunt at times.

        you did not answer tallbloke’s question 4. Why do not you try to come up with some scientific explanation?

        Your “try to come up with” implies that the world is waiting with bated breath for someone to show that CO2 can elevate global temperature. I’d explain it except Tyndall already did so a century and a half ago and it would be presumptuous of me to try to add anything to Tyndall’s explanation at this late date.

        Those who’ve used the HITRAN data to estimate the impact of CO2 on climate more accurately, taking pressure broadening as a function of altitude into account, have added something worthwhile. If I think of something equally worthwhile at some point I’ll write it up. (I’ve been meaning to write up my thoughts on TOA equilibrium for some time now, and was pleased to see Chris Colose expressing similar thoughts along those lines on Judith’s blog, though I was bothered by his reluctance to take any credit for them, instead attributing them to “every good textbook” which AFAIK is not the case.)

        Now, I have a question for you. Regarding your argument that the heating effect of increasing CO2 is offset by a decrease in the flow of water vapor into the atmosphere, I would be very interested in two things.

        1. An argument worded for those like me with only half Einstein’s IQ as to how that would happen.

        2. An explanation of why reduced water vapor flow would cool rather than heat. Figure 7 of the famous 1997 paper by Kiehl and Trenberth shows more loss of heat by “evapotranspiration” than by net radiation, namely 78 vs. 390-324 = 66, in units of W/m^2. In other words the same mechanism by which a laptop’s heatpipe carries heat from its CPU to its enclosure is used by evaporation to carry heat from Earth’s surface and dump it in clouds, thereby bypassing the considerable amount of CO2 between the ground and the clouds, and this mechanism removes even more heat from the Earth’s surface than net radiation! Any impairment of that flow will tend to heat the surface (but cool the clouds).

        It is certainly the case that the *level* of atmospheric water vapor regulates heat, by virtue of water vapor consisting of triatomic molecules and therefore being a greenhouse gas. However flux and level are to a certain extent independent: you can lower the flow of water vapor from the ground to the clouds without changing the level of atmospheric water vapor simply by raining less. The water vapor then continues to heat the Earth as before, but now you’ve lost the cooling benefit of the heat pipe.

        Your problem, Ferenc, is that you have very complicated explanations that no one can follow, perhaps not even you. Granted climate science is rocket science, but rocket science starts with the idea that you can increase the momentum and hence velocity of a rocket by ejecting a little burnt fuel with equal and opposite momentum. Climate science needs to start with equally simple ideas, and you’re not helping.

      • though the only apology I can offer is that we Australians can be disconcertingly blunt at times.

        I’m an Australian and I can distinguish between being blunt and being rude. We especially frown upon playing the man, not the ball, and pushing in the back is always penalized heavily.

        Lift your game mate

      • Should that be lift your game Pontin!!

      • lol The 2 decade domination was bound to end

      • Since we’re into concern troll territory, it would be interesting to know how to interpret this one:

        > Poor Pratt [...]

        Let’s not forget this one too:

        > You may compute it yourself **if you are able to** [...]

        Source: http://judithcurry.com/2010/12/05/confidence-in-radiative-transfer-models/#comment-19575

        Since these two come from a short post with three sentences or so, and the longest one is an appeal to authority, that’s not a bad ratio, don’t you think?

        Being coy does not help to sound anything else but rude, here. Quite the contrary.

        ***

        In any case, it’s not about lifting the game, but picking an efficient strategy. Vaughan Pratt picked to be open and state his feelings candidly. This is fair game when speaking among friends. This is fairly normal considering the scientific background.

        This strategy will play against him here. It’s not a friendly discussion. It’s not even a scientific one, at least not for most readers, I surmise. Let’s not forget that these words might get read and repeated for ever and ever.

        Decorum matters. The strategy to pick should be a little more closed. For the chessplayers: think 1. Nf3 with Kramnik’s repertoire, not 1. e4 with Shirov’s.

      • @willard This strategy will play against him here. It’s not a friendly discussion. It’s not even a scientific one, at least not for most readers, I surmise.

        David Hagen astutely observed (and BH complained in like vein) that I had “attacked the messenger” (by whom I suppose DH could have meant either himself or Miskolczi, or both) when I objected to his putting Miskolczi on a pedestal with the reasoning that doing so dragged him down to Miskolczi’s level. While not pleading either guilty or not guilty, I interpreted Andy Lacis’s comment to DH,

        Your unmitigated faith in Ferenc nonsense does not speak well of your own understanding of radiative transfer.

        as essentially the same “attack,” minus my gratuitous “morons” epithet. Since no such objection was raised to Andy’s comment, I am led to wonder whether it was really my trash-talking that bothered them rather than this alleged ad hominem attack.

        But just because Andy and I are in agreement on the quality of Miskolczi’s work doesn’t make us right. For unassailable evidence of “Ferenc nonsense” we need go no further than the two numbers Miskolczi offered yesterday.

        For the global average TIGR2 atmosphere the
        P and K totals (sum of about 2000 layers) P=75.4*10^7 and K=37.6*10^7 J/m2, the ratio is close to two (2.00). You may compute it yourself if you are able to

        (These two numbers are of course PE = 754 and KE = 376 whether expressed in megajoules per square meter or joules per square millimeter. I tend to think in terms of the latter, and to include “E” for disambiguation when using ASCII. Had Unicode been an option I’d have used the respective Khmer symbols ព and គ for these two consonants if the morons programmers at Redmond hadn’t made them so ridiculously tiny.)

        One infers from Miskolczi’s second sentence a commendable passion for the sort of attention to detail that analyzing 2000 layers of atmosphere must entail. God is in the details, after all. Miskolczi’s thought that I might be incapable of mustering such passion is right on the money: my brain scrambles at the mere mention of even 1000 layers.

        But unless you belong like me to the small sect that worships the back of an envelope, God is not where I scribbled PE = mgh = 10130*9.81*7600 = 755.3 MJ/m2 where m = 10130 kg is the mass of a 1 m^2 air column, g = 9.81 is the acceleration due to gravity, and h = 7600 is a reasonable estimate of the altitude of the center of mass of a column of air, which a little calculus shows is the same thing as the scale height of the atmosphere (integrate x exp(-x) from 0 to infinity and beyond). While I may well be unable to duplicate the many thousands of calculations Miskolczi must have needed to arrive at PE = 754 megajoules from 2000 layers of TIGR2 atmosphere, third grade must have been the last time I could not multiply three numbers together, and the outcome in this case gave me no cause to doubt Miskolczi’s imputed Herculean labors in his computation of PE.

        Room remained on the envelope for two more multiplications: KE = 0.718*10.13*250 = 1818 MJ/m2 where 0.718 is the constant-volume specific heat capacity of air, 10.13 is the mass in tonnes of a square meter column of air, and 250 K (see why we needed KE?) is a rough guess at the average temperature of the atmosphere.

        This is about five times what Miskolczi calculated.

        Multiplying my figure by the 510 square megameters of the planet’s surface gives 927 zettajoules, just under one yottajoule as the total kinetic energy of Earth’s atmosphere.

        With Miskolczi’s number we get only 192 zettajoules.

        Hmm, maybe there’s an error in my math. Let’s try a completely different back-of-the-envelope way. At a room temperature of 300 K, air molecules move at an RMS velocity of 517 m/s (and a most probable speed of 422 m/s, but for energy we want RMS, and the Maxwell-Boltzmann distribution makes these quite different). Since much of the atmosphere is colder than this, a more likely RMS velocity averaged over the whole atmosphere would be something like 465 m/s or 1040 mph, twice the speed of a jet plane. The atmosphere has mass m = 5140 teratonnes, allowing us to compute the translational portion of KE as ½mv² = 0.5*5140*465² = 557 zettajoules. But translation is only 3 DOF, air molecules have two more DOFs so we should scale this up by 5/3 giving 5*557/3 = 928 zettajoules.

        Ok, I admit it, I cheated when I “guessed” 465 m/s for the typical RMS velocity of air molecules. But to get Miskolczi’s 192 zettajoules the velocity would have to be 212 m/s or 474 mph. If air molecules slowed to that speed they’d be overtaken by jet planes and curl up on the ground in a pile of frozen oxygen, nitrogen, and humiliation.
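        For anyone who wants to redo the envelope arithmetic, here is a minimal sketch that simply repeats the multiplications above, using the same rough inputs (column mass, scale height, specific heat, speeds) quoted in this comment; nothing in it is precise data.

        ```python
        # Back-of-envelope check of the PE and KE figures quoted above.
        # All inputs are the rough round numbers from the comment, not precise data.
        import math

        m_col = 10130.0   # mass of a 1 m^2 air column, kg
        g = 9.81          # gravitational acceleration, m/s^2
        h = 7600.0        # scale height ~ altitude of the column's centre of mass, m
        PE = m_col * g * h
        print(f"PE ~ {PE / 1e6:.1f} MJ/m^2")                       # ~755 MJ/m^2

        cv = 718.0        # constant-volume specific heat of air, J/(kg K)
        T_mean = 250.0    # rough mean temperature of the atmosphere, K
        KE = cv * m_col * T_mean
        print(f"KE ~ {KE / 1e6:.0f} MJ/m^2")                       # ~1818 MJ/m^2

        area = 510e12     # Earth's surface area, m^2
        print(f"KE, whole atmosphere ~ {KE * area / 1e21:.0f} ZJ")  # ~927 ZJ

        # Cross-check via molecular speeds: translational KE = (1/2) m v_rms^2,
        # scaled by 5/3 for the two rotational degrees of freedom of N2 and O2.
        m_atm = 5.14e18   # mass of the atmosphere, kg (5140 teratonnes)
        v_rms = 465.0     # rough mass-weighted RMS molecular speed, m/s
        KE_speeds = (5.0 / 3.0) * 0.5 * m_atm * v_rms**2
        print(f"KE via speeds ~ {KE_speeds / 1e21:.0f} ZJ")         # ~926 ZJ

        # RMS speed implied if the whole-atmosphere KE were only 192 ZJ:
        v_implied = math.sqrt(192e21 / ((5.0 / 3.0) * 0.5 * m_atm))
        print(f"implied v_rms ~ {v_implied:.0f} m/s")               # ~212 m/s
        ```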

        What kind of climate scientist would discount the energy of the Earth’s atmosphere to that extent?

        Ok, you say, so Miskolczi merely overlooked a factor of 5 in some tremendously complicated calculation of the kinetic energy of the atmosphere. So what else is new, people make these sorts of mistakes all the time in complicated calculations. If that’s the only mistake Miskolczi ever made he’s well ahead of the game.

        Except that (i) a climatologist really does need to be able to calculate the energy of the atmosphere more accurately than that, and (ii) Miskolczi has been claiming this for years, even when the mistake is pointed out to him. Instead he has been trying to convince Toth that the error is on Toth’s side, not Miskolczi’s. And that for a paper that Toth wrote eight months ago and that has now been accepted for publication.

        By my standards I think Andy was very kind to limit himself to “Ferenc nonsense.” Being me I would have used stronger language like “idiot” or “moron.” (Hmm, come to think of it, I did.)

        Let’s not forget that these words might get read and repeated for ever and ever.

        I wish. Just so long as their meaning is not changed by misquoting them or taking them out of context. ;)

        I like learning new stuff, and for that reason I prefer being proved wrong over right. One learns nothing from being proved right, I’m not invincible and am always happy to be vinced. On the other hand being contradicted is not the same thing as being proved wrong. But one also learns nothing from being proved wrong when one is deliberately wrong. (“I’m shocked, shocked to find that lying about climate science is going on in here.” “Your Heartland grant, sir.”)

        Decorum matters. The strategy to pick should be a little more closed. For the chessplayers: think 1. Nf3 with Kramnik’s repertoire, not 1. e4 with Shirov’s.

        Can’t be chess or we could have ended this vicious little game long ago with either one of the threefold repetition rule or the fifty-move rule (no material progress after fifty moves).

      • Unfortunately wordpress turns out not to offer the overstrike capability. Please read the first word of “morons programmers” in my preceding comment as having been overstruck.

      • Vaughan,

        Just look at what you wrote!

        > While I may well be unable to duplicate the many thousands of calculations Miskolczi must have needed to arrive at PE = 754 megajoules from 2000 layers of TIGR2 atmosphere, third grade must have been the last time I could not multiply three numbers together, and the outcome in this case gave me no cause to doubt Miskolczi’s imputed Herculean labors in his computation of PE.

        This is WAY better than saying that FM is a moron, don’t you think? Style! Zest! Gusto! A really entertaining rejoinder to his low jab, in my most humble opinion.

        If only I had a professor like that when I was younger, I would too worship the back of the envelope! Too late for me, I now prefer the back of a napkin:

        http://www.thebackofthenapkin.com/

        More seriously, here is how your trash-talking gets recycled into editorials:

        http://judithcurry.com/2010/12/05/confidence-in-radiative-transfer-models/#comment-19979

        So now “scientists are mean.” This is good news, if we’re to compare to “scientists are not even wrong” or “scientists are indoctrinators.” Still, this means that your back of the envelope shows numbers that can’t be contested. The only way out is to play the victim: see how scientists treat us, mere mortals!

        Please do not let them take that way out.

        Hoping to see more and more back-of-the-envelope doodles,

        Best regards,

        w

        PS: The chess analogy works better if we separate the bouts. It’s not impossible to make a career in repeating the same openings, over and over again. Imagine a tournament, or a series of tournaments, not a single game of chess… Besides, if one repeats oneself too much, one becomes predictable and loses, unless one’s simply driving by to cheerlead and leave one’s business card with Apollo on it ;-)

      • Vaughan Pratt is nothing but fun. It would be an honor to be called a moron by Vaughan Pratt. If only I could rise to that level.

        Willard, why do people harp on Aristotle’s fallacies? To me they’re quaint and all, but just the cowboy in me, I’m bringin’ my ad hominem attacks and my tu quoques to a bar fight. This appears to be a dust up.

      • > It would be an honor to be called a moron by Vaughan Pratt.

        Agreed.

        Nonetheless, one must then pick up on the editorializing that ensues. Just below here, for instance:

        http://judithcurry.com/2010/12/05/confidence-in-radiative-transfer-models/#comment-20271

        Or elsewhere, not far from here:

        http://judithcurry.com/2010/12/06/education-versus-indoctrination-part-ii/#comment-20019

        There are many other places. In fact, simply put this into your G spot:

        site: judithcurry.com arrogance

        Yes, even Judith is using that trick.

        Damn rhetorics!

      • David L. Hagen

        Vaughan Pratt
        Re: “as essentially the same “attack,” minus my gratuitous “morons” epithet. Since no such objection was raised to Andy’s comment, I am led to wonder whether it was really my trash-talking that bothered them rather than this alleged ad hominem attack.”
        I did raise an objection to A. Lacis.
        BOTH your “trash-talking” AND your ad hominem attacks are unbefitting of professional scientific conduct. I address to you again what I said to A. Lacis:

        Will you rise to the level of professional scientific conduct?
        Or need we treat your comments as partisan gutter diatribe?
        Your diatribes (against Miskolczi) do not speak well of your professional scientific abilities or demeanor.

        Please raise your conduct to professional scientific discourse rather than waste our time and distort science.

        I acknowledged I misunderstood the core of your concerns over the coefficient in the virial theorem.
        Please read up on how astronomy applies the classic virial theorem to calculate gas pressure, temperature and density versus radius; e.g., Advanced Astrophysics by Nebojsa Duric, Section 2.4.2, p. 35.

      • > Please raise your conduct to professional scientific discourse **rather than waste our time and distort science**. [Our emphasis]

        How professional and befitting.

      • Vaughan Pratt

        Re: “I like learning new stuff,” – My compliments.
        Re: “But just because Andy and I are in agreement on the quality of Miskolczi’s work doesn’t make us right.” Agreed:
        The one who states his case first seems right,
        until the other comes and examines him.
        Proverbs 18:17 ESV

        Re: “ and for that reason I prefer being proved wrong over right.”
        OK per your desire:
        Re: “Unless you belong like me to the small sect that worships the back of an envelope,” That is too small an object to worship.

        The danger of worshiping your envelope is in missing critical big picture details. You observe: “This is about five times what Miskolczi calculated.”

        Your error appears to be in calculating the conventional TOTAL thermal energy that you thought Miskolczi had calculated – RATHER than the single-degree-of-freedom vertical component of the kinetic thermal internal energy that Miskolczi had actually calculated.
        See further comments in my post of Aug 16, 2011.

        Professional courtesy includes first asking the author to see if an error was made, before trumpeting his “error”. Miskolczi and Toth communicated back and forth with each other and colleagues and concluded that they agreed with each other’s virial theorem for a diatomic gas within an algebraic conversion factor. So if I have made an error, please clarify and link to the evidence & solution.

        Best wishes on our continued learning.

      • “However flux and level are to a certain extent independent: you can lower the flow of water vapor from the ground to the clouds without changing the level of atmospheric water vapor simply by raining less.”

        And this works the other way too. It’s possible to have less water vapour resident in the atmosphere without lowering the flow or precipitation.

        And your point about Miskolczi’s theory was?

      • And this works the other way too. It’s possible to have less water vapour resident in the atmosphere without lowering the flow or precipitation. And your point about Miskolczi’s theory was?

        That he didn’t say which.

        (Don’t complain that I set you up, you did it to yourself.)

      • Given the context of his theory, he doesn’t need to spell out which.

        Except to you apparently. ;)

        You are the one claiming that your observation constitutes disproof, I am merely pointing out that it doesn’t.

      • Given the context of his theory, he doesn’t need to spell out which.

        It would appear you’re not following. If it’s one then Earth’s surface cools, if it’s the other it warms. Why do you believe he doesn’t need to spell out which?

  44. 1) Human produced emissions of co2 didn’t make much difference to atmospheric levels before 1870.

    Since 1870 is 13 years before my graph begins, how is this relevant here?

    In your original post you said:
    “Now anthropogenic CO2 supposedly began in earnest back in the 18th century.”
    This is potentially misleading. You could say that human emission of co2 began in earnest with the start of the industrial revolution in C18th Europe, but it’s not thought the atmospheric level was much affected by humans until the late C19th or even early C20th. So the problem for your explanation of temperature rise is accounting for the rising temperature from circa 1700 to circa 1880. What do you propose? Longer cycles with as yet unknown causation? (I won’t hold my breath for your explanation), or solar activity? Or something else?

    Let’s do these one at a time so the posts don’t get too long.

    • the problem for your explanation of temperature rise is accounting for the rising temperature from circa 1700 to circa 1880.

      How is this a problem? If you believe Arrhenius’s logarithmic dependence of the temperature of the Earth’s surface on atmospheric CO2, and Hofmann’s raised-exponential dependence of atmospheric CO2 on year, then a simple calculation at a couple of years, say 1800 and 1900, confirms the impression of those who doubt, as you correctly say, that “the atmospheric level was much affected by humans until the late C19th or even early C20th”.

      Using n = 280 ppmv for the natural base (the part of Hofmann’s formula that raises the exponential), o = 1790 as the onset year in which Hofmann says human CO2 remaining in the atmosphere reached 1 ppmv, and d = 32.5 as the number of years it then took to double to 2 ppmv, all due to Hofmann, and using binary rather than natural logs, lb rather than ln, for Arrhenius’s formula so that the answer comes out in units of degrees per doubling rather than degrees per increase by a factor of e, we have lb(n + 2^((y-o)/d)) = 8.136 and 8.182 for y = 1800 and 1900 respectively. That’s an increase of 0.046 during the whole of the 19th century. If we use a climate sensitivity of 1.8, which is what’s needed for this formula to account for the temperature rise during the 20th century, then the rise during the 19th century would have been 1.8*0.046 = 0.083 °C, of which 0.021 °C would (assuming this formula) have been in the first half of that century and 0.062 °C in the second half.

      Your impression of what people either observed or believed is confirmed by the theory.

      The same formula applied to the period from 1957 to 1990 predicts a rise in temperature of 0.28 °C. Consulting the 12-year (144-month) smoothed HADCRUT3 temperature curve at WoodForTrees.org, we see a rise of exactly 0.28 °C.

      Coincidence? Well, let’s back up to an intermediate period: 1892 to 1925. The formula promises a rise of 0.08 °C. Consulting the same smoothed data confirms that the rise was exactly that.

      One more try: 1957 should be 0.15 °C hotter than 1925. Bingo! On the nose yet again.
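      For anyone who wants to check the arithmetic rather than take it on faith, here is a minimal sketch of the same calculation, using only the constants quoted above (Hofmann’s n, o and d, and the assumed sensitivity of 1.8 °C per doubling); the HADCRUT3 comparisons still have to be read off the WoodForTrees plot by eye.

      ```python
      # Arrhenius log of Hofmann's raised exponential, using only the constants
      # quoted above: n = 280 ppmv, onset year o = 1790, doubling time d = 32.5 yr,
      # and an assumed sensitivity of 1.8 degrees C per doubling of CO2.
      from math import log2

      n, o, d = 280.0, 1790.0, 32.5
      S = 1.8  # deg C per doubling, the value assumed in the comment above

      def lb_co2(year):
          """log2 of modelled CO2 (ppmv) in a given year."""
          return log2(n + 2 ** ((year - o) / d))

      def rise(y1, y2):
          """Modelled temperature rise (deg C) between years y1 and y2."""
          return S * (lb_co2(y2) - lb_co2(y1))

      for y1, y2 in [(1800, 1900), (1892, 1925), (1925, 1957), (1957, 1990)]:
          print(f"{y1}-{y2}: {rise(y1, y2):+.2f} C")
      # 1800-1900: +0.08   1892-1925: +0.08   1925-1957: +0.15   1957-1990: +0.28
      ```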

      Caveat: these dates are at (or near, pace Judy) zero crossings of the Atlantic Multidecadal Oscillation or AMO. Other dates don’t match the formula as well unless the AMO is incorporated into the formula.

      One should also take into account the larger volcanoes, along with the extensive aerosols created when the RAF transplanted entire cities like Hamburg, Pforzheim, and Dresden into the atmosphere during World War II, not to mention the emissions from the interminable air sorties, tanks, etc., in order to get truly accurate results.

      World War I on the other hand consisted largely of 70 million infantry running around and a few bricks being thrown from planes, whose aerosols had no impact on climate whatsoever while the combined heavy breathing of the infantry may have raised the CO2 a tiny amount. (World War II might be described as World War I “done right” in the view of its instigators, with the benefit of the great advances in military technology in the intervening quarter century.)

      El Nino, solar cycles, seasonal fluctuations and other short term events, episodes, and phenomena can also be neglected because the 144-month smoothing completely removes their influence from the temperature record. This is not to understate their influence, which is large, and noticeably more traumatic by virtue of happening more suddenly!

      • All very interesting, so co2 rise does follow temperature rise quite closely, once we remove the rising and falling of the AMO, the solar cycles and other natural oscillations. Presumably its residence time flattens out these shorter fluctuations?

        Ok onto number two.
        2) The recovery of global temperature from the little ice age started around 1700

        No reply.

        Number three:
        3) Even if the match between co2 and temperature were good (it isn’t), correlation is not causation.

        You agreed to this, which is great.

        4) Changes in co2 level lag behind changes in temperature at all timescales. You can prove this to yourself on woodfortrees too.

        No reply, as Ferenc Miskolczi so kindly pointed out to you.

        5) Because gases follow the Beer Lambert law not a logarithmic scale, co2 does not keep adding the same amount of extra warming per doubling.

        I see S.o.D. has refuted this claim, which I picked up off Jeff Glassman. No doubt there are gritty details tucked in here, but I’m happy to leave it for now.

      • Look at:

        Vaughan Pratt | December 7, 2010 at 12:48 pm

        you did not answer tallbloke’s question 4. Why don’t you try to come up with some scientific explanation?

  45. May I point out the obvious: that radiative forcing cannot be measured with current technology. So a Michelson/Morley-type event has not occurred, and is unlikely to occur into the indefinite future. Neither side of this theoretical argument can prove its case.

    However, the IPCC cannot use the “scientific method” to prove it is right. By the same token, the opposing arguments cannot prove that the IPCC is wrong. It is just that, with billions of dollars at stake, it seems to me that we need to wait for proof that the IPCC is right. Which is what most of our politicians have NOT done.

    • David L. Hagen

      Jim Cripwell
      Regarding “proving”, there are methods to check. Scientists are now checking how well IPCC projections match subsequent temperatures, etc.; they show increasing divergence.
      Miskolczi (2010) above is a method of evaluating if the Global Optical Depth is increasing as expected from IPCC models. His results suggest not.

      It will also help to move IPCC to adopt stringent Principles for Scientific Forecasting for Public Policy. See
      http://www.forecastingprinciples.com/index.php?option=com_content&task=view&id=26&Itemid=129/index.html

      • David, Thank you for the support. The reason I think that this is important is that the next discussion is with respect to the rise in global temperature as a result of the change of radiative forcing, without feedbacks. Here the lack of experimental data is even more definite; it is impossible to measure this temperature rise with our atmosphere. The whole IPCC estimation of climate sensitivity has no experimental basis whatsoever. Yet, somehow, we are supposed to genuflect and pretend that the science is solid.

  46. People wondering whether climate science just doesn’t understand the basics might wonder whether Jeff Glassman on December 6, 2010 at 6:37 pm is pointing out some clear flaws.

    I’ll pick one claim which is easily tested:

    IPCC declares that infrared absorption is proportional to the logarithm of GHG concentration. It is not. A logarithm might be fit to the actual curve over a small region, but it is not valid for calculations much beyond that region like IPCC’s projections. The physics governing gas absorption is the Beer-Lambert Law, which IPCC never mentions nor uses. The Beer-Lambert Law provides saturation as the gas concentration increases. IPCC’s logarithmic relation never saturates, but quickly gets silly, going out of bounds as it begins its growth to infinity.

    Have a read of 6.3.4 – 6.3.5 of the IPCC Third Assessment Report (2001) – downloadable from http://www.ipcc.ch:

    Here is the start of 6.3.5:

    IPCC (1990) used simplified analytical expressions for the well mixed greenhouse gases based in part on Hansen et al. (1988). With updates of the radiative forcing, the simplified expressions need to be reconsidered, especially for CO2 and N2O. Shi (1992) investigated simplified expressions for the well-mixed greenhouse gases and Hansen et al. (1988, 1998) presented a simplified expression for CO2. Myhre et al. (1998b) used the previous IPCC expressions with new constants, finding good agreement (within 5%) with high spectral resolution radiative transfer calculations. The already well established and simple functional forms of the expressions used in IPCC (1990), and their excellent agreement with explicit radiative transfer calculations, are strong bases for their continued usage, albeit with revised values of the constants, as listed in Table 6.2.

    The paper that the IPCC refers to – New Estimates of Radiative Forcing due to well-mixed Greenhouse Gases by Myhre et al, GRL (1998) – has the same graph and the logarithmic expression – you can see these in CO2 – An Insignificant Trace Gas? Part Seven – The Boring Numbers.

    And you can read the whole paper for yourself.

    Myhre comments on the method used to calculate the values that appear on the graph: “Three radiative transfer schemes are used, a line by line model, a narrow band model and a broadband model.. New coefficients are suggested based on the model results.”

    The IPCC curve in the 2001 TAR report follows the values established by Myhre et al. Myhre et al simply use the radiative transfer equations to calculate the difference between 300ppm and 1000ppm in CO2. Then they plot these values on a graph. They make no claim that this represents an equation which can be extended from zero to infinite concentrations.

    Myhre et al and the IPCC following them are not making some blanket claim about logarithmic forcing. They are doing calculations with the radiative transfer equations over specific concentrations of CO2 and plotting the numbers on a graph.

    Beer-Lambert law of absorption:

    The radiative transfer equations do use the well-known Beer-Lambert law of absorption along with the well-known equations of emission. You can see this explained in basics and with some maths in Theory and Experiment – Atmospheric Radiation.
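    To make the connection concrete, here is a minimal sketch, under toy assumptions (an idealized band with exponentially decaying wings, invented for illustration rather than taken from Myhre, the IPCC, or real CO2 spectroscopy), of how Beer-Lambert absorption applied wavenumber by wavenumber can still produce roughly logarithmic growth of the band-integrated absorption with absorber amount:

    ```python
    # Toy illustration: Beer-Lambert absorption applied wavenumber by wavenumber
    # across an idealized band whose absorption coefficient decays exponentially
    # away from the band centre. The band shape and units are made up; nothing
    # here is real CO2 data.
    import numpy as np

    nu = np.linspace(-50.0, 50.0, 20001)   # wavenumber offset from band centre
    dnu = nu[1] - nu[0]
    k = np.exp(-np.abs(nu))                # exponential wings, peak normalised to 1

    def band_absorption(u):
        """Band-integrated absorption for absorber amount u (arbitrary units)."""
        tau = k * u                        # monochromatic optical depth
        return np.sum(1.0 - np.exp(-tau)) * dnu   # Beer-Lambert at each wavenumber

    previous = None
    for u in [1, 2, 4, 8, 16, 32, 64]:
        A = band_absorption(u)
        extra = "" if previous is None else f"  (+{A - previous:.2f} per doubling)"
        print(f"u = {u:3d}  A = {A:6.2f}{extra}")
        previous = A
    # Each doubling of u soon adds a nearly constant increment (about 2*ln2 ~ 1.39
    # here): exponential Beer-Lambert absorption at every single wavenumber, but
    # roughly logarithmic growth for the band as a whole once the centre saturates.
    ```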

    So the claim: “the Beer-Lambert Law, which IPCC never mentions nor uses” is missing the point of what the IPCC does. You won’t find the equations of radiative transfer there either. You will hardly find an equation at all. But you will find many papers cited which use the relevant equations.
    In fact, in publishing results from people who use this Beer Lambert law, the IPCC does use it.

    So when people write comments like this it indicates a sad disinterest in understanding the subject they appear so passionate about.

    I recommend that people read the relevant sections of the IPCC report and the Myhre paper. Well worth the effort.

  47. Perhaps the most critical scientific question relevant to future climate behavior as a function of CO2 concentration involves “climate sensitivity” – the temperature response to changing CO2, typically expressed in terms of a CO2 doubling. This combines both the initial forcing as determined with the aid of the radiative transfer principles discussed in this thread, and the feedbacks initiated by the temperature response to the forcing. In addition to the Planck response (a negative feedback dictated by the Stefan-Boltzmann equation but part of the calculations implicit in the response to the forcing alone), the most salient, at least on relatively short timescales, are the water vapor, lapse rate, ice-albedo, and cloud feedbacks. The last of these has been a subject of some controversy but has generally been estimated as positive; the ice-albedo feedback is generally considered positive but quantitatively modest; the lapse rate feedback is negative; and the water vapor feedback is generally thought to be the dominant positive feedback responsible for significant amplification of the temperature response to CO2. This positive response of water vapor is expected based on the increase in water vapor generated by the warming of liquid water.

    Expectations aside, and despite the above, the sign of the water vapor feedback has been challenged. A negative feedback due to declining water vapor in response to CO2-mediated warming would have major implications for climate behavior. Dr. Curry has stated that she plans a new thread on climate sensitivity, and so extensive discussion might be withheld until that appears. However, water vapor is relevant to some upthread commentary asserting a high degree of climate self-stabilization based on negative water vapor feedbacks. In anticipation of the future, I think it’s worth pointing out here that substantial observational data bear on this question. These data include satellite measurements demonstrating that tropospheric water vapor at all levels is increasing in relation to increasing temperatures. In addition, longer-term data are available from radiosonde measurements. Despite some conflicting results (at times cited selectively), these too indicate that as temperatures rise, atmospheric water vapor content increases, including increases in the upper troposphere where the greenhouse effect of water vapor is most powerful. At this point, the convergence of evidence strongly supports a positive water vapor feedback capable of amplifying rather than diminishing the initial effects of CO2 changes alone. The validity of theories dependent on a countervailing negative response (a decline in atmospheric water) cannot be excluded with absolute certainty, but appears to be highly improbable. I’ll provide more data and extensive references in the relevant upcoming thread.

    • Hi Fred,
      I see plenty of uncertainty so I keep an open mind on all theories in play, plus I have one of my own, which is that humidity might not be dancing to co2’s tune either positively or negatively, but might be dancing to the beat of a different drum.

      Nothing conclusive yet, but I made this plot of the NCEP reanalysis of the radiosonde data for specific humidity at the 300mb level, around the height where most radiation to space occurs, and solar activity.

      http://tallbloke.files.wordpress.com/2010/08/shumidity-ssn96.png

      I’ve been touting it around in the hope someone might have something worthwhile to say about it, so please do.

      • I’ll discuss the NCEP-NCAR reanalysis in the upcoming thread. It’s an outlier, with the other reanalyses, plus the satellite data, all showing increasing specific humidity. A major problem was changing instrumentation, which improved (i.e., shortened) the response time of the instruments, so that more recent data based on measurements at a high altitude were increasingly less contaminated with residual measurements from lower, wetter altitudes.

        I think we can agree that the issue isn’t settled with absolute certainty, but attempts to conclude that humidity has not been increasing will have to overcome a rather large body of evidence to the contrary, particularly as satellite-based trends have begun to supplement the radiosonde data.

      • Thanks Fred,
        My feeling is we should try to salvage what we can from the radiosonde data since it goes back twice as far as satellite data. Speaking of which, can you point me to any nice sites with easily accessible satellite data for such things as specific humidity?

        Thanks again.

      • You’ll have to forgive me for not replying in detail, but I will try to review more of the references a bit later. In the meantime, you might check out some of the references in AR4 WG1, Chapter 3. There’s nothing there since early 2007, but the chapter does include some interesting text and references, including the brightness temperature data comparing O2 signals with water vapor signals, based on the relatively unchanging atmospheric O2 concentration as a baseline.

        Any time data on this or any other unsettled topic are cited, it’s important to ask whether the data cited are inclusive, or whether they omit important sources that imply a different interpretation. To the best of my knowledge, the NCEP/NCAR reanalysis is the only source of extensively sampled data that conflicts with the other sources (I’m referring to observational data, not theoretical arguments, although these of course tend to go mainly with increasing humidity). If there are other important sources of observational data reinforcing the NCEP/NCAR conclusions, I hope someone will call attention to them.

      • Fred,
        This is why I asked Ferenc Miskolczi whether he believed his analysis of the radiosonde data which found a constant value for Tau confirmed the validity of the data. I hope he calls back to reply.

        Global rainfall records are hard to come by, but Australia has seen a decline in precip since 1970.

      • My feeling is we should try to salvage what we can from the radiosonde data since it goes back twice as far as satellite data.

        I agree with tallbloke on this point.

    • David L. Hagen

      Fred Moulton
      Thanks for the overview. You note:

      longer term data are available from radiosonde measurements. Despite some conflicting results (at times cited selectively), these too indicate that as temperatures rise, atmospheric water vapor content increases, including increases in the upper troposphere where the greenhouse effect of water vapor is most powerful.

      1) Per this thread, any comments on the confidence/uncertainty on those radiation evaluations?
      2) You note increasing water vapor. Yet Miskolczi (2010) above applies the available radiosonde data showing decreasing water vapor. That appears to be one major issue over his finding of stable global optical depth. Similar declining humidity trends were reported by:
      Garth Paltridge, Albert Arking & Michael Pook, “Trends in middle- and upper-level tropospheric humidity from NCEP reanalysis data”, Theor. Appl. Climatol., 4 February 2009, DOI 10.1007/s00704-009-0117-x

      Gilbert gives a theoretical basis to support this:
      William C. Gilbert, “The Thermodynamic Relationship Between Surface Temperature and Water Vapor Concentration in the Troposphere”, Energy & Environment, Vol. 21, No. 4, 2010
      http://www.eike-klima-energie.eu/uploads/media/EE_21-4_paradigm_shift_output_limited_3_Mb.pdf#page=105

      The theoretical and empirical physics/thermodynamics outlined in this paper predict that systems having higher surface temperatures will show higher humidity levels at lower elevations but lower humidity levels at higher elevations. This is demonstrated in the empirical decadal observational data outlined in the Introduction, in the daily radiosonde data analysis discussed above and explained by classical thermodynamics/meteorology relationships. . . .The key to understanding the actual observed atmospheric humidity profile is to properly take into account the physics/thermodynamics of PV work energy in the atmosphere resulting from the release of latent heat of condensation under the influence of the gravitational field. . . .
      The contents of this paper may also have relevance for the work of Ferenc
      Miskolczi [6, 7] in that his radiosonde analysis shows that over the decades, as CO2 concentrations have increased, water vapor concentrations at higher elevations have decreased yielding offsetting IR absorbance properties for the two greenhouse gases. That offset results in a constant Optical Depth in the troposphere.

      Appreciate any references addressing those differences. (With the main discussion for Curry’s future post on water vapor.)
      PS Judy – feel free to move these last two to a different post.

      • Dear David and Fred,

        Let me repeat here: It is often said that with increasing CO2 the water vapor amount must decrease to support Ferenc’s constant. No, not the amount of GHGs but their global absorbing capacity determines the greenhouse effect. The stability of tau can be maintained by changes in the distribution of water vapor and temperatures too. If there is a physical constraint on it (as Ferenc states), the system has just enough degrees of freedom to accommodate itself to this limit, and will ‘know’ which one is the energetically more ‘reasonable’ way (satisfying all the necessary minimum and maximum principles) to choose.

        Thanks,
        Miklos

      • Theoretically, I agree that water vapor could be redistributed so as to reduce optical thickness despite rising overall levels. However, the distribution of increased humidity includes the mid to upper troposphere where the greenhouse effect is most powerful, and the increased optical thickness (tau) is already evident from direct measurements relating water vapor to OLR. There will be more discussion of this in the upcoming climate sensitivity thread, but the link I cited in my first comment in this thread (not quite halfway down from the top) provides one piece of evidence.

        At this point, the climate sensitivity issue revolves mainly around the cloud feedback values (generally thought to be positive, but controversial). The positive water vapor feedback appears to be on solid ground and unlikely to be overturned, in my view, for reasons I’ve stated above.

  48. SoD wrote:
    “Myhre comments on the method used to calculate the values that appear on the graph: “Three radiative transfer schemes are used, a line by line model, a narrow band model and a broadband model.. New coefficients are suggested based on the model results.”

    Actually, you are talking about the wrong paper; it contains hardly any information or details. Better look at their previous paper, which has some details on the method:

    http://folk.uio.no/gunnarmy/paper/myhre_jgr97.pdf

    Then pay attention to Figure 2. I don’t know where they got the idea that the “choice of tropopause level” makes only a 10% difference. What I see from their Figure 2 is that, depending on where you stop calculating (and which profile is used, polar or tropical), the CO2 forcing may differ by 300%. That’s why I asked the question: to what extent has it been “established” that the global CO2 forcing is 3.7 W/m2 (which, BTW, implies an accuracy of determination of better than 3%)?

  49. Al – Not sure where you’re getting your 300%. For example, the OBIR mls curve in Fig. 2 of Myhre and Stordal (1997) has an irradiance increase of 0.11 W/m2 at an altitude of 8 km for a 5 ppm increase in CO2, and an irradiance increase of 0.10 W/m2 at an altitude of 20 km, a difference of 10% rather than 300%. 8 km is a minimum tropopause height, and 20 km is a maximum tropopause height, so values within that altitude range are the only ones that matter.

    In practice, plausible choices for global-mean tropopause height do not have nearly so broad a range, so the sensitivity of CO2 radiative forcing to tropopause height choice is much less than 10%. Myhre and Stordal’s 10% refers to the sensitivity with CFCs and other low-concentration Tyndall gases, not with CO2.

    Finally, this is not a physical uncertainty, but an uncertainty associated with an arbitrary definitional choice. The “tropopause” is an arbitrary concept with multiple definitions, so the precise value of radiative forcing at the tropopause will depend upon what definition one chooses for the tropopause. None of this affects the vertical profile of altered irradiance, the physical response of the atmosphere to the altered irradiance, nor any model-simulated response to the altered irradiance.

  50. ??? For the polar profile, the “forcing” (difference in two OLRs) is less than 0.035 W/m2 for a distant observer. If you stop calculations at 10 km for the tropics, you have 0.115 W/m2. That is a ratio of 3.28, or 328%. Or 228%, whatever.

    What do you mean, “not a physical uncertainty”? The planet gets some SW radiation, then it emits some LW to outer space, as a distant observer would measure. To get a steady state, the OLR must have a certain magnitude, all regardless of your definitions. If the composition is altered, a new OLR will be established, and the system must react to re-establish the balance. How could it be an arbitrary concept if the system needs to react either to 0.035 W/m2 or to 0.115 W/m2? I understand that you can throw in a mix of various profiles (three of them :-)), and it will lead to a narrower range of possible forcing, but stopping calculations at 8 km is not justifiable when it is known that the rest of the atmosphere would cut this imbalance in half, as Figure 2 suggests.

  51. Al Tekhasski said on December 7, 2010 at 6:14 pm

    “SoD wrote:
    “Myhre comments on the method used to calculate the values that appear on the graph: “Three radiative transfer schemes are used, a line by line model, a narrow band model and a broadband model.. New coefficients are suggested based on the model results.”

    Actually, you are talking about the wrong paper; it contains hardly any information or details. Better look at their previous paper, which has some details on the method…

    How can it be the “wrong paper”?

    It is the paper cited by the IPCC for the update of the “famous” logarithmic expression. It doesn’t explain the radiative transfer equations because these are so well-known that it is not necessary to repeat them. The paper does contain references for the line by line and band models.

    Imagine coming across a paper about gravitation calculations 50 years and 5,000 papers after the gravitational formula was discovered. People in the field don’t need to derive the formula, or even repeat it.

    Where did they get that the choice of tropopause definition makes only 10% difference?

    From Greenhouse gas radiative forcing: Effects of averaging and inhomogeneities in trace gas distribution Freckleton et al, Q. J. R. Meteorol. Soc. (1998).

    And as John N-G correctly says on December 8, 2010 at 1:52 am:

    “The “tropopause” is an arbitrary concept with multiple definitions, so the precise value of radiative forcing at the tropopause will depend upon what definition one chooses for the tropopause.”

    Picture it like this.

    Suppose you live some distance north of New York City. Some people define the center of New York City as the Empire State Building. Some define it as the location of City Hall.

    Suppose the distance from your house to New York City is 50 miles with the center defined as the Empire State Building. If someone wants to know how far from your house to New York City with the center defined as City Hall, they just have to add 3 miles.

    The choice of “center of NY” is arbitrary. The distance from your house to the Empire State Building is always the same. The distance from your house to the City Hall is always the same.

  52. I don’t know if others feel this way, but I cannot see how these theoretical discussions can ever get us anywhere. The proponents of CAGW will always want to believe that there are sound theoretical reasons for believing that a high value for a change of radiative forcing exists when CO2 doubles. Skeptics will want to believe the opposite. Without any observed data, I cannot see how the argument can be resolved. And this, of course, is the weakness of the IPCC approach. Without hard measured data, they can never use the “scientific method” to show that CAGW is real.

    • I don’t know if others feel this way, but I cannot see how these theoretical discussions can ever get us anywhere. The proponents of CAGW will always want to believe that there are sound theoretical reasons for believing that a high value for a change of radiative forcing exists when CO2 doubles. Skeptics will want to believe the opposite. Without any observed data, I cannot see how the argument can be resolved. And this, of course, is the weakness of the IPCC approach. Without hard measured data, they can never use the “scientific method” to show that CAGW is real.

      My feeling exactly, as I’ve said repeatedly on Judy’s blog.

      Only when one sees the temperature increasing logarithmically with CO2 level can one possibly begin to believe all these cockamamie theories that it “ought to.”

      What impresses me is the number of people who will deny seeing exactly that when it’s pointed out to them. Their response is, “Oh, that could be any curve,” without actually offering an alternative curve to the logarithmic one.

      Deniers are wedged into denier mode, data will not change their minds no matter how good the fit to theory.

      • Richard S Courtney

        Vaughan Pratt:

        You assert:
        “Deniers are wedged into denier mode, data will not change their minds no matter how good the fit to theory”.

        But
        Catastrophists are wedged into catastrophist mode, data will not change their minds no matter how bad the fit to theory.

        So, your point is?

        Richard

      • Nothing you will read in this thread will change your opinion one jot.

  53. When I was a young staff engineer, long before personal computers or even remote terminals, I worked on a large campus of Hughes Aircraft Company. We had a wildly popular football pool of a dozen or so games per week, handicapped with point spreads. I had a computer in my office that I used for my picks, which I posted on my office door midweek. People from different buildings would gather at my door, pads and pencils in hand, to get the computer picks. I didn’t tell them that I had used a random number generator.

    Myhre, et al. (1998) did the same thing, making the picks look more genuine by graphing some with lines and some as if they were data points for the lines. To make it more convincing, they labeled a couple of curves “approximation” and a couple of them “fit”, as if to say approximation to data or fit to data. Gunnar Myhre was an IPCC Lead Author for both the TAR and AR4.

    The equation scienceofdoom gives on his blog is ΔF = K*ln(C/C0), where K = 5.35 W/m^2 and C0 = 278 ppm. It is the bottom curve labeled “IPCC type fit to BBM results” with an rms fit of 1.08E-14 W/m^2, digitized uniformly from end-to-end with 15 points.

    The curve “IPCC type fit to NBM results” is for all practical purposes the same as the NBM results, which is logarithmic with K = 5.744 and C0 = 280.02, with an rms error = 2.46E-16 over 13 points.

    The curve “IPCC (1990) approximation” has K = 6.338, C0 = 278.555, with an rms error = 8.88E-16 over 16 points.

    The curve “Hansen (1998) approximation” has K = 6.165, C0 = 288.328, with an rms error = 7.74E-15 over 19 points.

    The data set “BBM results” has K = 5.541, C0 = 281.537, with an rms error of 6.427E-15 over all 11 legible points.

    The data set “NBM TREX atmosphere” has K = 5.744, C0 = 280.024, with an rms error of 2.463E-16 over all 13 legible points.
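    As a quick arithmetic check on the fits just listed (a sketch using the K and C0 values exactly as digitized above, so the numbers inherit whatever digitization error is present), evaluating ΔF = K*ln(C/C0) at one doubling of CO2 gives:

    ```python
    # Evaluate dF = K * ln(C/C0) per doubling of CO2 for the (K, C0) pairs
    # listed above (values as digitized in the comment, so approximate).
    from math import log

    fits = {
        "IPCC type fit to BBM results": (5.35, 278.0),
        "IPCC type fit to NBM results": (5.744, 280.02),
        "IPCC (1990) approximation":    (6.338, 278.555),
        "Hansen (1998) approximation":  (6.165, 288.328),
    }

    for name, (K, C0) in fits.items():
        dF = K * log(2.0)          # forcing for C = 2*C0, i.e. one doubling
        print(f"{name:30s}  {dF:.2f} W/m^2 per doubling")
    # The 5.35*ln(C/C0) fit gives ~3.71 W/m^2 per doubling; the older IPCC (1990)
    # and Hansen constants give ~4.3-4.4 W/m^2, which appears to be the revision
    # of constants referred to in the TAR passage quoted earlier in the thread.
    ```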

    In summary, these are all models of the failed conjecture that RF depends on the logarithm of the CO2 concentration. Who among the authors stands first to take the credit? Hansen? Scienceofdoom says,

    “Myhre et al and the IPCC following them are not making some blanket claim about logarithmic forcing.”

    Yes, they are. The message is just disguised with ornaments.

    Sod says,

    “They make no claim that this represents an equation which can be extended from zero to infinite concentrations.”

    Of course not — not explicitly. The result is obvious, and besides that observation makes the claim of the logarithm quite silly.

    Using the least IPCC curve, the one for which sod gives an equation, when the concentration gets to 1.21E16, the RF is equal to the entire radiation absorbed on the surface, 168 W/m^2. IPCC, AR4, FAQ 1.1, Figure 1.1. When the concentration gets to 1.61E30, the RF is 342 W/m^2, IPCC’s TSI estimate. Id.

    Of course, the most the CO2 concentration can be is 100%, or 1E6 parts per million, but the log equation doesn’t know that! And furthermore, the most the RF from CO2 can be is less than the ratio of the total of all CO2 absorption bands to the total band of blackbody radiation, and much less than one. But the log equation doesn’t know that either. Clearly the logarithm has limited value. How should the relation behave?

    Beer and Lambert took these and other considerations into account when they developed their separate theories that jointly proved to be a law. The band ratio is but one conservative, upper bound to the saturation effect, which can be calculated from the Beer-Lambert Law, line-by-line or band-by-band, as one might have the need and patience for accuracy and resolution.

    The logarithm model makes CO2 concentration proportional to the exponent of absorption. The Beer-Lambert Law makes absorption proportional to the exponent of CO2 concentration.

    None of Myhre’s traces comprise measurements, not even the output of the computer models, which will surprise some. You can depend on these models, the NBM and BBM, to have been tuned to produce satisfactory results in the eyes of the modeler, and in no way double blind simulations. Just like GCMs.

    Certainly, no tests have ever been conducted over the range of the graphs, 300 to 1000 ppmv. No theory supports the logarithm function, the long-held, essential but false conjecture of AGW proponents. The applicable physics, the Beer-Lambert Law, is not shown by Myhre, et al., of course. And if any investigator ever made estimates qualitatively different from the logarithm, IPCC would not have selected his results for publication, nor referenced a work that did. He probably couldn’t have been published anyway, but if he had been, the folks at IPCC would deem the journal not to have been peer-reviewed so the journal and its papers could be shunned.

    Likewise, when challenged with physics, sod turns ad hominem:

    “So when people write comments like this it indicates a sad disinterest in understanding the subject they appear so passionate about.”

    “scienceofdoom, thanks for this lucid clarification.”

    • Jeff – You are arguing the equivalent of “the global temperature cannot possibly be increasing linearly at the rate of 0.1 C/decade, because that would mean that 2881 decades ago the mean global temperature would have to have been below absolute zero”.

    • Jeff,
      Scientistofdoom, Chris Colose, Vaughan Pratt and several other ‘proper scientists’ on this blog use petty insult and high-handed sarcasm quite freely, as if it were somehow their right as ‘proper scientists’ when talking to ‘mere sceptics’, many of whom are equally well or better qualified to discuss the topics Judith has been posting about…

      • Here is Jeff Glassman’s first paragraph:

        > When I was a young staff engineer, long before personal computers or even remote terminals, I worked on a large campus of Hughes Aircraft Company. We had a wildly popular football pool of a dozen or so games per week, handicapped with point spreads. I had a computer in my office that I used for my picks, which I posted on my office door midweek. People from different buildings would gather at my door, pads and pencils in hand, to get the computer picks. I didn’t tell them that I had used a random number generator.

        Let’s wonder what is the purpose of this story.

      • Scientistofdoom, Chris Colose, Vaughan Pratt and several other ‘proper scientists’ on this blog use petty insult and high-handed sarcasm quite freely, as if it were somehow their right as ‘proper scientists’ when talking to ‘mere sceptics’

        I’ll never live that down. ;)

        But do I want to? I try my derndest to treat “mere sceptics” with the utmost kindness and discretion. But we’re not dealing with mere sceptics here, but bad ones.

        There are two ways to be a bad sceptic. One is to insist that scientific facts are not subject to democratic voting since they are immutable. The other is to acknowledge the need for scientific consensus but to insist that facts run for re-election periodically, just as we make senators run for office periodically, as a measure of protection against their becoming “inconvenient.”

        Scientific facts are neither of these; they are more like Supreme Court justices who must run the gauntlet of Congress but are appointed, if at all, for life. Theories are proposed, sceptically examined, and eventually rejected or accepted.

        Any subsequent successful impeachment is likely to lead to a Nobel prize.

        The former kind of sceptic states their version of the facts and when it is pointed out that this is not the scientific consensus objects that science is not a voting matter.

        The latter kind forces re-elections in order to vote on the matter so they can “kick the bums out,” namely the incorrect laws of science, and replace them with what they insist are the true laws. To this end they form their own contrarian party which they populate somehow with enough scientists proclaiming the new order to create the appearance of a majority opinion. There is a wide range of types of such scientists: TV weathermen, the nonexpert, Wall Street Journal subscribers, those who haven’t kept up with the literature, the confused, the willfully wrong, the uncredentialed, the dead, the pretend scientists, the fictional, and so on. And some (only a few I hope) have discovered that the trick that worked so well for getting good grades, to give the answer the professor wanted in preference to the one in the text when there’s a conflict, generalizes to research funding. (I should stress that the converse is definitely false: there are those who take oil industry money who nevertheless insist that anthropogenic global warming is a serious concern—but which of the 84 on that last list claims this, and if so how did they end up on that list?)

        I view both kinds as being mean to science. I therefore have no compunctions about being mean to them.

        But not everyone who does not believe in global warming is necessarily a sceptic. They might simply not understand the material and would like it explained to them. Scientists (those in the teaching profession anyway) should not be mean to students having difficulties; they should be friendly and helpful and insightful and patient and understanding and give everyone A’s (erm, not the last).

        But students and others who willfully feign ignorance, or insist on contradicting you with data you know from years of experience is completely wrong, count for me as sceptics. They are obstructing science, and then scientists start to act more like the police when dealing with someone who’s obstructing justice. The more egregious the offence, the meaner a scientist can get in defending what that scientist perceives to be the truth. Not all scientists perhaps, but certainly me, I can get quite upset about that sort of behaviour.

        I do try to be tolerant when I’m not 100% sure which of these two types I’m dealing with, whether interested students having difficulties or willful obstructers of science. In the case of Arfur Bryant, with whom I had many exchanges elsewhere on Judy’s blog, I tried to remain patient with what I came to suspect was his pretense at being the former kind. We reciprocated each other’s politeness, but eventually PDA suggested I was wasting my time on him, just about the time I’d come to that conclusion myself.

        Ferenc Miskolczi is a very different case from Arfur Bryant. Unlike Bryant, who claimed ignorance in the subject, Miskolczi claimed competence when his paper suggested the opposite. His misapplication of the virial theorem was (for me) only the first hint of this.

        Andy Lacis found enough errors to satisfy himself that FM’s paper did not hold water. The big problem I had was Miskolczi’s failure to address the independence of the rate of the water cycle and the amount of water vapor held in the atmosphere. You might have a huge amount of vapor going from the ground to the clouds and immediately precipitating back down leaving virtually no water vapor up there, or it might hang around up there for a long time piling up like cars in a massive traffic jam and seriously blocking infrared. Or only a small trickle might go up, but again you have the choice of how long it hangs around.

        The flow is relevant because it acts like a laptop’s heat pipe, which transfers heat from the CPU to the enclosure via evaporation at the CPU which passes along the pipe to the enclosure where it condenses. In the atmosphere the roles of CPU and enclosure are played by the Earth’s surface and the clouds respectively. Less flow means less cooling.

        But the level of water vapor in the atmosphere is also relevant because it’s a greenhouse gas. A lower level means more cooling.

        Miskolczi claimed to have shown that more CO2 would be offset by less water vapor. But without also calculating how this would impact the rate of flow no conclusion can be drawn about the impact of the “Miskolczi effect” on global warming. This is because if the flow is also reduced then you lose the 78 W/m^2 heat pipe labeled “evapotranspiration” in the famous Figure 7 of Kiehl and Trenberth’s 1997 paper. That’s the biggest loss of heat from the Earth’s surface, the second biggest being the net 390 – 324 = 66 watts of difference between direct radiation up and back radiation down.

        A different kind of problem is that the claimed effect was simply not believable, which neither the paper nor Miklos Zagoni’s YouTube explanation (http://www.youtube.com/watch?v=Ykgg9m-7FK4) addresses. Usually when you have an unbelievable result you have an obligation to offer a digestible reason to believe it.

        Unfortunately climate science is not (in my view) able to raise that kind of objection to Miskolczi’s paper without being accused of the same thing. If the point of hours or months of computer modelling is to increase confidence that global warming is happening, it’s not working for those who look at the last 160 years of crazily fluctuating temperature and say “seeing is believing.” The significant fluctuations in 19th century temperature appear to them inconsistent with the offered explanations of global warming. The only ones not put off by those fluctuations are those already convinced of global warming. This needs to be fixed.

        But I digress. Getting back to the main point, after this I did not feel inclined to treat Miskolczi kindly. But I see in retrospect that when I wrote that “putting morons on pedestals makes you a moron” (paraphrasing “arguing with idiots makes you an idiot”) I should have expanded on it with “and putting obstructionists on pedestals makes you an obstructionist” so as to offer a wider selection. I didn’t actually commit to either of these.

        Admittedly this is a bit like Dr. Kevorkian loaning out the choice of his Thanatron or a loaded revolver. But then Kevorkian did not go to jail until he was caught injecting someone himself. I did not actually say either Miskolczi or Hagen was a moron; I left it up to them to complete the reasoning as they saw fit.

        One reasonable completion would be “putting someone who believes the Clausius virial theorem applies here on a pedestal makes you such a believer.”

      • This comment (with some background) would make an interesting guest post!

        All we need is to find the appropriate blog for that…

        B-)

  54. Al – You seem to be combining a misinterpretation with an exaggeration.

    Fig. 2 in Myhre and Stordal shows three sets of curves, one for a standard tropical sounding, one for a standard midlatitude sounding, and one for a standard polar sounding. If you want to see how the CO2 irradiance change differs across the globe, you might compare the result from the polar sounding with the result from the tropical sounding. This is the calculation you’ve done, except you’ve exaggerated the difference by comparing the irradiance at one altitude (60 km) for the pole with a different altitude (10 km) for the tropics. In reality, the tropopause in the tropics is higher than the tropopause at the pole, so a more apt comparison would be the irradiance at 20 km in the tropics (0.10 W/m2) with the irradiance at 8 km at the pole (0.072 W/m2), a difference of 30-50%.

    But that’s not the issue, and that’s where the misinterpretation comes in. Nobody is going to compute the radiative forcing using the coldest temperature profile possible, and nobody is going to compute the radiative forcing using the warmest temperature profile possible. Instead, you’re going to use the global mean temperature profile (if you’re being crude), or use a range of temperature profiles that together come reasonably close to the actual temperature structure. Freckleton et al. (1998) (their Fig. 2) showed that a weighted average of three profiles gets you to within 1-2% of what you’d get with the full range of atmospheric conditions. So that’s what Myhre et al. (1998) did.

    • There is indeed an exaggeration, to draw your attention. I was trying to illustrate the possible range of magnitudes of the “forcing” effect. As I already said, you can use a mix of profiles, and the range of forcings can be smaller.

      Yet you need to explain your “unphysical” hint (slip?). I assert that the integration must always be done from the surface to infinity, because this is the actual range where the final balance (steady state) is achieved for the system. There is no “definitional” choice. Granted, you can stop calculations at a certain height if you can show that the curve reaches some asymptote and does not change anymore. Figure 2 of Myhre and Stordal (1997) shows that if you continue to calculate well past the tropopause (wherever it may be), the calculated “forcing” gets smaller by about half. The effect of your “selection of tropopause” is an exaggeration of 100%. Yet AGW proponents keep saying that 3.7 W/m2 is “well established”. Figure 2 shows that this is baloney.

      Please explain why you stop calculations at the “tropopause” when Fig. 2 shows that this leads to a 2X inflation of the estimate (even if we assume that the RT calculations were done correctly, which can also be questioned).

      • Al – Indeed such a seemingly strange choice requires justification. Here’s the scoop:

        From the stratosphere on up, atmospheric temperatures are pretty well determined by the combination of radiation absorption/emission and the horizontal and vertical motions of the air. In contrast, the troposphere’s temperature structure is pretty strongly determined by exchange of heat with the ground and oceans in combination with the small-scale and large-scale motions that redistribute heat under the constraints of dry and moist adiabatic lapse rates.

        Because of this difference, everything we know, observe, and simulate about the stratosphere shows that it adjusts fairly quickly to a radiative perturbation (on the order of a few months). However, the troposphere takes much longer, in particular because the oceans are very slow to respond to radiative forcing changes.

        Myhre and Stordal (1997) in Fig. 2 show the instantaneous calculated changes in downward irradiance due to a CO2 change. As you’ve pointed out, the irradiance change is largest at the tropopause and is smaller higher up in the atmosphere. This means that the stratosphere would have a radiative imbalance, radiating away more energy than it’s absorbing. As a result, it will cool quickly, eventually equilibrating over a few months when the decrease of upward irradiance at the top of the atmosphere has become equal to the decrease of upward irradiance at the tropopause.

        So, all in all, the IPCC figured it would be simpler to consider the radiative forcing after the stratosphere equilibrates rather than before the stratosphere equilibrates, since that’s what’s ultimately going to determine what happens in the troposphere. Here’s what they say about it.

        Footnote 1: Myhre et al. (1998) refer to the instantaneous radiative change at the tropopause as “instantaneous” and the radiative change after the stratosphere equilibrates as “adjusted”.

        Footnote 2: The substantial decrease in instantaneous irradiance change from the tropopause to the top of the atmosphere due to a change in CO2, which is what caused all this discussion, is, as far as I know, shared only by O3 among Tyndall gases, and is thus one of the fingerprint elements for climate change attribution.

      • Thanks for the reply, I missed it in the noise. I am certainly familiar with the IPCC/Hansen definition of “radiative forcings”, and I expected AGW proponents to bring it in. This definition begs a few additional questions.

        (1) You said that the stratosphere will “cool quickly, eventually equilibrating over a few months”. Given the results of some radiative models I have seen, the stratosphere has a “cooling rate” of about one degree C per day. Therefore, without some compensating heat fluxes it would cool all the way down to absolute zero in a “few months”. It does not. Therefore, the concept of the stratosphere “cooling quickly” and “readjusting to radiative equilibrium” does not exactly fit observations, would you agree?

        (2) When the IPCC says “stratospheric temperatures to readjust to radiative equilibrium”, does it mean literally that they assume no substantial convection up there? This would be very odd, because we know that CO2 is “well mixed” everywhere, more or less. The question would be, how could CO2 ever get into the stratosphere if molecular diffusion would take about 100,000 years to carry it across a 20 km layer of motionless air? Would you agree that the time and mechanism of temperature adjustment would require some substantial account of the “stratospheric dynamics” that has been neglected so far?

        (3) You say, “the troposphere’s temperature structure is pretty strongly determined by exchange of heat with the ground and oceans”. While it sounds very reasonable on the surface, experience shows that the surface-atmosphere system has a pretty fast response to direct radiative imbalances, say when clouds come by or the seasons change. Therefore, the concept of an extremely slow response of the surface-troposphere system (“typically decades”) must also be quite a stretch, would you agree?

        (4) IPCC defines: “In the context of climate change, the term forcing is restricted to changes in the radiation balance of the surface-troposphere system imposed by external factors, with no changes in stratospheric dynamics, without any surface and tropospheric feedbacks in operation … , and with no dynamically-induced changes in the amount and distribution of atmospheric water.”

        So, everything is fixed in the troposphere (including temperature) except the CO2 concentration. The forcing therefore is a discrepancy between the instant change in concentration and the underlying temperature. This forcing is expected to last “typically decades”, correct?

        Now, how would you physically bring a CO2 jump into the entire atmosphere? One would assume that turbulent mixing of convectively-stirred air is an essential means of propagating the surface-injected CO2 to the tropopause and above. Fine. This means that a new state of the system was created, where the temperature now deviates from the new state of equilibrium, and must adjust. The temperature at the emission height is now a perturbation, and must last “typically decades” to force and sustain the process of global warming. Is this a correct description?

    • From the Freckleton et al. (1998) abstract:
      “By comparison with calculations at a high horizontal resolution, it is shown that the use of a single global mean profile results in global mean radiative forcing errors of several percent for CO2 and chlorofluorocarbon CFC-12 (CCl2F2); the error is reduced by an order of magnitude or more if three profiles are used, one each representing the tropics and the southern and northern extratropics.”

      So, they had ONE “high horizontal resolution” MODEL of the atmosphere, and they calculated “forcing from 2xCO2”, which is ONE NUMBER. Then they have THREE numbers from three standardized atmospheric MODELS. Then by mixing the three numbers with three fudge coefficients, they got close to the “high resolution” number within a percent. Fantastic. (Sorry, the article is behind a paywall, so I can’t do a more detailed “review”.)

      I think I can do better than that: I could mix the three numbers to match the “high-resolution” forcing number to zero percent, with 20 zeros after the decimal point. [I am sure they tried to fit several GH gases at once, but the whole idea of parametric fudging does not fly in the first place.]

      Is this how the entire radiative forcing science operates, and how the confidence was “established” and the AGW foundation was built? Who said that their first model has the “actual temperature structure”? A few radiosondes at a handful of convenient locations launched twice a day? Sorry, this doesn’t sound serious.

  55. I guess the HITRAN database does not handle continuum and far-wing absorption particularly well, as it must have problems with weak absorption lines as well. Typical path lengths in the real atmosphere can be as long as several kilometers. On the other hand, the important frequency bands from a climatological point of view are the ones where optical depth is close to unity. Absorption at frequencies in these bands (like the so-called atmospheric window) is not easily measured in the lab, because cells used in spectroscopy have limited path lengths (several dozens of meters at most). Therefore the database of values derived from actual measurements is insufficient for algorithmic determination of atmospheric absorption/emission; one also needs extrapolation based on poorly understood models of continuum and far-wing behavior. It is not a straightforward task to verify these models with in-situ atmospheric measurements, as the trace gas content of the atmosphere is highly variable and is neither controlled nor measured with sufficient spatio-temporal resolution.

    Even some of the so called “well mixed” gases (like carbon dioxide) are not really well mixed. In the boundary layer, close to the vegetation canopy CO2 concentration can be anywhere between 300 and 600 ppm, depending on season, time of day, insolation, etc. Active plant life continuously recreates this irregularity, which is then carried away by convection and winds. Turbulent mixing and diffusion needs considerable time and distance to smooth it out and bring concentration back to its average value.

    In case of water contents, humidity of an air parcel is even more dependent on its history (time and temperature of last saturation). There is strong indication atmospheric distribution of water is fractal-like over a scale of many orders of magnitude (from meters to thousands of kilometers). Fractal dimension of this distribution along isentropic surfaces tends to decrease with increasing latitude. It is close to 2 in the tropics, but drops well below 1 in polar regions. In other words it is transformed by advection from an almost space-filling tropical distribution through a stringy one at mid-latitudes to granulous patches at poles.

    As dependence of transmittance on concentration of absorber is highly non-linear, average concentration alone does not determine atmospheric absorptivity at a particular wavelength, finer details of the distribution (like higher moments) have to be given as well. However, these are neither measured nor modeled (because spatial resolution of computational climate models is far too coarse). Even with an absorber of high average atmospheric concentration, if there are see-through holes in its distribution, average optical depth can be rather low (you can see through a wire fence easily, while a thin metal plate made of the same amount of stuff blocks view entirely).

    So no, I do not have much confidence in radiative transfer models. The principles behind them are sound, but the application is lacking.

    • On the other hand the important frequency bands from a climatological point of view are the ones where optical depth is close to unity. Absorption at frequencies in these bands (like the so called atmospheric window) is not easily measured in the lab,

      That’s very interesting, Berényi. What would you estimate as the uncertainty in total CO2 forcing as a function of this uncertainty?

      In the boundary layer, close to the vegetation canopy CO2 concentration can be anywhere between 300 and 600 ppm, depending on season, time of day, insolation, etc.

      What should we infer from this? That the Keeling curve underestimates the impact of CO2 on global warming? Are you trying to scare us?

      There is strong indication atmospheric distribution of water is fractal-like over a scale of many orders of magnitude (from meters to thousands of kilometers). Fractal dimension of this distribution along isentropic surfaces tends to decrease with increasing latitude.

      Also very interesting. How much of the impact of this effect on global temperature would you attribute to human influence? Global climate and anthropogenic global climate are not the same thing. Global climate has been going on for, what, 4.5 billion years? Anthropogenic global climate can only be compared with that on a log scale: roughly ten orders of magnitude.

      People, please get some perspective here.

    • You write “On the other hand the important frequency bands from a climatological point of view are the ones where optical depth is close to unity.”

      I think you mean “close to zero” or “not close to unity”. Optical depth is unity, when the radiation is fully absorbed or scattered.

      • (I was waiting for Berényi Péter to answer this and then forgot all about it until just now.)

        My own answer would be that unit optical depth is where the OLR is changing most rapidly as a function of the number of doublings of the absorber (e.g. CO2), and is therefore the most important depth.

        To see this, let n be the number of doublings with n = 0 chosen arbitrarily (e.g. for the CO2 level in 1915 say). Hence for general n, optical depth τ = k*2^n where k is whatever the optical thickness is at 0 doublings. Hence (assuming unit surface radiation) OLR = exp(-τ) (definition of optical depth) = exp(-k*2^n).

        We want to know for what value of n (and hence optical depth) the OLR is changing most quickly. So we take the derivative of this twice and obtain – k*ln(2)*(ln(2) − k*ln(2)*2^n)*exp(n ln(2) − k 2^n). But this vanishes when 1 − k*2^n = 0 or k*2^n = 1 (and hence n = lb(1/k)). But τ = k*2^n, so the second derivative vanishes when τ = 1, the desired result.

        The same result would have obtained had we counted triplings of absorber instead of doublings: we would instead have 3^n and ln(3) in place of 2^n and ln(2) everywhere, but in the end we would obtain τ = k*3^n = 1.

        (The naive thing would have been to do this more directly as a function of optical depth itself, but one would then find that the OLR changes most quickly when the optical depth is zero. This is ok when considering absolute levels of CO2, but not when considering dependence of temperature on optical depth, the “climatological point of view” Berényi referred to, which calls for a second exponential.)
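
        For anyone who would rather check this numerically than follow the algebra, here is a minimal Python sketch (the value of k is purely illustrative); it confirms that the OLR falls fastest, per doubling of the absorber, where the optical depth is closest to 1.

```python
import numpy as np

k = 0.25                           # optical depth at n = 0 doublings (illustrative)
n = np.linspace(-5.0, 8.0, 20001)  # number of doublings of the absorber
tau = k * 2.0**n
olr = np.exp(-tau)                 # transmitted fraction, assuming unit surface radiation

dolr_dn = np.gradient(olr, n)      # rate of change of OLR per doubling
i = np.argmin(dolr_dn)             # steepest decline
print(tau[i])                      # ~1.0, independent of the choice of k
```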

        I’m not sure where Pekka is getting his definition of optical depth from, but ordinarily it is synonymous with optical thickness and is defined with natural logs, as distinct from optical density which is the same thing but customarily with decimal logs. Absorbance is yet another name for the concept, preferred over optical density by the IUPAC, and does not commit to a particular base of log so one says (or at least the IUPAC recommends) decadic or Napierian absorbance when not clear from context.

        A material that is fully absorbing or scattering the radiation has infinite optical depth, one that allows it all to pass has zero optical depth. Optical depth is additive, so that radiation passing through depth d1 and then d2 is said to have passed through depth d1 + d2. It is a dimensionless quantity.

        For a given wavelength ν of radiation and absorbance of the atmosphere at that wavelength, an optical depth of 1 means that the fraction of photons of that wavelength leaving Earth’s surface vertically and reaching outer space is 1/e. As we saw above this is the depth where the number of escaping photons of that wavelength is most sensitive to changes in the logarithm of absorbance.

        Decreasing absorbance drives this fraction from 1/e up to 1, where the optical depth vanishes. The closer optical depth gets to 0, the less the impact of a given percentage change in the logarithm of absorbance (but the more the impact when working directly with absorbance itself).

        Conversely increasing absorbance drives this fraction from 1/e down to 0, where the optical depth tends to infinity. The closer optical depth gets to infinity, again the less the impact of a given percentage change in absorptivity, but simply because the change in number of escaping photons is negligible, it makes no difference at this end whether we’re working directly with absorbance or with its logarithm.

  56. Jeff,

    You are barking up the wrong tree. Radiative forcings are well defined by both off-line radiative transfer models and by those that are used in climate GCMs. Radiation is being computed explicitly, and does not rely on logarithmic formulas. Any logarithmic behavior that you see in model output is the result of what the radiation models produce in response to changing greenhouse gas amounts, not a constraint.

    Take a look at some examples that I posted earlier on Roger Pielke Sr’s blog

    http://pielkeclimatesci.wordpress.com/2010/11/23/atmospheric-co2-thermostat-continued-dialog-by-andy-lacis/

    We also include multiple scattering effects in our GCM radiation modeling. These go beyond what Beer-Lambert exponential extinction is designed to represent.

    • Dr. Lacis
      Not being a scientist but a casual observer, I wonder how you would reconcile, within the CO2 hypothesis:
      – the 1920-45 gentle rise in CO2 with the sharpest rise in temperature
      – the 1945-80 cooling period at the time of the steepest rise of CO2 in the recorded history.
      As far as I can see, it is not possible for the CO2 hypothesis to become an accepted theory if, out of 150 years of reliable records, the hypothesis is not supported 30% of the time.
      http://www.vukcevic.talktalk.net/CO2-Arc.htm
      re. the above linked graph: any geomagnetic hypothesis, despite its good correlation, is for the time being not a contender without a viable quantifiable mechanism.

      • Define “the CO2 hypothesis”.

      • Hi Dr. Nielsen-Gammon
        That would be something (an axiom perhaps, but a hypothesis or even a theory, not so sure). I did say: ‘Not being a scientist but a casual observer’. If I were to do that, I would be compounding one possible error with a much greater one.
        I may have naively assumed that Dr. Lacis might be able to deal with my less than precisely articulated question, but your opinion would also be welcome and appreciated.

      • In comparing the time periods 1920-45 with 1950-2000, you should take a look at Fig. 5 and Fig. 8 of Hansen et al. (2007), “Climate simulations for 1880-2003 with GISS ModelE”. The pdf is available from the GISS webpage http://pubs.giss.nasa.gov/abstracts/2007/

        There are other forcings besides the GHG increases that need to be included, especially the strong volcanic eruptions of Agung, El Chichón, and Pinatubo that provided strong negative forcing in the 1960-1990 time period.

      • Dr. Lacis
        Thank you for your prompt reply. I will certainly look into the suggested alternatives again. For a scientific theory to stand the rigorous test of time, when data of sufficient quantity and quality are available, those formulating it must be explicitly precise about every single exception, however irritating it may be.
        Otherwise a hypothesis is just that, a hypothesis, and it will be deprived of the respect and acceptance accorded to a theory.
        I am also looking forward to possible further clarification by Dr. Nielsen-Gammon.
        Thanks again.
        p.s. It was 1945-1980 I referred to, which is a very different proposition from the 1950-2000 period you quoted; as a scientist aware of the exactness required for the case you present, I am sure you would agree.

      • Dr. Lacis
        By the time the Agung volcano erupted in 1963, temperature had been falling for 15 years or longer and was already at its trough, while you would agree that El Chichón (1982) and Pinatubo (1991) were outside the period I referred to (1945-1980), when the temperature was already rising.
        The period I referred to is clearly marked on the graph
        http://www.vukcevic.talktalk.net/CO2-Arc.htm
        which you may not have taken the opportunity to look at.
        Dr. Lacis, if we (i.e. our generation) are to build a credible climate science, then its foundations must be solid and indisputable.
        Thank you for your time; I can assure you it was not wasted, as in my case it has widened my perspective on the soundness of the arguments presented.

      • By the time the Agung volcano erupted in 1963, temperature had been falling for 15 years or longer and was already at its trough,

        And your point?

        The Atlantic Multidecadal Oscillation explains this very well, Milivoje.

      • I suppose my statement of the CO2 hypothesis would be something like:

        CO2 and other non-condensing Tyndall gases, whose concentrations are increasing rapidly due to man’s influence, have become one of the strongest forcing agents on the global climate, and further increases in concentration will be large enough to cause a further increase of several degrees C within the century.

      • CO2 and other non-condensing Tyndall gases,

        Here is why I believe “greenhouse gas” is the correct name.

        1. It’s the standard name today.

        2. Greenhouses perform two functions: retaining the contained air, and trapping outgoing longwave radiation. Earth does the same thing, using gravity in place of walls and greenhouse gases in place of glass. The analogy for the former should be obvious (without gravity Earth would be a lot colder); for the latter, glass is at least triatomic (SiO2 for example), like greenhouse gases (H2O, CO2, O3, CH4, etc.). Salt windows do not trap infrared (shorter than 17 microns), being diatomic (NaCl), like O2 and N2.

        Whether greenhouses themselves exhibit the greenhouse effect was denied by R. W. Wood in the Feb. 1909 issue of Phil. Mag., and responded to in the July issue by Charles Greeley Abbot, director of the Smithsonian Observatory, whose ox Wood was goring. The same question was debated strenuously in two journals 65 years later, in 1974, with the outcome more or less consistent with what I wrote above, minus my point about gravity. More on this by googling “The atmospheric science community seems to be divided” (must be in quotes) for Craig Bohren’s perspective on this debate.

      • It is misleading to call solid NaCl diatomic. It is an ionic crystal formed from individual atoms (or ions) without any grouping to diatomic molecules.

        In ionic crystals larger scale excitations – phonons – control the interaction with infrared radiation. It is, however, true that ionic crystals are transparent to a part of the IR radiation. NaCl absorbs strongly above 17 um and the lighter LiF above 7 um. The reflection gets strong at even longer wavelengths (>30 um) and is therefore not so important.

        Normal glass is also transparent to shorter wavelengths of infrared, but the limiting wavelength is typically 2-4 um and varies depending on the type of the glass. This limit is so low that glass is indeed an efficient barrier for IR radiation.

      • Thanks for clarifying that, Pekka. Sounds like we’re in perfect agreement.

        I have a couple of NaCl windows at home that I picked up for fifty bucks each, they’re great fun to play with. If you put a patch of black insulating tape on a saucepan (without which the silvery surface doesn’t radiate much) and boil water in it, a $10 infrared thermometer (way cheaper than the salt windows) will register close to 100 °C. When you put a sheet of glass between the thermometer and the saucepan the temperature plummets 60-70 degrees. But when you put a salt window between them there is hardly any difference.

        Hey, I’m just a retired computer scientist having fun doing the stuff I was trained for in college half a century ago before I discovered the joy of computing.

      • Incidentally, in connection with the molecular structure of NaCl, Arrhenius’s logarithmic dependency of the Earth’s surface temperature on atmospheric CO2 level was not his only “disruptive” contribution. Back when it was assumed that NaCl in solution consisted of diatomic molecules floating around among the water molecules, Arrhenius argued that they dissociated into Na⁺ and Cl⁻ ions, as an explanation of why salt lowers the freezing point of water. British chemist Henry Armstrong disagreed, arguing instead that NaCl associated with water to form more complex molecules. That’s the simplified version, the longer version is more complicated.

        As Pekka points out, the NaCl molecules lose their identity as such in the crystalline form. One can still pair them up, but not uniquely: there are six possible (global) pairings, one for each of the six Cl neighbors of each Na atom (or vice versa), since rock salt forms a face-centered cubic lattice. Pairing one Na with one Cl determines all remaining pairings (assuming no dislocations).

    • A. Lacis wrote: “Radiative forcings are well defined by both off-line radiative transfer models, and by those that are used in climate GCMs. Radiation is being computed explicitly”

      It is like saying that since my calculators have very accurate algorithms to compute exponential and logarithmic functions explicitly with 20-digit accuracy, I can now calculate anything with the same accuracy, be it ocean heat content, or annually-averaged CO2 flux across the ocean surface, etc. Or global OLR. Don’t you see a big lapse in logic in your statements?

  57. David Hagen, 12.6.10 7:29 pm, 7:31 pm

    Miskolczi (2010) is about tau_sub_a, the Greenhouse-Gas Optical Thickness. He says,

    >> The relevant physical quantity necessary for the computation of the accurate atmospheric absorption is the true greenhouse-gas optical thickness . The definition and the numerical computation of this quantity for a layered spherical refractive atmosphere may be found in Miskolczi [4]. P. 244.

    Miskolczi [4] is Miskolczi, F.M. “Greenhouse effect in semi-transparent planetary atmospheres”, J. Hungarian Met. Serv., v. 111, no. 1, 2007, pp. 1-40. Miskolczi (2010) relies on [4] on pp. 244, 248 (2), 253 (2), and 259. He also includes as [11], Miskolczi, F.M. and M.G. Mlynczak, “The greenhouse effect and the spectral decomposition of the clear-sky terrestrial radiation”, J. Hungarian Met. Serv., v. 108, no. 4, 2004, pp. 209-251, but with no citations in the paper.

    In response to a reader’s invitation, I recently reviewed [4] and [11] jointly. The review can be read at IPCC’s Fatal Errors in response to a comment on 1/14/10. Google for Miskolczi at http://www.rocketscientistsjournal.com. My conclusions include that the author used a definition of greenhouse effect that was different than IPCC’s, that he tried to fit data from closed-loop real world records in an open-loop model, that he used satellite radiation measurements mistakenly as a transfer function, and that he forced his transfer function arranged inside a control loop to do the work of the entire control loop. He concludes,

    >>>> The theoretically predicted greenhouse effect in the clear atmosphere is in perfect agreement with simulation results and measurements. [11], Miskolczi (2004), p. 209.

    To which I responded,

    >>Just as a matter of science, Miskolczi goes too far. An axiom of science in my schema is that every measurement has an error. A more concrete observation is that his greenhouse effect is for a clear atmosphere, meaning cloudless, but he cannot possibly have had such measurements.

    The concluding exchange reads as follows:

    >>>>[I]t is difficult to imagine any water vapor feedback mechanism to operate on global scale. [4], Miskolczi 2007, p. 23.

    >>>>On global scale, however, there can not be any direct water vapor feedback mechanism, working against the total energy balance requirement of the system. [4], Miskolczi 2007, p. 35.

    >>There is precisely a water vapor feedback mechanism in the real climate. Miskolczi’s work has been productive. It has discovered the existence of the powerful, negative, water vapor feedback. Specific humidity is proportional to surface temperature, and cloud cover is proportional to water vapor and CCN (cloud condensation nuclei) density, which has to be in superabundance in a conditionally stable atmosphere, but which is modulated by solar activity. In the end, Miskolczi, and hence Zágoni, share a fatal error with IPCC. The fatal result is independent of the mathematics. One cannot accurately fit an open loop model to closed loop data.

    Water vapor feedback works through cloud albedo to be the most powerful feedback in climate. It is positive and fast because of the burn off effect to amplify solar variations. It is negative because warming increases humidity, and slow because of the high heat capacity of surface waters. This negative feedback regulates the global average surface temperature to mitigate warming from any cause. It has not been discovered by IPCC.

    I conclude that [4], Miskolczi (2007), is an essential foundation of Miskolczi (2010), so the latter inherits fatal errors from its parent.

    Specifically you invited me to examine Figure 10 and sections 3 and 4 (probably pp. 257-260) in Miskolczi (2010). Miskolczi is here testing a model that says greenhouse absorption should be proportional to layer thickness, and that layer thickness should increase optical thickness. He says,

    >>To investigate the proposed constancy with time of the true greenhouse gas optical thickness, we now simply compute tau_sub_a every year and check the annual variation for possible trends. In Fig. 10 we present the variation in the optical thickness and in the atmospheric flux absorption coefficient in the last 61 years.

    From which he observes,

    >>The correlation between tau_sub_a and the top altitude is rather weak.

    He leaves to the reader the task of visual correlation. I would not venture a guess about the correlation between the two records in Figure 10. However, they could be made to appear much more correlated by adjusting the vertical scales to emphasize that effect. This is what IPCC does. Correlation is mathematical, and he should compute it.

    Miskolczi concludes

    >> In other word, GCMs or other climate models, using a no-feedback optical thickness change for their initial CO2 sensitivity estimates, they already start with a minimum of 200% error (overestimate) just in Δtau_sub_a.

    Besides visual correlation, a candidate for the most common error in data analysis is to take differences of a noisy signal, then attempt to fit a function to the differences. Correlating one record with another is wholly analogous to fitting a function to a record. This error is found every day in economics, and in fields like drug and food studies where the investigator attempts to find a best-fit probability density. Engineers quickly learn not to design a circuit to differentiate a noisy signal. Taking differences amplifies the noise, and attenuates the signal. One mathematical manifestation of the problem is that a sample probability density, the density histogram, is not guaranteed to converge to the true population density as the number of samples or cells increases. Not so, the probability distribution! Spectral densities will not converge, but spectra will. A well-behaved spectrum often has an impossible spectral density, as, for example, whenever line spectral components are involved. The better technique then is to fit a function to the total signal, to the cumulative probability, or to the spectrum, and then if needed or for human consumption, to differentiate (take differences of) the best fit function.

    My advice to all is always be suspicious of analysis from data that are differences, anomalies, or densities.
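
    To make the point concrete, here is a small synthetic example in Python (the trend and noise levels are invented for illustration): a trend that dominates the raw record becomes invisible once the record is differenced.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
trend = 0.01 * np.arange(n)            # slow trend: rises by ~10 over the record
noise = rng.normal(0.0, 1.0, n)        # measurement noise, standard deviation 1
series = trend + noise

# raw record: trend range ~10 versus noise std ~1, so the trend is obvious
print(round(np.ptp(trend), 1), round(noise.std(), 2))

# differenced record: the trend contributes only 0.01 per step, while the noise
# std grows to sqrt(2) ~ 1.4, so differencing attenuates signal and amplifies noise
print(round(np.diff(trend).mean(), 3), round(np.diff(series).std(), 2))
```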

    By taking differences of the optical thickness and of atmospheric absorption data, Miskolczi is differentiating noisy signals. As discussed above, he should first fit analytical functions to each data record, such as power series or orthogonal series, such as Fourier series. In the best of worlds, these might reveal an analytic relation between the signals. Regardless, he can next detrend the signals with parts of his analytical functions to perform a full correlation analysis, providing a scatter diagram and graphical and numerical demonstrations of linear predictors and of data correlation. After this is done, his paper might be ripe for conclusions.

    • David L. Hagen

      Jeff Glassman
      re: “By taking differences of the optical thickness and of atmospheric absorption data, Miskolczi is differentiating noisy signals. As discussed above, he should first fit analytical functions to each data record,”

      Please clarify where you see Miskolczi “taking differences in optical thickness” or “differentiating noisy signals.”
      I think you have misinterpreted his papers.
      I understood him to actually have “first fit analytical functions to each data record,” – e.g. of the atmospheric profile based on TIGR radiosonde data.
      Then he calculates the optical absorption for each of 150 layers.
      Next he INTEGRATES this (adds), not differentiates (subtracts) – to get the global optical depth.

      His correlations are fitting parameters to the observed radiosonde data processed to give optical absorption. The differences he takes are after taking these parameters, or after finding and rounding the correlations between the fluxes.

    • David L. Hagen

      Jeff Glassman
      re Cloud feedback. You note: “Specific humidity is proportional to surface temperature, and cloud cover is proportional to water vapor and CCN (cloud condensation nuclei) density, which has to be in superabundance in a conditionally stable atmosphere, but which is modulated by solar activity.”

      Do you have any way to clearly quantify this? Or papers supporting it?
      e.g. Roy Spencer critiques conventional assumptions that clouds dissipate with warming giving a positive feedback.

  58. A Lacis, 12/8/10, 11:49 am

    Wrong tree?

    What I said was “Radiative forcing in a limited sense applies radiative transfer, but it is not the same.” How can I parse what you have written to see what the wrong tree is?

    The core, the heart, the essence of the AGW model is the existence of a climate sensitivity parameter, in one form or another. It is the Holy Grail of AGW. Sometimes it’s the transient CSP, sometimes the equilibrium CSP, and sometimes just the vanilla CSP. Sometimes it is represented by λ, and sometimes not. IPCC says,

    >>The equilibrium climate sensitivity is a measure of the climate system response to sustained radiative forcing. It is not a projection but is defined as the global average surface warming following a doubling of carbon dioxide concentrations. It is likely to be in the range 2ºC to 4.5ºC with a best estimate of about 3ºC, and is very unlikely to be less than 1.5ºC. Values substantially higher than 4.5ºC cannot be excluded, but agreement of models with observations is not as good for those values. Water vapour changes represent the largest feedback affecting climate sensitivity and are now better understood than in the TAR. Cloud feedbacks remain the largest source of uncertainty. {8.6, 9.6, Box 10.2} AR4, SPM, p. 12.

    This puts the parameter as a response to an RF. In another expression, IPCC turns the relation around a bit, saying

    >>The simple formulae for RF of the LLGHG quoted in Ramaswamy et al. (2001) are still valid. These formulae are based on global RF calculations where clouds, stratospheric adjustment and solar absorption are included, and give an RF of +3.7 W m–2 for a doubling in the CO2 mixing ratio. (The formula used for the CO2 RF calculation in this chapter is the IPCC (1990) expression as revised in the TAR. Note that for CO2, RF increases logarithmically with mixing ratio.) 4AR ¶2.3.1 Atmospheric Carbon Dioxide, p. 140.

    Of course RF increases logarithmically with mixing ratio! The relation RF(2C) = a constant + RF(C) is a functional equation, and its unique solution is the logarithm function; the base is irrelevant. Generally, the solution to y(kx) = constant + y(x) is the logarithm, and in the AGW world the standard k is a doubling of x, the concentration of CO2, almost always. The constant is the climate sensitivity parameter when k = 2.
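
    As a simple check of that functional-equation point, here is a short Python sketch using the widely quoted simplified CO2 expression of Myhre et al. (1998), RF = 5.35*ln(C/C0) (the sample concentrations are illustrative): the increment RF(2C) - RF(C) is the same constant, about 3.7 W/m2, whatever C is.

```python
import math

alpha = 5.35        # W/m^2, simplified CO2 expression of Myhre et al. (1998)
C0 = 280.0          # ppm, pre-industrial reference (illustrative choice)

def rf(C):
    """Radiative forcing relative to C0 under the logarithmic fit."""
    return alpha * math.log(C / C0)

# RF(2C) - RF(C) is the same constant for every C: alpha*ln(2) ~ 3.7 W/m^2,
# which is the "forcing per doubling" the functional equation describes
for C in (300.0, 400.0, 560.0):
    print(round(rf(2 * C) - rf(C), 3))
```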

    Safe to say, all the major computer models used by IPCC produce a constant climate sensitivity parameter. A table of the models studied in the Coupled Carbon Cycle-Climate Model Intercomparison Project (C4MIP) with their transient climate sensitivity parameter is AR4, Table 7.4, p. 535. IPCC says,

    >>The equilibrium climate sensitivity estimates from the latest model version used by modelling groups have increased (e.g., CCSM3 vs CSM1.0, ECHAM5/MPI-OM vs ECHAM3/LSG, IPSL-CM4 vs IPSL-CM2, MRI-CGCM2.3.2 vs MRI2, UKMO-HadGEM1 vs UKMO-HadCM3), decreased (e.g., CSIRO-MK3.0 vs CSIRO-MK2, GFDL-CM2.0 vs GFDL_ R30_c, GISS-EH and GISS-ER vs GISS2, MIROC3.2(hires) and MIROC3.2(medres) vs CCSR/NIES2) or remained roughly unchanged (e.g., CGCM3.1(T47) vs CGCM1, GFDLCM2.1 vs GFDL_R30_c) compared to the TAR. In some models, changes in climate sensitivity are primarily ascribed to changes in the cloud parametrization or in the representation of cloud-radiative properties (e.g., CCSM3, MRI-CGCM2.3.2, MIROC3.2(medres) and MIROC3.2(hires)). However, in most models the change in climate sensitivity cannot be attributed to a specific change in the model. 4AR, ¶8.6.2.2, p. 630.

    Safe to say, every climate model produces a climate sensitivity parameter. You don’t have to read esoteric papers on absorption to find the logarithm dependence. The very notion that such a constant exists is the same as assuming that radiative forcing is proportional to the logarithm of the gas concentration. Further, recognizing that the effect of CO2 is the absorption of infrared lost from the surface, we have the key underlying assumption of all of AGW: that the absorption of IR by CO2 is proportional to the logarithm of the CO2 concentration.

    No matter how these models might have been mechanized, whether computing a radiation transfer or not, whether mechanizing the atmosphere as one or many layers, whether making actual computations or parameterizing, they produce the logarithm dependence. Like Captain Picard, IPCC said, “Make it so.”

    The assumption is false. That is the wrong tree up which I am barking.

    • Jeff,

      I think you would get to understand radiative transfer, and radiative transfer effects and issues, a whole lot better if you took the time to read radiative transfer papers from the published literature (e.g., JGR, GRL, J. Climate, or IPCC), or checked out the posted background information on blogs like Real Climate, Chris Colose, or Roger Pielke, Sr., instead of spending time perusing such papers as those by Miskolczi dealing with his mistaken interpretation of the greenhouse effect.

      There is good reason why Miskolczi’s papers are not getting published in the mainstream scientific literature. These journals try very hard not to publish material that they know to be erroneous.

    • No matter how these models might have been mechanized, whether computing a radiation transfer or not, whether mechanizing the atmosphere as one or many layers, whether making actual computations or parameterizing, they produce the logarithm dependence.

      Jeff, you can forget completely about models. Observation of the temperature after subtracting the 65-year AMO shows an impressively accurate fit to the logarithm dependence when the Hofmann formula 280 + 2^((y-1790)/32.5) is used for CO2.

      Not only does CO2 have a measurable effect on climate, that effect is without any shadow of doubt logarithmic. The fit is far too good to have any other explanation.

      If you don’t believe this, how do you account for the fact that the HADCRUT temperature record with 10-year smoothing is now 0.65 °C above any temperature attained prior to 1930? This even applies to 1880, the highest temperature between 1850 and 1930. Is God holding a soldering iron over us, has the Sun very suddenly gotten far hotter than in the last 3 million years, has the Devil decided the Apocalypse is nigh and is slowly boiling us like frogs in a pot, or what?
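
      For what it’s worth, here is a minimal Python sketch of the Hofmann formula exactly as quoted (the sample years are arbitrary), together with the log-CO2 quantity that a logarithmic forcing law would track.

```python
import math

def hofmann_co2(year):
    """CO2 (ppm) from the Hofmann formula quoted above: 280 + 2^((y-1790)/32.5)."""
    return 280.0 + 2.0 ** ((year - 1790.0) / 32.5)

for y in (1850, 1900, 1950, 2000, 2010):
    c = hofmann_co2(y)
    print(y, round(c, 1), round(math.log(c / 280.0), 4))
```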

  59. Willard 12/8/10 12:07

    You need wonder no more! The answer to the purpose of the (true) little parable is in the very next paragraph. Do read on!

    Myhre, et al. added labels and symbols to their graph, the one that charmed scienceofdoom, to give the appearance that climate researchers were approximating data and fitting curves to data. They represented the output of a couple of computer models as data. They did this because people with under-developed science literacy put great stock in the output of computers. This bridges from football poolers to IPCC’s audience.

    Computer models are grand and essential. What is missing in these examples is the notion that computer models, like all other scientific models, must make significant predictions which then survive the tests of validation. This is the scientific process of advancing from hypothesis to theory.

    My contention is that only theories can be used ethically for public policy. IPCC fails this test.

    • Jeff Glassman,

      The answer to what you do with your paragraph lies in the sentence that immediately follows it:

      > Myhre, et al. (1998) did the same thing, making the picks look more genuine by graphing some with lines and some as if they were data points for the lines.

      So Myhre’s “pick” amounts to using a number generator. As far as I can see, there are two ways to interpret this. Either it’s meant literally, in which case it really looks like a caricature. Or it’s meant as a way to express sarcasm, i.e. Myhre’s pick is no better than a random choice. My personal impression is that you are expressing sarcasm, the figure of speech that Tallbloke was condemning.

      I underlined this story of yours (which I liked, btw) to show that caricature or sarcasm is common down here. Complaining that scientists are the ones who indulge in that kind of trick, here and in general, amounts to cherry-picking. The habit is more general than that. Style matters, mostly, as far as I am concerned. As long as one is willing to invest some time to entertain the gallery in a most pleasant way, I don’t mind much.

  60. Hi Jeff,
    A fair bit of this analysis is over my head, but I wondered if you could just clarify this statement for me:

    “Miskolczi’s work has been productive. It has discovered the existence of the powerful, negative, water vapor feedback. Specific humidity is proportional to surface temperature”

    Proportional to surface temperature at which pressure level?
    I assume we are talking about Miskolczi’s analysis of the radiosonde data?

  61. Rather than respond individually, I’d like to make a few points relevant to several comments above.

    1. One of the striking elements of Judith Curry’s post is the linked references to multiple sources demonstrating the excellent correspondence between radiative transfer calculations and actual observations of IR flux as seen from both ground-based and TOA vantage points. This correspondence is based initially on line-by-line calculations utilizing the HITRAN database. Models based on band-averaging are less accurate but still perform well. Empirically, therefore, the radiative transfer equations have served an important purpose in representing the actual responses of upward and downward IR to real-world conditions.

    2. It is universally acknowledged that the roughly logarithmic relationship between changes in CO2 concentration and forcing applies only within a range of concentrations, but that range encompasses the concentrations of relevance to past and projected future CO2 scenarios. It does not necessarily apply to other greenhouse gases, although water appears to behave in a roughly similar manner.

    As far as I know, that logarithmic relationship can’t be deduced via any simple formula. Rather, it represents the shape of the curve of absorption coefficients as they decline from the center of the 15 um absorption maximum into the wings. As CO2 concentrations rise, the maximum change in absorption moves further and further into the wings, and since the absorption coefficients there are less than at the center, the effect of rising CO2 assumes a roughly logarithmic rather than linear curve.

    3. A point was made earlier about the difficulty of laboratory determination of absorption coefficients relevant to atmospheric concentrations where the tau=1 relationship holds. Not being a spectroscopist, I can’t give an informed opinion on this, but I wonder whether this couldn’t be approached by measuring absorption in the laboratory in the relevant frequency as a function of concentration, pressure, and temperature, so as to derive a useful extrapolation. If someone here has spectroscopic expertise, he or she should comment.

    • Fred,

      Concerning your point 2. One example that leads to the logarithmic relationship is a strong absorption line with exponential tail. For this example it is possible to derive the result analytically.

      I do not try to claim that this is a correct model, but the derivation may help in understanding how broadening of a saturating absorption peak leads to the logarithmic relationship.

      • Pekka Pirilla – Maybe I’m misinterpreting your point, but the main CO2 absorption band centered around 15 um contains hundreds of individual lines, representing the multitude of quantum transitions singly and in combination that CO2 can undergo. The 15 um line represents a vibration/rotation transition. As one moves in either direction away from 15 um, the lines are weaker, because the probability of a match between photon energy and the energy needed for that transition declines. As a result, IR in those wavelengths must encounter more CO2 molecules in order to find a match. Absorption is so efficient at 15 um that more CO2 makes little difference at that wavelength (surface warming is a function of lapse rate, but the lapse rate at the high altitude for 15 um emissions is close to zero). In the wings, however (e.g., 13 um or 17 um), more CO2 means more absorption and greater warming. The logarithmic relationship appears to reflect the fact that increasing CO2 more and more involves absorption wavelengths of lower and lower efficiency – those further and further from 15 um.

        Note that we are talking about the breadth of the absorption band with its many lines. The term “broadening” generally refers to the increasing width of individual lines in response to increases in pressure or temperature.

        Finally, the absorption within a single line (i.e., monochromatic absorption) follows the Beer-Lambert law of exponential decay as a function of path length, but this is not the source of the logarithmic relationship we are discussing. Indeed, in the atmosphere, absorption is followed by emission (up, down, or sideways), followed by further absorption and so on, which is why the radiative transfer differential equations rather than a simple absorption-only paradigm must be used.

        I may not have addressed your point, but I’m hoping to clarify what happens in the atmosphere for individuals unfamiliar with the spectral range of absorption or emission involving greenhouse gas molecules.

      • Fred Moolten,
        The simple mathematical example that I was referring to applies to a situation where the absorption is fully saturated in the center of the band and the tails have an exponential form. For this kind of absorption peak applying Beer-Lambert law to the tails, gives as an analytical result the logarithmic relationship between concentration and transmission through the atmosphere.

        The fact that the logarithmic relationship is approximately valid also in the LBL models may be interpreted to mean that weaker and weaker absorption becomes effective at approximately the same relative rate as in exponential tails.
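
        A minimal numerical sketch of that toy model, in Python (line strength and tail width are arbitrary): once the line centre is saturated, each doubling of the absorber adds a nearly constant increment to the band absorption, which is exactly the logarithmic behaviour.

```python
import numpy as np

# toy line: absorption coefficient with exponential tails, k(v) = k0*exp(-|v|/w)
k0, w = 1.0e3, 1.0                   # strong centre, tail width (illustrative)
v = np.linspace(-40.0, 40.0, 400001) # frequency offset from the line centre
k = k0 * np.exp(-np.abs(v) / w)

def band_absorption(u):
    """Equivalent width: integral of (1 - transmission) over the band for absorber amount u."""
    return np.trapz(1.0 - np.exp(-u * k), v)

# each doubling of u adds roughly 2*w*ln(2) ~ 1.386 to the equivalent width
for u in (1.0, 2.0, 4.0, 8.0, 16.0):
    print(u, round(band_absorption(2 * u) - band_absorption(u), 3))
```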

      • Alexander Harvey

        Pekka,

        I believe that the existence of an abundance of lines, distributed widely across a range of many orders of magnitude in strength, would by itself tend to give rise to a logarithmic-type response to increasing concentrations over a wide range. This may be similarly important, more important, or less important than the side-band effect; I simply do not know.

        Alex

      • Alex,
        The result should be the same if the strength distribution of the lines is suitable. My intuition tells me that it should be such that the PDF of the logarithm of the line strengths is flat over the relevant range.

      • Absorption is so efficient at 15 um that more CO2 makes little difference at that wavelength (surface warming is a function of lapse rate, but the lapse rate at the high altitude for 15 um emissions is close to zero).

        If you led the world’s theoretical physicists out to a courtyard and shot them all, I believe you would seriously set physics back.

        I cannot say the same for climate science. The ratio of theorizing to observation is totally out of hand.

        Admittedly Fred Moolten’s theorizing is bordering on the crackpot. However it seems to me that even highly respected theoretical climate scientists are undermining the credibility of their field with calculations that underestimate the environment’s complexity.

        Theoretical economists have a similar problem. It’s a good question whether the economy or the climate is computationally more intractable in that regard. They’re both incredibly complicated systems that theorists love to oversimplify.

    • David L. Hagen

      Fred Moolten
      Appreciate your clarifications. You note above:
      “Despite some conflicting results (at times cited selectively), these too indicate that as temperatures rise, atmospheric water vapor content increases, including increases in the upper troposphere where the greenhouse effect of water vapor is most powerful.”
      I can see how absorption can vary with altitude as concentration changes. e.g. Essenhigh calculates for 2.5% water vapor vs 0.04%. (I can see how altitude variations would affect the relative absorption heating and the temperature lapse rate – and in turn adjust clouds.)

      However, the Beer-Lambert law shows the log of Io/I to change as the product of concentration and depth.

      Question:
      As long as that total concentration x depth remains constant, does the total absorption change depending on how the concentration is distributed?

      • Actually, for a single absorber and no emission, very little. I published a proof of that once. The total Beer’s Law absorption is pretty much dependent on the mass of absorbing material along the ray, however distributed.

        However, it’s different with competing absorbers and thermal emission happening as well.
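
        A minimal sketch of that point (monochromatic, single absorber, no emission; all numbers illustrative): layer-by-layer Beer-Lambert transmission collapses to a single exponential in the column amount, so how the absorber is distributed along the ray does not matter.

```python
import numpy as np

k = 0.02                             # monochromatic absorption coefficient (illustrative units)
z = np.linspace(0.0, 20.0, 2001)     # path coordinate, km
dz = z[1] - z[0]

rho_uniform = np.full_like(z, 5.0)   # absorber spread evenly along the path
rho_shallow = np.exp(-z / 2.0)       # same total amount, packed near the surface
rho_shallow *= np.sum(rho_uniform * dz) / np.sum(rho_shallow * dz)

for rho in (rho_uniform, rho_shallow):
    T = np.prod(np.exp(-k * rho * dz))            # product of layer transmissions
    print(round(np.sum(rho * dz), 1), round(T, 4))
# identical column amounts give identical transmission, however distributed
```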

      • David,

        Water vapor absorption is line absorption with the water vapor line widths being linearly proportional to atmospheric pressure (P/Po). This makes water vapor a less efficient absorber with decreasing pressure. Thus, the same amount of water vapor near the tropopause will absorb a lot less solar radiation than if that same amount of water vapor was at ground level.

        Also, since water vapor absorption is line absorption, it therefore does not follow the Beer-Lambert law, except on a monochromatic basis. To get the atmospheric absorption right in the presence of strongly varying absorption with wavelength, you need to either do line by line calculations, or use a correlated k-distribution approach.
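
        A toy illustration of that last point, in Python (the two-interval “band” is invented, not a real spectrum): once absorption coefficients vary across the band, the band-mean transmission is an average of exponentials and no longer follows a single Beer-Lambert exponential, which is what line-by-line and correlated k-distribution methods are designed to handle.

```python
import numpy as np

# two spectral intervals of equal width with very different absorption coefficients
k = np.array([10.0, 0.1])            # per unit absorber amount (illustrative)
u = np.linspace(0.0, 5.0, 6)         # absorber amounts along the path

band_T = np.exp(-np.outer(u, k)).mean(axis=1)   # true band-mean transmission
beer_T = np.exp(-u * k.mean())                  # single "effective k" Beer-Lambert

for ui, bt, gt in zip(u, band_T, beer_T):
    print(ui, round(bt, 4), round(gt, 4))
# the band mean flattens once the strong interval saturates; no single
# exponential in the absorber amount can reproduce that curve
```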

      • David L. Hagen

        A. Lacis
        “This makes water vapor a less efficient absorber with decreasing pressure.”
        Thanks Andy, that is a clear physical reason for the difference.

        Re: “To get the atmospheric absorption right . . . you need to either do line by line calculations, or use a correlated k-distribution approach.”

        1) I would welcome any comments/references you might have as to the relative accuracy of LBL vs k-distribution calculations, especially any good reviews.

        2) I found:
        Intercomparison of radiative forcing calculations of stratospheric water vapour and contrails
        GUNNAR MYHRE et al. Meteorologische Zeitschrift, Vol. 18, No. 6, 585-596 (December 2009) DOI 10.1127/0941-2948/2009/0411
        http://www.igf.fuw.edu.pl/meteo/stacja/publikacje/Myhre2009.pdf

        Detailed line-by-line codes agree within about 15 % for longwave (LW) and shortwave (SW) RF, except in one case where the difference is 30 %. Since the LW and SW RF due to contrails and SWV changes are of opposite sign, the differences between the models seen in the individual LW and SW components can be either compensated or strengthened in the net RF, and thus in relative terms uncertainties are much larger for the net RF.

        These differences are much larger than the 1% level agreement you noted above. Is this primarily due to trying to evaluate contrails?

        Are these reflective of the difficulty in modeling clouds absorption/ reflection vs water vapor?

        3) By contrast I found:
        An improved treatment of overlapping absorption bands based on the correlated k distribution model for thermal infrared radiative transfer calculations, Shi et al. Journal of Quantitative Spectroscopy and Radiative Transfer
        Volume 110, Issue 8, May 2009, Pages 435-451

        This paper discusses several schemes for handling gaseous overlapping bands in the context of the correlated k distribution model (CKD). . . . flux differences did not exceed 0.8 W/m2 at any altitude. . . .

        Compare: Chou, Ming-Dah, Kyu-Tae Lee, Si-Chee Tsay, Qiang Fu, 1999: Parameterization for Cloud Longwave Scattering for Use in Atmospheric Models. J. Climate, 12, 159–169.

        A parameterization for the scattering of thermal infrared (longwave) radiation by clouds has been developed based on discrete-ordinate multiple-scattering calculations. . . .
        For wide ranges of cloud particle size, optical thickness, height, and atmospheric conditions, flux errors induced by the parameterization are small. They are <4 W m−2 (2%) in the upward flux at the top of the atmosphere and <2 W m−2 (1%) in the downward flux at the surface.

        That appears much better than a 1% error.

        4) Would you consider Miskolczi’s LBL using 3459 spectral ranges to provide sufficient resolution “to get atmospheric absorption right” for water vapor absorption in each of his 150 vertical segments, assuming the HITRAN data base etc.?

    • >but I wonder whether this couldn’t be approached by measuring absorption in the laboratory in the relevant frequency as a function of concentration, pressure, and temperature, so as to derive a useful extrapolation. If someone here has spectroscopic expertise, he or she should comment.<

      Are you suggesting this has NOT been done ? If it hasn't, which I really doubt, then I am dumbfounded – another one of my assumptions shot to pieces

  62. tallbloke 12/8/10 1:58 pm

    Miskolczi (2007) said in his abstract,

    >>Simulation results show that the Earth maintains a controlled greenhouse effect with a global average optical depth kept close to this critical value.

    For this, and without validating his analysis, I applaud him. He claims to support his results with radiosonde data.

    As a matter of philosophy, climatologists should assume that the climate is in a conditionally stable state, because the probability is zero that what we observe is a transient path between stable states. Then they should set about to estimate what controls that state, and the depth or dynamic range of the controls. From this modeling, they could determine how Earth might experience a significant state change, and it would help them distinguish between trivial and important variations.

    Instead, the GCMs model Earth as balanced on a knife edge, ready to topple into oblivion with the slightest disturbance. This is analogous to finding round boulders perched on the sides of hills, and cones balanced on their apexes. This is Hansen’s Tipping Points. This modeling is intended to achieve notoriety and to frighten governments.

    Of course Earth’s greenhouse effect is controlled. Earth has two stable states, a warm state like the present, and a cold or snowball state. Temperature is determined by the Sun, but controlled in the warm state by cloud albedo, the strongest feedback in climate, positive to amplify solar variations, and negative to mitigate warming. In the cold state, surface albedo takes over as the sky goes dry and cloudless, the greenhouse effect is minuscule, and white covers the surface. The cold state is more locked-in than controlled.

    The regulating negative feedback is proportional to humidity, which is proportional to surface temperature. IPCC admits the humidity effect, but doesn’t make cloud cover proportional to it. Remember, proportional only means global average cloud cover increases with global average surface temperature, not that they occur in some neat linear relationship. And to answer your question, this all occurs at a pressure of one atmosphere.

    • Thanks Jeff.
      The reason I asked is that I noticed this curious apparent relationship between specific humidity at the 300mb level, and solar activity, and I wondered how this might fit with Miskolczi’s scheme:

      http://tallbloke.files.wordpress.com/2010/08/shumidity-ssn96.png

      This is up around the altitude the Earth mainly radiates to space from, and I was wondering if it might indicate that the specific humidity there is proportional both to the temperature at that altitude and to the solar irradiance received at that altitude. The interface…

    • Jeff – I find numerous errors in your statement, which I believe could be rectified if you reviewed the two threads on this blog addressing the greenhouse effect, as well as other sources (you could start with Hartmann’s climatology text and then graduate to Pierrehumbert’s “Principles of Planetary Climate”, due out very shortly).

      The reason I don’t address them here is that I find myself unable to provide an adequate response without consuming many pages, and so I would simply end up listing the errors without explaining what the correct answers are. There are probably others who can be more succinct, and I hope they may respond.

    • For this, and without validating his analysis, I applaud him.

      Join the crowd. All we need now is a reputable validator of his analysis.

      But even if his analysis survives this validation, what good is that if it doesn’t refute global warming? Reducing flow of water vapor into the atmosphere could well increase global warming instead of cooling it as FM claims.

      In any event this is simply yet another model. Some of us out there, on both sides of the debate, don’t trust complex models that we have no way of verifying or validating ourselves. Until easily understandable specifications are written for these models, and the models have been shown to meet those specifications, they can’t be trusted.

      In the meantime simply looking at the temperature and CO2 records is a lot more convincing.

      • In the meantime simply looking at the temperature and CO2 records is a lot more convincing.

        I couldn’t agree more – let’s look at Vostok ice core data, nice long-term stuff. I did check this and know the guy/gal who did this was a bit naughty adding in a flask (dare one hope) data point for the recent CO2 level. Nevertheless CO2 has had a positive trend for ~ 7,500 years and for the same period the temperature trend has been negative. This raises a question about how Arrhenius’ logarithmic relation is validated over the long term in the real atmosphere rather than in a laboratory.

      • Which side of that graph would you say is the “present day”?

      • Yes, it’s frustrating when graphs are inadequately labelled.
        The x-axis is missing the BP acronym.

      • I expected most people would recognise the Younger Dryas on the right of the graph.

    • As a matter of philosophy, climatologists should assume that the climate is in a conditionally stable state, because the probability is zero that what we observe is a transient path between stable states.

      Yeah, right.

      As soon as we stop doubling the CO2 we pump into the atmosphere every third of a century we can say something like this.

  63. Tomas Milanovic

    I have the same issue as Al Thekasski.
    There are no dynamics in the line-by-line radiation transfer models as far as I know.
    Given a ground temperature, it needs a fixed atmospheric profile (temperature, concentrations and pressure) to run.
    As I have seen, the units used in the posts are W/m² (i.e. ONE number!), so there is some averaging going on.
    I am not a specialist in radiative transfer, but it seems impossible to run, at every time step of the model, as many individual radiative transfer calculations as there are horizontal cells.
    Especially when convection is involved which massively changes the temperature and humidity profiles at every instant.

    If it is true that only a few “standard” profiles are considered without spatial coupling, I do not believe the 1% accuracy of the radiation flows which has been thrown around here.
    So the question is: how many profiles (temperature, pressure, concentration) are considered for every time step of the model?
    Besides, it is also not true that for any column with a 1 m² base “radiation in = radiation out”, as the temperature variations readily show.

    • Tomas – the modelers are the ones who should probably be addressing your comment. However, you may be confusing climate modeling with the radiative transfer equations as a means of assessing climate forcing from changes in CO2 or other moieties. Forcing is calculated by assuming everything remains constant in the troposphere and on the surface except for a change in radiative balance (typically at the tropopause). That means an assumption of unchanging temperature, humidity, pressure, etc.. Convection is a response to changes induced by forcing and is excluded from the forcing calculations themselves.

      The models then attempt to incorporate the other variables over the course of time and grid space (certainly including convection), but that is a separate issue from the radiative transfer equations as a means of determining the effects of forcing.

      • I should add that the models don’t assume, even with forcing calculations, that pressure, temperature, humidity, etc., are the same all over the globe or at different seasons. This is one reason why their estimate of the temperature change from CO2 doubling (without additional feedbacks) is 1.2 C instead of the 1 C change estimated simply by differentiating the Stefan-Boltzmann equation and assuming a single mean radiating altitude and lapse rate.
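
        To make that last estimate explicit, here is a minimal Python sketch (my own illustration; the 255 K effective radiating temperature and 3.7 W/m² forcing are assumed round numbers, not values taken from this thread):

```python
# Hedged sketch: differentiate F = sigma*T^4 about an assumed single mean radiating
# temperature to get the no-feedback warming per unit forcing. Inputs are illustrative.
SIGMA = 5.67e-8           # Stefan-Boltzmann constant, W m^-2 K^-4
T_EFF = 255.0             # assumed effective radiating temperature, K
DF = 3.7                  # assumed forcing for doubled CO2, W m^-2

planck_response = 4.0 * SIGMA * T_EFF**3        # dF/dT = 4*sigma*T^3, ~3.8 W m^-2 K^-1
dT = DF / planck_response
print(f"No-feedback warming estimate: {dT:.2f} K")   # roughly 1 K
```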

      • With regard to convection, I was referring to a change in convection. Lapse rates themselves reflect the effects of convection, but forcing calculations assume no change in these, but only a radiative imbalance, and it is the latter that provides the basis for applying the radiative transfer equations.

      • Tomas Milanovic

        Convection is a response to changes induced by forcing and is excluded from the forcing calculations themselves.

        Well, as I said, I am not a specialist in radiative transfer, but this is certainly not true, or does not mean what it seems to say.
        Convection, or more generally fluid flows, are in no way a “response” to radiation, and even less to some “forcing”.
        One could as well say that the radiation is the response to the particular properties of fluid flows (like temperature and pressure fields).

        The right expression is that both are coupled, so that neither can be considered independently of the other.
        So considering radiative transfer uncoupled from the fluid dynamics is a nice theoretical exercise, but it has nothing to do with reality.
        Hence my question.

      • We may not disagree as much as it seems. Forcing (at least on planet Earth) is a hypothetical concept based on the assumption of unchanging conditions outside of the radiative imbalance. It has utility, however, as the basis for adding feedbacks.

        On a planet without water, it might be possible to measure forcing directly.

    • Tomas

      As I have seen the units used in the posts W/m² (e.g ONE number!), there is some averaging going on.

      A single value is the usual outcome of integration over the spectrum, as is the case for line-by-line integrators like HARTCODE and FASCODE, for example, and since the output is a value for energy, an extensive property, it’s perfectly valid to average it over time and space.

      The line-by-line integrators mentioned work with atmospheric profiles taken from radiosonde ascents from all over the world, though in the TIGR-1 data set the tropics are under-represented.

    • Actually Tomas, I am not yet into the question of dynamics and the associated possibility of substantial errors due to the order of averaging of fluctuating functions. My concern was about the validity of static calculations under realistic atmospheric conditions, when the vertical gradient of air temperature changes its sign partway up.

      Consider the following example. Assume that we have a standard atmospheric profile — temperature decreases for the first 11 km, then comes a 1-2 km tropopause, and then the stratosphere, where temperature increases with height. Assume that the absorption spectrum consists of only two bands. Let a narrow band (say, 14-16 um) have quite strong absorption, while another, wider region (say, 3x wider than the strong band) has very weak absorption (aka a “transparency window”). Assume the average “emission height” of the whole range to be at 6 km. According to the standard approach of averages, the “average emission height” will go up with increasing CO2, where “higher = colder”. The colder layer emits less, and therefore a global imbalance of OLR would occur and force the climate to warm up. This is the standard AGW concept.

      However, under a more careful consideration, the average (“effective emission height”) of 6 km for our hypothetical spectrum is made up of 0 km for 3/4 of the IR range and 24 km for the remaining 1/4 of the band. If we add more CO2, the increase in the 0 km band gives you zero change in OLR, while the increase in the 24 km emission height will give you MORE OLR, because the temperature gradient in the stratosphere is opposite to the one in the troposphere, so “higher = warmer”. As a result, the warmer layer would emit more, and the overall energy imbalance would be POSITIVE. This implies climate COOLING, or just exactly the opposite of what the standard “averaging” theory says.

      In reality the spectrum is more complex, the edges of absorption bands are not abrupt, so many different trends would coexist. But the above example suggests that it seems very likely that warming and cooling effects may cancel each other in the first approximation. Therefore, the sensitivity of OLR to a CO2 increase is a second-order effect, and must be much harder to calculate accurately. Hence my question to one of the fathers of the “forcing” concept.
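
      To make the hypothetical concrete, here is a toy numeric rendering of this two-band argument in Python (my own sketch of the commenter’s stated assumptions, not a real radiative transfer calculation or a claim about the real atmosphere):

```python
# Toy rendering of the two-band hypothetical above: 3/4 of the IR emitted from 0 km and
# 1/4 from 24 km; raising the strong band's emission level changes OLR with the sign of
# the local lapse rate (negative in the troposphere, positive in the stratosphere).
SIGMA = 5.67e-8

def temperature(z_km):
    """Idealized profile: -6.5 K/km up to 11 km, isothermal 11-13 km, +2 K/km above."""
    if z_km <= 11.0:
        return 288.0 - 6.5 * z_km
    if z_km <= 13.0:
        return 288.0 - 6.5 * 11.0
    return 288.0 - 6.5 * 11.0 + 2.0 * (z_km - 13.0)

def olr(window_z, strong_band_z):
    """Grey-body OLR: 3/4 of the spectrum emits from one level, 1/4 from the other."""
    return 0.75 * SIGMA * temperature(window_z)**4 + 0.25 * SIGMA * temperature(strong_band_z)**4

# Raise the strong band's emission level by 2 km within the stratosphere (24 -> 26 km):
print(f"OLR change: {olr(0.0, 26.0) - olr(0.0, 24.0):+.2f} W/m^2")   # positive in this toy case
```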

  64. Tomas Milanovic

    Earth has two stable states, a warm state like the present, and a cold or snowball state.

    Earth can’t have any stable states because if it could, it would have already found one in the last 4.5 billion years and stayed there forever.
    At least if the word “stable” is to be understood as “a fixed point in the phase space”.
    Earth, as a dissipative system, has never been in any kind of equilibrium and never will be – its trajectory in the phase space is dynamical, wandering between an infinity of possible regions in the parameter space.
    It is precisely because the energy supplied and the energy dissipated are not stationary at any time scale that there can’t be any stable point.

    The Earth can only be understood in terms of its dynamical orbits in the parameter space and not in some nonexistent “equilibrium” or “stable” states.

    • Tomas,

      The radiation is indeed being calculated at every gridbox of the model for every physics or radiation time step (more than 3300 times for each time step for a moderately coarse spatial resolution of 4 x 5 lat-lon degrees). At each grid box, the radiative heating and cooling rates are calculated for the temperature, water vapor, aerosol and cloud distributions that happen to exist at that grid box at that specific time. The (instantaneous) radiative heating and cooling information is then passed on to the hydrodynamic and thermodynamic parts of the climate model to calculate changes in ground and atmospheric temperature, changes in water vapor, clouds, and wind, as well as changes in latent heat, sensible heat, and geopotential energy transports. All these changes in atmospheric structure are then in place for the next time step to calculate a new set of radiative transfer inputs to keep the time marching cycle going.
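
      As a quick arithmetic check of the grid count quoted above (a sketch of the bookkeeping only, not model code):

```python
# A 4 x 5 degree lat-lon grid has 180/4 latitude rows (one more if boxes sit at both poles)
# and 360/5 longitude columns, so each radiation time step visits a few thousand columns.
n_lat, n_lat_with_poles = 180 // 4, 180 // 4 + 1
n_lon = 360 // 5
print(n_lat * n_lon, n_lat_with_poles * n_lon)   # 3240 or 3312 radiation columns per step
```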

    • Tomas

      Earth can’t have any stable states because if it could, it would have already found one in the last 4.5 billion years and stayed there forever.

      I do believe there is sufficient evidence that the earth has been in a snowball state (an example) and also for the current state it’s in.

      • My theory about snowball states of the Earth is that life adapts to them and fills every niche until the snow is black (blue, green, brown, anything but white) with life. This reverses the albedo and the snow then melts.

        The obvious objection is that snow should still be black today, teeming with the same life, instead of white.

        I have the following suggestions.

        1. The snow/ice species were completely killed off by global warming. There was no snow at all in the Cambrian.

        Counter-objection: we’ve had 49 million years since the Azolla event for those species to regroup.

        Counter-counter-objections:

        (a) Snow species evolve so slowly (because of the temperature) that we’re not there yet.

        (b) Snowball Earth was a much more stable place for snow species than today’s ice caps, which are subject to storms too violent for species to evolve comfortably on account of the huge difference between tropical and polar temperatures. When the equator was icebound it would have been a far calmer place than today’s ice caps.

      • (b) Snowball Earth was a much more stable place for snow species than today’s ice caps, which are subject to storms too violent for species to evolve comfortably on account of the huge difference between tropical and polar temperatures. When the equator was icebound it would have been a far calmer place than today’s ice caps.

        That sounds plausible enough for me to accept it until something better comes along.
        “Life” is very resilient, as we have seen over and over.

      • That sounds plausible enough for me to accept it until something better comes along.
        “Life” is very resilient, as we have seen over and over.

        So what do you think of JoNova?

      • So what do you think of JoNova?

        Less plausible than a supernova.

        http://www.nature.com/nature/journal/v275/n5680/abs/275489a0.html

      • Maksimovich
        Could the rapid growth in industrial activity since the 1920s have changed (increased) ionisation of the stratosphere?

      • The reason we have a comprehensive test ban is obvious. Industrial activity is less than the effects of the decrease in modulation of the earth’s geomagnetic field over the last 100 years, e.g. Svalgaard.

        As a thought experiment, i.e. as an inverse model: are the temperature excursions in the latter part of the last century due to an increase in forcing (say GHGs etc.) or a decrease in the efficiency of dissipation (say, heat transport to the poles)?

      • I don’t understand the segue but I’m not surprised.

        I don’t know Jo Nova personally. I do however like what she does and how she does it on her blog. I participate often and help out as much as I can.
        Anything else you’d like to know Vaughan?

      • Anything else you’d like to know Vaughan?

        Sure. I’d been trying to decide between JoNova, Ann Coulter, and Michelle Malkin as to who was the most vicious. Currently I’m shooting for JoNova. If you disagree I’d love to know why.

        Another dimension is intelligence. I put Ann Coulter miles ahead of the other two. Love to hear your arguments on that one too. Reckon JoNova is a genius?

        I’d also be interested to know whether Michelle Malkin wins in any category. Feel free to be creative in making up categories.

        Should I start a pool on these three?

        I left out Leona Helmsley because no one likes the Queen of Mean, and also Martha Stewart because everyone likes her after what the system put her through and she took it all with her usual consummate grace. Not unlike Bill Gates, who also came out smelling like a rose after the wringer we technocrats put him through.

        I say this as the designer of the Sun logo, which ought to incline me to a more cynical view of Gates but it doesn’t, perhaps because my logo has been appearing on the back cover of the Economist about every third week since we were acquired by Oracle and Scott McNealy was fired. The Wikipedia article attributes the Sun name to my Ph.D. student Andy Bechtolsheim but it originated with Forest Baskett, Andy’s previous advisor and my predecessor on the Sun project.

        Like God, capitalism works in mysterious ways.

      • Glad to see you’ve opened yourself up nice and wide for all to see.
        I won’t be getting down into this trash but feel free to wallow all by yourself.

      • I won’t be getting down into this trash

        Looks like we understand each other’s position.

        Staying above the fray is a good idea for those working in environmental science. Since I don’t do the latter I don’t find a need to do the former.

      • “I say this as the designer of the Sun logo”

        I’m impressed. :-)

        I got a SPARCstation IPC in 1995 with a Sony 1152×900 monitor.
        My PC-owning friends were very jealous.

      • The logo was a stroke of brilliance, apropos of nothing much.

  65. @Fred Moolten

    I asked this question as a reply to your post upthread a bit, but it’s a long thread and you may have missed the question.

    >but I wonder whether this couldn’t be approached by measuring absorption in the laboratory in the relevant frequency as a function of concentration, pressure, and temperature, so as to derive a useful extrapolation. If someone here has spectroscopic expertise, he or she should comment.<

    Are you suggesting that this has NOT been done? If it hasn't, and I really doubt that, then I am dumbfounded – another of my assumptions shot to pieces.

    • I do assume it has been done. My uncertainty relates to how extrapolable the results are to atmospheric conditions involving far lower gas concentrations and far longer path lengths. In any case, I’m encouraged by the fact that observational and modeled radiative transfer data seem to correlate well, at least in the troposphere under clear-sky conditions.

    • Absorption, or actually spectral transmission, has been measured in the laboratory by spectroscopists for practically every atmospheric gas that matters. The line-by-line absorption coefficients (for more than 2.7 million lines) are tabulated in the HITRAN data base.

      Actually, it is not measured laboratory spectra that are being tabulated, but a combination of theoretical quantum mechanical calculations and analyses (which they can do very precisely and very effectively), normalized and validated by the laboratory spectra, that enables modern-day spectroscopists to define the absorption line spectral positions, line strengths, line widths, and line energy levels along with all their quantum numbers – everything that is needed to perform accurate line-by-line radiative transfer calculations for any combination, amount, and vertical distribution of atmospheric greenhouse gases.

      • Thank you for your reply. I am slowly garnering the hard, basic data against which to test my doubts on the significance of AGW. This thread has been very useful, primarily because you and Chris Colose have been honest in your replies to honest questions.

        The to-and-fro of actual technical debate on this thread has been the most comprehensive I have seen for 4-5 years. Treating people such as myself as suitable fodder for press releases is guaranteed to aggravate polarisation of the debate. You and Colose have slowly moved away from that also – perhaps Judith C was right to try this blog experiment :)

  66. David Hagen, 12.8.10 8:14 pm

    You inquired about Figure 10 in Miskolczi (2010). He says,

    >>we now simply compute τ_a every year and check the annual variation for possible trends. In Fig. 10 we present the variation in the optical thickness and in the atmospheric flux absorption coefficient in the last 61 years.

    The two ordinates in Figure 10 are both in percent, indicating the relative change year by year, confirming what he said in the text. The text and chart are clear that these are differences, the discrete analog of differentiating.

  67. David Hagen, 12.8.10 8:35 pm

    You asked about my statement,

    >> Specific humidity is proportional to surface temperature, and cloud cover is proportional to water vapor and CCN (cloud condensation nuclei) density, which has to be in superabundance in a conditionally stable atmosphere, but which is modulated by solar activity.

    And you then refer to an article by Roy Spencer questioning the nature of cloud feedback, and especially questioning whether “clouds dissipate with warming giving a positive feedback”.

    The argument for my position is qualitative, not quantitative. It is not complex, and it involves physical phenomena admitted by IPCC or which are everyday occurrences.

    Spencer discusses a regional phenomenon since the 1950s to say,

    >> These problems have to do with (1) the regional character of the study, and (2) the issue of causation when analyzing cloud and temperature changes.

    And later,

    >>I am now convinced that the causation issue is at the heart of most misunderstandings over feedbacks.

    I agree.

    Spencer concludes,

    >> The bottom line is that it is very difficult to infer positive cloud feedback from observations of warming accompanying a decrease in clouds, because a decrease in clouds causing warming will always “look like” positive feedback.

    Spencer’s inference is a posteriori modeling, developing a model to fit the data. A more satisfying method is to create an a priori model, one which relies on physical reasoning first. The a priori model must contain a cause & effect. The a posteriori model may, depending on modeler skill. The main difference is that the a priori model is rational in the physics, while the a posteriori model is a rationalization of physics to fit data.

    I would not make the inference Spencer finds objectionable. On this topic of cloud feedbacks, I argue from causation first, based on physics, leading to a model that can be validated against data.

    First I note that cloud albedo is a powerful feedback. It gates solar radiation, so it has the greatest potential among Earthly parameters to be a feedback. It is a quick, positive feedback because of the burn-off effect witnessed by everyone. Cloud cover dissipates upon exposure to the Sun, so when the Sun’s output is temporarily stronger, the effects on Earth are increased in proportion to the TSI increase, but more, magnified because burn-off occurs sooner. The reverse holds as well. The effect is to cause solar variations to be a predictor of Earth’s global average surface temperature, as shown in my paper SGW (http://www.rocketscientistsjournal.com). IPCC denied this relationship exists.

    At the same time, cloud albedo is a slow, negative feedback with respect to climate warming. It is slow because ocean heat capacity makes ocean temperature changes slow. Next, I assume that the climate throughout recorded history has been in a conditionally stable state. Hansen’s tipping points never occur. The cause of this negative feedback I attribute to increased warming causing increased humidity, resulting in increased cloud cover. A little calculation shows that this effect could be as large as reducing the instant climate sensitivity parameter by a factor of 10 without being detectable within the state-of-the-art for measuring cloud albedo. Recent work by Lindzen (“On the determination of climate feedbacks from ERBE data”, Geophys. Res. Lett., 7/14/09) shows climate sensitivity is about 0.5ºC instead of a nominal figure of about 3.5ºC. That’s a reduction by a factor of 7, an empirical validation.

    Now cloud cover is the result of humidity condensing around CCNs. The probability that humidity and the concentration of CCNs are exactly in balance has to be zero. So one or the other must be in superabundance, leaving cloud cover dependent on the other parameter. In order for cloud albedo to be the regulator stabilizing Earth in the warm state, it must be able to respond directly to changes in humidity. That means that the CCN must be in superabundance. The alternative is that cloud cover could not respond to warming or cooling, meaning we would have to find an alternative, powerful mechanism. The candidate set seems to have one member: cloud cover.

    • David L. Hagen

      Thanks Jeff for expanding on your cloud perspective.
      You may find interesting the work by Willis Eschenbach on clouds acting as a global thermostat. See WUWT: Willis publishes his thermostat hypothesis paper

      See also Spencer at WUWT Dec. 9, 2010
      The Dessler Cloud Feedback Paper in Science: A Step Backward for Climate Research

      “What we demonstrated in our JGR paper earlier this year is that when cloud changes cause temperature changes, it gives the illusion of positive cloud feedback – even if strongly negative cloud feedback is really operating! . . . We used essentially the same satellite dataset Dessler uses, but we analyzed those data with something called ‘phase space analysis’. Phase space analysis allows us to “see” behaviors in the climate system that would not be apparent with traditional methods of data analysis.”

      Spencer’s phase space approach appears key to distinguishing cause and effect.

    • *****
      First I note that cloud albedo is a powerful feedback. It gates solar radiation, so has the greatest potential among Earthly parameters to be a feedback. It is a quick, positive feedback because of the burn-off effect witnessed by everyone. Cloud cover dissipates upon exposure to the Sun, so when the Sun output is temporarily stronger, the effects on Earth are increased proportional to the TSI increase, but more, magnified because burn-off occurs sooner.
      *****
      So this is an observation rather than a derivation from first principles? Do you have studies that would lend a lot of confidence to this assertion?

  68. I picked out one element of Jeff Glassman’s original claims (from December 7, 2010 at 9:31 am).

    He originally stated this:

    IPCC declares that infrared absorption is proportional to the logarithm of GHG concentration. It is not. A logarithm might be fit to the actual curve over a small region, but it is not valid for calculations much beyond that region like IPCC’s projections. The physics governing gas absorption is the Beer-Lambert Law, which IPCC never mentions nor uses. The Beer-Lambert Law provides saturation as the gas concentration increases. IPCC’s logarithmic relation never saturates, but quickly gets silly, going out of bounds as it begins its growth to infinity.

    I challenged these claims on December 7, 2010 at 9:31 am.

    In his long response of December 8, 2010 at 10:40 am he says:

    The applicable physics, the Beer-Lambert Law, is not shown by Myhre, et al., of course.

    Of course!

    “Of course” seems to mean here “they didn’t use it, they made stuff up instead”.

    Many early papers from the 60s and 70s do include all of the equations and the derivations – and the simplifications – necessary to solve the RTE (radiative transfer equations).

    It might seem incredible that hundreds of papers that follow – which include results from the RTE – don’t show the equations or re-derive them.

    Of course, these authors probably also made up all their results and didn’t use the Beer-Lambert law…

    Well, I might seem like a naive starry-eyed optimist here, but I’ll go out on a limb and say that if someone uses the RTE they do use the physics of absorption – the Beer-Lambert law. And they do use the physics of emission – the Planck law modified by the wavelength dependent emissivity.

    Then in his followup claims, Glassman says:

    None of Myhre’s traces comprise measurements, not even the output of the computer models, which will surprise some. You can depend on these models, the NBM and BBM, to have been tuned to produce satisfactory results in the eyes of the modeler, and in no way double blind simulations. Just like GCMs.

    What’s the claim?

    Is Glassman’s problem that Myhre doesn’t use the Beer-Lambert law, OR that Myhre hasn’t got a pyrgeometer out to measure the DLR? And how would Myhre measure the radiative effect of 1000 ppm CO2 in the real atmosphere?

    I believe Glassman’s real issue is not understanding the solution to the RTE in the real atmosphere.

    The RTE include emission as well as absorption. The absorption characteristics of CO2 and water vapor change with pressure and temperature. Pressure varies by a factor of 5 through the troposphere. Water vapor concentrations change strongly with altitude. These factors result in significant non-linearities.

    The results for the RTE through the complete atmosphere vs concentration changes are not going to look like the Beer Lambert law against concentration changes.

    Glassman says:

    The logarithm model makes CO2 concentration proportional to the exponent of absorption. The Beer-Lambert Law makes absorption proportional to the exponent of CO2 concentration.

    Perhaps this is Glassman’s issue – he doesn’t believe that the results presented are correct because he thinks that doubling CO2 should result in a change in proportion to the Beer Lambert absorption change? That would only happen in an atmosphere of constant pressure, constant temperature and a constant concentration vs altitude.

    When he doesn’t see this he imagines that these climate scientists have been making it up to fit their pre-conceived agendas?

    Well, despite many claims and accusations about climate scientists it is still true that when Myhre et al did their work they used the Beer Lambert law but not just the Beer Lambert law. And it is still true that the IPCC relied on Myhre’s work and therefore the IPCC also “used” the Beer Lambert law.

    Glassman’s original claim is still not true.

    But for the many who know that we can’t trust these climate scientists who just “make stuff up” – anyone can calculate “the real solution” to the RTE vs increasing concentrations of CO2.

    The RTE are not secret. Anyone with a powerful computer and the HITRAN database can do the calculations and publish the results on their blog.
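
    In that spirit, here is a heavily simplified Python sketch (my own, using entirely synthetic line data rather than real HITRAN parameters) of what such a calculation looks like structurally: per-layer Beer-Lambert transmission plus per-layer Planck emission, marched upward to give an outgoing band radiance.

```python
# Minimal line-by-line style sketch with synthetic data: the RTE includes emission as well
# as absorption, so each layer attenuates the radiance from below (Beer-Lambert) and adds
# its own Planck emission at its own temperature.
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23

def planck(nu_cm, T):
    """Planck radiance per unit wavenumber (nu in cm^-1), consistent but arbitrary units."""
    nu_hz = nu_cm * 100.0 * C
    return (2 * H * nu_hz**3 / C**2 / (np.exp(H * nu_hz / (KB * T)) - 1.0)) * 100.0 * C

nu = np.linspace(600.0, 700.0, 1001)            # wavenumbers near the 15 um CO2 band, cm^-1
rng = np.random.default_rng(1)
line_pos = rng.uniform(600.0, 700.0, 50)        # synthetic line centers (not HITRAN values)
line_str = rng.lognormal(0.0, 2.0, 50)          # synthetic line strengths

# Toy 3-layer atmosphere: (temperature K, pressure-scaled Lorentz half-width cm^-1, absorber amount)
layers = [(280.0, 0.10, 1.0), (240.0, 0.05, 1.0), (220.0, 0.02, 1.0)]
T_SURFACE = 288.0

radiance = planck(nu, T_SURFACE)                # start with surface emission
for T_layer, width, u in layers:                # march upward layer by layer
    k = np.zeros_like(nu)
    for p, s in zip(line_pos, line_str):        # sum Lorentz line contributions
        k += s * width / (np.pi * ((nu - p)**2 + width**2))
    trans = np.exp(-k * u)                      # Beer-Lambert transmission of this layer
    radiance = radiance * trans + planck(nu, T_layer) * (1.0 - trans)

print(f"Band-mean outgoing radiance: {radiance.mean():.3e}")
```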

  69. A Lacis, 12/9/10, 12:31 am

    If Miskolczi’s papers had been published in the mainstream scientific literature, we could be sure of just one thing: they conformed to the AGW dogma. I critiqued them only on request, partly in the hope of finding a gem, but always to hone the science.

    I disagree with you that my education in radiative transfer is in need of augmentation. As I said before, no matter how the climate models might calculate radiation, or how you might think they do, in the end they produce a radiative forcing dependent on the logarithm of CO2 concentration. That might be valid in a narrow region, but for climate projections over a doubling of CO2 concentration, the models violate physics. That is something you might want to study.

    You suggested IPCC as a source for my education on radiative transfer. The Fourth Assessment Report contains this revelation:

    >> The results from RTMIP imply that the spread in climate response discussed in this chapter is due in part to the diverse representations of radiative transfer among the members of the multi-model ensemble. Even if the concentrations of LLGHGs were identical across the ensemble, differences in radiative transfer parametrizations among the ensemble members would lead to different estimates of radiative forcing by these species. Many of the climate responses (e.g., global mean temperature) scale linearly with the radiative forcing to first approximation. AR4, §10.2.1.5, p. 759.

    RTMIP was the Radiative-Transfer Model Intercomparison Project, a response to a chronic problem with radiative transfer modeling. The models didn’t agree, and still don’t. Furthermore, the modelers reduce the problem to parametrization, putting in a statistical estimate for a process too complex or too poorly understood to emulate. So, regardless of what you perceive as my needs in the theory of radiative transfer, in the last analysis radiative transfer is pretty much a failure and irrelevant in IPCC climate models.

    It is a failure because, in the end, the modeled climate responses scale linearly with the radiative forcing to a first approximation. And if climate models could get the climate right in the first order, we would have a scientific breakthrough. As I wrote to you above re barking up the wrong tree, the fact that global mean temperature turns out to a first approximation to be proportional to radiative forcing, means that the models to a first approximation are producing a dependence on the logarithm of CO2 concentration. I have no doubt that that could be true and valid, all in the first order, over the domain of CO2 concentration seen at Mauna Loa. I also have no doubt that it is not valid for a doubling of CO2.

    Radiative forcing will follow a form like F0 + ΔF*(1-exp(-kC)), where C is the gas concentration. That is an S-shaped curve in the logarithm of C, showing a saturation effect. It is not a straight line. You may apply this function in bulk, a first order approximation, or in spectral regions as you have the time and patience to do. However, knowing the greenhouse effect of CO2 requires knowing where you are on the S-shaped curve. This is just one of many fatal errors in IPCC modeling.
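
    For concreteness, here is a neutral Python sketch (my own; the constants are arbitrary illustrative choices, not fitted values) tabulating the two functional forms being contrasted here — a saturating exponential versus the commonly quoted logarithmic fit:

```python
import numpy as np

# Compare a saturating form F0 + dF*(1 - exp(-k*C)) with the logarithmic fit a*ln(C/C0).
# The scale factor and k for the saturating form are illustrative assumptions only.
C = np.array([200.0, 280.0, 400.0, 560.0, 800.0, 1120.0, 1600.0])   # CO2, ppm
saturating = 10.0 * (1.0 - np.exp(-0.002 * C))       # arbitrary F scale and k
logarithmic = 5.35 * np.log(C / 280.0)               # widely quoted a*ln(C/C0) form, W/m^2

for c, s, l in zip(C, saturating, logarithmic):
    print(f"{c:6.0f} ppm   saturating {s:6.2f}   logarithmic {l:6.2f}")
```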

    Why would anyone be motivated to master radiative transfer for arbitrary failed models? They are initialized in the 18th Century by zeroing out on-going warming from the last several cold epochs, causing modelers to attribute normal warming to man. The models place the surface layer of the ocean in thermodynamic equilibrium. They assume CO2 is well-mixed and long-lived in the atmosphere. They make the solubility of natural and anthropogenic CO2 different. These IPCC models are open loop with respect to the positive feedback of CO2, and open loop with respect to the positive and negative feedback of cloud albedo.

    Perfecting radiative transfer will have no effect on this sorry excuse for science.

  70. I am dumbfounded. Since I put up my contribution at December 7, 2010 at 9:12 am, there have been numerous posts with claims and counter claims. This reminds me of the ancient philosophers arguing as to how many angels can dance on the head of a pin. I cannot see how these differences can be easily reconciled, and I come back to my main point.

    It is impossible with current technology to MEASURE the change in radiative forcing for a doubling of CO2. So it will never be possible to find out who is right and who is wrong. And the IPCC can never establish that CAGW is correct using the “scientific method”. We can never know what the true value is for the change in radiative forcing for a doubling of CO2, nor whether such a number actually means anything.

    • Jim,

      Radiative transfer is based directly on laboratory measurement results, and as such, has been tested, verified, and validated countless times. John Tyndall in 1863 was one of the first to measure quantitatively the ability of CO2 to absorb thermal radiation. Since then spectroscopists have identified literally thousands of absorption lines in the CO2 spectra, and have tabulated the radiative properties of these lines in the HITRAN data base. They have full understanding of why each of the CO2 absorption lines is there, and why it has the spectral position, line strength, and line shape that is measured by high resolution spectrometers for a sample of CO2 in an absorption cell for any pressure, temperature, and absorber amount conditions.

      It is more a matter of engineering than science to calculate by radiative transfer methodology how much radiation a given amount of CO2 will absorb. Just as there is no need to throw someone off a ten story building to verify how hard they will hit the pavement, there is no real need to measure directly how much radiative forcing doubled CO2 will produce. Nevertheless, the actual experiment to do this (doubling of CO2) is well underway. In the mid 1800s, atmospheric CO2 was at the 280 ppm level. Today it is close to 390 ppm, and increasing at the rate of about 2 ppm per year. At that rate, before the end of this century we will have surpassed the doubling of CO2 since pre-industrial levels.

      The radiative forcing for doubled CO2 is about 4 W/m2. The precise value depends on the atmospheric temperature profile, the amount of water vapor in the atmosphere, and also on the details of the cloud cover. The 4 W/m2 is a reasonable global average for current climate conditions. The direct warming of the global temperature in response to the 4 W/m2 radiative forcing is about 1.2 C.

      Best estimates for water vapor, cloud, and surface albedo feedbacks increase the global mean temperature response to about 3 C for doubled CO2. Because of the large heat capacity of the ocean, the global temperature response takes time to materialize, but that is the global equilibrium temperature that the global climate system is being driven toward.

      Current technology measurement capabilities are not adequate to measure directly the radiative forcing of the GHG increases. But current measurement techniques do measure very precisely the ongoing increases in atmospheric greenhouse gases, and radiative transfer calculations (using the laboratory measured properties of these GHGs) provide an accurate accounting of the radiative forcing that is driving climate change.
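
      As a purely arithmetical sketch of how the numbers quoted above fit together (the feedback fraction is back-calculated from the 4 W/m2, 1.2 C, and 3 C figures, not independently estimated):

```python
# Relate the quoted no-feedback response (1.2 C for 4 W/m^2) to the ~3 C equilibrium
# response via the standard feedback-factor bookkeeping dT = dT0 / (1 - f).
FORCING = 4.0                                   # W/m^2, quoted forcing for doubled CO2
DT_NO_FEEDBACK = 1.2                            # C, quoted direct (no-feedback) response
planck_parameter = FORCING / DT_NO_FEEDBACK     # ~3.3 W/m^2 per K
f = 1.0 - DT_NO_FEEDBACK / 3.0                  # feedback fraction implied by a 3 C response
dT_equilibrium = DT_NO_FEEDBACK / (1.0 - f)
print(f"lambda_0 ~ {planck_parameter:.2f} W/m^2/K, f ~ {f:.2f}, dT ~ {dT_equilibrium:.1f} C")
```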

      • I fail to see how this discussion can advance the case of CO2-induced GW, if a simple question related to reconciling:
        – the 1920-45 modest rise in CO2 with the sharpest rise in temperature
        – the 1945-80 cooling period with the steepest rise of CO2 in recorded history

        http://www.vukcevic.talktalk.net/CO2-Arc.htm
        has no clear answer.

      • Andy Lacis has stated that it’s something to do with oceanic cycles, but has declined to tell us if or how these oceanic cycles have been incorporated into his model, or what the magnitude of their contribution to the recent warming was.

        This is unsurprising, because if he did, there would have to be a reassessment of the modeled climate sensitivity to co2, which would inevitably drop.

      • Let’s assume Dr. Lacis is correct.
        If there was no increase in CO2 (providing CO2 does as suggested) then the inevitable conclusion is that in the 1960s the temperature would have been much lower, by about 0.8 C, approximately as in the 1810s (Dalton minimum).
        I think the IPCC needs to do some work on that one.

      • Best estimates for water vapor, cloud, and surface albedo feedbacks increase the global mean temperature response to about 3 C for doubled CO2.

        That’s the only bit that’s relevant, and also the only bit which can’t be determined by your RTMs.

        The radiative forcing of 4W/m2 may be correct and measurable (I have no argument with it), but it’s largely irrelevant. The relevant bit is the sensitivity, as expressed as degrees per CO2 doubling. And that, as I see it, is little more than guesswork.

      • A. Lacis writes “It is more a matter of engineering than science to calculate by radiative transfer methodology how much radiation a given amount of CO2 will absorb. Just as there is no need to throw someone off a ten story building to verify how hard they will hit the pavement, there is no real need to measure directly how much radiative forcing doubled CO2 will produce.”

        This is complete and utter garbage. There is lots of experimental data that if someone falls off a 10 story building they will hit the pavement hard. However, it is still IMPOSSIBLE to measure radiative forcing directly. If it was possible to measure radiative forcing directly, then there would be no need to rely on estimates from radiative transfer models.

        Can you describe the way that radiative forcing would actually be measured?

      • A. Lacis writes “Current technology measurement capabilities are not adequate to measure directly the radiative forcing of the GHG increases. But current measurement techniques do measure very precisely the ongoing increases in atmospheric greenhouse gases, and radiative transfer calculations (using the laboratory measured properties of these GHGs) provide an accurate accounting of the radiative forcing that is driving climate change.”

        You seem to agree that radiative forcing cannot be measured directly. That is all that matters. I agree that when you add CO2 to the atmosphere, it disturbs the radiative balance of the atmosphere. CO2 is a GHG. But this still does not alter the fact that radiative forcing cannot be MEASURED.

        This will be a much more important issue when Judith introduces how much a change in radiative forcing affects global temperatures.

      • Radiative forcing can never be measured. No improvement in empirical capabilities can make it possible, because it is a concept that refers to a modification of the atmosphere that cannot actually occur. It is defined as the net flow of energy in a situation where the radiative transfer changes without any change in the temperature profile, but in all real changes the temperature profile will also change.

        Thus the radiative forcing will always remain a parameter calculated using some theoretical model.

        The related concept of climate sensitivity (including feedbacks) can be measured at some future time as it refers a change that can occur on the real earth. The accuracy of this measurement may remain low, but some direct empirical measurements will be possible.

      • Pekka you write “The related concept of climate sensitivity (including feedbacks) can be measured at some future time as it refers a change that can occur on the real earth.”

        Absolutely correct. HOWEVER, and it is a big however, the rise in temperature for a doubling of CO2 WITHOUT feedbacks CANNOT be measured. That will be the issue as we discuss this in detail, when Judith introduces the subject.

      • Many concepts discussed in connection with climate change are artificial. They can be considered as intermediate quantities in model analysis. They have no real meaning outside the world of models, but they may be useful concepts in the interaction between modelers, whether the models are simple and very approximate conceptual models or complicated models like AOGCMs.

        For some people such artificial concepts are worthless. This is not my view as I consider many of them to be very useful, but only provided that their meaning is understood.

      • Quinn the Eskimo

        Pekka Pirilä at 3:43 PM

        That radiative forcing can never be measured seems an important caveat for a scientific enterprise of this importance.

        But let not such small things be the hobgoblins of our minds. There are more significant caveats still to go.

        The actual real world effects depend on climate sensitivity, which depends on the cumulative effect of all relevant feedbacks.

        Per the IPCC there is a low level of scientific understanding of the influence of the sun and of clouds, to pick two out of many important issues adrift in the same rudderless boat.

        The low level of scientific understanding of cloud feedbacks precludes determination of a valid, reliable or verifiable climate sensitivity.

        It is therefore logically impossible to claim 90%+ confidence in the understanding of the effects of GHGs, or projections of the climate models.

        But the IPCC does anyway.

        Conversely, however, we can say with high confidence as a matter of logic that until the feedbacks are very well understood, which they aren’t, the IPCC won’t know jack about the net effect on climate of increased CO2 concentrations, or any other climate driver for that matter.

        This conceptual confusion is confirmed by recalcitrant, ill-fitting facts. Vukcevic keeps asking about, but not getting sufficient answers about:
        – the 1920-45 gentle rise in CO2 with the sharpest rise in temperature
        – the 1945-80 cooling period at the time of the steepest rise of CO2 in recorded history.

        To which we might add the recent 15 years of no statistically significant warming despite continuously increasing levels of CO2.

        Or the studies showing that the model projections are not accurate.

        That CO2 just doesn’t seem like much of a control knob, does it?

        It’s not just despicable dumb old me. Kevin Trenberth said:

        How come you do not agree with a statement that says we are no where close to knowing where energy is going or whether clouds are changing to make the planet brighter. We are not close to balancing the energy budget. The fact that we can not account for what is happening in the climate system makes any consideration of geoengineering quite hopeless as we will never be able to tell if it is successful or not! It is a travesty!

        Reducing human emissions of GHGs – geoengineering – is “hopeless” because “we can not account for what is happening in the climate system.” The emperor has no clothes. Are we now to be told that Trenberth is a moron, or an idiot, or an ignoramus on RTE, or a recipient of a Heartland Institute grant, or that it depends on what the meaning of “travesty” is?

        We certainly live in interesting times!

        Cheers!

  71. Vaughan Pratt, 12.9.10 2:42 am

    So my ten paragraphs, each eminently refutable, can be refuted by just “shooting down the tenth”. Pratt commands the firing squad where one man has a bullet, and everyone else a blank.

    So Pratt knows that no published data exists in which CO2 appears to be in saturation.

    I invite him to open the radiation and absorption spectra published in Wikipedia: http://en.wikipedia.org/wiki/File:Atmospheric_Transmission.png. Now check the CO2 absorption spectra. All these bands will move up or down as CO2 concentration increases or decreases.

    The band at about 13.1-17.2 mu (3 dB points) appears to be saturated, i.e., about 100%, between 13.8 and 16.0 mu for whatever conditions applied for the chart. (Of course it can never be 100%, but that’s the best that can be resolved on the unfortunate linear scale.) This band is visible between about 12.1 and 18.8 mu (of course, it could be much more, but that is again a limitation of the linear scale.)

    Ignoring absorption unresolvable on this scale, added CO2 can decrease OLR, which runs from about 8.1 to 13.2 mu, only in the region of 12.1 to 13.8 mu, or 33%. This is a first order limitation or saturation effect.

    CO2 also has absorption bands between about 9.2 and 9.7 mu and between about 10.0 and 10.7 mu. These are in the OLR band, and are just barely visible along the base line. Again ignoring unresolvable absorption, these bands could absorb about 1.2 mu, or 23.5% of the OLR.

    The logarithm of concentration model has no limitations. It doesn’t care if CO2 concentration exceeds 100% of the atmosphere (more than a million parts per million), much less whether the gas has absorption band limitations. I prefer to work around unresolvable absorption bands than resort to a silly function.

    Now, I assume, all ten paragraphs are vindicated.

    • You misinterpreted the diagram, Jeff. It shows that only about 15-30% of IR emitted from the surface escapes to space uninterrupted, but of course, 100 % must escape eventually for energy balance. The remainder is intercepted by greenhouse gas molecules (CO2, water, etc.), and eventually re-emitted. At 15 um, the emission to space is mainly at high cold altitudes such that further increases in height won’t change the temperature (because the lapse rate at these altitudes is about zero). In the wings of the CO2 band, however, absorbed photon energy is re-emitted at lower altitudes that must rise with increasing CO2 to overcome the increased opacity. This results in the warming effect. It is nowhere near saturated, and can’t be at any conceivable CO2 concentration compatible with human existence, much less the concentrations estimated for coming decades and centuries. I urge you to read more on the greenhouse effect in order to understand this, because you appear to be repeating the same misconceptions in your various commentaries.

      • For clarity, 100 percent of absorbed radiation must escape for balance. As shown in the wikipedia diagram, a fraction escapes directly via “window” regions. The rest escapes after absorption by CO2, H2O, etc., and subsequent re-emission. The total actually emitted from the surface exceeds 100 percent because of back radiation.

    • The absorption bands that you list efficiently block the direct IR radiation from the earth’s surface at wavelengths well inside the bands, but at the edge of the band there are always wavelengths where the blocking is incomplete. Increasing the CO2 concentration will always increase absorption at the edge of the band. In other words, the band will always get wider with increasing concentration over the full range even remotely possible for the atmosphere.

      Widening the band requires progressively more CO2 as the concentration increases. This is the reason for the approximately logarithmic relationship between concentration and radiative forcing. A logarithmic relationship means that the influence gets weaker and weaker but does not saturate.
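
      A minimal Python sketch of this band-widening behaviour (my own, with synthetic line positions and strengths rather than real CO2 data): every frequency individually obeys Beer-Lambert, yet the band as a whole keeps absorbing more with each doubling of the absorber amount, because the increase comes mostly from the band edges rather than the already-saturated line centers.

```python
import numpy as np

# Synthetic band of Lorentz lines with strengths spanning several orders of magnitude.
# Once strong line centers are saturated, extra absorber mainly widens the band edges,
# so band-mean absorption keeps growing slowly rather than saturating abruptly.
rng = np.random.default_rng(0)
nu = np.linspace(-50.0, 50.0, 2001)                         # offset from band center, cm^-1
centers = rng.uniform(-40.0, 40.0, 200)                     # synthetic line positions
strengths = np.exp(rng.uniform(np.log(1e-4), np.log(1.0), 200))   # broad range of strengths
HALF_WIDTH = 0.5                                            # Lorentz half-width, cm^-1

k = np.zeros_like(nu)
for c, s in zip(centers, strengths):                        # build the absorption spectrum once
    k += s * HALF_WIDTH / (np.pi * ((nu - c)**2 + HALF_WIDTH**2))

for u in [1, 2, 4, 8, 16, 32]:                              # successive doublings of absorber
    band_absorption = np.mean(1.0 - np.exp(-k * u))         # Beer-Lambert at each frequency
    print(f"u = {u:2d}   band-mean absorption = {band_absorption:.3f}")
```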

  72. Gordon Robertson

    Judith wrote “While the models have not been validated observationally for a doubling of CO2, we infer from the above two tests that they should perform fine”.

    Judith, the IPCC has itself admitted that the models have NOT been validated.

    http://www.climategatecountryclub.com/profiles/blogs/the-ipccs-climate-models-are

    When Vincent Gray, an expert reviewer for the IPCC, pointed out to the IPCC that their models were not validated, the IPCC changed the term validation to ‘evaluation’. They also changed the term prediction to projection, presumably because of their TAR claim that future climate states could not be predicted.

    Also, if the models are validated, why did they fail to predict the El Nino extremes of 1998 and 2010? It’s pretty obvious, is it not, that models cannot predict future events because they don’t have the programming to do that.

    James Hansen was forced to admit that the predictions of his models between 1989 and 1998 had been wrong. Pat Michaels covers some of that here:

    http://www.sepp.org/Archive/reality/michreviews.html

    There have been claims that satellite data is now in step with model prediction. It seems odd to put the cart before the horse in that manner. A look at the UAH satellite data from Roy Spencer’s site reveals the lie in that assertion. Whereas the trends may be similar they mean entirely different things.

    http://www.drroyspencer.com/latest-global-temperatures/

    The zero reference on the graph is the 1979 – 1998 mean. Spencer has pointed out that the recent Tropical mean is now below that average as of November 2010. It seems the entire global mean is headed there as well. His partner, John Christy, has been trying to point out for some time, that the global mean was also below that zero anomaly from 1979 till about 1995.

    True warming did not occur till after the 1998 El Nino anomaly, whereby it jumped about 0.2 C, with no further warming trend till the 2010 ENSO anomaly. That warming in no way resembles the surface record, especially that of GISS, which has shown a steady warming trend since 1980.

    In the climategate emails, Kevin Trenberth admitted that the warming has stopped. He claimed later to have been taken out of context. From what I gather, he is now claiming that the warming is being hidden by ENSO activity.

    Models don’t do ENSO, at least not very well. With ENSO able to swing the global mean over 0.8 C in one year, I hardly think the CO2 programmed into a climate model as a fudge factor is significant, especially if its theoretical trend can be so easily obliterated.

    • Gordon, my original comment was made in regard specifically to the radiative transfer models, not to the global climate models. As per the citations made in my original post, I think substantial support is provided for the validation of the clear sky atmospheric radiative transfer codes.

      • Rattus Norvegicus

        Judith,

        Maybe you should make a post taking your denialist commenters to task. They are just wrong as Moolten and Lacis have pointed out.

      • Gordon Robertson

        Rattus Norvegicus…”Judith, Maybe you should make a post taking your denialist commenters to task. They are just wrong as Moolten and Lacis have pointed out”.

        I have not read your experts, but I have spent the last couple of years reading what experts like Lindzen, G&T, Spencer and Christy have to say about the theories of believers. I prefer their objectivity to the virtual science of modeling and the probability theories of the IPCC.

        The one thing believers cannot explain is the objective data from satellites, which is a sampling of billions of emissions from atmospheric oxygen and covers 95% of the troposphere. The data reveals that models are wrong, that the major warming is quite localized to the Arctic, in winter, and that the warming trend ended a decade ago.

        Believers can offer all the thought experiments they like based on atomic theory. The macro evidence shows clearly that laboratory experiments on high densities of CO2 cannot be transferred to the atmosphere, where CO2 is a rare gas.

        Furthermore, the radiative theory upon which the models are based is a minor player in the atmosphere. It is plain, according to G&T, that the surface is heated by solar energy, and that the surface warms atmospheric gases by conduction. That brings O2 and N2 into the equation and they account for 98% of atmospheric gases.

        We were taught in high school that hot air rises. Heat is also transferred by convection, as Lindzen explains, and is released from high in the atmosphere, not from the surface per se. Without convection, according to Lindzen, the mean surface temperature would be 72 C. It is convection that carries off most of the heat, not radiation. The models are based on faulty physics.

      • But the empirical observations disagree.

        Willis over on WUWT observes:

        Evans and Puckrin 2006
        http://ams.confex.com/ams/Annual2006/techprogram/paper_100737.htm

        found that in the presence of DLR from H2O of more than 200W/m2 the DLR attributable to CO2 was dramatically suppressed

        Lacis claims that

        “In round numbers, water vapor accounts for about 50% of Earth’s greenhouse effect, with clouds contributing 25%, CO2 20%, and the minor GHGs and aerosols accounting for the remaining 5%. Because CO2, O3, N2O, CH4, and chlorofluorocarbons (CFCs) do not condense and precipitate, noncondensing GHGs constitute the key 25% of the radiative forcing that supports and sustains the entire terrestrial greenhouse effect …”

        In the Evans clear-sky observations (no clouds) CO2 was measured at 11% of the total downwelling radiation from water vapor, CO2, and minor GHGs.

        In the Lacis10 study CO2 was 27% of the total of water vapor, CO2, and minor GHGs. In other words, the Lacis10 computer results show about two and a half times more radiation from CO2 (and minor GHGs) than Evans’ observations.

        Alternatively, we could approximate the clouds. In Lacis10 clouds have half the forcing of water vapor. We can apply this same percentage to the Evans observations, and assume that the clouds are half of the water vapor forcing. This increases the total forcing by that amount.

        This (naturally) reduces CO2 forcing as a percent of the now-larger total. In this case, which is comparable to the Lacis formulation above, CO2 is 8% of the total, with the minor GHGs at 2%.

        Once again, the Lacis10 results are about two and a half times larger than the Evans observations.

        The other finding of Evans was that more water vapor in the air means less CO2 absorption in both absolute and relative terms. In the winter it was measured at 105 W/m2 for water vapor versus 35 W/m2 for carbon dioxide (33%). In summer CO2 radiation goes down to 10 W/m2 versus 255 W/m2 for water vapor (4%), due to increased absorption by water vapor.

        This means that CO2 will make less difference in the tropics, with its uniformly high humidity, than in the extratropics where the Evans observations were taken.

        And:

        The Evans paper is important in this regard. It says that when relative humidity is high (most of the time in the tropics) most of the radiation is from water vapor.

        In the Evans paper, GHGnc were the source of 6% of the summer clear-sky radiation (with high humidity). If that were the case here, the clear-sky radiation is 300 W/m2, so the split would be 282 W/m2 from H2O, and 18 W/m2 from GHGnc. This is physically quite possible, since H2O alone adds 291 W/m2.

        Even using the straight MODTRAN calculations, however, shows that the Lacis10 claim is unlikely. The loss of the GHGnc gases gives only a 9 W/m2 change in downwelling radiation, according to MODTRAN. While this is a significant change, it is far from enough to send the planet spiraling into a snowball.

        The surface is currently at 390 W/m2. For radiation balance, MODTRAN says the surface would cool by about 3°C (including water vapor feedback, but without cloud or other feedbacks).

  73. Gordon Robertson

    Judith “Gordon, my original comment was made in regard specifically to the radiative transfer models, not to the global climate models”.

    Judith…I respect your work and I am not trying to create an issue with you over this, but I think all model theory has a shaky basis with regard to modeler interpretation of physics. Lindzen has pointed out clearly that radiative transfer is but one part of the atmospheric equation, and that the Earth is never in balance with respect to energy. Gerlich and Tscheuschner pretty much agree with Lindzen and have put the radiative model to bed.

    https://www.cfa.harvard.edu/~wsoon/ArmstrongGreenSoon08-Anatomy-d/Lindzen07-EnE-warm-lindz07.pdf

    http://gazettextra.com/news/2010/apr/08/con-earth-never-equilibrium/

    http://icecap.us/images/uploads/Falsification_of_CO2.pdf

    I know the alarmists will be at me for G&T and they will claim the Smith and Halpern papers have disproved it. However, neither Smith nor Halpern seem to be able to distinguish the 1st law of thermodynamics from the 2nd law. Halpern et al talk around the 2nd law and seem lost when it comes to explaining it.

    If the clear sky models are accurate, they should be able to tell us exactly how much ACO2 energy is being returned to the surface as back-radiation. They can’t. Furthermore, they can’t explain how ACO2, which accounts for 1/1000th percent of atmospheric gases can block enough surface radiation or back-radiate enough to super-warm the surface.

    Most importantly, the basis of AGW is twofold: that ACO2 is helping trap heat, and/or that ACO2 back-radiates sufficient energy to super-warm the surface beyond the temperature to which it is heated by solar energy. Craig Bohren, a physicist/meteorologist, calls the trapping effect a metaphor at best, and at worst, plain silly. He refers to the second theory as a model.

    The problem with the model is that it blatantly disregards the 2nd law. Clausius put it in words when he asserted that a cooler body warmed by a warmer body cannot warm the warmer body to a temperature above what it was when it warmed the cooler body. That is basically common sense. There are losses in the system, and you cannot increase energy through positive feedback when those losses are in place and there is no independent energy source to achieve the heat gain required.

    That is basic thermodynamics but the AGWers have brought in the 1st law through an obscure net energy balance (see G&T paper). Clausius created the 2nd law for that reason. Under certain conditions, the 1st law permits a perpetual motion machine, in which ACO2 can take energy from the surface and behave as an independent heat source. It is not possible to add energy from a dependent source to increase energy in a system.

    I noticed a post from Jeff Glassman. If he is the same one who had an interchange with Gavin Schmidt, perhaps he would be good enough to explain mathematically how feedback works in the atmosphere and why it cannot exist without an independent energy source.

    Halpern et al are thoroughly confused about back-radiation as explained by G&T. Whatever the modelers think they are modeling is apparently wrong. How they extract the effect of ACO2 from background CO2, of which ACO2 is a small fraction (IPCC), is beyond me. Roy Spencer has already pointed out that we don’t have the instrumentation to do that. So, what are modelers seeing?

    If you read the G&T paper, which I hope you do, you will see them discuss Planck, Boltzmann et al. In this thread I have seen assumptions made about energy flow in the atmosphere that are pretty basic. One poster discusses a cavity resonator, which is a theoretical concept. Blackbody radiation a la Boltzmann is not directly transferable to the atmosphere, according to G&T.

    One of the problems is a condition Planck put on his formula. It applies only when the wavelength of the radiation is much shorter than the dimensions of the surface upon which the radiation is incident. G&T describe a bulk problem with CO2 at 300 ppmv, in which roughly 80 million CO2 molecules occupy a cube with sides of 10 microns, a typical wavelength for infrared radiation. They claim that applying the formulas for cavity radiation to such a cube is sheer nonsense.

    If they are correct, then the application of Boltzmann and Planck in a model is wrong. Both G&T and Bohren agree that energy exchange in the atmosphere cannot be described by simple vectors. Many people think that photons can be exchanged as discrete particles, but that is still a theory. A photon is a definition, and we have no idea what EM looks like or how it operates.

    When you claim that models accurately represent clear-sky radiative transfer, I wonder what you base that on.

    • Here we have again full proof that science can never convince people who do not want to accept any evidence, but prefer all kinds of unscientific claims.

      What is the accuracy of experimental evidence that would be sufficient, when the 8 significant figures of the experiments that confirm our understanding of electromagnetism and photons are not enough?

      Good try, Judith, but ..

  74. Tomas Milanovic

    A.Lacis
    The radiation is indeed being calculated at every gridbox of the model for every physics or radiation time step (more than 3300 times for each time step for a moderately coarse spatial resolution of 4 x 5 lat-lon degrees).

    Thanks, that answered the question.
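    For what it's worth, the "more than 3300" figure is consistent with a simple count of the columns in a 4 x 5 degree grid; in the one-line check below, the 46 latitude rows are an assumption about how the polar boxes are counted:

```python
# Gridbox count for a 4-degree by 5-degree latitude-longitude grid.
n_lat, n_lon = 46, 72   # 46 latitude rows is an assumption (includes polar caps)
print(n_lat * n_lon)    # 3312 columns, each needing a radiation calculation per step
```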

    Jan Pompe

    I do believe there is sufficient evidence that the earth has been in a snowball state (an example), and also for the current state it's in.

    This might be true, and it was also in countless other states during the last 4.5 billion years.
    Trivially, none of them was/is stable, because the Earth never stayed in them, which is what it would mean to be in a "stable state".
    Clearly the Earth we observe is no longer in a "snowball" state, if it ever was.
    That's what I have been saying – there was/is no stable state for the Earth as long as it has a fluid (liquid/gas) envelope.

  75. Tomas Milanovic

    As per the citations made in my original post, I think substantial support is provided for the validation of the clear sky atmospheric radiative transfer codes.

    I also think that.
    An important caveat, however, is that this verification is done for a given vertical profile (temperature, concentration).
    In a gas column in equilibrium where everything is constant, there is no reason why the radiative transfer equations should not give a rather correct answer at least as far as the spectrum is concerned.
    Of course, like Judith rightly says, this doesn’t allow any conclusion about the validity of the calculation once the whole system becomes dynamical.

    For instance, if the temperature and concentration fields in the model (we won't talk clouds and particles here) don't evolve as they should, the radiative transfer will still be computed correctly but give a wrong result.
    But even in this case it would not be the radiation module that would be at fault, but the dynamical module and/or the coupling of radiation and fluid flow.

  76. The clear-sky atmospheric radiative energy transport models and codes and application procedures are very likely to have been subjected to adequate testing for verification and validation.
    The primary focus, in my opinion, needs to be on the calculations by the GCMs, or other models and codes that are applied to the Earth’s climate systems. The primary question within this arena is, Do these models / codes / application procedures sufficiently resolve the radiative energy transport to a level of fidelity that is necessary to account for the few W/m^2 change in radiative energy transport that is due to the activities of humans?
    The initial approach to the question can be within the same spherical-cow version of the problem that has been used in this thread.
    The real-world problem of the Earth’s climate systems, and its rough process-model approach through the critical parameterizations, very likely introduces so much fuzziness that the question can’t be answered.

  77. Gordon,

    It would help a lot if you tried to be a bit more objective. I don't know this for a fact, but I would surmise that you have never read any of my published papers on radiative transfer, or those by Jim Hansen. All of these papers are freely available through the GISS webpage: http://pubs.giss.nasa.gov/

    You should also read some of the papers on radiative transfer by Michael Mishchenko. He has published two technical books on radiative transfer, on which I am also a co-author. To my knowledge, Lindzen, Spencer, and Christy have never published any papers on climate-related radiative transfer. Lindzen is an expert in atmospheric dynamics, and Spencer and Christy are experts in satellite microwave measurements. The G&T paper is so far off the wall that it is simply beyond just being wrong. If you have a dental problem, clearly, you would consult a dentist. Why on earth would you want to go and consult a gastroenterologist for a dental problem?

    Climate is what physicists call a boundary value problem. And the important boundary value for climate is the solar-thermal energy balance at the top of the atmosphere, which is 100% by radiative means. Atmospheric dynamics is certainly important in climate, but dynamics is much more important in the case of weather forecasting applications.

    If you were to take the time to read more of the main stream published papers on radiative transfer, you would develop a clearer and more objective perspective to interpret the issues that affect global climate change.

    It is radiation that sets the global energy balance of Earth, and radiation is fortunately the one physical process that is understood best and is amenable to accurate modeling. There are plenty of other dynamic and thermodynamic processes in climate modeling where significant uncertainty exists, and which do affect the accuracy of climate model forecasting capability.

    • It is radiation that sets the global energy balance of Earth, and radiation is fortunately the one physical process that is understood best and is amenable to accurate modeling.

      So long as there are parameterizations available for changing the results of the model, characterization as “accurate modeling” does not apply. Never has, never will.

      Applications of radiative-energy transport within the Earth’s climate systems critically rely on ad hoc, heuristic, somewhat empirical parameterizations for all the aspects of the systems that interact with radiative energy transport. There’s very little that’s based on fundamental first-principles.

      Not to mention that there’s never been a time when the Earth’s radiative energy budget is in balance. And never will be.

      • oops, the quoted sentence got dropped from the first paragraph:

        It is radiation that sets the global energy balance of Earth, and radiation is fortunately the one physical process that is understood best and is amenable to accurate modeling.

    • Andy,

      It would help a lot if you tried to listen to what people are saying, and thought about it, and spent less time repeating the same dogma about radiative transfer. You could learn a lot from Judith.

      Here is a selection of comments:

      Gordon –

      Heat is also transferred by convection, as Lindzen explains, and is released from high in the atmosphere, not from the surface per se. Without convection, according to Lindzen, the mean surface temperature would be 72 C. It is convection that carries off most of the heat, not radiation.

      Tomas Milanovic –

      So considering radiative transfer uncoupled from the fluid dynamics is a nice theoretical exercise but has nothing to do with the reality.

      Peter317 (previous greenhouse thread)-

      Yes, a body will radiate ~3.7 W/m2 more energy if its temperature is increased by ~1°C. However, that does not mean that increasing the energy flow to the body by that amount is going to increase its temperature by ~1°C. Any increase in the temperature has to increase the energy flow away from the body by all means it can, i.e. conduction, convection and evaporation as well as radiation. When the outflow of energy equals the inflow, then thermal equilibrium exists with the body at a higher temperature. But that temperature cannot be as much as ~1°C higher, as the radiative outflow has to be less than the total outflow.

      You should also look at the thread at WUWT (title “knobs”), if you can bring yourself to do so, which points out the erroneous claim in your recent paper.

      Your statement in the paper,
      “Because the solar-thermal energy balance of Earth [at the top of the atmosphere (TOA)] is maintained by radiative processes only, and because all the global net advective energy transports must equal zero, it follows that the global average surface temperature must be determined in full by the radiative fluxes arising from the patterns of temperature and absorption of radiation. ”
      is untrue – see comments above and my comment at WUWT.

      • Paul,

        Climate GCMs include a full accounting of the atmospheric dynamic and thermodynamic processes. That is what determines the atmospheric temperature profile as well as the distribution of atmospheric water vapor at each model grid box and time step. It is by radiative transfer means that the relationship between the surface temperature and the outgoing LW flux to space is established – quite contrary to what your cited commentators understand or believe.

    • Gordon Robertson

      A Lacis | December 10, 2010 at 10:51 am | Reply

      Gordon,

      It would help a lot if you tried to be a bit more objective. I don't know this for a fact, but I would surmise that you have never read any of my published papers on radiative transfer, or those by Jim Hansen. All of these papers are freely available through the GISS webpage: http://pubs.giss.nasa.gov/

      I have no respect for Jim Hansen whatsoever. He is a physicist whose background has been mainly in astronomy, and he moved into climate modeling without an adequate background in meteorology or climate science itself.

      He fell under the influence of astronomer Carl Sagan, who had a theory that the atmosphere of Venus was due to a runaway greenhouse effect. IMHO, Hansen has been trying to foist the same theory on the Earth’s atmosphere.

      Hansen has also fostered a generation of modelers who have tried to rewrite the theories of meteorology and atmospheric physics. I think Hansen is wrong and all of his disciples are wrong. He has been proved wrong. He made certain model-based predictions in 1988 and had to recant them in 1998.

      http://www.sepp.org/Archive/reality/michreviews.html

      Personally, I don't like the way Hansen mixes science and politics. I regard him as an alarmist of the first order, and I think GISS has corrupted the surface database.

      The following site is not authored by a scientist but the man has done his homework. He reveals the damage done to the surface station record by Hansen et al. It’s all there, but you have to dig a bit for all the details.

      http://chiefio.wordpress.com/gistemp/

      The GISS record is the only one showing a distinct warming trend over the past 10 years. IPCC lead author and AGW guru, Kevin Trenberth, admitted in the climategate emails that the warming has stopped. He explains it away as modern instrumentation not being adequate for separating the warming signal from background signal created by ENSO. That raises the question as to why ENSO warming is not being recognized as the true force behind the warming.

      I have not read your paper on radiation but I have read physicist/meteorologist Craig Bohren’s entire book on atmospheric radiation. In the book, he refers to GHGs trapping heat as a metaphor at best, and at worst, plain silly. He gives more credence to the back-radiation theory but points out that it is a simple model. Here is a link to comments by Bohren:

      http://www.usatoday.com/tech/columnist/aprilholladay/2006-08-07-global-warming-truth_x.htm?csp=34

      Bohren makes a comment pertinent to this thread:

      “My biases: The pronouncements of climate modelers, who don’t do experiments, don’t make observations, don’t even confect theories, but rather [in my opinion] play computer games using huge programs containing dozens of separate components the details of which they may be largely ignorant, don’t move me”.

      I think it’s important to recognize that many modelers are way in over their heads when it comes to physics, meteorology and the degreed discipline of climate science.

      I mean you no disrespect, and I have not read your work. However, I think some scientists are far too focused on radiation as a mechanism for heat movement. G&T claim in their article that a huge difference exists between experiments using high densities of CO2 and CO2 in the atmosphere, where the anthropogenic component is exceedingly rare.

      They claim further that the properties attributed to CO2 in the atmosphere have never been demonstrated in the lab. Inferring that CO2 is responsible for more than 15% of atmospheric warming, given its rarity, goes far beyond what has been found experimentally. Satellite data has verified that CO2 is not causing the warming. Only modeling theory supports it.

    • Subsequent comments have made the point that in mathematics and physics "boundary value problems" differ somewhat from what the typical GCM is trying to do. Some have suggested that they fall into the category of "initial value problems". I'm not sure that, if they do, the GCMs are well posed.

      I think people might be talking past one another here.

      The reason mathematicians etc study these kinds of problems is because they are interested in the conditions under which the systems allow well defined solutions. The issues of interest are things like the conditions where unique solutions exist.

      However, I think we can say with some confidence that the nature of GCMs is such that characterising them as either type of problem, or both, still leaves them outside the conditions where they can be solved uniquely. They (or at least the phenomena they are modeling at this level of granularity) are just too complex for well-behaved solutions (notwithstanding their relatively simple boundary conditions and the seductive deterministic nature of the models used). So this issue is all a bit irrelevant.

      If I were going to forecast future climates I'd be looking at stochastic modeling. Now GCMs could help here, using Monte Carlo techniques, if only they more systematically incorporated uncertainty and we had unlimited computer resources; since we don't, that isn't going to be a solution.

      However, on a more positive note, I think I saw a link on this thread to the recent seminar at the Isaac Newton Institute looking at how to incorporate stochastic models into GCMs to help deal with this issue.

      To round off, reading through this thread rather leads me to ask two questions:

      1. If we want to investigate the impact of CO2 on the climate, why would we use a GCM? and
      2. Why incorporate all this level of detail about who absorbs what, and what they emit, in a model designed to tell us in inevitably crude terms what our climate might be in 2050?

      Two different horses for different courses in my view.

  78. Tomas Milanovic

    Climate is what physicists call a boundary value problem.

    No it is not, and there is not the beginning of a proof that it could be.
    It is a dynamical problem, highly likely dependent on initial conditions.
    Some climate scientists call it that, but certainly not physicists.
    I already explained why I don’t think that the climate can be a “boundary problem” in the thread Testimony follow up , here is the argument:

    This paradigm sees the climate as a non-dynamical problem, which is obviously wrong.
    Already from the mathematical point of view, as the climate states (described by the values of the dynamical fields such as velocities, temperatures, pressures, densities etc) are given by the solution of a nonlinear PDE system, it is obvious that this solution can only be obtained when one specifies both the boundary AND the initial conditions.
    Saying that the initial conditions don't matter, which is exactly what the statement that this is only a "boundary value" problem does, condemns us to stay forever in ignorance about the dynamics of the system.

    On the other hand, nonlinear dynamics is (almost) all about initial conditions and the dependence of the evolution of the states of the system on those initial conditions.
    As a great number of examples in fluid dynamics shows, this dependence is paramount, and even a very slight modification of the boundary and/or initial conditions may change the dynamics of the system dramatically.
    Especially, studies of spatio-temporal chaos show that such systems are only rarely, if ever, described by an invariant stochastic distribution of future states that would be independent of the initial conditions.
    Conversely, the assumption that such an invariant probability distribution exists is called the ergodic hypothesis.
    But the ergodic hypothesis is not a given; it must be proven, and it is extremely unlikely that the hypothesis is valid for the complex climatic system.

    The Earth system is neither in equilibrium nor in a steady state.
    There is not the beginning of a proof that it could be ergodic, for the simple reason that it probably is not.
    By the way, if somebody could come up with a proof of climate ergodicity, he could apply the same method to the $1M Clay Institute problem of the Navier-Stokes equations, which is "easier".
    That is not happening, to my knowledge.

  79. Alexander Harvey

    There may well be reasons as to why the IPCC CO2 doubling forcing is relatively easy to calculate with some accuracy.

    I haven't, possibly couldn't, think this all through, but I can see that there may be a simplification which means that things like the H2O concentration with height may not make much of a difference, and the same may be the case for clouds and the weather in general.

    Doubling CO2 attenuates the spectrum primarily in the window left after all the other GHGs are taken account of, limiting the effect of other gases on the calculation.

    For much of the CO2 absorption spectrum the effective radiative height is already above the level of the weather. Where this is not so the lines are weaker and may not contribute much change to the forcing if the concentrations are doubled.

    It may just be that deltaF (the change in radiative flux) is much easier to calculate with some accuracy than say F (the magnitude of the radiative flux).

    As I say, I haven't thought this through, but in essence all that one needs to do is calculate the change in the effective radiative height for each line, to give the change in effective radiative temperature for each line, and hence the change in flux for each line. This change in flux may well be dominated by lines whose radiative height is already above the weather. One does need to know the lapse rate, but (for all I know) this may be easy for heights above the weather.
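    A toy version of that bookkeeping, just to make the arithmetic concrete; the emission temperatures, height changes and lapse rate below are assumed illustrative values, not real line data:

```python
import numpy as np

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def delta_flux(t_emit, delta_z, lapse_rate=6.5e-3):
    """If a line's effective emission height rises by delta_z (m), its emission
    temperature drops by lapse_rate * delta_z (K), and its emitted flux changes
    by roughly the Planck derivative 4 * sigma * T^3 times that temperature drop."""
    dT = -lapse_rate * delta_z
    return 4.0 * SIGMA * t_emit**3 * dT

# Hypothetical "lines": assumed emission temperatures and assumed rises in
# emission height under doubled CO2.
t_emit  = np.array([220.0, 230.0, 250.0])   # K (assumed)
delta_z = np.array([150.0, 100.0, 50.0])    # m (assumed)
change = delta_flux(t_emit, delta_z).sum()
print(f"Toy change in outgoing flux: {change:.2f} W/m2")  # negative = less OLR
```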

    Cloud-top heights will make a difference, but again calculating the deltaF may still be relatively straightforward (even if calculating F isn't), and the final result may not be very sensitive to the distribution of cloud types (a speculation).

    It could turn out that the IPCC CO2 forcing is one of the few things that can be calculated with such precision.

    Alex

  80. All this theorising down to the minutiae of the molecular scale is hardly going to convince anyone, let alone the growing army of sceptics.
    In order to do that, the experts (from both sides of the fence) have to concentrate on the climate aspects that matter and that the majority of people can relate to: the jet stream, AMO, PDO, Southern Oscillation, the Arctic, Antarctica, etc.
    Omnia mutantur, nihil interit ("everything changes, nothing perishes")

  81. Tomas – You responded to Andy Lacis as follows: “Climate is what physicists call a boundary value problem” (Andy’s statement).

    “No it is not and there is not the beginning of a proof that it could be.
    It is a dynamical problem highly likely dependent on initial conditions.”

    It will be best for Andy Lacis or other climate modelers to respond. In the meantime, I see this as getting back to the “climate vs weather” issue, because the latter is clearly an initial value problem. To my mind, climate is less so. Where I live, in the Northeastern U.S., I can predict with confidence that the temperature here next July will be much warmer than it is today (December 10), and will probably average in the 80s in degrees F. I can make that prediction with equal confidence regardless of whether today is unseasonably cold or unseasonably warm, or whether it is snowing, raining, or the sun is shining – the initial conditions make no difference. Just as in the real world, climate model ensembles can start with different initial conditions, incorporate various dynamical considerations and boundary conditions, and yield output that converges over time toward similar results. This doesn’t operate equally well in all areas of model performance, but for long-term global responses of temperature to CO2, as an example, the simulations aren’t bad.

    • Alexander Harvey

      Fred,

      Is the temperature anomaly for next July independent of the temperature anomaly for last July?

      Probably not, but eventually such memory patterns fade. I think there is a fading memory of past dynamics, in the atmosphere perhaps for a week or so but also longer for some aspects. In the oceans it perhaps extends to years or decades, perhaps somewhat longer.

      At what timescale could the dynamics of a “changing” climate be said to be purely a boundary problem?

      Alex

  82. Tomas Milanovic

    Fred Moolten

    Where I live, in the Northeastern U.S., I can predict with confidence that the temperature here next July will be much warmer than it is today (December 10), and will probably average in the 80′s in degrees F. I can make that prediction with equal confidence regardless of whether today is unseasonably cold or unseasonably warm, or whether it is snowing, raining, or the sun is shining – the initial conditions make no difference.

    This is the classical but highly misleading argument that actually has little to do with the issue I have treated.
    Yes, it is trivial, if you live at the South Pole, to estimate that the temperature in winter will be lower than the temperature in summer.
    But this example has nothing to do with the initial conditions vs boundary conditions problem.
    It has to do with 2 facts:
    – you use a local (not global) region
    – this region is chosen such that the local energy flow (the Sun’s radiation) is seasonally extremely variable and quasi-periodic between the 2 chosen time points.

    You could of course not use your example if you happened to live in Singapore, because the temperature on a 6-month scale would depend there on initial conditions – not only in Singapore but all over the globe.
    And even in the NE US you could make no such prediction for anything other than one cherry-picked time-scale example (of the summer–winter kind).

    Most of these arguments I read regarding the “independence of initial conditions” over long time scales for the whole globe have always been only hand-waving, impressions, convictions without any solid scientific foundation.

    • Alexander Harvey

      Tomas,

      I believe that to model a changing climate one is more or less forced to make the assumption that it must be treated as a boundary problem.

      One doesn’t know the real world initial conditions and even if one did, they cannot be imposed on the model without causing a transient, as the world and the model are two different dynamic systems, and the real world state may be a virtually impossible model state.

      I cannot see that we could make much progress without letting the model stabilise (run up) which in effect allows it to come up with its own initial conditions.

      If you are saying that we must theorise about the real world as being subject to initial conditions, I cannot see how you can be wrong. But what does that imply, and where does it get us? I can see the point, but I cannot see how it helps beyond allowing us to say something about persistence and the realised variability (distance from equilibrium) in any particular epoch and its dependence on the prior epoch.

      Alex

      • Alexander Harvey

        Tomas,

        Is part of your point that my use of the term equilibrium is crazy, as no such thing can be defined?

        Alex

      • Alexander,
        It is probably true that most of the analysis of climate change is based on assuming that climate is a boundary problem. That is, however, a property of the models, not necessarily of the real earth with oceans. The assumption is perhaps likely to be true for the earth over time spans of several hundred years, if the boundary conditions remain stable over that period. Over a couple of years, ENSO and other similar factors add a lot of dependence on initial conditions. The AMO and other multidecadal influences may be important initial conditions for several decades. There is also evidence that even much slower processes are influencing glacial cycles. How much confidence can we have now that initial values are not also very important on the time scale of a century or two?

      • Alexander Harvey

        Pekka,

        I think I can see both sides of the argument. I think that as long as we are talking about a particular realisation, (which in the real world is all there is) initial values are important to varying degrees according to the timescale.

        Such things would be true for systems that are linear and hence not chaotic, so I think Tomas must be saying something more than this.

        Alex

  83. The overly-simplistic mis-characterization offered by A Lacis is one that contains no useful information. There are several other examples of this approach within climate science. The mathematical problem considered by GCMs, and other models / codes, is always set as an initial-value boundary-value problem ( IVBVP ). If time is an independent variable a mathematical BV problem would require that information be specified at future time. Kinda hard to accept.

    Even when those famous steady-state energy budget diagrams are presented, so simple that all aspects could be discussed in detail, never a word is said about the range of values that the few constants included in the analysis could attain: the solar constant, the albedo, the (always missing) emissivity of the Earth’s surface, and the Earth’s “greenhouse factor”. Small changes in the values assumed for these will make the perfectly balanced budget become un-balanced. The few parameters are simply assigned numerical values with the sense, based solely on authority, that they are well-defined, that the values presented are known to be correct with great accuracy, and that they are forever unchanging.
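    To make the sensitivity point concrete, here is a minimal sketch of the zero-dimensional balance those budget diagrams implicitly use; the parameter values are the usual round numbers, not figures taken from any particular diagram:

```python
# T_eff = [ S (1 - albedo) / (4 * emissivity * sigma) ]^(1/4)
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def t_eff(S=1361.0, albedo=0.30, emissivity=1.0):
    return (S * (1.0 - albedo) / (4.0 * emissivity * SIGMA)) ** 0.25

base = t_eff()
print(f"T_eff with the round-number inputs: {base:.1f} K")           # ~255 K
print(f"Shift if albedo is 0.31 instead:    {t_eff(albedo=0.31) - base:+.2f} K")
print(f"Shift if S is 1 W/m2 larger:        {t_eff(S=1362.0) - base:+.2f} K")
```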

    Plus at this point CO2 is introduced while conveniently over-looking the plain, well-known fact that the phases of water in the Earth’s atmosphere are responsible for the greatest part of the albedo. Except when our well-established-from-first-principles radiative energy transport model needs a little tuning and we have to throw in some other radiatively-interacting stuff.

    • I said,

      If time is an independent variable a mathematical BV problem would require that information be specified at future time.

      To expand a bit. That means that the calculated response at any time, let’s say the present time, is a function of information at some future time.

  84. Alex – I believe that there are extensive records for July, probably going back to around 1850 in the Hadcrut series. The anomalies are not independent, nor are they hopelessly wide-ranging so as to confuse July with December. They reflect internal climate variations (ENSO and probably others) and also exhibit clear warming trends consistent with anthropogenic influences. The influence of ocean changes probably persists for millennia, but the rate dwindles to insignificance as the time is stretched out.

    The trends don’t conflict with the principle of boundary values as I understand it, because I don’t interpret “boundary values” to necessarily mean unchangeable values. For example, an analysis of the effects of CO2 over time might utilize CO2 concentrations at time T1 and T2 as boundary values. Perhaps the modelers will comment.

    • Is the “reply” function misbehaving or am I just being careless? My response was to Alex, but in my view, the empirical, “climate vs weather” type of evidence that long term global climate behavior is mainly a boundary value problem also applies to other comments above. How well the models can simulate climate is a separate issue and depends on what type of simulation is attempted.

  85. Tomas Milanovic

    Alexander Harvey

    At what timescale could the dynamics of a “changing” climate be said to be purely a boundary problem?

    At none, because this question is precisely a question about the dynamics, which can’t be answered (that is, by solving the relevant dynamical equations in which time appears explicitly) without knowledge of the initial conditions.
    At short time scales you can neglect slow variations like orbital parameters, ocean currents or continental movements, but they kick in when you get to longer time scales.
    So there is always something whose initial condition matters.
    This is simply a property of differential equations.

    • Yes, that’s right, there is a common misunderstanding here (fed by climate scientists and the IPCC). Weather is chaotic because of short-term, medium-space-scale interactions between things like cyclones and fronts and clouds. Climate fluctuates in a probably chaotic way on longer timescales, involving larger structures such as ice sheets and ocean currents. But many climate scientists appear to live under the delusion that the climate would be stable and stationary were it not for “radiative forcings” (this relates to some of Tomas’s earlier points). It is absurd to try to “explain” every little wiggle in the temperature history as being the result of some (usually man-made) “forcing”.

      Here is a nice quote on this from Roger Pielke about the IPCC AR4:

      “Their claim that

      Projecting changes in climate due to changes in greenhouse gases 50 years from now is a very different and much more easily solved problem than forecasting weather patterns just weeks from now.

      is such an absurd, scientifically unsupported claim, that the media and any scientists who swallow this conclusion are either blind to the scientific understanding of the climate system, or have other motives to promote the IPCC viewpoint. The absurdity of the IPCC claim should be obvious to anyone with common sense.”

  86. Tomas Milanovic

    Dan Hughes

    The mathematical problem considered by GCMs, and other models / codes, is always set as an initial-value boundary-value problem ( IVBVP ). If time is an independent variable a mathematical BV problem would require that information be specified at future time.

    Yes. This is really standard PDE theory. There should actually be no discussion about such trivia. Unless, of course, one postulates that PDE theory doesn’t apply in climate science.

  87. Alexander Harvey

    Tomas and Fred,

    To begin with I have no definition of climate but I think I know what a standard climatology is:

    The monthly mean value of a measured variable over a thirty-year period.

    Now obviously, even in a world or model where all the forcings stay the same, the climatologies for successive 30-year periods are not going to be identical, and to that extent each could be seen as a particular realisation influenced by the initial conditions at the start of each period. Here strict determinism has been assumed, which can be enforced in the modelled case.

    (Interestingly, I believe strict determinism is not always enforced in models.)

    Now given that the system is chaotic, viewing each realised climatology as capable of being modelled as a stochastic variation about some expected value for the climatology is not strictly valid, but could be “effectively” valid, e.g. not making any material difference.

    Now I can accept that in a strict sense the initial values problem never goes away and in a completely deterministic world or model they make a real difference to particular realisations.

    What I wish to know is whether each of you thinks this makes a significant difference to the particular realisations of the climatology beyond that which could be explained as a stochastic variation on an underlying expected climatology.

    My guess is that you would disagree on this point, but I don’t know.

    Alex

    • Alex – When it comes to initial value/boundary value issues in climate modeling, my understanding is too shallow for me to be dogmatic, and I would defer to the modelers who do this sort of thing for a living.

      Here is my take on it, however. Utilizing initial and boundary values, appropriate equations for momentum, energy, mass conservation and other physical relationships, and parametrizations, models are constructed so as to yield output that tends to oscillate around a climate state that is relatively stable over time but subject to the stochastic variations you mention. When the model varies from the behavior of real world climate, its parameters are adjusted to tune it so that it corresponds as well as possible.

      The “tuning” ends there, however. If one is evaluating response to a forcing such as changes in CO2, aerosols, or solar irradiance, the values of these changes are added as input to the model. The model, or an ensemble of models, is run under a variety of initial conditions and the output is recorded. If the output tends to reproduce observed values (in hindcasts or the occasional forecasts), so much the better. If it does not, the modeler cannot retune it to make it “come out right”. Under these circumstances, models have varied in performance, but simulate some long-term global changes (e.g., in temperature) better than shorter-term or more regional changes, with initial values increasingly uninfluential as intervals increase from a few years to multiple decades. Part of the disparity between modeled and observed behavior reflects the stochastic variations you mention, but this does not preclude useful results based on the elements of climate that are reasonably predictable from basic physics principles. In theory, the chaotic elements within climate could overwhelm the predictable ones, but empirically, the observed behavior of climate over timescales of interest to us (multiple decades or centuries) does not support such an interpretation.

      • Alexander Harvey

        Fred,

        Thanks for taking the time.

        I don’t disagree, but there is a bridge that needs constructing that shows why a non-linear chaotic system spontaneously gives rise to a climate that has near-linear stochastic aspects.

        The global temperature record and the global temperature data from one model I have looked at (HadGEM1) have certain properties, in particular spectra of a certain form. They do not differ much from what one might expect from filtered noise plus a warming signal. The level of the noise can be estimated, and one can come to some conclusion about estimating the variance due to the stochastic component.

        It is as though all that marvellous chaotic behaviour results in something that, when averaged over monthly periods, is not readily distinguishable from a white-noise flux, plus a warming flux, forcing a linear system (in fact an LTI system).

        Now there are reasons to suspect that cannot be quite true, but on the scale of just one century or so the data may not be able to show any non-linearity in the response.

        One of these is ENSO, widely thought to have a chaotic basis, but it is a well-damped phenomenon (the critical time of the damping much shorter than the periods involved) and is tolerably modeled by a damped resonance driven by noise.

        Another is the PDO, which does seem to be a real effect and may be chaotic, but again its ability to explain variance in the global temperature record, although probably non-zero, is at a level I judge similar to the level of the noise, making attribution difficult.

        Now nature occasionally gives the system some hard knocks due to volcanism. I am not aware that any of these experiments have ever shown any significant non-linearity in the response-recovery profile. The effect on global temperatures seems to be what one might expect of a linear model, a decrease in temperatures followed by a relaxation.

        All in all, we seem to have complex deterministic non-linear origins giving rise to effects well explained by simple stochastic linear models.

        It is that bridge that requires some characterisation.

        On the other hand, those that believe that no such bridge exists must surely need to show cases where the global response has differed markedly from that which one could model as the result of a forcing plus noise impinging on a linear system at the standard climatic timescales of months/years.

        Such a linear stochastic noise-plus-forcing model is still feature rich; it is capable of exhibiting natural variability due to external and internal (including stochastic) forcings.
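        For concreteness, a minimal version of the kind of model meant here (a one-box LTI response to a white-noise flux plus a slow warming flux) might look like the sketch below; every number in it is an illustrative assumption, not a fitted value:

```python
import numpy as np

# One-box linear (LTI) energy balance: C dT/dt = F(t) - lambda * T,
# stepped monthly, forced by a warming ramp plus white-noise flux.
rng = np.random.default_rng(42)
n_months = 1200                         # 100 years
dt  = 30 * 86400.0                      # ~ one month in seconds
C   = 2.0e8                             # heat capacity, J m^-2 K^-1 (assumed)
lam = 1.2                               # feedback parameter, W m^-2 K^-1 (assumed)

ramp  = np.linspace(0.0, 2.0, n_months)         # slow warming flux, W m^-2
noise = 0.5 * rng.standard_normal(n_months)     # white-noise flux, W m^-2

T = np.zeros(n_months)
for i in range(1, n_months):
    T[i] = T[i - 1] + dt * (ramp[i] + noise[i] - lam * T[i - 1]) / C

trend = np.polyfit(np.arange(n_months) / 12.0, T, 1)[0]
print(f"Century-scale trend of the toy model: {trend:.3f} K/yr")
```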

        Alex

      • Alex,
        The relationship between chaotic and stochastic behaviour is very interesting. Many climate models are deterministic models that lead to chaotic results, but one could modify these models by adding small stochastic perturbations at each time step. The resulting model would be stochastic but might lead to very similar results when model runs are repeated and the resulting range of future climates is presented.

        Actually I would consider the original deterministic model doubtful unless the results remain the same when small stochastic perturbations are added. By small I mean something of the order of uncertainties of the variables at each time step.
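        A minimal sketch of that test, with a toy chaotic system (the standard Lorenz '63 equations) standing in for a climate model; the perturbation size and integration settings are arbitrary assumptions:

```python
import numpy as np

def lorenz_stats(n_steps=50_000, dt=0.01, noise=0.0, seed=1):
    """Integrate Lorenz '63 with forward Euler, optionally adding a small random
    perturbation at every step, and return the long-run mean and std of x."""
    rng = np.random.default_rng(seed)
    x, y, z = 1.0, 1.0, 1.0
    xs = np.empty(n_steps)
    for i in range(n_steps):
        dx = 10.0 * (y - x)
        dy = x * (28.0 - z) - y
        dz = x * y - (8.0 / 3.0) * z
        x += dx * dt + noise * rng.standard_normal()
        y += dy * dt + noise * rng.standard_normal()
        z += dz * dt + noise * rng.standard_normal()
        xs[i] = x
    return xs.mean(), xs.std()

# Compare long-run statistics, not trajectories.
print("deterministic (mean, std of x):", lorenz_stats())
print("perturbed     (mean, std of x):", lorenz_stats(noise=1e-3))
```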

      • Alexander Harvey

        Pekka

        Thanks, such things are it seems done when evaluating long-term weather forecasts.

        This may interest you:

        “Stochastic representation of model uncertainties in ECMWF’s forecasting system”

        http://sms.cam.ac.uk/media/952887

        The video is about 40 minutes long but the presenter seems to know his stuff and gives a competent performance.

        From the blurb:

        “The stochastic schemes used for the model error representation will be presented. These are the Spectral Stochastic Backscatter Scheme (SPBS) and the Stochastically Perturbed Parametrization Tendency Scheme (SPPT). … The two schemes address different aspects of model error. SPPT addresses uncertainty in existing parametrization schemes, as for example parameter settings, and therefore generalizes the output of existing parametrizations as probability distributions. SPBS on the other hand describes upscale energy transfer related to spurious numerical dissipation as well as the upscale energy transfer from unbalanced motions associated with convection and gravity waves, process missing in conventional parametrization schemes.”

        Alex

  88. Willis Eschenbach

    Dr. Lacis, thank you for your participation. As you know, I have commented on your work at WUWT. In addition, I have analysed the GISS ModelE results using a totally different metric here. It would be great if you could find something wrong with what I have written, and let me know what it is … science at its finest.

    It’s a sincere invitation.

    Here’s an encapsulated version of the problem. We have no estimates of how accurate the models are. We have no estimates of error propagation through the models. We have no V&V or SQA on the models. Forget the unknown errors, we have no idea of how even the known errors in the models affect the outcomes.

    For example, in “Present-Day Atmospheric Simulations Using GISS ModelE: Comparison to In Situ, Satellite, and Reanalysis Data” (warning: 2.2 Mb PDF), the GISS NASA modelers (James Hansen, Gavin Schmidt, et al. including you yourself) say:

    Occasionally, divergence along a particular direction might lead to temporarily negative gridbox masses. These exotic circumstances happen rather infrequently in the troposphere but are common in stratospheric polar regions experiencing strong accelerations from parameterized gravity waves and/or Rayleigh friction. Therefore, we limit the advection globally to prevent more than half the mass of any box being depleted in any one advection step.

    Let me be clear about this. It is “common” in your “physics-based” GISSE model to end up with “negative masses”.

    I must have missed that part of my physics class … and if I found that in one of my models, I would take it as a clue that something is very wrong.

    But that’s not what you guys did. Rather than actually fix the problem, you dodge it. You limit the advection globally to half the mass … and what error results from that procedure? Given that the parameterized gravity waves lead to those physically impossible results, what other errors are created by that obviously incorrect process? We don’t know. At a minimum it must screw up the conservation of energy … but since the GISS Model E doesn’t conserve energy, it’s not clear what violating conservation of energy does either.

    Actually, to be fair, your GISSE model does conserve energy. It does it with as much grace as it handled the parameterized gravity waves … with a kludge. And a clumsy kludge at that.

    At the end of every time cycle, the GISSE model simply takes all the global excess or shortage of energy and distributes it evenly around the globe, without even checking how much out of balance the calculation is … true word, confirmed to me by Gavin Schmidt. No routine printout of the error, and no error-trap in the code to see if the error might be big on one cycle or in a certain situation. No matter how far the model is off of the rails at the end of a cycle, the error is arbitrarily spread out evenly around the globe, with no checking, and everything is just fine.

    I mean, who’d want to check conservation of energy? If you checked that, you might find an error, and then you’d have to fix it …

    It is this constant ad-hoc patching of the errors with kludges and arbitrary parameters that gives me little confidence in your code. For example, the code had a problem handling melt-pools on the ice. It was getting melt-pools when it shouldn’t, during the winter.

    Your solution? At least you were consistent. You didn’t fix whatever the errors were in the underlying code that made the bogus melt-pools.

    Instead, GISSE now arbitrarily states that melt-pools can only form on the ice during the six months of the summer. Otherwise, no pools. I can show you the code. I can explain the code to you if you are not a programmer.

    That’s what your vaunted “physics-based” model is doing. When it encounters a problem that indicates there is an error in the physics, you just say something like “OK, no melt pools in March” or the like, and keep on going.

    And on a more important scale, the model is artificially enforcing an energy balance. If you guys were accountants and tried that, you’d be sued so fast your wallets would be spinning. Gavin Schmidt described the error as “small”, and said it was almost always under a couple of W/m2, and usually under 1 W/m2 … an error of 1 W/m2 in each step of an iteration is many things, but it is not “small”. In particular, when Hansen claims an accuracy of 0.85 ± 0.15 W/m2 in his “smoking gun” paper (discussed here), a one-watt error at each and every timestep looms very large …

    I’ve spent a lifetime programming computers. I know the limits of models from bitter and costly experience. And while I find your faith in your model encouraging, it would mean more if your models didn’t constantly fail the simplest of tests.

    But to be fair, your model is great at forecasting warming. Heck, your model shows warming even if there is no change in the forcings. The CMIP Model Intercomparison Project shows that in their control run. And how much did the GISSE model warm in the CMIP control runs, when the forcing didn’t change in the slightest?

    Oh, only about 0.7°C/century … or about the same rate that the globe warmed last century.

    Which is kind of cool, when you think about it, because your model predicts the 20th century warming pretty well (correlation = 0.59) with no inputs at all. And that’s a real achievement for a model, even with calm seas and a following wind … but it is still an error.

    And with all of those errors and failed tests, you want us to believe your model is accurate, not just for the tiny effects of incremental variations as in the past, but to be accurate regarding pulling out all of the non-conducting GHGs?

    Me, I’ll pass. I’m old-fashioned, I like my models to pass real-world tests before I’ll trust them. Because at the end of the day, while your faith in your model is certainly impressive, I’m a follower of Mark Twain, who remarked:

    “Faith is believing something you know ain’t so.”

    w.

    • Hmmmm this is getting interesting.
      I’ve cancelled all family functions, I’ve stocked up on nutrients and refreshments and am comfortable in front of my laptop.

      Over to you Dr Lacis

    • Seeing no reply to this yet, I will give my perspective. GCMs can show why Mexico is warmer than the US, why July is warmer than March, etc. You can choose to believe these aspects of them or just look at the real world and use your own knowledge of how climate works. Similarly, at another level, they show what happens when you add CO2, and again you can choose to believe them, but only based on the physical interpretation of what they suggest, not blindly. In the case of climate models, the physical interpretation is very straightforward, to the same degree as understanding seasons and latitude variations, and this makes them credible.
      Some people do actually prefer the heuristic arguments of Spencer and Lindzen, but for most scientists such hand-waving is not an explanation, and quantification is the key. Models ranging from GCMs to back-of-the-envelope one-dimensional estimates point in the same direction regarding CO2 doubling.

  89. Willis Eschenbach

    For Dr. Lacis, who thinks that forecasting the climate is a “boundary problem”, a question.

    Over at MathWorld, a “boundary problem” is defined as:

    Boundary Value Problem

    A boundary value problem is a problem, typically an ordinary differential equation or a partial differential equation, which has values assigned on the physical boundary of the domain in which the problem is specified.

    An “initial value problem”, on the other hand, is defined as:

    Initial Value Problem

    An initial value problem is a problem that has its conditions specified at some initial time t = t0. Usually, the problem is an ordinary differential equation or a partial differential equation.

    That seems quite clear. If we know the temperatures of the edges of a sheet of steel at various times, the rest is a differential equation.
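    For readers who want the textbook versions side by side, here is the standard contrast for the 1-D heat equation (nothing climate-specific about it):

```latex
% Initial value problem: data given at an initial time, marched forward.
\[ u_t = \kappa\, u_{xx}, \qquad u(x,0) = f(x) \]
% Boundary value problem: data given on the spatial boundary of the domain.
\[ u_t = \kappa\, u_{xx}, \qquad u(0,t) = a, \quad u(L,t) = b \]
% A GCM-type calculation specifies both kinds of data, i.e. an
% initial-boundary value problem.
```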

    So Dr. Lacis, my questions about the climate boundary problem are:

    1. What is the “physical boundary of the domain in which the problem is specified”?

    2. What are the variables whose values are known at the physical boundary?

    3. What are the values of those variables?

    4. At what time are the values of the variables at the boundary specified (time of measurement)?

    5. What evidence do we have that climate is actually a boundary problem? I ask because no less of an authority than Benoit Mandelbrot said that there is no statistical difference between weather and climate:

    Among the classical dicta of the philosophy of science is Descartes’ prescription to “divide every difficulty into portions that are easier to tackle than the whole…. This advice has been extraordinarily useful in classical physics because the boundaries between distinct sub-fields of physics are not arbitrary. They are intrinsic in the sense that phenomena in different fields interfere little with each other and that each field can be studied alone before the description of the mutual interactions is attempted.

    Subdivision into fields is also practised outside classical physics. Consider for example, atmospheric science. Students of turbulence examine fluctuations with time scales of the order of seconds or minutes, meteorologists concentrate on days or weeks, specialists whom one might call macrometeorologists concentrate on periods of a few years, climatologists deal with centuries and finally paleoclimatologists are left to deal with all longer time scales. The science that supports hydrological engineering falls somewhere between macrometeorology and climatology.

    The question then arises whether or not this division of labour is intrinsic to the subject matter. In our opinion it is not, in the sense that it does not seem possible, when studying a field in the above list, to neglect its interactions with the others. We therefore fear that the division of the study of fluctuations into distinct fields is mainly a matter of convenient labelling, and is hardly more meaningful than either the classification of bits of rock into sand, pebbles, stones and boulders, or the classification of enclosed water-covered areas into puddles, ponds, lakes and seas.
    Take the examples of macrometeorology and climatology. They can be defined as the sciences of weather fluctuations on time scales respectively smaller and longer than one human lifetime. But more formal definitions need not be meaningful. That is, in order to be considered really distinct, macrometeorology and climatology should be shown by experiment to be ruled by clearly separated processes. In particular, there should exist at least one time span on the order of one lifetime that is both long enough for micrometeorological fluctuations to be averaged out and short enough to avoid climate fluctuations…

    It is therefore useful to discuss a more intuitive example of the difficulty that is encountered when two fields gradually merge into each other. We shall summarize the discussion in M1967s of the concept of the length of a seacoast or riverbank. Measure a coast with increasing precision, starting with a very rough scale and dividing into increasingly finer detail. For example, walk a pair of dividers along a map and count the number of equal sides of length G of an open polygon whose vertices lie on the coast. When G is very large the length is obviously underestimated. When G is very small and the map is extremely precise, the approximate length L(G) accounts for a wealth of high-frequency details that are surely outside the realm of geography. As G is made very small, L(G) becomes meaninglessly large. Now consider the sequence of approximate lengths that correspond to a sequence of decreasing values of G. It may happen that L(G) increases steadily as G decreases, but it may happen that the zones in which L(G) increases are separated by one or more “shelves” in which L(G) is essentially constant. To define clearly the realm of geography, we think that it is necessary that a shelf exists for values of G near λ, where features of interest to the geographer satisfy G>=λ and geographically irrelevant features satisfy G much less than λ. If a shelf exists, we call G(λ) a coast length.

    After this preliminary, let us return to the distinction between macrometeorology and climatology. It can be shown that to make these fields distinct, the spectral density of the fluctuations must have a clear-cut “dip” in the region of wavelengths near λ, with large amounts of energy located on both sides. But in fact no clear-cut dip is ever observed.

    When one wishes to determine whether or not such distinct regimes are in fact observed, short hydrological records of 50 or 100 years are of little use. Much longer records are needed; thus we followed Hurst in looking for very long records among the fossil weather data exemplified by varve thickness and tree-ring indices. However, even when the R/S diagrams are so extended, they still do not exhibit the kind of breaks that identify two distinct fields.

    In summary, the distinctions between macrometeorology and climatology, or between climatology and paleoclimatology, are unquestionably useful in ordinary discourse. But they are not intrinsic to the underlying phenomena.
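    (For readers who prefer code to prose, here is a minimal sketch of the divider procedure described in the coastline passage above. It is purely illustrative: the synthetic curve and step lengths are invented, and a smooth curve like this one makes L(G) converge, whereas a genuinely fractal coastline keeps growing as G shrinks.)

```python
import numpy as np

def divider_length(points, G):
    """Walk dividers of opening G along a curve sampled as (x, y) points.

    Returns the approximate length L(G) = (number of steps) * G, stepping to
    the first sample point at least G away from the current divider position
    (a simple discretization of placing the next vertex exactly G away).
    """
    pos, i, steps = points[0], 0, 0
    while True:
        j = i + 1
        while j < len(points) and np.hypot(*(points[j] - pos)) < G:
            j += 1
        if j >= len(points):
            break
        pos, i, steps = points[j], j, steps + 1
    return steps * G

# A synthetic wiggly "coast", purely for illustration.
t = np.linspace(0, 10, 5000)
coast = np.column_stack([t, np.sin(3 * t) + 0.3 * np.sin(17 * t)])

for G in (2.0, 1.0, 0.5, 0.25, 0.1):
    print(f"G = {G:4.2f}   L(G) = {divider_length(coast, G):6.2f}")
```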

    I get the impression that saying “it’s a boundary problem” is regarded as a shibboleth for the true faith, the mere utterance of which clears the way.

    But me, I’m not much into faith, so until I get answers to my five questions above, I will continue to be a heathen disbeliever …

    Now here, Judith, we have the opportunity for what you are discussing in another thread, some education. If Dr. Lacis comes back and answers questions, we have some hope for fighting my ignorance about the boundary problem.

    But I have little faith that Dr. Lacis will actually answer them … I’m happy to be pleasantly surprised, however.

  90. While we wait for Lacis to explain, I’ll give an idea of why the climate problem is a boundary problem. Typical boundary problems are steady-state solutions, such as sensitivity of climate to doubling CO2. This is a fixed forcing, where forcing has a specific meaning, and includes the steady climate drivers, solar, and CO2. Climate models may generalize these forcings to be specified but time-dependent, such as CO2 scenarios, solar variations, or specified volcanoes and aerosols, but this is still a boundary problem because the climate evolution is determined by the specified forcings. The “boundary” is not a physical boundary in the usual sense, though the top boundary condition of solar radiation from space is a part of it; rather, it is a boundary in the sense of a constraint given by the specified forcings.
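    To make the “determined by the specified forcings” point concrete, here is a minimal toy sketch (illustrative only, not taken from any model discussed in this thread): a zero-dimensional energy balance C dT/dt = F − λT relaxes to the same equilibrium T = F/λ from any starting temperature, so the long-run answer is set by the forcing boundary condition rather than by the initial condition. All parameter values below are made up for illustration.

```python
# Toy zero-dimensional energy balance: C * dT/dt = F - lam * T
# Illustrative values only; not taken from any GCM.
C = 8.0e8      # effective heat capacity, J m^-2 K^-1
lam = 1.2      # restoring ("feedback") parameter, W m^-2 K^-1
F = 3.7        # fixed forcing, W m^-2 (roughly a CO2 doubling)
dt = 86400.0   # one-day time step, s

def run(T0, years=300):
    """Integrate forward from initial temperature anomaly T0 (in K)."""
    T = T0
    for _ in range(int(years * 365)):
        T += dt * (F - lam * T) / C
    return T

for T0 in (0.0, 1.0, 5.0):
    print(f"T0 = {T0:3.1f} K  ->  T(300 yr) = {run(T0):.3f} K")
# Every run ends near F/lam = 3.08 K: the steady state is fixed by the
# specified forcing (the "boundary"), not by where the run started.
```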

    • Willis Eschenbach

      Jim D, thanks for the response. You say:

      While we wait for Lacis to explain, I’ll give an idea of why the climate problem is a boundary problem. Typical boundary problems are steady-state solutions, such as sensitivity of climate to doubling CO2. This is a fixed forcing, where forcing has a specific meaning, and includes the steady climate drivers, solar, and CO2.

      But as I’ve shown above, the GISSE model gives incorrect answers even when all of the forcings are set to zero. If climate is truly a boundary problem, and the model is truly simulating the climate, and the forcings are set to zero, why is the temperature changing in the GISSe model? Your choices are:

      1. GISS model no workee.

      2. It’s not a simple boundary problem.

      3. Both.

      Also, saying it is a “boundary problem” if the “boundary” is the TOA implies that we understand all of the forcings. That seems doubtful.

      Thank you for answering. I didn’t really expect Dr. Lacis to answer me, once the going gets tough, the tough go answer an easier question … but like I said, I’m still happy and willing to be surprised.

      • I didn’t really expect Dr. Lacis to answer me, once the going gets tough

        You any relation to Bruce Willis? Maybe Andy just hasn’t seen the connection yet.

      • Willis Eschenbach

        I’m happy to be surprised, like I said.

      • You any relation to Bruce Willis? Maybe Andy just hasn’t seen the connection yet.

        No relation but there are some similarities in character.

        Both Willises cop a caning from the bad guys over and over again, but they don’t get deterred one iota, they just keep plugging away doing the right thing.
        And they’re both universally regarded as the good guy.

        By the way, any relation to Vince? He is a pretty bad comedian IMHO
        He fires off words like a Gatling gun and one gets tired of hearing his voice after a short while.

      • By the way, any relation to Vince? He is a pretty bad comedian IMHO. He fires off words like a Gatling gun and one gets tired of hearing his voice after a short while.

        Any relation to Bah Humbug?

  91. A Lacis

    ….” The G&T paper is so far off the wall that it is simply beyond just being wrong. “…..
    This is hardly an adequate reply to Gordon Robertson.

    So it should be easy to pick one major defect in the G&T paper and point it out.

    For instance, G&T say that a major error in most versions of Greenhouse Theory is a violation of the Second Law.
    A fairly frequently expressed climate science understanding of this law is:
    Heat can flow from a colder to a hotter object as long as more heat flows from the hotter to the colder.
    This is wrong.

    • I fail to understand what you want to say.

      There is no doubt that the second law makes a statement about the net flow. There is nothing wrong in describing radiative transfer of energy between two bodies at different temperatures as a stronger radiative transfer from the hotter to the colder and a weaker radiative transfer from the colder body to the hotter. After all, radiation definitely occurs in both directions.

      The second law tells only that the net transfer is from the hotter to the colder.

      When the situation can be described in alternative ways, physicists usually prefer the choice that makes quantitative calculations easiest or most straightforward. In the case of radiative heat transfer this often leads to calculating both directions of radiative heat transfer separately and then taking their difference. Any paper that claims that this is in contradiction with the second law of thermodynamics is seriously wrong and shows that its authors know nothing about thermodynamics.

    • I believe you’re confusing heat with energy.
      The net flow of energy is from the warmer to the colder body. Energy flows both ways between the two bodies, but there’s always more flow from the hotter to the colder than there is from the colder to the hotter – unless work is done.

    • So it should be easy to pick one major defect in the G&T paper and point it out.

      The hard part is having so many choices.

      • Vaughan Pratt

        Pick the first one you come to.
        Assuming you know how to count to one!

      • Pick the first one you come to.

        A truly major one is their claim that the 2nd law of thermodynamics disproves back radiation. As has been pointed out by far too many people to keep track of, it does no such thing.

        Assuming you know how to count to one!

        Hmm, let me try it. 3, 2, 1, buggeroff! (Pardon my French.)

      • If one is going to use a foreign language one must first understand it. Now can you tell us exactly where in the G&T paper they claim that

        2nd law of thermodynamics disproves back radiation.

        I am having a lot of difficulty finding it.

      • Now can you tell us exactly where in the G&T paper they claim that 2nd law of thermodynamics disproves back radiation. I am having a lot of difficulty finding it.

        If you’re unable to find the problems in a paper after a whole slew of people have pointed them out, this might help explain why you consider the paper to be free of problems. (David Hagen appeared to be having the same problem with Miskolczi’s paper back when he was laboring under the delusion that Viktor Toth had arrived at the same number (2) that FM was claiming when VT had in fact obtained 0.4.)

        Unfortunately the ability to count to one will not help here because the claim is on page two of the arxived journal version.

        “The atmospheric greenhouse effect, an idea that many authors trace back to the traditional works of Fourier (1824), Tyndall (1861), and Arrhenius (1896), and which is still supported in global climatology, essentially describes a fictitious mechanism, in which a planetary atmosphere acts as a heat pump driven by an environment that is radiatively interacting with but radiatively equilibrated to the atmospheric system. According to the second law of thermodynamics such a planetary machine can never exist.”

        If one is going to use a foreign language one must first understand it.

        The same goes for English. What they’re saying there is that the second law of thermodynamics proves there cannot be back radiation, by virtue of being inconsistent with it. It is a proof by contradiction, reductio ad absurdum.

      • Pratt quoting G&T:

        “The atmospheric greenhouse effect, an idea that many authors trace back to the traditional works of Fourier (1824), Tyndall (1861), and Arrhenius (1896), and which is still supported in global climatology, essentially describes a fictitious mechanism, in which a planetary atmosphere acts as a heat pump driven by an environment that is radiatively interacting with but radiatively equilibrated to the atmospheric system. According to the second law of thermodynamics such a planetary machine can never exist.”

        Nowhere in that statement does it even remotely claim

        that the 2nd law of thermodynamics disproves back radiation.

        it doesn’t even imply it. It’s very clever of you to be able to count to two but it seems you can’t go far beyond that. I would have thought that at the very least you would have gotten to section 3.9 before quoting text that did not prove your claim.

        Now to an earlier matter:

        . However the “cannon ball” is really an N-atom molecule

        Naah the “cannon ball” is not a molecule of any sort but a parcel of air free to move vertically by convection. Advection (lateral movement) is irrelevant.

      • (Oops, sorry, only noticed your reply just now. I keep getting lost in these blogs.)

        Nowhere in that statement does it even remotely claim
        that the 2nd law of thermodynamics disproves back radiation. it doesn’t even imply it. I would have thought that at the very least you would have gotten to section 3.9

        We seem to have wildly different interpretations of what G&T meant by what I quoted. As I read that quote (from the abstract), they are saying that the 2nd law disproves the possibility of such a planetary machine. Now I would have thought, but you seem to disagree, that it is obvious from context that they’re referring here to “the idea that many authors trace back to the traditional works of Fourier (1824), Tyndall (1861), and Arrhenius (1896), and which is still supported in global climatology,” which they develop in more detail in Section 3.6.

        So what do you think they’re referring to there?

        I found Section 3.6 utterly unreadable because they kept quoting statements by other authors that I fully agreed with, and then sarcastically shot them down via logic I was utterly unable to follow. Their slogan could well have been “Everyone is wrong except Wood and us.” I just couldn’t follow it, it was like trying to hang on to a roller coaster.

        Somehow their observation that almost everyone but them in the past two hundred years of the subject is out of step doesn’t seem to bother them.

        That all comes well before Section 3.9, which is about a bizarre reinterpretation of “the traditional works” in which G&T make amazing statements like “The second law is a statement about heat, not about energy,” contradicting every physics textbook that says “Heat is a form of energy,” and “the Stefan-Boltzmann constant is not a universal constant” when it clearly is (unless you believe π or Boltzmann’s constant or Planck’s constant or the speed of light is different on Mars or Arcturus, which may be but not by much). Statements like that are what one might expect on a first year exam by a student who should not have enrolled in physics in the first place. How is one supposed to make sense of arguments based on nonsensical statements about physics?

        Did you say you work with HITRAN data? How are you able to do that and yet find your thinking fully compatible with the G&T paper? You must be able to compartmentalize your thinking far better than I. Quite apart from the spiteful polemics lacing the paper, I found their arguments devoid of both logic and basic physics. Were I the handling editor for this paper I would not have dreamed of sending it out for refereeing; I’d lose face with my referees.

      • We seem to have wildly different interpretations of what G&T meant by what I quoted. As I read that quote (from the abstract), they are saying that the 2nd law disproves the possibility of such a planetary machine

        The “machine” is the atmospheric heat pump that heats up the surface from a cooler source. It seemed straightforward enough to me, so why not to you? I think I can see why:

        “The second law is a statement about heat, not about energy,” contradicting every physics textbook that says “Heat is a form of energy,”

        Heat might be a form of energy but that is no reason to confuse the two. For example there may be energy in an isolated system but there will be NO HEAT if the entropy is maximised.

      • I don’t want to argue whether or not heat is energy any more, it’s a pointless discussion. It’s a waste of time attacking the G&T paper’s faults one by one, it’s like trying to exterminate an ant nest by squishing the ants one by one. The only other paper I’ve seen in a journal with anything like the same number of ridiculous statements is the Sokal paper. In Sokal’s case he admitted he did it deliberately.

        It is inconceivable that G&T could have Ph.D.s in physics and be able to write such utter nonsense. The only possible explanation is that, like Sokal, they did it deliberately, but unlike Sokal they won’t admit to having done so.

        Anyone who can’t see that has no business claiming to be knowledgeable about physics.

  92. Pekka Pirilä

    The statement that you seek to defend is:
    “Heat can flow from a colder to a hotter object as long as more heat flows from the hotter to the colder.”

    I’m sure you’ll agree that the contentious part of the sentence is
    Heat can flow from a colder to a hotter object ……we will assume the other part is a condition.
    If you look up the thermodynamics section of any reputable Physics Textbook it will explain that HEAT has the thermodynamic capacity to do WORK.
    So whatever the status of this radiation is it is not HEAT.

    • I know perfectly well what I am talking about. I have been teaching thermodynamics and there is nothing controversial in what I said. Read again my previous message, and try to understand what it says.

      • Pekka Pirilä
        Peter above is correct.
        Who do you teach thermodynamics to?

      • I’m sorry Bryan, Peter and Pekka. After reading your discourse, I just can’t help but post this.

        Who do you teach thermodynamics to?

        Answer: Climatologists

      • I have teached at the Technical University of Helsinki for students of technical physics (those who understand most about physics) and to students of energy engineering.

      • Pekka
        Thanks for the reply.
        Does your syllabus include the Carnot Analysis (Carnot Cycle) of the perfect heat engine?

      • Naturally, and other cycles as well. For energy engineering students the various thermodynamical cycles are the most important thing to learn.

      • Pekka

        You will understand that the Carnot Analysis is the usual introduction to the Second Law.
        Is that your understanding?
        It also gives the most efficient method of transferring heat from a lower to a higher temperature.

      • In engineering the Carnot cycle is more a theoretical concept than a practical cycle, because other cycles are much more easily realized as practical engines.

        It is always the first cycle to be taught as it is used to relate the second law to the best possible efficiency, but after that the other cycles get most emphasis.

      • Yes the Carnot Analysis is for an “ideal” or perfect engine.
        So any transfer of heat from a lower to a higher temperature will be less efficient!

        What law will govern the effect of radiative energy from lower to higher bodies?
        Will it be the first law, such as Heat gained = C × ΔT,
        Or the Second Law?

      • The answer to your question is above in my message of December 11, 2010 at 6:41 am.

        The radiative heat exchange between two bodies is not a complete thermodynamic cycle. The second law tells only that more energy flows from the hotter to the colder than from the colder to the hotter, or in other words the net flow is from the hotter to the colder.

      • Pekka

        I gather the way you see it is: let’s say 300 J of radiation arrives from the colder body, say at 200 K (with the usual spread of wavelengths).
        This radiation is fully absorbed by a body at 250 K.
        If we know the mass and specific heat (C) of the warmer body we might try a first law solution such as:
        C × M × ΔT
        However the alarm bells might ring in our heads and say ;
        No this is the province of the Second Law!
        Where do you stand?

      • I’m mystified by these objections to Pekka’s statement about bidirectional heat flow. Take two square sheets of perfectly black metal, each of side 42 cm (about 1.9 sq.ft.), at respective temperatures T₁ hK (hectokelvin, i.e. 100T₁ K, so 3.17 hK = 317 K) and T₂ hK , and place them side by side 1 cm apart. Then by Wien’s displacement law, radiation at wavelengths peaking at 29/T₁ microns will flow from sheet 1 to sheet 2 while radiation at wavelengths peaking at 29/T₂ microns will flow from sheet 2 to sheet 1. By the Stefan-Boltzmann law, the power at the first wavelength received by sheet 2 is T₁⁴ watts (the sheets were sized to make the constant 1 to within .02%) while that received by sheet 1 is T₂⁴ watts. The net flux is then T₁⁴ − T₂⁴ watts, with the sign determining the direction of flow.
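        (A quick numerical check of the two-sheet example, under the assumptions stated above: perfectly black 42 cm square sheets, standard values for the Stefan-Boltzmann and Wien constants. The 317 K / 280 K pair below is just an arbitrary illustration.)

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
WIEN = 2898.0      # Wien displacement constant, micron K
AREA = 0.42 ** 2   # 42 cm square sheet, m^2 (chosen so SIGMA*AREA*100^4 ~ 1)

def exchange(T1_hK, T2_hK):
    """Bidirectional radiative exchange between two black sheets, temperatures in hectokelvin."""
    for name, T_hK in (("sheet 1", T1_hK), ("sheet 2", T2_hK)):
        T = 100.0 * T_hK                   # kelvin
        power = SIGMA * AREA * T ** 4      # emitted power, W (numerically close to T_hK**4)
        peak = WIEN / T                    # Wien peak wavelength, microns
        print(f"{name}: emits {power:6.1f} W, spectrum peaking near {peak:4.1f} microns")
    net = SIGMA * AREA * ((100.0 * T1_hK) ** 4 - (100.0 * T2_hK) ** 4)
    print(f"net flow from sheet 1 to sheet 2: {net:6.1f} W")

exchange(3.17, 2.80)   # e.g. 317 K facing 280 K: both directions are nonzero
```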

        This is completely consistent with everything Pekka has been saying. In particular it is clear that radiation is flowing in both directions, and we can even measure each direction separately when the temperatures, and hence wavelengths, are sufficiently far apart.

        Based on the comments I would estimate the fraction of the n contributors to this discussion who regularly teach thermodynamics to be 1/n.

      • I don’t think anyone disputes that radiation emanates from both objects. The issue is about the net flow. Heat is a flow. The transfer of heat between objects is the result of the net flow. The presence of the cooler object slows the rate of cooling of the hotter object, but to talk of the cooler object ‘heating’ the hotter object just confuses the issue, and is contrary to natural language usage. The natural usage of ‘heating’ is that if something is being heated, it will get hotter. The cooler object doesn’t do this to the hotter object. It radiates towards it, yes, but heats it? No.

        I understand why proponents of the AGW hypothesis wish to torture language this way though, given the average atmospheric temperature is lower than the average oceanic temperature. They have to convince people the tail wags the dog somehow.

      • to talk of the cooler object ‘heating’ the hotter object just confuses the issue

        Was someone claiming that the cooler object heated the hotter one? I thought the phrase Bryan was objecting to was “Heat can flow from a colder to a hotter object as long as more heat flows from the hotter to the colder.”

        and is contrary to natural language usage.

        I looked up the dictionary definition of radiant heat and got

        radiant heat

        noun Thermodynamics
        heat energy transmitted by electromagnetic waves in contrast to heat transmitted by conduction or convection.

        That definition seems completely consistent with the wording of the phrase in question. The term “radiant heat” is in wide use in English, with the above dictionary meaning.

      • “The term “radiant heat” is in wide use in English, with the above dictionary meaning.”

        Certainly when sitting in front of a bar radiator which is hot and you by comparison are cold. Try not to confuse heat with energy (radiant or otherwise). Pekka who “teached (sic) physics” may have an excuse that you don’t.

      • Vaughan Pratt
        There is a difference between the vernacular use of the word HEAT and the Thermodynamic meaning of the word.
        I had assumed that climate science is indeed a science and hence will use the scientific meaning of the word.
        In fact Heat is more like a verb, as it describes a PROCESS by which thermal energy is transferred from a higher to a lower temperature.
        If you are in any doubt about this look up a Physics textbook.
        Further anyone like Pekka who has been anywhere near a thermodynamics course knows I am correct.
        Clausius stated the Second Law in a most explicit way.
        If any climate science practitioners want to get on the wrong side of the law they must join the other crackpots for it is a massive obstacle and will not be faulsified.

      • There is a difference between the vernacular use of the word HEAT and the Thermodynamic meaning of the word.

        That’s irrelevant to my point, which if you look at my comment you will see was merely in response to the claim I was objecting to, “is contrary to natural language usage.” I take “natural language usage” to mean something like “vernacular use,” what do you take it to mean?

        If you are in any doubt about this look up a Physics textbook

        Tried a bunch, haven’t found one yet. Sounds like you’re the one unfamiliar with physics. How many units of physics did you take in college (assuming 3-5 unit courses)?

        Further anyone like Pekka who has been anywhere near a thermodynamics course knows I am correct.

        I’ll let Pekka be the judge of that. I thought he was contradicting you but if not then I am clearly at fault here.

        and will not be faulsified

        If you’re a native speaker of English you shouldn’t be complaining about Pekka’s “teached.”

      • Vaughan

        If you are in any doubt about this look up a Physics textbook

        ……….”Tried a bunch, haven’t found one yet. “…….
        Name one of the bunch.
        I dare you because I know you can’t

        You will also find in any Physics textbooks a scientific meaning for words like WORK , FORCE and POWER
        I repeat what kind of crackpot wants to challenge the Second Law?

      • ……….”Tried a bunch, haven’t found one yet. “…….
        Name one of the bunch.
        I dare you because I know you can’t

        Oh Bryan, when you lift a line straight out of “Trolling for Dummies” you really ought to try to obfuscate its source at least a little.

        The divine Richard S. Courtney advised us all not to feed the trolls (I would quote the thread but WordPress’s search mechanism won’t cooperate). I really should follow RSC’s advice but I’m just such a sucker for them with their pleading ways. Sigh, oh well, here goes.

        Resnick and Halliday, 6th printing, Oct. 1963, Part I, p. 466: “heat is a form of energy.”

        That contradicts what you wrote:

        In fact Heat is more like a verb as it describes a PROCESS by which thermal energy is transferred from a higher to a lower temperature.
        If you are in any doubt about this look up a Physics textbook.

        I didn’t need to look it up because I was in no doubt that physicists treat heat as a noun—I was trained as one (a physicist, that is, not a noun), in the early 1960s as you can tell from the print date of my copy of H&R.

        You may be thinking of your sustenance provider, who presumably thinks of “heat” as a verb when in the kitchen. (And so do physicists when they’re in their kitchens.)

        I was tempted to quote ten more introductory physics texts (20 feet of the shelf space in my home office is physics texts, a few percent of which are introductory) but that would be promoting troll obesity.

        I’ve told Al Tekhasski (aka Alexey Predtechensky of Austin, TX, trained in Novosibirsk under Victor L’vov who went to Israel rather than the US) and Arfur Bryant (no idea who he is) that, after much discussion, I have nothing further to say to them. You’re coming close to the end of the same line after much less discussion since you’re rather more transparent than those two.

      • Vaughan

        ……”Resnick and Halliday, 6th printing, Oct. 1963, Part I, p. 466: “heat is a form of energy.””…..

        Of course HEAT is a form of energy, but I think R&H went a bit further than that, did they not?
        You are adamant that you hold some kind of position in an American University.
        Therefore you will have access to library facilities which will allow you to verify these modern textbooks.
        University Physics by Harris Benson page 382
        Modern definition of Heat
        Heat is energy transferred between two bodies as a consequence of a difference in temperature between them.
        ……………………………………………
        University Physics Young and Freedman

        Energy transfer that takes place solely because of a temperature difference is called heat flow or heat transfer, and energy transferred in this way is called heat – page 470.

        Heat always flows from a hot body to a cooler body never the reverse. pg 559
        ……………………………………………..

        When Feynman finished the thermodynamics sections in the famous 3-volume lectures, he recommended only one book to interested readers who wanted to take the matter further.
        The book he recommended was Heat and Thermodynamics by Zemansky.
        If you read pages 57 and 58 (Fourth Edition)
        We find him defining heat as the transfer of energy from a higher temperature to a lower temperature.
        On page 147 he gives the Clausius/Kelvin statement on the Second Law.

      • Bryan,
        Classical thermodynamics (in contrast to statistical thermodynamics) is a surprisingly abstract mathematical construction, where defining the concepts gets difficult. Heat and entropy are two prime examples of this difficulty. From the formulation of the theory it follows that heat is defined through an equation that presents energy flows. Classical thermodynamics stops at this point. It cannot tell more about the nature of heat. The quotes that you gave are expressions of this fact.

        When one goes further to microscopic phenomena, one can derive classical thermodynamics from the laws of microphysics (best from quantum mechanics) and mathematical statistics. When this approach is taken, heat gets a better explanation. It is internal energy of matter, or more precisely kinetic and potential energy of the atoms, molecules and even electrons within atoms. Looked at this way, it is also easier to understand the connection between radiative energy transfer and heat.

      • I add one point.

        The first law of thermodynamics tells that energy is conserved. It is a postulated “external” law also in microphysics. The second law of thermodynamics is, however, derived in statistical thermodynamics. It is no longer a separate law that must be postulated.

      • Pekka

        I framed my posts in terms of Classical Thermodynamics.
        Statistical Thermodynamics (from Mechanics) takes a different road to say the same thing.
        As far as I know Statistical Mechanics does not contradict in any way the classical approach.
        So I will stick with Clausius, Feynman and Zemansky.
        I hope you agree that my posts here are consistent with the Classical Approach.

        Now surely with your background you can pick one detail of the G&T paper and explain how it is in error.
        If you do, as far as I know you will be the first to achieve this.
        The Halpern et al Comments paper was a failure, as it appears they did not read the G&T paper carefully enough.

      • Bryan,
        Statistical thermodynamics does not contradict classical thermodynamics, but it extends it. Sticking with classical thermodynamics you miss the opportunity to know more. If you knew statistical thermodynamics well, you would no longer have those problems that you have now. Then you would understand everything better.

      • Pekka.
        I don’t think I have any problems here.
        On the other hand you have made serious allegations.
        The G&T paper you say is full of mistakes.
        School level equations have been mixed up with advanced equations in a cut and paste mess without coherence.
        I think you must substantiate your position or else withdraw these statements.
        You do appear to have competence in thermodynamics.
        As far as I know you are the first person with any competence in this area to dispute the G&T paper.
        None of the Halpern et al group have any special knowledge of thermodynamics.
        You can use Classical Thermodynamics or Statistical Mechanics to explain your position, its up to you.
        The economy of the World is being dislocated because of speculated unfortunate radiative properties of CO2.
        The stakes could not be higher.
        People with knowledge in this area should not regard themselves as being on one side or the other.
        Clear objective science is the only way forward.

      • For those, who know physics, I need not justify my claims, because they see the facts very easily themselves.

        For those, who know so little physics that they do not see the weaknesses themselves, I cannot teach enough through this kind of discussion.

      • Pekka.
        You might as well not post such an admission of defeat.
        Why post anything, nobody twists your arm.

        However you come on and make wild accusations. When asked to explain, you refuse to substantiate them, hiding behind a ‘well, I just know and I’m not telling you’ kind of remark.

        What would an independent observer think!
        You leave a very poor impression of yourself.

      • The Halpern et al Comments paper was a failure, as it appears they did not read the G&T paper carefully enough.

        The main service rendered by the Miskolczi and G&T papers is as a convenient test of whether to engage in conversation about climate science when so invited at a cocktail party. Just ask their opinion of those papers. If their opinion agrees with yours then you’ll be able to spend a pleasant evening together discussing the subject constructively. If it doesn’t then further conversation will be a pointless waste of time for both of you.

      • @Bryan (quoting Young and Freedman) Heat always flows from a hot body to a cooler body never the reverse. pg 559

        I’ll be happy to let you have the last word on this. Here’s mine.

        “Always” is just as dangerous a word to write in a physics text as “never.” In this case Young and Freedman must be restricting attention to conducted heat without making this clear, since this statement is demonstrably false for both radiant heat and convected heat.

        For radiant heat it fails whenever the cooler object is at some distance from the hotter one and is radiating a large amount of sufficiently narrow-band radiant heat focused in a sufficiently narrow beam at the hotter object, which is radiating diffusely as a black body. The temperature of the cooler radiator is defined by the Wien Displacement Law and can be considerably lower than that of the hotter body, yet the cooler one heats the hotter one to a greater degree than vice versa because more net power is being transferred from the cooler to the hotter one than vice versa.

        For convected heat, an ordinary desktop fan provides a counterexample. If it blows dry air at 10 °C over a wet cloth at 9.9 °C, one might expect the hotter dry air to heat the cooler cloth by convection. In fact it cools the cloth yet further by evaporative cooling (what we Aussies call a Coolgardie safe).

      • The only torturing of the language is by those skeptics who claim that the second law forbids the greenhouse effect.

      • What amazes me is the number of supposedly technically competent people who have been taken in by the G&T paper. How have they failed to see either the polemics or the illogic?

        I think what we’re seeing here is a bunch of information technologists who think that because they understand information technology very well they therefore understand technology very well, and therefore understand physics.

        Sadly it doesn’t follow, as a 30-minute physics exam would quickly reveal for any of them. I have not run across a single climate denier yet who could maintain a coherent discussion of physics for even two sentences. (Ok, Willis Eschenbach, I’ll make an exception in your case. Ten sentences.)

      • Willis Eschenbach

        Thanks for the vote of semi-confidence, Vaughn. FYI, I also think the G&T paper is nonsense …

      • My pleasure, Willis. But what does your WUWT fan base have to say about your position on G&T? Do you expect to lose closer to 10% or 90% of them?

        I ask this because Judith still has some G&T supporters on her blog. I can’t see any of them keeping you among their Facebook friends.

        But this also raises another question, whether the Curry-haters rank Judith above or below you in their rogues gallery of climate heretics.

        Either way I consider both of you worthy opponents in case there’s any general interest in debates about confidence in the IPCC’s judgments. For all I know they may prove I have less confidence in the IPCC than either of you. Theoretically no, but climate is too complex a matter to be analyzed other than empirically (Pratt’s Axiom in case it hasn’t already been named for someone else).

      • Willis
        ….”FYI, I also think the G&T paper is nonsense …”…..

        Perhaps instead of throwing out a pointless insult you could explain where some of the major errors in the G&T paper are!
        You could consult the Halpern et al paper if you are stuck for an answer.
        However it is generally considered that the Halpern et al paper was a dismal failure.

        However it is generally considered that the Halpern et al paper was a dismal failure.

        To quote Ron Cram, “I call BS.” Name a single nondenier that has ever suggested anything remotely like that.

        I had been meaning to write my own critique of the G&T paper until I saw the Halpern et al paper, which made all the points I was going to make and more, so I didn’t need to.

        Stop kidding yourself. The G&T paper is utter trash, through and through, unless you’re looking for examples of how to incorporate extreme polemics into your next paper.

        I challenge you to name the last time in the past 700 years when a paper as polemical as that of G&T had the slightest influence on science textbooks of the following two centuries.

        You’re not a sceptic, you’re a denier. There’s a difference.

      • Vaughan Pratt

        I gather you couldn’t find any part of the G&T paper in error till you read the Halpern paper.
        You seem to be inspired by this deeply flawed comment on the G&T paper.

        I’ll make it really easy for you then!

        Pick one point from the Halpern paper that you think shows the G&T paper is “utter trash”
        Or continue with a blanket smear that fools no one.

      • Bryan,
        The reason that you get the same answer from all directions is in the nature of the G&T paper.

        Its main fault is not in individual details that are easy to pinpoint. Its main fault is that it contains nothing substantive, and that it presents strong conclusions not supported by its main text. It picks bits and pieces from various fields of physics, but does not even try to justify the conclusions it presents. It is empty in content and strong only in unjustified claims.

        This is all the real criticism that this paper is worth. If you disagree, you should tell me where in the content you find justification for any single one of the strong conclusions. I claim that the paper does not contain one single example of that. What more can you expect from a critic, without challenging this claim by a counterexample? If you think that you have a counterexample, we can proceed to study whether we can disprove it.

      • Pekka Pirilä

        I would have thought an easier approach for yourself would have been to develop one of the Halpern et al attempts at a criticism of the G&T paper.

        However here is a start.

        G&T use the familiar approach of physicists to a problem: they first look at the experimental reality.
        1. The famous experiment by R W Wood.
        This proved two things
        a. The reason a glasshouse is hot is not from trapping radiation.
        b. The radiative effects of atmospheric gases at typical temperatures are so small they can be ignored.

        2. They make use of the experimental work of A Schack who came to the same conclusions as Wood but added precise measurement of the effects of CO2.

        G&T then contrast this with the claims made for the Greenhouse Effect such as the claimed increase in average Earth Surface temperature of 33K and find these claims unproven and unphysical.
        Now I could go on but all I would be doing is rewriting their paper.
        Now its your turn to prove that their line of reasoning is in error.

      • Bryan,
        a. The reason a glasshouse is hot is not from trapping radiation.

        Not true as discussed by Vaughan Pratt, who studied it even experimentally. The glasshouse warms both through radiative effects and by limiting convection. Both factors are very important. Furthermore this is completely irrelevant for the main issue.

        b. The radiative effects of atmospheric gases at typical temperatures are so small they can be ignored.

        Empty claim, which is patently false and not justified in the paper.

        2. They make use of the experimental work of A Schack who came to the same conclusions as Wood but added precise measurement of the effects of CO2.

        The same ideas that Schack used have been repeated with much better data and they lead to the present estimate of radiative forcing. They cannot be used to counter these estimates.


        G&T then contrast this with the claims made for the Greenhouse Effect such as the claimed increase in average Earth Surface temperature of 33K and find these claims unproven and unphysical.

        What is their argument? I cannot find any. This is just the point that I made.

        They write into their paper many things about physics (most of them true, some less true), but these things have no value as arguments to support their conclusions as they do not even try to present the connection. They just jump to conclusions without any real supporting arguments.

      • Pekka Pirilä

        a. The reason a glasshouse is hot is not from trapping radiation.
        ………………………….
        Not true as discussed by Vaughan Pratt, who studied it even experimentally. The glasshouse warms both through radiative effects and by limiting convection. Both factors are very important. Furthermore this is completely irrelevant for the main issue.
        ……………………………………………..
        So Vaughan disproves the experiment by R W Wood.
        Where can I get hold of the evidence!
        ……………………………………………….
        b. The radiative effects of atmospheric gases at typical temperatures are so small they can be ignored.
        Empty claim, which is patently false and not justified in the paper.
        …………………………………………..
        Well R W Wood made such a claim and there is certainly plenty of evidence to back it up.
        ………………………………………..
        2. They make use of the experimental work of A Schack who came to the same conclusions as Wood but added precise measurement of the effects of CO2.

        The same ideas that Schack used have been repeated with much better data and they lead to the present estimate of radiative forcing. They cannot be used to counter these estimates.
        …………………………………………………………..
        So you are now disputing the experimental work of
        Schack!
        What evidence do you have that he was mistaken!
        …………………………………………………………….

        G&T then contrast this with the claims made for the Greenhouse Effect such as the claimed increase in average Earth Surface temperature of 33K and find these claims unproven and unphysical.

        They go into various models of the Earth to show that the average temperature of 15C has been arrived at without justification.
        They include a calculation showing that Sun/Earth illumination results in an average of total radiation from Earth implying an average temperature of 279K, without any speculation as to atmospheric effects.
        Richard Fitzpatrick arrives at a similar figure.

      • Bryan,
        Your additional comments were empty. They did not contain any indication contradicting my claim that G&T do not even try to justify their conclusions. They do not present any logical chain linking their other material to the conclusions.

        Why do you ask for evidence, when you know that you do not believe anything?

        The claims that I made are true, but there is no way I can prove them or any other claim through messages on the Internet, when you just disregard any arguments without any attempt to find out whether they are true or not.

        I have decided already a couple of times that it is useless to argue with you, but I cannot always keep my decisions. Now I try once more.

      • Pekka Pirilä

        I have supplied you with some of the evidence that G&T use to substantiate their paper.
        You dispute this but more or less say I should accept what you say without evidence!
        Take our first item for instance;

        a. The reason a glasshouse is hot is not from trapping radiation.
        ………………………….
        Not true as discussed by Vaughan Pratt, who studied it even experimentally. The glasshouse warms both through radiative effects and by limiting convection. Both factors are very important. Furthermore this is completely irrelevant for the main issue.
        ……………………………………………..
        So Vaughan disproves the experiment by R W Wood.
        Where can I get hold of the evidence!
        ………………………….
        As yet not supplied!
        ……………………………
        …” Furthermore this is completely irrelevant for the main issue.”……..
        Not true, the main issue is whether the atmospheric radiative gases produce a greenhouse effect that is so significant that it adds 33K to the planet’s surface temperature.
        Wood found that the radiative effect was very small, almost negligible, at room temperature.

  93. Surely the above discussion (425 technical comments and counting) makes it clear that the science is far from settled on this topic. In fact the controversy seems quite far reaching. This is important.

  94. I could use some help on my next post, which is on the topic of the no feedback CO2 sensitivity. If you know of any references that provide a specific value (like 1C or whatever) I would appreciate your pointing them out. thx.

    • I am curious as to why you want to continue to focus on CO2 forcing issues when the greatest uncertainties lie elsewhere?

      • trying to nail down what we actually understand; currently tearing my hear out since I can’t even make sense of how people are coming up with 1C and how to interpret it. So everyone seems to believe this number, i’m trying to figure out why people have confidence in this (I’m having less and less).

      • That makes sense, since it is your area of expertise. I look forward to the debate.

      • Judith, you write “trying to nail down what we actually understand; currently tearing my hear(t) out since I can’t even make sense of how people are coming up with 1C and how to interpret it. ”

        You won’t find anything. The 1C rise in temperature for a doubling of CO2 without feedbacks is a purely hypothetical and completely meaningless number, which cannot be measured, and for which no error has been assigned. As an undergraduate at Cavendish Labs, had I presented the scientific garbage in Chapter 6 of the TAR, any professor would automatically have awarded me 0 out of 100. I would never have dared present it to my tutor and mentor, a gentleman who went on to be Prof. Sir Gordon Sutherland, Head of NPL. Had I done so, he would have given me a tongue lashing so severe, I don’t think that I would have ever recovered.

        Your words are the most encouraging that I have read so far on this blog.

      • I agree with Jim
        There are so many assumptions and such unknown error bars that I don’t think there will be “proof” any particular number is the correct one for a long time.

        Welcome to the sceptics world, some of us have lost hair scratching our heads so hard. I hope you keep yours.

      • James Hansen mentioned 1-1.2DegC in his presentation to the Committee on Commerce, Science and Transportation
        United States Senate in May 2001

        His figure 2 is the one where he adds all the forcings, natural and anthropogenic (1.7 W/m2), to come up with 1 W/m2 = 0.75 DegC, so 1.7 W/m2 ≈ 1.2-1.3 DegC.

      • The real atmosphere has feedbacks. Thus the “no feedback CO2 sensitivity” is an artificial construct whose precise definition cannot be inferred from nature. Instead it must be defined using some theoretical framework – or model, where one can tell what is feedback and what is not.

        Such a number is an intermediate result in some approaches of calculating the full CO2 sensitivity with feedbacks. In other theoretical frameworks it is just an additional result that may help in comparing certain components of different models or theoretical frameworks.

        I would not go as far as Jim Cripwell and claim that the number is meaningless, but I agree with him that the number has no direct connection to the real atmosphere or earth.

      • This is from a book “The Human Impact Reader” by A. Lacis, et al. On page 232 there is a table of models. The first model is for doubling of CO2 with no feedbacks. The calculations do not appear to be shown, but the 1.2 C number is there.

      • “Thirteen years ago, Danny Braswell and I did our own calculations to explore the greenhouse effect with a built-from-scratch radiative transfer model, incorporating the IR radiative code developed by Ming Dah Chou at NASA Goddard. The Chou code has also been used in some global climate models.

        We calculated, as others have, a direct (no feedback) surface warming of about 1 deg. C as a result of doubling CO2 (“2XCO2”). ….” – Roy Spencer

      • There are a couple of steps to explain. First we have the commonly used 3.7 W/m2 from doubling CO2. Then we convert that to a delta T using the derivative of Stefan-Boltzmann (4 sigma T^3). If you use the top-of-atmosphere effective T (=255 K), you get very close to 1 C for doubling CO2 at the top of the atmosphere. Even Lindzen and Choi posit these arguments between their equations (1) and (2), so it is quite uncontroversial. The next step is how this relates to surface warming, and that is where lapse-rates have to be considered, but it is generally going to be a similar magnitude, though this number would be very hypothetical, and less easily definable considering global variation, while the TOA temperature change is more firmly founded as an equivalent radiative temperature there.

      • Judith,

        Fred Moolten gave an explanation for that figure here.
        However, as I subsequently pointed out further on, an increase of 3.8W/m2 energy flux does not necessarily lead to a 1.2C increase in temperature.

      • Pekka writes “I would not go as far as Jim Cripwell and claim that the number is meaningless,”

        I should have explained why I claim the number is meaningless. No-one has ever associated an error to this number; no +/-. Surely any student of physics knows that a number with no +/- is meaningless.

      • Thanks peter, this is what I was looking for. I agree 100% with your last sentence, this is what I am trying to investigate. hope to have this post ready later tonite.

      • Judy – For the temperature response to CO2 doubling (absent other feedbacks), 1 deg C is an approximation, and the models yield values of about 1.2 C. The difference reflects the inclusion in the models of heterogeneity involving latitude, seasonality, etc., regarding lapse rates and other variables.

        For one reference regarding the models, see Soden and Held, Journal of Climate 19:3354, 2006. The estimated parameters for the Planck response alone (i.e., the “no added feedback” scenario) average about -3.1 to -3.2 W/m^2 per deg K. For a forcing from CO2 doubling of 3.7 W/m^2 (from Myhre), a rise of about 1.2 C is therefore required.

        Regarding the 1 deg C approximation, it is implied although not directly calculated in the Hansen et al 1981 Science paper – Hansen et al. Confusingly, they end up with a higher value than 1 C, but that is because their CO2 doubling estimate exceeds the 3.7 W/m^2 value. Ignore their values, but instead consider the basis of their approximation, which is a linear lapse rate and a mean Earth radiating temperature of 255 K. The value of 1 then follows (although they don’t do the arithmetic). The Stefan-Boltzmann equation gives Flux = sigma (a constant) x T^4 for a black body (assumed for the Earth for the absorbed, non-reflected radiation). Differentiating this to get dF/dT, we find it to be 4 sigma x T^3, and inverting, we find that dT/dF = 1/(4 sigma T^3). This makes dT = dF/(4 sigma T^3), and substituting 3.7 for dF yields a temperature change of almost exactly 1 deg C.
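        (For anyone who wants to check the arithmetic, a few lines of Python reproduce the approximation just described; 255 K and 3.7 W/m^2 are the values quoted above, and sigma is the standard Stefan-Boltzmann constant.)

```python
sigma = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
T = 255.0          # mean radiating temperature of the Earth, K
dF = 3.7           # forcing from a CO2 doubling, W m^-2

planck = 4 * sigma * T ** 3   # dF/dT at 255 K, W m^-2 K^-1  (~3.76)
dT = dF / planck              # no-feedback temperature response, K

print(f"dF/dT = {planck:.2f} W m^-2 K^-1")
print(f"dT    = {dT:.2f} K")   # ~0.98 K, i.e. almost exactly 1 deg C
```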

      • Fred, these are the exact papers I’m reading. The Soden and Held paper seems to be the most significant one in this regard, but I can’t figure out how they actually did their calculation for the Planck response. I have the same concern that Peter 317 has. I hope to have my post ready later tonite so we can dig into this topic.

      • The 3.1-3.2 Planck response estimates come from the models. The approximation I described calculates out to a value of 3.7, hence the 1 deg C temperature rise.

        I’m not sure I understood Peter’s point. Energy escapes to space only via radiation, at an average temperature of 255 K. How it moves from the surface via radiation, convection, and a small amount of conduction is irrelevant. As long as the lapse rate is linear, the temperature change at the surface will be the same as at the radiating altitude. Forcing is defined as specifying an unchanged lapse rate, although feedbacks can subsequently alter lapse rates.

      • Peter’s point is about how this radiative forcing actually translates to increasing surface temperature. An extremely simple (and arguably unrealistic) model is being used to relate troposphere radiative forcing to surface temperature change. Look at the surface forcing, not the tropopause forcing, and you don’t have to make all those assumptions (linear lapse rate, etc). Do the problem both ways, and see how different your answer is in terms of temperature change.

      • We have to think of the 1 degree warming at the TOA as a way to express the 3.7 W/m2 as an equivalent temperature change, so it is synonymous with the forcing. I don’t think it makes sense to translate this to the surface, because whatever you do needs assumptions beyond the forcing or even needs global models with the water vapor feedback disabled somehow, which is not going to be a very satisfactory result in terms of gaining agreement.

      • I wonder whether we’re talking about the same thing. As far as I know, forcing, except for some stratospheric adjustments, is defined as the instantaneous change in radiative balance at the tropopause (in vs out) due to, in this case, a CO2 doubling. By definition, therefore, it is what happens before there is any change in temperature, humidity, lapse rate, etc. These things may change, but those changes are responses to the forcing-mediated radiative imbalance. In other words, the absence of response is a matter of definition, not assumption. I suppose the only assumption is that at the instant of doubling, lapse rates are linear or very close to it in the relevant altitudes. To the best of my knowledge, this is not in serious dispute, at least as a global average. Is there evidence to the contrary? What happens to lapse rates in response to forcing is a question of feedback.

        As long as lapse rates are linear, a 1.2 C change at any altitude must mathematically translate into a 1.2 C change at the surface. Of course, the feedback issue involves the magnitude of the actual change.

      • To elaborate slightly on my above comment, the calculations we’re discussing show, via Stefan-Boltzmann, that a 3.7 W/m^2 reduction in radiative balance at the tropopause reduces the radiating temperature from 255 K to a level 1-1.2 deg C lower. To restore balance requires the same magnitude of temperature increase, and via lapse rate, the same change at the surface, regardless of the way surface energy is transmitted upwards.

      • Peter,
        Do you refer to the fact that a 1K increase of temperature from 255K to 256K corresponds to 3.78 W/m^2 while the increase from 288K to 289K corresponds to 5.45 W/m^2 according to the Stefan-Boltzmann law?

        The difference must obviously be covered by back radiation, if the temperature increases are equal.
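        (Both numbers follow directly from the Stefan-Boltzmann law; a two-line check with the standard sigma:)

```python
sigma = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
for T in (255.0, 288.0):
    print(T, round(sigma * ((T + 1) ** 4 - T ** 4), 2))   # 3.78 at 255 K, 5.45 at 288 K
```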

      • Fred, I think we are agreeing, that this is the forcing we are talking about. The assumption of constant lapse rate is an added one to translate it to a surface temperature from a TOA temperature. It is a question to me: if we uniformly increase the whole atmosphere (and surface) by this temperature do we exactly get back the 3.7 W/m2 we lost by doubling CO2? If so, I have no argument with translating the temperature as a surface change too.

      • Jim – If I understand your question correctly, my response is that the surface, at 288 K, will experience a larger W/m^2 change for a 1.2 C increase than the mean radiating altitude at 255 K, the difference being due to back radiation. It is only at the radiating altitude that a 3.7 W/m^2 restoration is the quantity needed to restore temperature to 255 K. As an aside, but relevant to other comments, the Planck (no feedback) response is the basis for the calculated temperature change. It is purely radiative by definition, since it excludes changes in evaporation, humidity, lapse rate, clouds, etc. These things change, but their effect is combined with the Planck response to estimate the true temperature response.

      • Fred, understood, but can we say that 1.2 C applies to the surface (and all other levels) too? That was the gist of my question, because it is not obvious the TOA outgoing longwave radiation would be 3.7 W/m2 more when you warm the whole atmosphere by this amount. It won’t be far off, so I think this is just quibbling on my part.
        I am making a distinction between the effective TOA radiative temperature (a two-dimensional field) and the atmospheric temperature (a three-dimensional field) that produces that TOA temperature, where there are clearly some added degrees of freedom.

      • Fred,

        I agree that the surface will experience a larger W/m2 change for a 1.2 C increase than the mean radiating altitude, in fact I assert that it must do, because, besides the 3.7W/m2 radiation loss, it will also be losing energy through convection and evaporation.
        Here, we can assume that either the increase in back-radiation is just enough to balance the equation and hold the surface change to 1.2C, or the actual surface increase is less (or more) than 1.2C

        Apologies if I’m a bit unclear – I’ve rewritten the above about five times now and it still doesn’t quite say what I want it to. It’s been a long day for me ;-)

      • I’m tackling the no feedback CO2 sensitivity, which is defined as “a metric used to characterise the response of the global climate system to a given forcing. It is broadly defined as the equilibrium global mean surface temperature change following a doubling of atmospheric CO2 concentration.” So, how does the CO2 forcing change the surface temp, in the absence of feedbacks? No reason to expect lapse rates to be linear (they really and truly are not linear), and this also assumes zero heat capacity for the surface, which is not correct.

      • Besides the zero heat capacity, the other assumption is that the w/m2 figure calculated for the average temperature is the correct one. Because of the non-linear relationship between the two, this cannot be true unless the range of temperatures making up the average is tiny.

      • MODTRAN (as in the U Chicago online code) can be used to answer this because it has a selection of profiles and outputs the TOA flux. It has a sensitivity of only 2.8 W/m2 to doubling CO2, but you can displace the temperature profile by 0.9 C to restore the TOA flux to the original, keeping H2O constant (I used the US standard profile, which is also not bad as a proxy for the global mean).
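        To illustrate the bookkeeping in that procedure (not MODTRAN itself), here is a minimal sketch. The function toa_flux below is a hypothetical stand-in for a real radiative transfer run; it is faked with an assumed 2.8 W/m2 per doubling and a ~3.2 W/m2/K Planck slope purely so the script runs end to end. A bisection then finds the uniform profile displacement that restores the original TOA flux, which with these assumed numbers comes out near 0.9 C.

        ```python
        # Sketch of the "displace the profile until TOA flux is restored" procedure.
        # toa_flux() is a hypothetical stand-in for a radiative transfer calculation
        # (e.g. a MODTRAN run); here it is faked with an assumed Planck slope of 3.2 W/m^2/K
        # and a CO2 sensitivity of 2.8 W/m^2 per doubling, just so the script executes.
        import math

        def toa_flux(co2_ppm, temp_offset_K, base_flux=260.0):
            forcing = 2.8 * math.log(co2_ppm / 375.0) / math.log(2.0)  # assumed sensitivity
            return base_flux - forcing + 3.2 * temp_offset_K           # assumed Planck slope

        def offset_to_restore_flux(co2_ppm, target_flux, lo=0.0, hi=5.0, tol=1e-6):
            """Bisection for the uniform warming that brings TOA flux back to target_flux."""
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if toa_flux(co2_ppm, mid) < target_flux:
                    lo = mid
                else:
                    hi = mid
            return 0.5 * (lo + hi)

        baseline = toa_flux(375.0, 0.0)
        print(f"{offset_to_restore_flux(750.0, baseline):.2f} K")  # ~0.9 K with these numbers
        ```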

      • Looking at the earth from space, the 1 K increase in the effective radiative temperature of the earth will not be uniform over wavelengths.

        The radiation will increase most at wavelengths that pass through the atmosphere from the earth’s surface. On the other hand, the temperature of the top of the troposphere does not necessarily rise at all, as the change may influence the altitude of the tropopause.

        Looked at in this way, the change turns out to be quite complicated.

      • I was considering the surface temperature change which, assuming things like a linear lapse rate, should be equivalent to the change at the TOA. What I suppose I’m really questioning is whether the latter assumptions are indeed valid.

    • Re: curryja,
      May I suggest a presentation paper with back-of-the-envelope calculations using 6 different observational methods to measure sensitivity.

      Although 5 are WITH feedbacks and only ONE is without, the comparison between them is quite interesting.

      • Below is from Vincent Gray in the Greenhouse Bulletin 121, 1999:

        “There is a considerable difference of opinion between various authorities making these calculations. Cess et al. (1993) found a range of calculated figures for the radiative forcing for a doubling of carbon dioxide concentration (considered on its own, without other greenhouse gases or “feedbacks”) of between 3.3 and 4.8 Wm-2 for fifteen models, with a mean of 4 Wm-2. This is a variability of ±0.75 Wm-2 or ±19%. Since each modeller will have chosen “Best Estimate” figures for his model, the actual variability of possible forcing would be larger than ±19%.

        The Intergovernmental Panel on Climate Change (IPCC) in their first Report (Houghton et al 1990) gave the following formula for calculating the radiative forcing (Delta F) in Wm-2, of changes in atmospheric concentration of carbon dioxide:

        Delta F = 6.3 ln (C/Co), (1)

        where C is CO2 concentration in parts per million by volume and Co is the reference concentration. The formula is said to be valid below C = 1000 ppmv and there were no indications of the accuracy of the formula. The formula predicts a radiative forcing of 4.37 Wm-2 for a doubling of carbon dioxide concentration. This is 9% greater than the mean value assumed by the models (Cess et al. 1993).

        This formula is said to derive from a paper by Wigley (1987), but the formula in this paper is not quite the same. Wigley’s formula, derived from the model of Kiehl and Dickinson (1987), is

        Delta F = 6.333 ln (C/C0) (2)

        considered accurate over the range 250 ppmv to 600 ppmv, and “is probably accurate to about +10%”.

        Formula (1) has been used by the IPCC scientists for their calculations of radiative forcing “since pre-industrial times”, and for their calculations of future radiative forcing (and so, temperature change) for their futures scenarios.

        In the IPCC 1994 Report (Houghton et al 1994) the authors of Chapter 4 (K.P. Shine, Y. Fouquart, V. Ramaswamy, S. Solomon, J. Srinivasan) sought to counter the prevalent belief that the infrared absorption of carbon dioxide is saturated by providing an example showing the additional absorption from 1980 to 1990. Their graph (Figure 4.1, page 175) integrates to give a forcing of 0.31 Wm-2 (Courtney 1999). If the Mauna Loa figures for carbon dioxide concentration of 338.52 ppmv for 1980 and 354.04 ppmv for 1990 are substituted in formula (1), you get 0.28 Wm-2, 9% lower than the IPCC illustration.

        A revised formula for calculation of radiative forcing from changing concentrations of carbon dioxide has recently been published (Myhre et al 1998).

        Delta F = 5.35 ln (C/Co) (3)

        The authors express the view that the IPCC estimates “have not necessarily been based on consistent model conditions”. They carry out calculations on the spectra of the main greenhouse gases with all three of the recognised radiative transfer schemes: line-by-line (LBL), narrow-band model (NBM) and broad-band model (BBM). They calculate the Global Mean Instantaneous Clear Sky Radiative Forcing for 1995, for atmospheric carbon dioxide, relative to an assumed “pre-industrial” level of 280 ppmv, as 1.759 Wm-2 for LBL, 1.790 Wm-2 for NBM and 1.800 Wm-2 for BBM; a mean of 1.776 Wm-2, with BBM 2.3% greater than LBL.

        The new formula gives 3.71 Wm-2 for doubling carbon dioxide; 15% less than the previous formula. It is also below the mean of 4.0 Wm-2 of the models (Cess 1993).”
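        As a quick numerical check of the passage quoted above, the following sketch evaluates the three logarithmic fits for a doubling of CO2 and for the 1980-1990 Mauna Loa change mentioned in the text (coefficients exactly as quoted; this is arithmetic only, not a radiative transfer calculation).

        ```python
        # Evaluate the logarithmic CO2 forcing fits quoted above (coefficients as in the text).
        import math

        def delta_F(C, C0, k):
            """Forcing in W/m^2 for a change from C0 to C ppmv: Delta F = k * ln(C/C0)."""
            return k * math.log(C / C0)

        coefficients = {"IPCC 1990 (eq. 1)": 6.3,
                        "Wigley 1987 (eq. 2)": 6.333,
                        "Myhre et al. 1998 (eq. 3)": 5.35}

        for name, k in coefficients.items():
            print(f"{name}: 2xCO2 forcing = {delta_F(560.0, 280.0, k):.2f} W/m^2")

        # 1980 -> 1990 Mauna Loa example from the passage, using formula (1):
        print(f"1980-1990 forcing (eq. 1): {delta_F(354.04, 338.52, 6.3):.2f} W/m^2")
        ```

        With these coefficients the doubling values come out as 4.37, 4.39 and 3.71 W/m^2, and the 1980-1990 change as 0.28 W/m^2, matching the figures in the quoted passage.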

    • David L. Hagen

      Judith
      Some other items per your request for CO2 references:
      Dr. Roy Spencer’s blogs on CO2 feedback
      http://www.drroyspencer.com/?s=co2+feedback

      e.g., On the Relative Contribution of Carbon Dioxide to the Earth’s Greenhouse Effect September 10th, 2010 by Roy W. Spencer, Ph. D.
      http://www.drroyspencer.com/2010/09/on-the-relative-contribution-of-carbon-dioxide-to-the-earth%E2%80%99s-greenhouse-effect/

      Why 33 deg. C for the Earth’s Greenhouse Effect is Misleading September 13th, 2010 by Roy W. Spencer, Ph. D.
      http://www.drroyspencer.com/2010/09/why-33-deg-c-for-the-earths-greenhouse-effect-is-misleading/

      Spencer cites Richard Lindzen 1990
      Some Coolness Concerning Global Warming.
      http://www-eaps.mit.edu/faculty/lindzen/cooglobwrm.pdf

      Note also: Richard Lindzen 1986
      CO2 Feedbacks and the 100 K climate cycle
      http://www-eaps.mit.edu/faculty/lindzen/127co2~1.pdf

      Further papers by Lindzen CO2
      http://scholar.google.com/scholar?q=lindzen+co2&hl=en&btnG=Search&as_sdt=800001

      Spencer cites:
      Manabe & Strickler 1964
      https://www.gfdl.noaa.gov/bibliography/related_files/sm6401.pdf
      Note 408 citations to Manabe & Strickler 1964
      http://scholar.google.com/scholar?cites=12138464137035298601&as_sdt=800005&sciodt=800001&hl=en

      Miskolczi 2010 quantitatively evaluates absorption of each of the greenhouse gases. He calculates separate and combined sensitivities for CO2, H2O and temperature based on the TIGR radiosonde record and NOAA data.
      THE STABLE STATIONARY VALUE OF THE EARTH’S GLOBAL AVERAGE ATMOSPHERIC PLANCK-WEIGHTED GREENHOUSE-GAS OPTICAL THICKNESS
      http://www.friendsofscience.org/assets/documents/E&E_21_4_2010_08-miskolczi.pdf

      For the NIPCC review, see:
      Ch 1: Global Climate Models and Their Limitations
      http://www.nipccreport.org/reports/2009/pdf/Chapter%201.pdf

      Chapter 2 Feedback Factors and Radiative Forcing
      Climate Change Reconsidered, 2009 NIPCC Report, pp 27-61.
      http://www.nipccreport.org/reports/2009/pdf/Chapter%202.pdf
      (For Non-CO2 feedbacks)

      Further reviews are provided at CO2 Science.
      http://www.co2science.org/

      See: Carbon Dioxide etc.
      http://www.co2science.org/subject/c/subject_c.php

      CO2 Temperature Correlations
      http://www.co2science.org/subject/c/co2climatehistory.php

      See feedback factors
      http://www.co2science.org/subject/f/subject_f.php

      (The NIPCC & CO2 Science primarily address the issues of feedbacks rather than the CO2 itself.)

    • Why is there such a big fat silence from our resident professional radiative physics ‘experts’ on this question? Maybe they are keeping their powder dry for the main post?

      • Actually, I think everybody is headed to the annual meeting of the American Geophysical Union in San Francisco. I will post something on that meeting tomorrow night.

  95. Pekka Pirilä

    “Get an online graphing tool and use its facilities to produce blackbody radiation curves (spectral radiant exitance against wavelength) for the same black body radiating at the temperatures below:

    600K
    900K
    1200K”

    We notice, as expected, that the higher temperatures produce radiation at short wavelengths that the lower temperatures do not reach.
    Pick any particular wavelength (say 4 um);
    We notice that there are many more photons at this wavelength for higher temperatures than for lower temperatures.
    Now imagine two of these temperature examples radiating to one another.
    What sense does it make to say the colder one is heating the hotter one?
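    For anyone who prefers to compute the curves rather than use an online grapher, here is a minimal sketch of Planck's law (plain blackbody exitance, nothing thread-specific assumed) evaluated at the three temperatures and compared at 4 um. Strictly, the exitance measures energy rather than photon count, but the comparison points the same way: the hotter body emits far more even at that shared wavelength.

    ```python
    # Sketch: Planck spectral radiant exitance M(lambda, T) for the three temperatures above,
    # compared at 4 um. Pure Planck's law; any plotting tool could graph the same function.
    import math

    H = 6.626e-34   # Planck constant, J s
    C = 2.998e8     # speed of light, m/s
    KB = 1.381e-23  # Boltzmann constant, J/K

    def spectral_exitance(wavelength_m, T):
        """Blackbody spectral radiant exitance, W m^-3 (per metre of wavelength)."""
        a = 2.0 * math.pi * H * C**2 / wavelength_m**5
        b = math.exp(H * C / (wavelength_m * KB * T)) - 1.0
        return a / b

    lam = 4e-6  # 4 micrometres
    for T in (600.0, 900.0, 1200.0):
        print(f"T = {T:.0f} K: M(4 um) = {spectral_exitance(lam, T):.3e} W/m^3")
    ```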

    • Radiation is transferring energy in both directions. It is like having a two-way street running East-West. During one time interval there may be 200 cars driving East and 300 driving West. The net flow of cars is 100 going West, but that does not mean that no car is driving East.

      There is nothing more difficult about radiative heat transfer going both ways at the same time.
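      To put rough numbers on that picture, here is a minimal sketch assuming two blackbody surfaces at roughly the global-mean surface and effective radiating temperatures (illustrative values only): both gross fluxes exist at the same time, and the net flux, which is what thermodynamics calls heat, goes from the hotter to the colder.

      ```python
      # Sketch: gross radiative exchange between a warm surface and a cooler layer.
      # Blackbody values at illustrative temperatures; the net flux is what is called heat.
      SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

      T_surface, T_layer = 288.0, 255.0
      up = SIGMA * T_surface**4   # energy radiated upward by the surface
      down = SIGMA * T_layer**4   # energy radiated downward by the cooler layer ("back radiation")
      net = up - down             # net transfer, from hot to cold here

      print(f"up = {up:.0f} W/m^2, down = {down:.0f} W/m^2, net = {net:.0f} W/m^2")
      ```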

      • Pekka

        The net flow is what is called HEAT

      • The real point is not arguing about words. The real point here is that completely false claims have been presented and perpetuated by people who either do not know about the things they are talking about or are creating confusion against their better knowledge.

        All the claims that the second law of thermodynamics proves that additional CO2 cannot heat the earth’s surface and lower atmosphere are absolute and utter nonsense.

      • Words like HEAT and WORK have a special precise meaning in thermodynamics.
        We must use them carefully.
        Have you read the G&T paper?
        Most negative comments have come from people who have not read the paper.

      • It occurs to me that G&T are confusing conduction, where heat can only flow in one direction, with radiation where it can flow in both. Note that IR radiation is also called heat, and its flow is also measured in W/m2, the units of energy flux.

      • Jim D
        If you read their paper you will find that they fully understand the radiative properties of CO2 and H2O.
        Their point of view is that the effects do not amount in magnitude to what is claimed for the “greenhouse effect”.

      • Do they talk about the measurements of 200-300 W/m2 of IR from the sky to the ground, and how that affects surface temperature, because if not, they are missing the point.

      • Jim D
        Why don’t you read their paper and find out for yourself

      • When I read in the original paper that photons can’t flow from a colder to a hotter body, I stopped reading and dismissed G&T as cranks. Even a high school physics student could probably see that is wrong. And they weren’t talking about a dynamic equilibrium or net flow of photons either. The last time I googled for their paper, there were redactions, so I’m not sure what’s going on.

      • Jim or Jim D

        Perhaps you could quote the page number for your remarkable claim
        …..”When I read in the original paper that photons can’t flow from a colder to a hotter body”……..

      • I should have added that I don’t think you can.

      • There is this comment earlier by someone called Bryan
        “For instance G&T say that a major error in most versions of Greenhouse Theory is in violations of the Second Law.
        A fairly frequently expressed climate science understanding of this law is ;
        Heat can flow from a colder to a hotter object as long as more heat flows from the hotter to the colder.
        This is wrong.”
        Perhaps this misled people into thinking G&T said that. I am now looking at G&T and will see for myself if they did.

      • …and I don’t find they did, so it may be from the famous Dragon book that is all about this issue.

      • Jim D
        I hope nobody thinks that G&T said…….

        Heat can flow from a colder to a hotter object as long as more heat flows from the hotter to the colder….

        Their position is exactly opposed to that statement.

      • I included the “This is wrong.” sentence. This implies they don’t believe that statement. However, G&T say none of this, so it is irrelevant. My reading of G&T shows it to be mostly complaining about the wording of science explanations rather than about the science itself. I find it interesting that they completely denounce Fig. 23, which is a typical energy balance diagram, and this would also shoot down any idea Miskolczi had as a side-effect. But I could not see why they denounce those diagrams, and I haven’t seen anyone explain what they mean by that.

      • That was how I interpreted page 78. I could be wrong.

      • Jim

        Yes I agree that this page could have been a bit clearer
        Their basic opinion in the rest of the paper is quite clear
        Hotter and colder bodies can radiate to one another.
        However HEAT only flows one way from hotter to colder body.

        This is expressed in their reply to the Halpern et al comment

        (www.skyfall.fr/wp-content/gerlich-reply-to-halpern.pdf)

      • Bryan,
        Heat is a form of energy, and their way of separating the two is without any basis and is misleading or erroneous. Chapter 3.9 contains many correct statements, but there are no reasonable conclusions. For me it is unclear whether I should say that there are no conclusions or that the conclusions are wrong, because the text is formulated so vaguely.

        What is certain is that this chapter does not give any real justification for the main conclusions presented in the abstract. It is also clear that the text presents erroneous critique of other publications. This critique shows only that they have not understood the real issues correctly.

      • Pekka
        They make a very helpful summary of 16 points.
        Arthur Smith produced a paper taking issue with summary point 2.
        Someone then replied to Arthur Smith’s objection with a paper contesting some points in his paper.
        This is all healthy; it’s the way science progresses.
        I think that this is the attitude that Judith is trying to establish.
        We are, however, well off topic here, and on most other sites we would have been snipped.

      • Bryan,
        This paper is not science. It is not worth listing its weaknesses before there is any reason to think that the paper contains anything of any value.

      • There is actually not so much difference between conduction and radiative heat transfer. When one looks at the molecular level, the conductive processes also move energy in all directions. The difference is that in conduction this occurs only over very small distances, because energy is transferred in very small steps.

        In a large crystal one phonon can transfer energy from one edge of the crystal to the opposite edge and this may happen from the colder edge to the hotter edge. Still the heat transfer through phonons is considered conduction.

        The second law of thermodynamics is based on a statistical rule that tells which processes are more common and which are less common. One clear example is that quanta of energy move more often from hot to cold than from cold to hot.

      • Pekka, yes, conduction can be considered the net result of phonon vibrations, but is calculated as just a one-direction flux proportional to local temperature gradient, while radiation calculations have to explicitly treat both streams because of their longer-range dependencies.

      • Jim D,
        I may be expanding on your message rather than answering it.

        The radiative heat transfer may in some cases also be calculated in a similar way. This would require that radiation is the dominant mechanism and that the path length of the quanta is always small compared to the dimensions of the full compartment considered.

        Neither of these conditions is valid for the atmosphere, because radiative heat transfer does not alone dominate the heat transfer (conduction is very important as well) and because some wavelengths have very long path lengths and the quanta may escape the whole atmosphere.

        This is the reason that one cannot simply use diffusion equation for radiative heat transfer as one can for conduction. Further this is the reason that it is useful to consider back radiation separately.

      • In my message above I wrote “conduction is very important as well”. I meant convection.
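        To make the contrast with a diffusion treatment concrete, here is a minimal gray-atmosphere sketch, with made-up layer temperatures and emissivity rather than a real profile: the upward and downward streams are marched through the layers explicitly, which is why the downward stream (the back radiation) falls out as a separate, physically meaningful quantity.

        ```python
        # Sketch: gray plane-parallel atmosphere with a few isothermal layers, each absorbing
        # a fraction eps of what passes through and re-emitting eps*sigma*T^4.
        # Layer temperatures (bottom to top) and eps are illustrative numbers only.
        SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

        def two_stream(T_surface, layer_T, eps):
            up = SIGMA * T_surface**4        # upward stream leaving the surface
            for T in layer_T:                # march bottom-to-top
                up = (1.0 - eps) * up + eps * SIGMA * T**4
            down = 0.0                       # downward stream, starting from space
            for T in reversed(layer_T):      # march top-to-bottom
                down = (1.0 - eps) * down + eps * SIGMA * T**4
            return up, down                  # flux to space, and back radiation at the surface

        olr, back = two_stream(288.0, [270.0, 250.0, 230.0], eps=0.3)
        print(f"outgoing at TOA: {olr:.0f} W/m^2, back radiation at surface: {back:.0f} W/m^2")
        ```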

      • Pekka, saying it is nonsense and claiming your opponent is either ignorant or evil is not a counter-argument. If you cannot explain (or teach) your opponent’s position, then you simply do not understand it. That is what you should say if you cannot respond to their specific technical argument: that you do not understand them. It does not follow that they must be wrong.

      • It is not possible to teach anybody who absolutely refuses to learn.

      • Pekka, I am not talking about teaching anyone anything. Being able to teach something is a measure of understanding. You have not demonstrated that you understand the specific argument of G&T, which you must do to argue effectively that they are wrong. You merely say it is nonsense.

      • I have looked at G&T. It appears to be an incoherent collection of physical equations without any clear logic. The abstract claims:

        “The atmospheric greenhouse effect, [..] essentially describes a fictitious mechanism, in which a planetary atmosphere acts as a heat pump driven by an environment that is radiatively interacting with but radiatively equilibrated to the atmospheric system. According to the second law of thermodynamics such a planetary machine can never exist.”

        This is an extreme and obviously wrong claim, but I’m unable to find any place in the text where they would really try to justify this claim.

        Several of these quasi-scientific skeptical papers, including G&T, make very strong claims without specific justification. They are also so incoherent that it is impossible to tell what they are really trying to say. As one cannot find out what they are trying to say, it is also difficult to be specific in the criticism. One can only note that the papers lack all real substance.

      • Pekka
        I notice that you have yet to reply to my post of
        December 11, 2010 at 11:16 am
        Could it be that you realised you had dug yourself into a hole and had better stop digging?
        Your comments about G&T show you are unable to engage in any substantive dialog on the Second Law.
        Their mistakes, you say, are many, but you can’t be specific about any one, apparently!
        Continually saying it’s rubbish but refusing to substantiate fools no one.

      • Pekka

        The point is NO HEAT goes from cold to hot.

        Lets see what Professor Clausius says;

        Rudolf Clausius (1822-1888), Germany

        Heat cannot of itself pass from a colder to a hotter body.

        It is impossible to carry out a cyclic process using an engine connected to two heat reservoirs that will have as its only effect the transfer of a quantity of heat from the low-temperature reservoir to the high-temperature reservoir. (1854?)

        “Es existiert keine zyklisch arbeitende Maschine, deren einzige Wirkung Wärmetransport von einem kühleren zu einem wärmeren Reservoir ist.”;
        There exists no cyclically operating machine whose only effect is heat transport from a cooler to a warmer reservoir.

        It is impossible for a self-acting machine, unaided by any external agency, to convey heat from one body to another at a higher temperature. [Kelvin’s translation]

        Do you get the point, Pekka?

        It’s not that heat mostly travels from hot to cold, as you seem to think.
        No HEAT moves from a colder object to a hotter one!

      • Can you be more precise in your question? Do you claim no photons can go from colder to hotter objects? I ask because heat can be transferred by photons.

      • Bryan, if heat is defined as the net flow of energy, then yes.
        Just as, if I give you $10 and you give me $5 change, although $5 has flowed from you to me, the net flow has been $5 from me to you.
        I think your argument with Pekka is 99% semantics – apologies if I’m wrong.

      • Bryan