Confidence in radiative transfer models

by Judith Curry

The calculation of atmospheric radiative fluxes is central to any argument related to the atmospheric greenhouse/Tyndall gas effect.  Atmospheric radiative transfer models rank among the most robust components of climate models, in terms of having a rigorous theoretical foundation and extensive experimental validation both in the laboratory and from field measurements.   However, I have not found much that actually explains how atmospheric radiative transfer models work and why we should have confidence in them (at the level of technical blogospheric discourse).  In this post, I lay out some of the topics that I think need to be addressed in such an explanation of infrared radiative transfer.  Given my limited time this week, I mainly frame the problem here and provide some information to start a dialogue on this topic; I hope that other participating experts can fill in the gaps (and I will update the main post).

Atmospheric radiative transfer models

Wikipedia provides a succinct description of radiative transfer models:

An atmospheric radiative transfer model calculates radiative transfer of electromagnetic radiation through a planetary atmosphere, such as the Earth’s.  At the core of a radiative transfer model lies the radiative transfer equation that is numerically solved using a solver such as a discrete ordinate method or a Monte Carlo method.  The radiative transfer equation is a monochromatic equation to calculate radiance in a single layer of the Earth’s atmosphere. To calculate the radiance for a spectral region with a finite width (e.g., to estimate the Earth’s energy budget or simulate an instrument response), one has to integrate this over a band of frequencies (or wavelengths). The most exact way to do this is to loop through the frequencies of interest, and for each frequency, calculate the radiance at this frequency. For this, one needs to calculate the contribution of each spectral line for all molecules in the atmospheric layer; this is called a line-by-line calculation.  A faster but more approximate method is a band transmission. Here, the transmission in a region in a band is characterised by a set of coefficients (depending on temperature and other parameters). In addition, models may consider scattering from molecules or particles, as well as polarisation.
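To make the line-by-line idea concrete, here is a minimal sketch in Python of the monochromatic calculation for a single homogeneous layer. The line positions, strengths, widths, and column amount are invented placeholders rather than real HITRAN values, and a real code would use Voigt line shapes, many thousands of lines, and layer-by-layer emission.

```python
import numpy as np

def lorentz(nu, nu0, strength, half_width):
    """Pressure-broadened (Lorentz) line shape; inputs in cm^-1."""
    return strength * half_width / (np.pi * ((nu - nu0)**2 + half_width**2))

def planck(nu_cm, T):
    """Planck spectral radiance versus wavenumber (cm^-1); units indicative only."""
    h, c, k = 6.626e-34, 2.998e10, 1.381e-23  # J s, cm/s, J/K
    return 2.0 * h * c**2 * nu_cm**3 / np.expm1(h * c * nu_cm / (k * T))

# Hypothetical absorption lines for one gas in one homogeneous layer:
# (line center cm^-1, line strength, half-width cm^-1)
lines = [(667.0, 3.0e-19, 0.07), (668.5, 1.0e-19, 0.07)]
column = 8.0e21   # absorber column, molecules per cm^2 (placeholder)
T_layer = 250.0   # layer temperature, K (placeholder)

nu = np.linspace(660.0, 675.0, 20001)            # fine monochromatic grid, cm^-1
k_nu = sum(lorentz(nu, *ln) for ln in lines)     # absorption coefficient at each frequency
tau_nu = k_nu * column                           # monochromatic optical depth of the layer
trans_nu = np.exp(-tau_nu)                       # Beer-Lambert transmittance

# Band-integrated transmitted radiance (emission by the layer itself neglected here).
band_radiance = np.trapz(planck(nu, T_layer) * trans_nu, nu)
print(band_radiance)
```

A band model replaces the inner spectral loop with a parameterized transmission for the whole interval, which is what makes it fast enough for use in a GCM.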

If you don’t already have a pretty good understanding of this, the Wikipedia article is not going to help much.  There are a few good blog posts that I’ve spotted that explain aspects of this (notably scienceofdoom).

Do you find scienceofdoom’s treatments to be beyond your capability to understand?   Let’s try more of a verification and validation approach to assessing whether we should have confidence in the radiative transfer codes used in climate models.

History of atmospheric (infrared) radiative transfer modeling

I don’t recall ever coming across a history of this subject.  Here are a few pieces of that history that I know of (I hope that others can fill in the holes in this informal account).

Focusing on infrared radiative transfer,  there is some historical background in the famous Manabe and Wetherald 1967 paper on early calculations of infrared radiative transfer in the atmosphere.  As a graduate student in the 1970’s, I recall using the Elsasser radiation chart.

The first attempt to put a sophisticated radiative transfer model into a climate model was made by Fels and Kaplan 1975, who used a model that divided the infrared spectrum into 19 bands.  I lived a little piece of this history, when I joined Kaplan’s research group in 1975 as a graduate student.

In the 1980’s, band models began to be incorporated routinely in climate models.  An international program of Intercomparison of Radiation Codes  in Climate Models (ICRCCM) was inaugurated for clear sky infrared radiative transfer, with results described by Ellingson et al. 1991 and Fels et al. 1991 (note Andy Lacis is a coauthor):

During the past 6 years, results of calculations from such radiation codes have been compared with each other, with results from line-by-line models and with observations from within the atmosphere.  Line by line models tend to agree with each other to within 1%; however, the intercomparison shows a spread of 10-20% in the calculations by less detailed climate model codes.  When outliers are removed, the agreement between narrow band models and the line-by-line models is about 2% for fluxes.

Validation and improvement of atmospheric radiative transfer models

In 1990, the U.S. Department of Energy initiated the Atmospheric Radiation Measurement (ARM) Program, targeted at improving the understanding of the role and representation of atmospheric radiative processes and clouds in models of the earth’s climate (see here for a history).

A recent summary of the objectives and accomplishments is provided in the 2004 ARM Science Plan.  The list of measurements (and instruments) made by ARM at its sites in the tropics, midlatitudes and the Arctic is very comprehensive.  Of particular relevance to evaluating infrared radiative transfer codes is the Atmospheric Emitted Radiance Interferometer.  For those of you who want empirical validation, the ARM program provides it in spades.

The ARM measurements have become the gold standard for validating radiative transfer models.  For line-by-line models, see this closure experiment described by Turner et al. 2004 (press release version here).   More recently, see this evaluation of the far infrared part of the spectrum by Kratz et al. (note: Miskolczi is a coauthor).

For a band model (used in various climate models), see this evaluation of the RRTM code:

Mlawer, E.J., S.J. Taubman, P.D. Brown,  M.J. Iacono and S.A. Clough: RRTM, a validated correlated-k model for the longwave. J. Geophys. Res., 102, 16,663-16,682, 1997 link

This paper is unfortunately behind a paywall, but it provides an excellent example of the validation methodology.

The most recent intercomparison of climate model radiative transfer codes against line-by-line calculations is described by Collins et al. in the context of radiative forcing.

There is a new international program (the successor to ICRCCM) called the Continual Intercomparison of Radiation Codes (CIRC), which establishes benchmark observational case studies and coordinates intercomparisons of models.

Conclusions

The problem of infrared atmospheric radiative transfer (clear sky, no clouds or aerosols) is regarded as a solved problem (with minimal uncertainties), in terms of the benchmark line-by-line calculations.   Deficiencies in some of the radiation codes used in certain climate models have been identified, and these should be addressed if these models are to be included in the multi-model ensemble analysis.

The greater challenges lie in modeling radiative transfer in an atmosphere with clouds and aerosols, although these challenges are greater for modeling solar radiation fluxes than for infrared fluxes.   The infrared radiative properties of liquid clouds are well known; some complexities are introduced for ice crystals owing to their irregular shapes (this issue is much more of a problem for solar radiative transfer than for infrared radiative transfer).  Aerosols are a relatively minor factor in infrared radiative transfer owing to the typically small size of aerosol particles.

However, if you can specify the relevant conditions in the atmosphere that provide inputs to the radiative transfer model, you should be able to make accurate calculations using state-of-the-art models.  The challenge for climate models is in correctly simulating the variations in atmospheric profiles of temperature, water vapor, ozone (and other variable trace gases), clouds and aerosols.

And finally, for calculations of the direct radiative forcing associated with doubling CO2, atmospheric radiative transfer models are more than capable of  addressing this issue (this will be the topic of the next greenhouse post).

Note: this is a technical thread, and comments will be moderated for relevance.




1,207 responses to “Confidence in radiative transfer models”

  1. David L. Hagen

    Ferenc Miskolczi posts papers and comments detailing the development of his quantitative Line By Line (LBL) HARTCODE program, testing it against data, and comparing it against other LBL models. He then used it to quantitatively evaluate the Earth’s Global Planck-weighted Optical depth.

    For the detailed discussion of the methodology and code, see:
    F.M. Miskolczi et al.: High-resolution atmospheric radiance-transmittance code (HARTCODE). In: Meteorology and Environmental Sciences Proc. of the Course on Physical Climatology and Meteorology for Environmental Application. World Scientific Publishing Co. Inc., Singapore, 1990. 220 pg.
    He provides extensive theoretical basis and experimental foundations. He printed the complete code in appendix C (for A. Lacis and others who might care to learn how LBL models calculate.)

    For comparative performance, see: Kratz-Mlynczak-Mertens-Brindley-Gordley-Torres-Miskolczi-Turner: An inter-comparison of far-infrared line-by-line radiative transfer models. Journal of Quantitative Spectroscopy & Radiative Transfer No. 90, 2005.

    Miskolczi published the first quantitative evaluation of the Global Optical Depth. He found it to be effectively stable over the last 61 years. See:
    The Stable Stationary Value of the Earth’s Global Average Atmospheric Planck-weighted Greenhouse-Gas Optical Thickness, Energy & Environment, Special issue: Paradigms in Climate Research, Vol. 21 No. 4 2010, August. See his Context Discussion:
    Using a quantitative Planck-weighted Optical Depth, Miskolczi found:

    (1) A theoretically predicted GHG-invariant constant, tau = 1.86756….. ;
    (2) The global average calculated on the TIGR2 radiosonde data archive (GAT), tau = 1.8693;
    (3) One derived from the TFK2009 global energy budget, tau = 1.8779; and
    (4) The average of 61 NOAA NCAR 1948-2008 reanalysis annual means, tau = 1.8688.

    In cases (1), (2), and (4), tau was calculated as tau = −ln(TA), with TA = ST/SU; in case (3), tau = −ln(1 − ED/(e·SU)), with e the emissivity.
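    (To make those two definitions concrete, here is a trivial sketch; the flux values below are round placeholders in W m^-2, not Miskolczi’s or TFK2009’s actual inputs.)

```python
import math

def tau_from_transmitted(St, Su):
    """tau = -ln(TA), with TA = St/Su (surface-transmitted over surface-upward LW flux)."""
    return -math.log(St / Su)

def tau_from_back_radiation(Ed, Su, emissivity=1.0):
    """tau = -ln(1 - Ed/(e*Su)), the form used for the energy-budget-based estimate."""
    return -math.log(1.0 - Ed / (emissivity * Su))

# Placeholder fluxes, for illustration only.
print(tau_from_transmitted(St=61.0, Su=396.0))      # about 1.87
print(tau_from_back_radiation(Ed=333.0, Su=396.0))  # value depends on the assumed emissivity
```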

    • I recall Nullius’s milk analogy, in which a small increase in the concentration of milk in water can noticeably affect transparency given enough depth (at least that’s how I remember it), used to illustrate the effect of small increases in the concentration of CO2 in the atmosphere. Does optical depth refer to this? If it does, and the optical depth has remained constant over 60 years, what does that say about the analogy?

    • David L. Hagen

      See Ferenc Miskolczi’s latest April 2011 results:
      The stable stationary value of the Earth’s global average atmospheric infrared optical thickness Presented by Miklos Zagoni, EGU2011 Vienna

      From quantitative Line By Line evaluations of the global optical depth using the best available data from 1948-2008, Miskolczi finds:

      The dynamics of the greenhouse effect depend on the dynamics of the absorbed solar radiation and the space-time distribution of the atmospheric humidity. The global distribution of the IR optical thickness is fundamentally stochastic. The instantaneous effective values are governed by the turbulent mixing of H2O in the air and the global (meridional ) redistribution of the thermal energy resulted from the general (atmospheric and oceanic) circulation. . . .

      Global mean IR absorption does not follow the CO2 increase (from 1948 to 2008). Greenhouse effect and the 21.6% increase of CO2 in the last 61 years are unrelated. Atmospheric H2O does but CO2 does not correlate with the IR optical depth. . . .Atmospheric CO2 increase can not be the reason of global warming. . . .IR Optical Depth has no correlation with time. The strong CO2 signal in any time series is not present in the IR optical depth data.

      Thus Miskolczi finds there is NO correlation of global optical depth with CO2, only with H2O.
      Furthermore, the global average is about constant – with very little trend.
      So would “stationary” or “static” be better words than “saturated”?

      Some will argue that the TIGR2 data is flawed. What better is there?
      Miskolczi has also adjusted to match satellite data.

      Has anyone else taken the effort to actually quantitatively evaluate the global optical depth and how it changes – or explain why it does not?

      • Christopher Game

        The term Planck-weighted greenhouse-gas optical thickness (PWGGOT) means that the optical thickness is evaluated for diffuse thermal radiation from a black surface (Planck weighting) at the bottom of the atmosphere for transmission to space. Only the greenhouse-gas effects are considered, and the calculation does not include explicit effects from clouds.
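        (As a rough illustration of what “Planck weighting” does, the sketch below weights a spectral transmittance by the surface Planck function before taking the logarithm. The toy transmittance spectrum is a placeholder, and the hemispheric integration over emission angles that the actual calculation performs is omitted.)

```python
import numpy as np

def planck(nu_cm, T):
    """Planck function versus wavenumber (cm^-1); only relative values matter here."""
    h, c, k = 6.626e-34, 2.998e10, 1.381e-23
    return 2.0 * h * c**2 * nu_cm**3 / np.expm1(h * c * nu_cm / (k * T))

def planck_weighted_tau(nu, spectral_transmittance, T_surface=288.0):
    """-ln of the Planck-weighted flux transmittance from a black surface to space."""
    B = planck(nu, T_surface)
    TA = np.trapz(B * spectral_transmittance, nu) / np.trapz(B, nu)
    return -np.log(TA)

# Toy clear-sky spectrum: nearly opaque in a 'CO2-like' band, partly transparent elsewhere.
nu = np.linspace(100.0, 2500.0, 5000)
trans = np.where((nu > 580.0) & (nu < 750.0), 0.02, 0.30)
print(planck_weighted_tau(nu, trans))
```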

        David Hagen’s sentence “Thus Miskolczi finds there is NO correlation of global optical depth with CO2, only with H2O” means that there was no linear trend of the global year-round average PWGGOT over the 61-years. It would misread David’s sentence to suppose it meant that immediate CO2 effects do not affect the PWGGOT; of course they do. It is the 61-year linear trend that he refers to.

        David’s sentence says that Miskolczi found a trend effect only from H2O. Not so. Miskolczi’s Figure 11 shows also a trend effect from temperature. Miklos Zagoni’s presentation, to which David refers, also does not mention this. There is also a trend effect from CO2. These two trend effects (CO2 and temperature) must be making real contributions to the calculated values of the PWGGOT (in the sense that those values are determined partly by the quantities of CO2 and temperature and the method of calculation of the PWGGOT), but the magnitudes of those contributions are not to be regarded as showing a statistically significant linear trend when one has to judge only from the 61-year time series of values of the PWGGOT and does not have this a priori information. It is the overall value of the PWGGOT that shows no significant linear trend; this trend cannot be predicted, from just the method of calculation of the PWGGOT alone, because it depends essentially also on the trends of the time series of radiosonde data, which contain the information of the CO2, temperature, and H2O profile time series.
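        (The statistical point can be illustrated in a few lines: fit an ordinary least-squares trend to a 61-year series and look at its p-value. The series below is synthetic and trendless by construction, purely to show the test; it is not Miskolczi’s data.)

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
years = np.arange(1948, 2009)                   # 61 annual values
tau = 1.87 + rng.normal(0.0, 0.01, years.size)  # synthetic, trendless placeholder series

fit = stats.linregress(years, tau)
print(f"slope = {fit.slope:.2e} per year, p = {fit.pvalue:.2f}")
# A large p-value means the 61-year record alone cannot distinguish the fitted slope from
# zero, even though physically real CO2, H2O, and temperature contributions enter the series.
```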

        Miskolczi entitled his 2010 paper ‘The stable stationary value of the earth’s global average atmospheric Planck-weighted greenhouse-gas optical thickness’ and his paper does not use the term ‘saturated’. Christopher Game

    • David L. Hagen

      Miskolczi provides further details, comparing linear trends in the NOAA time series for the first and last 50 years, i.e., 1948-1997 versus 1959-2008. The first shows a small decline while the latter shows a small rise in the global optical depth.

  2. I will just summarize some areas to fill in on the post.
    The radiative transfer models in GCMs are band or “integral-line” type models that are calibrated from line-by-line models that themselves are calibrated on theory and direct measurement of spectral lines from radiatively active gases. In this way, this part of the GCM has a direct grounding in physics, and is very easy to verify with spectroscopic measurements. These models are crucial for quantifying the forcing effects of increased CO2 and H2O, and cloud-forcing effects in the atmosphere. The radiative transfer codes get their clouds from other parts of the GCM physics, and are not responsible for clouds per se, but can affect clouds through the processes of radiative heating and cooling in and on the surfaces of clouds that may impact the cloud lifetimes and development. More sophisticated GCMs also carry dust and other aerosols and the interaction of radiation with those directly or via their cloud effects. Obviously another place that radiative forcing is important, apart from the atmosphere, is the ground where longwave and shortwave fluxes interact with the land or ocean energy budgets. Radiation also helps to define the tropospheric and stratospheric temperature profiles, and how well GCMs reproduce these is an important metric. For climate models, GCMs have to obey a realistic global energy budget, as viewed from space, and this is mainly down to their radiative transfer model and cloud distribution.
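    (A cartoon of the calibration step Jim D describes, and which is asked about below: generate band-mean transmittances from a line-by-line code over a range of absorber amounts, then fit a cheap parameterization to them. The numbers here are synthetic placeholders rather than real LBL output, and real schemes use more elaborate functional forms plus temperature and pressure scaling.)

```python
import numpy as np
from scipy.optimize import curve_fit

# Pretend these band-mean transmittances came from a line-by-line code, tabulated
# against absorber amount u (synthetic placeholder values).
u = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
t_lbl = np.array([0.93, 0.85, 0.70, 0.52, 0.33, 0.18])

def band_transmittance(u, a, b):
    """Simple two-coefficient band parameterization, T(u) = exp(-a * u**b)."""
    return np.exp(-a * u**b)

(a, b), _ = curve_fit(band_transmittance, u, t_lbl, p0=(0.1, 0.5))
print(a, b)                                                  # fitted band coefficients
print(np.max(np.abs(band_transmittance(u, a, b) - t_lbl)))   # worst-case fit error
```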

    • Thanks!

    • Here’s a good site for the ocean’s absorption of SW vs. LW for non-scientists. Unfortunately, I don’t see any reference to heating of the ocean from down-welling LWR. From the site:

      “Note that only 20% of insolation reaching Earth is absorbed directly by the atmosphere while 49% is absorbed by the ocean and land. What then warms the atmosphere and drives the atmospheric circulation shown in Figure 4.3? The answer is rain and infrared radiation from the ocean absorbed by the moist tropical atmosphere. Here’s what happens. Sunlight warms the tropical oceans which must evaporate water to keep from warming up. The ocean also radiates heat to the atmosphere, but the net radiation term is smaller than the evaporative term. Trade-winds carry the heat in the form of water vapor to the tropical convergence zone where it falls as rain. Rain releases the latent heat evaporated from the sea, and it heats the air in cumulus rain clouds by as much as 500 W/m2 averaged over a year (See Figure 14.1).

      At first it may seem strange that rain heats the air. After all, we are familiar with summertime thunderstorms cooling the air at ground level. The cool air from thunderstorms is due to downdrafts. Higher in the cumulus cloud, heat released by rain warms the mid-levels of the atmosphere causing air to rise rapidly in the storm. Thunderstorms are large heat engines converting the energy of latent heat into kinetic energy of winds.

      The zonal average of the oceanic heat-budget terms (Figure 5.7) shows that insolation is greatest in the tropics, that evaporation balances insolation, and that sensible heat flux is small. Zonal average is an average along lines of constant latitude. Note that the terms in Figure 5.7 don’t sum to zero. The areal-weighted integral of the curve for total heat flux is not zero. Because the net heat flux into the oceans averaged over several years must be less than a few watts per square meter, the non-zero value must be due to errors in the various terms in the heat budget.”

      http://oceanworld.tamu.edu/resources/ocng_textbook/chapter05/chapter05_06.htm

    • Jim

      you said:
      “………….The radiative transfer models in GCMs are band or “integral-line” type models that are calibrated from line-by-line models that themselves are calibrated on theory and direct measurement of spectral lines from radiatively active gases……..”

      Please could you explain (in layman’s terms) a little bit more about this calibration process. What is it? What is done to ensure that errors are not carried over from one type of model into the next and then magnified iteratively? Or have I misunderstood…in which case sorry in advance:)

    • Richard S Courtney

      Jim D:

      You say;
      “For climate models, GCMs have to obey a realistic global energy budget, as viewed from space, and this is mainly down to their radiative transfer model and cloud distribution.”

      Hmmm.
      Yes, that is literally true, but it is misleading.

      As I have explained on two other threads of this blog, the “radiative transfer model” in each GCM is significantly affected by the climate sensitivity to CO2 in each model, and agreement with the “global energy budget” is obtained by adopting an appropriate degree of aerosol forcing (i.e. “cloud distribution”) in each model.

      The values of climate sensitivity and aerosol forcing differ by ~250% between the models. Hence, the GCMs emulate very different global climate systems.

      The Earth has only one global climate system.

      Richard

      • Yes, a part of the radiative transfer model is how it handles aerosols, both in clear air and in clouds. This includes pollution and volcanic emissions. Global observations of aerosol effects haven’t been enough to constrain this very well, and it is a complex effect, especially when clouds are involved. Given this lack of observational constraints the aerosol part of the radiative model and aerosol amounts have some wide error bars (as we see honestly portrayed in the IPCC report). As long as models do something within these error bars, they are plausible, but this is an area where more observations are needed and are being actively carried out (e.g. in the DOE ASR program) to do it better. It is closely tied to the cloud-albedo variation.

  3. David L. Hagen

    Regarding the “confidence” in radiation models, Ferenc Miskolczi addresses radiation errors in his comments on Kiehl-Trenberth 1997/IPCC and in Trenberth-Fasullo-Kiehl (TFK) 2009:

    The longwave section of the Kiehl-Trenberth 1997 (= IPCC 2007 AR4 WG1 Chapter1 FAQ1.1 Figure1) is wrong, both in the concept and in the numbers. The definition of the “Atmospheric Window” in the text does not match with the physical quantity shown in their chart (see Miskolczi’s Comments on KT97). The value of it, given as all-sky top-of-the-atmosphere window flux, should be about 66 Wm^-2, instead of 40 W m^-2, given in both the original KT97 and in the updated Trenberth-Fasullo-Kiehl (TFK) 2009 BAMS publication.

    A further serious problem is that KT97 used the U.S. Standard Atmosphere 1976 (USST76), with appendix B of Liou 1992, for vertical profiles of temperature and water vapor. But that atmosphere contains only about half of the real global average precipitable water. Calculating the greenhouse effect on that reduced GHG content, the KT97 distribution should have an atmospheric window radiation of about 99 W m^-2. This is even more unacceptable as a real global average than the given 40 W m^-2 value.
    For the correctly defined physical quantity (surface transmitted radiation, ST, on the whole spectrum) in his set of data collection, Miskolczi’s computations give a global average value of about 60 W m^-2 (see Table 2 of his recent publication and his Table 1 and Figure 1 for definitions of the quantities). This number is close to what NASA measured with their instrument and their methods, and fits well with the required value of the equilibrium distribution.
    Correcting the TFK2009 Atmospheric Window with the real global average, the resultant greenhouse effect again sits very close to the stable stationary value. In this sense, these quantities support the idea that the Earth’s greenhouse effect maintains a kind of constancy.

    For earlier color graphs, see Zagoni’s 2008 presentation on Miskolczi.

    • Miskolczi here is just quibbling with how the arrows are partitioned on these global energy budget summary diagrams. It is nothing fundamental about the radiative transfer models themselves.

      • David L. Hagen

        Jim D
        That difference is “only” a 30% error in the attribution of upward emission St (to the top of atmosphere) that is not absorbed and re-emitted, identifying a major error in atmospheric radiation absorption/emission! Is that typical of the accuracy expected of GCM energy balances? (Most other parameters were fairly close.) I thought Curry noted above:

        What is accurate enough in terms of fitness for purpose? I would say calculation of the (clear sky, no clouds or aerosol) broadband IR flux to within 1-2 W m^-2 (given perfect input variables, e.g. temperature profile, trace profiles, etc.)

        I seem to recall a climategate email bewailing: “The fact is that we can’t account for the lack of warming at the moment and it is a travesty that we can’t”, with some clarification.

        See the paper by: Kevin E Trenberth An imperative for climate change planning: tracking Earth’s global energy, Current Opinion in Environmental Sustainability 2009, 1:19–27
        Trenberth notes the observed energy flow is 145 × 10^20 J/year while the “residual” (error) is 30–100 × 10^20 J/year (i.e., 21-29% unaccounted-for “residual”).

        Radiative forcing [2] from increased greenhouse gases (Figure 4) is estimated to be 3.0 W m^-2 or 1.3% of the flow of energy, and the total net anthropogenic radiative forcing once aerosol cooling is factored in is estimated to be 1.6 W m^-2 (0.7%), close to that from carbon dioxide alone (Figure 4). The imbalance at the top-of-the-atmosphere (TOA) would increase to be 1.5% (3.6 W m^-2) once water vapor and ice-albedo feedbacks are included. . . . To better understand and predict regional climate change, it is vital to be able to distinguish between short-lived climate anomalies, such as caused by El Niño or La Niña events, and those that are intrinsically part of climate change, whether a slow adjustment or trend, such as the warming of land surface temperatures relative to the ocean and changes in precipitation characteristics.

        Perhaps this 30% radiation error that Miskolczi identified together with the factor of 2 error in precipitable water column could help refine/find Trenberth’s missing energy?

      • He redefined what the window region was, and says the previous paper was wrong because they defined it differently from him. Seems like an opinion piece.
        The other part about not accounting for interannual variability with existing data sources is well known. Is it random error or bias? It would be good to know because random error decreases with the length of the time series. Bias implies we need more or better instrumentation.

    • David L. Hagen

      Judith
      Re: Global Optical Depth & Precipitable Water

      Planck-weighted Global Optical Depth
      Ferenc Miskolczi has evaluated:
      1) The Planck-weighted Global Optical Depth (tau_A = −ln(TA) = 1.874)
      2) The sensitivity of Optical Depth versus water vapor
      3) The sensitivity of Optical Depth versus CO2
      4) The sensitivity of Optical Depth versus temperature, and
      5) The trend in Optical Depth for 61 years – very low.
      Each of these parameters are testable against each of the Global Climate Models, and vice versa. These would provide independent objective tests of the “confidence” in the radiative code and atmospheric properties in each of the GCMs and Miskolczi’s evaluations and 1D model.

      Precipitable Water
      In his work, Miskolczi reevaluated the atmospheric profile, obtaining water vapor, CO2, and temperature vs depth (pressure). In doing so he found:
      6) Precipitable water u = 2.533 prcm
      7) This precipitable water is a factor of two higher than that of the standard atmospheric profile USST-76 (u = 1.76 prcm).
      e.g., see Fig. 5 in Miskolczi, Greenhouse effect in semi-transparent planetary atmospheres, Quarterly Journal of the Hungarian Meteorological Service, Vol. 111, No. 1, January–March 2007, pp. 1–40.

      A. Lacis (below) noted that:

      In particular, given the nature of atmospheric turbulence, a ‘first principles’ formulation for water vapor and cloud processes is not possible. Because of this, there are a number of adjustable coefficients that have to be ‘tuned’ to ensure that the formulation of evaporation, transport, and condensation of water vapor into clouds, and its dependence on wind speed, temperature, relative humidity, etc., will be in close agreement with current climate distributions. However, once these coefficients have been set, they become part of the model physics, and are not subject to further change.

      Global climate models have been found to perform poorly, especially in predicting precipitation. See Anagnostopoulos, G. G., Koutsoyiannis, D., Christofides, A., Efstratiadis, A. and Mamassis, N., ‘A comparison of local and aggregated climate model outputs with observed data’, Hydrological Sciences Journal, 55:7, 1094–1110.
      GCMs appear to markedly overpredict warming compared with observed global temperature changes. See:
      McKitrick, Ross R., Stephen McIntyre and Chad Herman (2010) “Panel and Multivariate Methods for Tests of Trend Equivalence in Climate Data Sets”. Atmospheric Science Letters, DOI: 10.1002/asl.290.
      Global Climate Models apparently do not predict the significant correlation between precipitation/runoff and the 21-year Hale (double sunspot) solar cycle. See:
      WJR Alexander et al. Linkages between solar activity, climate predictability and water resource development Journal of the South African Institution of Civil Engineering, Volume 49 Number 2 June 2007

      Miskolczi’s reported water vapor results, together with evidence of poor GCM performance, raise key questions:
      9) What is the precipitable water vapor relative to Miskolczi’s atmospheric column (u=2.53 prcm) versus the USST-76 standard atmospheric column (u=1.76 prcm)?
      10) Which atmospheric profiles were used to tune the GCM’s?
      11) Could tuning to USST-76 etc., with its low water vapor, cause GCMs to overpredict climate feedback and poorly predict precipitation?

      I look forward to learning more on these issues.

      Recommend running another thread just on the uncertainties in atmospheric profiles.

  4. David,

    Your unmitigated faith in Ferenc’s nonsense does not speak well of your own understanding of radiative transfer.

    I include below an excerpt from my earlier post on Roger Pielke Sr’s blog http://pielkeclimatesci.wordpress.com/2010/11/23/atmospheric-co2-thermostat-continued-dialog-by-andy-lacis/

    The basic point being that of all the physical processes in a climate GCM, radiation is the one physical process that can be modeled most rigorously and accurately.

    The GISS ModelE is specifically designed to be a ‘physical’ model, so that Roy Spencer’s water vapor and cloud feedback ‘assumptions’ never actually need to be made. There is of course no guarantee that the model physics actually operate without flaw or bias. In particular, given the nature of atmospheric turbulence, a ‘first principles’ formulation for water vapor and cloud processes is not possible. Because of this, there are a number of adjustable coefficients that have to be ‘tuned’ to ensure that the formulation of evaporation, transport, and condensation of water vapor into clouds, and its dependence on wind speed, temperature, relative humidity, etc., will be in close agreement with current climate distributions. However, once these coefficients have been set, they become part of the model physics, and are not subject to further change. As a result, the model clouds and water vapor are free to change in response to local meteorological conditions. Cloud and water vapor feedbacks are the result of model physics and are thus in no way “assumed”, or arbitrarily prescribed. A basic description of ModelE physics and of ModelE performance is given by Schmidt et al. (2006, J. Climate, 19, 153–192).

    Of the different physical processes in ModelE, radiation is the closest to being ‘first principles’ based. This is the part of model physics that I am most familiar with, having worked for many years to design and develop the GISS GCM radiation modeling capability. The only significant assumption being made for radiation modeling is that the GCM cloud and absorber distributions are defined in terms of plane parallel geometry. We use the correlated k-distribution approach (Lacis and Oinas, 1991, J. Geophys. Res., 96, 9027–9063) to transform the HITRAN database of atmospheric line information into absorption coefficient tables, and we use the vector doubling adding method as the basis and standard of reference for GCM multiple scattering treatment.
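    (For readers unfamiliar with the correlated k-distribution idea Andy mentions, here is a heavily simplified sketch: within a band, sort the spectral absorption coefficients into a cumulative distribution k(g) and integrate over g with a handful of points instead of the full spectral grid. The spectrum below is random placeholder data; the real method uses Gaussian quadrature, tabulated HITRAN-derived coefficients, and the ‘correlated’ assumption to handle vertical inhomogeneity and gas overlap, none of which appears here.)

```python
import numpy as np

def k_distribution_transmittance(k_nu, column, n_g=8):
    """Band-mean transmittance from the k-distribution: sort k, re-index by cumulative
    probability g, and integrate exp(-k(g)*u) over g with a few equal-weight points."""
    k_sorted = np.sort(k_nu)
    g = (np.arange(k_sorted.size) + 0.5) / k_sorted.size
    g_quad = np.linspace(0.0, 1.0, n_g + 2)[1:-1]   # crude interior quadrature points
    k_quad = np.interp(g_quad, g, k_sorted)
    return np.mean(np.exp(-k_quad * column))

# Synthetic spectrum of absorption coefficients for one band (placeholders).
rng = np.random.default_rng(1)
k_nu = 10.0 ** rng.uniform(-24.0, -19.0, 20000)     # cm^2 per molecule
column = 1.0e21                                     # molecules per cm^2

brute_force = np.mean(np.exp(-k_nu * column))       # full spectral average, for comparison
print(brute_force, k_distribution_transmittance(k_nu, column))
```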

    Direct comparison of the upwelling and downwelling LW radiative fluxes, cooling rates, and flux differences between line-by-line calculations and the GISS ModelE radiation model results for the Standard Mid-latitude atmosphere is shown in Figure 1 below. (available on Roger Pielke Sr’s blog)

    As you can see, the GCM radiation model can reproduce the line-by-line calculated fluxes to better than 1 W/m2. This level of accuracy is representative for the full range of temperature and water vapor profiles that are encountered in the atmosphere for current climate as well as for excursions to substantially colder and warmer climate conditions. The radiation model also accounts in full for the overlapping absorption by the different atmospheric gases, including absorption by aerosols and clouds. In my early days of climate modeling when computer speed and memory were strong constraints, the objective was to develop simple parameterizations for weather GCM applications (e.g., Lacis and Hansen, 1974, J. Atmos. Sci., 31, 118–133). Soon after, when the science focus shifted to real climate modeling, it became clear that an explicit radiative model responds accurately to any and all changes that might take place in ground surface properties, atmospheric structure, and solar illumination. Thus the logarithmic behavior of radiative forcings for CO2 and for other GHGs is behavior that has been derived from the GCM radiation model’s radiative response (e.g., the radiative forcing formulas in Hansen et al., 1988, J. Geophys. Res., 93, 9341–9364) rather than being some kind of a constraint that is placed on the GCM radiation model.

    Climate is primarily a boundary value problem in physics, and the key boundary value is at the top of the atmosphere being defined entirely by the incoming (absorbed) solar radiation and the outgoing LW thermal radiation. The global mean upwelling LW flux at the ground surface is about 390 W/m2 (for 288 K), and the outgoing LW flux at TOA is about 240 W/m2 (or 255 K equivalent). The LW flux difference that exists between the ground and TOA of 150 W/m2 (or 33 K equivalent) is a measure of the terrestrial greenhouse effect strength. We should note that the transformation of the LW flux that is emitted upward by the ground, to the LW flux that eventually leaves the top of the atmosphere, is entirely by radiative transfer means. Atmospheric dynamical processes participate in this LW flux transformation only to the extent of helping define the atmospheric temperature profile, and in establishing the local atmospheric profiles of water vapor and cloud distributions that are used in the radiative calculations.
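    (The global-mean numbers in that paragraph follow directly from the Stefan-Boltzmann law; a three-line check:)

```python
sigma = 5.670e-8                      # Stefan-Boltzmann constant, W m^-2 K^-4
print(sigma * 288.0**4)               # ~390 W/m2, emission from a 288 K surface
print(sigma * 255.0**4)               # ~240 W/m2, outgoing LW at a 255 K effective temperature
print(sigma * (288.0**4 - 255.0**4))  # ~150 W/m2 greenhouse flux difference (~33 K equivalent)
```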

    Armed with a capable radiative transfer model, it is then straightforward to take apart and reconstruct the entire atmospheric structure, constituent by constituent, or in any particular grouping, to attribute what fraction of the total terrestrial greenhouse effect each atmospheric constituent is responsible for. That is where the 50% water vapor, 25% cloud, and 20% CO2 attribution in the Science paper (for the atmosphere as a whole) came from. “Follow the money!” is the recommended strategy to get to the bottom of murky political innuendos. A similar approach, using “Follow the energy!” as the guideline, is an effective means for fathoming the working behavior of the terrestrial climate system. By using globally averaged radiative fluxes in the analysis, the complexities of advective energy transports get averaged out. The climate energy problem is thereby reduced to a more straightforward global energy balance problem between incoming (absorbed) SW solar energy and outgoing LW thermal energy, which is fully amenable to radiative transfer modeling analysis. The working pieces in the analysis are the absorbed solar energy input, the atmospheric temperature profile, surface temperature, including the atmospheric distribution of water vapor, clouds, aerosols, and the minor greenhouse gases, all of which can be taken apart and re-assembled at will in order to quantitatively characterize and attribute the relative importance of each radiative contributor.

    Validation of the GCM climate modeling performance is in terms of how well the model generated temperature, water vapor, and cloud fields resemble observational data of these quantities, including their spatial and seasonal variability. It would appear that ModelE does a generally credible job in reproducing most aspects of the terrestrial climate system. However, direct observational validation of the GCM radiation model performance to a useful precision is not really feasible since the atmospheric temperature profile and absorber distributions cannot all be measured simultaneously with available instrumentation to the required precision that would lead to a meaningful closure experiment. As a result, validation of the GCM radiation model performance must necessarily rely on the established theoretical foundation of radiative transfer, and on comparisons to more precise radiative transfer bench marks such as line-by-line and vector doubling calculations that utilize laboratory measurements for cloud and aerosol refractive indices and absorption line radiative property information.

    • Thanks Andy. I agree that validation is a mess owing to the difficulty of specifying the atmospheric characteristics, but clear sky validation has been done quite successfully by the ARM program.

    • Hi A Lacis,
      Given your background as a GISS climate modeler, I am interested in your view of Willis Eschenbach’s analysis of the performance of various models at http://wattsupwiththat.com/2010/12/02/testing-testing-is-this-model-powered-up/#more-28755.

      Thanks,
      Chip

    • Andy,
      Does the GISS ModelE cover all wavelengths between 200 nm (UV) and 50000 nm (longer IR) ? Or are there gaps? More specifically, does it cover the 2400 -3600 nm wavelengths?

    • David L. Hagen

      A. Lacis
      Judith has called for “Raising the level of the game”
      Will you rise to the level of professional scientific conduct?
      Or need we treat your comments as partisan gutter diatribe?
      Your diatribe on “unmitigated faith in Ferenc nonsense does not speak well of your” professional scientific abilities or demeanor. I have a physics degree and work with combustion where most heat transfer is from radiation. (“Noise” can distort a round combustor into a triangular shape!) Though not a climatologist, I think I am sufficiently “literate” to follow the arguments – and challenge unprofessional conduct when I see it.

      Per professional science, I see Miskolczi has created a world class software program to professionally calculate radiative exchange in his Line By Line HARTCODE. He then validates his HARTCODE LBL code and compares it in round-robin intercomparisons against other codes in peer-reviewed published papers.
      Miskolczi’s HARTCODE was used at NASA to correct/interpolate satellite data: AIRS – CERES Window Radiance Comparison, AIRS-to-CERES Radiance Conversion

      He then applies this radiative code using published data to evaluate atmospheric radiative fluxes, analyze them, and form a 1D planetary climate model.
      He takes the best/longest available 61 year TIGR radiosonde data and the NOAA reconstruction data. Miskolczi then calculates a “Planck-weighted spectral hemispheric transmittance using M=3490 spectral intervals, K=9 emission angle streams, N=11 molecular species, and L=150 homogeneous atmospheric layers.” That appears to be the first quantitative evaluation of the Global Optical Depth and absorption.

      He has posted preliminary results showing that NASA satellite data show supporting trends parallel to his analysis. See Independent satellite proof for Miskolczi’s Aa=Ed equation. Note that the TIGR relation Su = Ed + St is linear in the surface upward flux Su, and the NASA data are parallel to that.

      From the scientific method, I understand that professionally you could challenge:
      1) The data
      2) The code
      3) The uncertainty, and bias, or
      4) Missing/misunderstood physics.

      You say: radiation “is the part of model physics that I am most familiar with, having worked for many years to design and develop the GISS GCM radiation modeling capability.” You could have evaluated Miskolczi’s definition of the “greenhouse effect”, his Planck Weighting, or calculation of the optical depth or atmospheric absorption. As you didn’t, I presume they are correct.

      Having developed them, you presumably have the tools to conduct an alternative evaluation to check the accuracy of Miskolczi’s method and results. Judith cites your paper: “Line by line models tend to agree with each other to within 1%; however, the intercomparison shows a spread of 10-20% in the calculations by less detailed climate model codes. When outliers are removed, the agreement between narrow band models and the line-by-line models is about 2% for fluxes.” Presumably your radiation model has no more than double the error of Miskolczi’s HARTCODE (<2% vs <1%).
      Why then do you not try to replicate Miskolczi’s method in a professional scientific manner? Are you afraid you might confirm his results?

      You observe: “The reason this “Miskolczi equality” is close to being true is that in an optically dense atmosphere (the atmospheric temperature being continuous) there will be but a small difference in the thermal flux going upward from the top of a layer compared to the flux emitted downward from the bottom of the layer above.” Spencer makes a similar criticism. Miskolczi (2010) quantitatively measures and shows a small difference between upward and downward flux. For the purpose of Miskolczi’s 1D planetary greenhouse model, that difference between upward and downward flux appears to be a second-order effect that does not strongly bear on the primary results of his global optical depth or absorption. He still accounts for the atmospheric variation in temperature, water vapor, and CO2 in the empirical data.

      The only serious issue you implied in your posts is how representative the TIGR and NOAA atmospheric profiles are of the global atmosphere: “. . .the atmospheric temperature profile and absorber distributions cannot all be measured simultaneously with available instrumentation to the required precision that would lead to a meaningful closure experiment.”

      Regarding Miskolczi’s Planck-weighted Global Optical Depth and Absorption, other issues that have been raised are how well he treats clouds, and the accuracy of the TIGR/NOAA data.
      You could quantitatively show contrary evidence that
      1) there is a systematic false trend, due to experimental error, in the data over the last 61 years;
      2) there are major errors due to how clouds and convection are treated in this 1D model; or
      3) poor data distribution causes major errors in his results.

      Instead I see you respond with scientific slander, asserting that Miskolczi imposed the results of his subsequent simplified climate model on his earlier calculations.

      In your comments on Ferenc Miskolczi’s greenhouse analysis, you said that “There is no recourse then but to rely on statistical analysis and correlations to extract meaningful information.” You claim that “radiative analyses performed in the context of climate GCM modeling, have the capability of being self-consistent in that the entire atmospheric structure (temperature, water vapor, ozone, etc. distributions) is fully known and defined.”
      You go on to state: A Lacis | December 5, 2010 at 12:12 pm
      “We also analyze an awful lot of climate related observational data. Data analysis probably takes up most of our research time. Observational data is often incomplete, poorly calibrated, and may contain spurious artifacts of one form or another. This is where statistics is the only way to extract information.” Consequently you claim: “And this climate model does a damn good job in reproducing the behavior of the terrestrial climate system, . . .”

      However, when another scientist, Miskolczi, conducts such “statistical analysis and correlations to extract meaningful information” you scientifically slander him, asserting that he did not conduct his analysis as you believe it should. Yet he conducted a much more detailed analysis along similar lines to your proposed method.

      You assert: “Instead of calculating these atmospheric fluxes, Miskolczi instead assumes that the downwelling atmospheric flux is simply equal to the flux (from the ground) that is absorbed by the atmosphere.”
      You claim that he imposed the results of a consequent simplified model on his detailed calculations. I challenged you that you were asserting the exact opposite of his actual published method. When confronted, you refused to check his work or to correct your statement, or show where I was wrong. I find your polemical diatribes to sound like the Aristotelians criticizing Galileo. Your words border on professional malpractice.

      You state: “Because of his, there are a number of adjustable coefficients that have to be ‘tuned’ to ensure that the formulation of evaporation, transport, and condensation of water vapor into clouds, and its dependence on wind speed, temperature, relative humidity, etc., will be in close agreement with current climate distributions. . . . However, once these coefficients have been set, they become part of the model physics, and are not subject to further change.”
      Yet when Miskolczi does the same “tuning” of the atmospheric profiles with the available TIGR and NOAA data to obtain empirical composition, temperature, pressure and humidity, you accuse him of imposing his simple model on the calculations.

      You have not shown ANY error in the radiative physics he built on, nor in the coding of his HARTCODE software, nor in the atmospheric profiles he generated.
      You say: “The working pieces in the analysis are the absorbed solar energy input, the atmospheric temperature profile, surface temperature, including the atmospheric distribution of water vapor, clouds, aerosols, and the minor greenhouse gases”. I understand Miskolczi to include these with the effect of “clouds and aerosols” in the atmospheric profiles.

      You state: “validation of the GCM radiation model performance must necessarily rely on the established theoretical foundation of radiative transfer, and on comparisons to more precise radiative transfer bench marks such as line-by-line and vector doubling calculations that utilize laboratory measurements for cloud and aerosol refractive indices and absorption line radiative property information.”

      It appears that Miskolczi has done exactly that, with independent data. It appears that when he comes up with results opposite to your paradigm, you conduct a vicious ad hominem attack, scientifically slandering him, instead of responding in a professional, objective, scientific way.

      You again accuse “Roy Spencer” of making “water vapor and cloud feedback ‘assumptions’”. I understood Spencer to have done the opposite. In his recent papers, he actually measures dynamic phase/feedback magnitudes from satellite data and phase angle.
      In your cited Pielke post you state: “there is really nothing that is being assumed about cloud and water vapor feedbacks, other than clouds and water vapor behave according to established physics. Climate feedbacks are simply the end result of model physics.” However, you make assumptions on the stability of Total Solar Insolation, on the variability of clouds with cosmic rays, and on the cause and magnitude of ocean oscillations. The cause and magnitude of natural CO2 changes vary strongly with each of those assumptions. The strong difference between your results and those of Miskolczi and Spencer raises serious questions about your results and your (low) estimates of the uncertainties involved.
      At the International Conference on Climate Change
      Ferenc Miskolczi presented: Physics of the Planetary Greenhouse Effect
      http://www.heartland.org/bin/media/newyork08/PowerPoint/Tuesday/miskolczi.pdf
      And
      Miklos Zagoni, presented: Paleoclimatic Consequences of Dr. Miskolczi’s Greenhouse Theory
      http://www.heartland.org/bin/media/newyork08/PowerPoint/Tuesday/zagoni.pdf

      Physicist Dr. Ir. E. van Andel has addressed The new climate theory of Dr. Ferenc Miskolczi.
      Physicist Miklos Zagoni has presented on: The Saturated Greenhouse Theory of Ferenc Miskolczi.
      There are numerous technical posts on Miskolczi at Niche Modeling.

      From preliminary reading, Miskolczi’s work appears professional and it has been peer reviewed. I would have preferred further refinement of its language. I do not have the time or tools to professionally review, reproduce or test Miskolczi’s work. I may be wrong in my “lay scientific” perspective. However, antiscientific diatribes don’t persuade.

      You have the radiation modeling tools. Use them. You refused to read, or follow Miskolczi’s methods and calculations. Your actions speak much louder than your words. Until you provide credible qualitative or quantitative scientific rebuttal, I will stick to believing in the scientific method. I continue to hold that peer reviewed published papers like Miskolczi’s and Spencer’s have more weight than alarmist polemical posts.

      Will you step up to the challenge of “Raising the level of your game” with a professional scientific response?

      • David,

        In my remarks about Miskolczi’s paper, I never claimed that his line-by-line HARTCODE results were erroneous. I am not familiar with HARTCODE, so I said, let’s assume that Miskolczi is doing his line-by-line calculations correctly. I am more familiar with the line-by-line results of radiation codes like FASCODE and LBLRTM, which agree with our line-by-line model to better than 1%. Our GCM ModelE radiation model agrees with our line-by-line model also to better than 1%.

        The real problem with Miskolczi’s results is not HARTCODE, but what he uses it for. Why on Earth would anybody want to calculate all of atmospheric absorption in the form of a useless “greenhouse gas optical thickness” parameter? You should know that atmospheric gaseous absorption is anything but uniform. You need to preserve as much of the spectral absorption coefficient variability as possible, and to treat this spectral variability properly in order to calculate the radiative transfer correctly. This kind of approach is something that might have been used a hundred years ago when there were no practical means to do numerical calculations.

        Can Miskolczi calculate the well established 4 W/m2 radiative forcing for doubled CO2, and its equivalent 1.2 C global warming, with his modeling approach?
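        (For reference, the two numbers Andy cites come from standard simplified expressions: the Myhre et al. 1998 logarithmic fit for the CO2 forcing and the Planck-only response at the effective emission temperature. This is a back-of-envelope sketch, not anyone’s GCM calculation.)

```python
import math

sigma, T_eff = 5.670e-8, 255.0
delta_F = 5.35 * math.log(2.0)             # ~3.7 W/m2 forcing for doubled CO2 (Myhre et al. fit)
planck_response = 4.0 * sigma * T_eff**3   # dF/dT at the effective emission temperature
print(delta_F, delta_F / planck_response)  # ~3.7 W/m2 and roughly 1 K no-feedback warming
```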

        And you are misinterpreting Roy Spencer’s remarks about the results of our Science paper where we zeroed out the non-condensing greenhouse gases. Roy thought that the subsequent collapse of the greenhouse effect with water vapor condensing and raining out was because we had “assumed” that water vapor was a feedback effect. I just pointed out that we had made no such assumption. Water vapor in the GISS ModelE is calculated from basic physical relationships. The fact that water vapor condensed and precipitated from the atmosphere was the result of the thermodynamic properties of water vapor, and not assumptions of whether we wanted the water vapor to stay or to precipitate.

      • David L. Hagen

        A. Lacis
        Thanks for your response and queries. It is encouraging to hear that: “Our GCM ModelE radiation model agrees with our line-by-line model also to better than 1%.” That is very respectable compared to the 2004 intercomparison of LBL models using HITRAN 2000. I believe performance and resolution have improved since then. E.g., Miskolczi uses: “wavenumber integration [is] performed numerically by 5th order Gaussian quadrature over a wavenumber mesh structure of variable length. At least Δν_j ≈ 1 cm−1 spectral resolution is required for the accurate Planck weighting.”
        AL: “Why on Earth would anybody want to calculate all of atmospheric absorption in the form of a useless “greenhouse gas optical thickness” parameter.”
        DH: One foundational reason is to uphold the very integrity of science against authoritarianism. A second foundational reason is to provide an independent check on the validity of predictions of Catastrophic Anthropogenic Global Warming (CAGW) compared to natural causes of climate change. What are the relative magnitudes of these competing factors?
        The orthodox model dominated by “anthropogenic CO2 warming + H2O Feedback” implies an increasing optical absorption or optical depth with increasing fossil combustion CO2. Alternative theories seek to quantify natural causes of climate change including the 21 year (“double”) magnetic Hale solar cycle, perturbations of Jovian planets on the sun and in turn the earth, solar modulation of galactic cosmic rays impacting clouds, rotation of the Milky Way galaxy affecting galactic cosmic rays etc. One or more of these are being tested to explain the strong correlation between variations in the Earth’s Length Of Day (LOD) with climate etc. Miskolczi (2004) (Fig 18, 19) demonstrates greater accuracy of LBL evaluations of atmospheric emission compared to conventional bulk models. That helps evaluation of global energy budgets. Quantitative LBL based 1D models can also provide an independent test for the “runaway greenhouse” effect.
        Both NASA and Russia lost rockets and spacecraft due to programming errors. We the People are now being asked for $65 trillion for “climate mitigation”. Many scientists, engineers and concerned citizens are asking for “a second opinion” and for exhaustive “kicking the tires” tests.
        AL:

        validation of the GCM radiation model performance must necessarily rely on the established theoretical foundation of radiative transfer, and on comparisons to more precise radiative transfer bench marks such as line-by-line and vector doubling calculations

        DH: In light of this validation difficulty you noted, Miskolczi’s comprehensive detailed quantitative calculation of the Planck weighted global optical depth directly from original TIGR radiosonde or summary NOAA data provides an independent test of this primary CO2 radiation climate sensitivity with the best available data extending back 61 years. If the global optical depth does NOT vary as expected, then that suggests other parameters have greater impact than conventionally modeled. To my knowledge, these Planck-weighted global optical transmittance and absorption parameters have never been calculated before. Nor have they been used as an independent test of GCM results. 1) Please let us know of any other publications that have done so.

        AL: “You should know that atmospheric gaseous absorption is anything but uniform.”
        DH: I agree. Miskolczi (2004), Miskolczi (2008) and Miskolczi (2010) show detailed nonlinear results of the absorption for water vapor vs CO2, for the surface, as a function of altitude, at Top of Atmosphere, as a function of latitude, grouped for atmospheric WINdow, MID Infra Red, Far Infra Red, for atmospheric down emission and up emission.
        2) Can you refer us to any other paper(s) that provide equal or higher detail on this non-uniformity? – especially with full LBL quantitative calculations?

        AL: You need to preserve as much of the spectral absorption coefficient variability as possible, and to treat this spectral variability properly in order to calculate the radiative transfer correctly.
        DH: I agree. Miskolczi retains full “spectral absorption coefficient variability” for each absorbing gas species, across 3459 spectroscopic ranges; as a function of atmospheric column including altitude calculated over 150 layers, as a function of latitude, and as a function of radiant angle.
        AL: This kind of approach is something that might have been used a hundred years ago when there were no practical means to do numerical calculations.”
        DH: It is precisely because high power computational resources are now available that Miskolczi is able to conduct these extremely detailed quantitative computations (compared to prior highly simplified bulk calculations; I would never have dreamed of doing this on my slide rule). Miskolczi calculates absorption for individual cells as a function of altitude and latitude, incorporating all the variations in temperature, pressure, and water vapor, as a function of wavelength including direct short wave (visible) absorption, and reflected absorption (when the surface is not black); as well as Long Wave (broken into sub groups of the atmospheric WINdow, Mid and Far IR). Each cell views IR emissions from other cells along 11 directions. Miskolczi calculates Planck weighted absorption to give a true solar weighting. All this is then integrated to obtain a Planck-Weighted Global Optical transmission tau, and then the corresponding Planck-weighted Global Optical Absorption Aa or Optical Depth. See Miskolczi 2010, Fig 10. This is then repeated for each of the 61 years of available TIGR/NOAA data.
        From your description, I understand GCMs to calculate only simplified absorption, approximating this quantitative level of detail to reduce computational effort.
        3) Do you know of any other publications calculating transmission and absorption to this high LBL level of detail? Have any others reported this 61-year mean, giving atmospheric optical depth tau A = 1.868754 and atmospheric absorption Aa = 0.84568?
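
        As a concrete illustration of what a “Planck-weighted” transmittance and optical depth calculation looks like, here is a minimal sketch, assuming one already has monochromatic optical depths tau_nu on a spectral grid. The grid, the toy absorption band, and the variable names below are illustrative stand-ins, not Miskolczi’s actual data or code (which works line by line over ~150 layers and many viewing angles):

            import numpy as np

            # Planck spectral radiance per cm^-1 (W m^-2 sr^-1 (cm^-1)^-1)
            H, C, KB = 6.626e-34, 2.998e8, 1.381e-23
            def planck_wn(nu_cm, T):
                nu = nu_cm * 100.0                     # cm^-1 -> m^-1
                return 100.0 * 2 * H * C**2 * nu**3 / (np.exp(H * C * nu / (KB * T)) - 1.0)

            # Illustrative monochromatic optical depths: weak background plus a toy "CO2 band"
            nu_grid = np.linspace(100.0, 2500.0, 2400)                        # cm^-1
            tau_nu  = 0.2 + 8.0 * np.exp(-((nu_grid - 667.0) / 30.0) ** 2)

            B = planck_wn(nu_grid, 288.0)                                     # Planck weighting at a 288 K surface
            T_A   = np.trapz(B * np.exp(-tau_nu), nu_grid) / np.trapz(B, nu_grid)  # Planck-weighted flux transmittance
            tau_A = -np.log(T_A)                                              # equivalent grey optical depth
            print(f"T_A = {T_A:.3f}, tau_A = {tau_A:.3f}, A_a = {1.0 - T_A:.3f}")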

        AL: Can Miskolczi calculate the well established 4 W/m2 radiative forcing for doubled CO2, and its equivalent 1.2 C global warming, with his modeling approach?
        DH: Yes, Miskolczi (2010) Sect 2 pg 256-247 addresses that and calculates detailed sensitivities for other parameters – only possible by high resolution LBL calculations:

        “In other words, CO2 doubling would virtually, with no feedback, increase the optical thickness by 0.0246. Calculations here show that an equivalent amount of increase can be caused by 2.77 per cent increase in H2O. . . the dependence of the optical thickness . . . is not feasible to express this by a summary explicit analytical function . . . the spectral overlapping of the absorption bands of the individual absorbers. The dependence of the optical depth on the temperature is also extremely complex and again cannot feasibly be described by an explicit analytical expression. The above dependences can only be diagnosed by using the LBL method for the transmittance computation in conjunction with a realistic properly stratified spherical refractive real (or model) atmosphere which is subjected to temperature and absorber amount perturbations.”

        See also his detailed discussion of sensitivities in Sect. 4 p 259-260.
        4) Do GCMs accurately calculate the full Beer-Lambert law with increasing saturation? See Jeff Glassman below.

        AL: the results of our Science paper where we zeroed out the non-condensing greenhouse gases.
        DH: Thanks for the clarification. How have you addressed the enormous buffer capacity of the ocean with numerous related salts? E.g. see Tom V. Segalstad. Segalstad highlights the anorthite-kaolinite buffer, clay mineral buffers, and the calcium silicate-calcium carbonate buffer, which are at least three orders of magnitude above the ocean’s carbonate solution buffers. CO2 variation is thus highly constrained within “small” dynamic variations over geologically “short” periods.

        Miskolczi finds the atmospheric long wave (LW) transmission and Planck-weighted global optical depth (absorption) are highly stable. This suggests other factors such as solar & cosmic modulation of clouds and ocean oscillations may have much higher primary impact than currently modeled. There may also be major systematic trend errors in the available TIGR data that would contaminate both the GCMs and Miskolczi’s model, and combinations thereof.
        5) We look forward to seeing how well GCMs can reproduce and/or disprove Miskolczi’s results. It will be fascinating to discover the causes of these dramatic differences in these 61 year long term sensitivities based on the best available data.

    • Andy Lacis:
      Validation of the GCM climate modeling performance is in terms of how well the model generated temperature, water vapor, and cloud fields resemble observational data of these quantities, including their spatial and seasonal variability. It would appear that ModelE does a generally credible job in reproducing most aspects of the terrestrial climate system.

      Hi Andy. Does the GISS ModelE GCM successfully reproduce the fall in tropospheric relative humidity since 1948 empirically measured by radiosonde balloons?

      If so, does this help explain the non-appearance of the tropospheric hotspot predicted by earlier, unrealistic models?

      I’m very pleased to see your comment on the ‘best of greenhouse’ thread that your models now take account of oceanic oscillations.

      How much of the warming from 1975 to 2003 is now being attributed to them?

      Thanks.

    • A. Lacis

      “Because of this, there are a number of adjustable coefficients that have to be ‘tuned’ to ensure that the formulation of evaporation, transport, and condensation of water vapor into clouds, and its dependence on wind speed, temperature, relative humidity, etc., will be in close agreement with current climate distributions.”

      This is the major problem I have with climate models. I count 6 inter-related parameters that have to be calibrated, plus an unknown number of unknown parameters (cosmic rays, electromagnetic effects such as lightning, turbulence, droplet size, and ???). That would require n-factorial (I’m not sure) interaction matrices that have to be produced by measurements (which I am sure no one has done, unlike the radiation transfer models). If at any time during the model solution a parameter ends up outside its measured range, the whole calculation falls apart. You can’t project fitted data outside its measured range. I don’t see how such a model can be made reliable and testable. The fact that it can be tuned to mimic a series of measurements doesn’t prove a thing about its ability to produce accurate results in a long term prediction.

      The radiative transfer through the atmosphere is only one part of how energy is transferred and at what rate. Just compare the rate of energy transfer from radiation, a few hundred watts per m^2, to that moved by evaporation and convection. Yes, the clear sky problem is likely OK. Unfortunately that is only a small part of the problem.

  5. Very interesting thread.

    For line-by-line calculations, absorption is computed on a prescribed spectral grid (at every model pressure and temperature level), with the equations of radiative transfer used to calculate upwelling/downwelling radiative fluxes. Most of the absorption arises from molecular transitions, which quantum physics tells us are discrete, giving rise to absorption lines. Despite the discrete nature of molecular absorption and emission, this process is not monochromatic, however (confined to a single wavelength). Rather, absorption for a given transition is strongest at the line center and decays away from the center due to various ‘broadening’ mechanisms, which arise from the Heisenberg uncertainty principle, pressure effects (dominant especially in the low atmosphere), or the motions of the molecules (where absorption is Doppler shifted relative to an observer). The convolution of pressure and Doppler effects gives rise to the so-called Voigt line shape.
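
    The Voigt line shape just mentioned is the convolution of a Lorentzian (pressure broadening) with a Gaussian (Doppler broadening), and a standard way to evaluate it is via the complex Faddeeva function. A minimal sketch (the half-widths below are illustrative placeholders, not values for any particular line):

        import numpy as np
        from scipy.special import wofz   # complex Faddeeva function

        def voigt(x, sigma, gamma):
            """Voigt profile centred at x=0: Gaussian std-dev sigma, Lorentzian HWHM gamma."""
            z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
            return np.real(wofz(z)) / (sigma * np.sqrt(2.0 * np.pi))

        detuning = np.linspace(-1.0, 1.0, 501)          # distance from line centre, cm^-1
        phi = voigt(detuning, sigma=0.02, gamma=0.07)   # illustrative widths; real ones depend on p, T and the line
        print(f"area within +/-1 cm^-1: {np.trapz(phi, detuning):.3f} (normalised to 1 over all frequencies)")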

    There is some absorption, however, that is much more continuous and stronger than is explained by local absorption lines. This is the ‘continuum absorption’, and some mechanisms for this feature, depending on the gas or region of the EM spectrum, are still a matter of uncertainty. It is important in some areas of the H2O and CO2 regimes, and becomes important especially in dense atmospheres where collisions between molecules can allow transitions to occur that are otherwise forbidden. In fact, in some atmospheres (such as Titan or the outer planets), even diatomic molecules like H2 can become very good greenhouse gases due to this process.

    There are a few different approaches to representing the continuum absorption in practical modelling. Even for LBL calculations, several parameter choices must be made concerning the formulation of the continuum, or the sub-Lorentzian (or super-Lorentzian, depending on the gas) absorption features in the wings of the spectral lines. With these choices, good fits can be made between LBL calculations and observational spectra. One problem is that for people interested in climates on, say, early Earth or Mars, the radiative transfer issue even in clear skies is far from resolved.

    Another approximate method involves band models, which are basically fits to transmission spectra generated by LBL calculations. Band averaging groups together many molecular transitions, and there are also other methods such as the correlated k-distribution (which I’m sure Andy Lacis will talk about, given his authorship of the 1991 paper with Oinas). One of the objects of the famous ‘Myhre et al 1998’ study, which gives the ‘5.35*ln(C/Co)’ radiative forcing equation for CO2, was to compare LBL results with previous narrow-band and broad-band models, which have different treatments of solar absorption or the vertical structure of the gas.

    For global warming applications, the strength of an individual greenhouse gas depends upon its distribution and that of temperature, its absorption characteristics, and its concentration. For very low concentrations of greenhouse gases, the absorption is approximately linear with mixing ratio in the air, weakening to a square-root and then eventually to a logarithmic dependence. This is why methane is often quoted as being a ‘stronger greenhouse gas than CO2.’ It’s not an intrinsic property of the gas, but rather a consequence of the fact that methane exists in much lower concentrations and can therefore provide a greater warming potential, molecule for molecule, in the current atmosphere.
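
    The linear-to-square-root-to-logarithmic progression is why the widely used simplified forcing fits take different functional forms for different gases. A sketch in the spirit of the Myhre et al. 1998 expressions (the CH4 formula below omits the N2O band-overlap correction, so treat its output as illustrative only):

        import numpy as np

        def rf_co2(C, C0=280.0):
            """Simplified CO2 forcing (W/m2), logarithmic in concentration (ppm)."""
            return 5.35 * np.log(C / C0)

        def rf_ch4(M, M0=700.0):
            """Simplified CH4 forcing (W/m2), square-root in concentration (ppb); overlap with N2O omitted."""
            return 0.036 * (np.sqrt(M) - np.sqrt(M0))

        print("2xCO2:", round(rf_co2(560.0), 2), "W/m2")                     # ~3.7 W/m2
        print("+1 ppm CO2 at 390 ppm:", round(rf_co2(391.0, 390.0), 4), "W/m2")
        print("+1 ppm CH4 at 1.8 ppm:", round(rf_ch4(2800.0, 1800.0), 3), "W/m2")

    The per-unit comparison in the last two lines is the sense in which methane is the ‘stronger’ greenhouse gas at current concentrations.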

    Without considering feedbacks, the temperature response to a given forcing (such as a doubling of CO2) can be almost purely determined by the radiative forcing for CO2, since the no-feedback sensitivity is merely given by the derivative of the Planck function with respect to temperature. The ‘forcing’ obviously depends on the way forcing is defined, with some authors using varying definitions, and these differences must be kept in mind when comparing literature sources. The IPCC AR4 defines ‘radiative forcing’ as the change in net irradiance *at the tropopause* while allowing the stratosphere to re-acquire radiative equilibrium (which occurs on timescales of months). Once this is done, it is found that the radiative forcing for a doubling of CO2 is ~3.7 W/m2. A comparison of 9 of the 20 GCMs used in the IPCC AR4 shows differences in the net all-sky radiative forcing between 3.39 and 4.06 W/m2. See e.g.,
    http://journals.ametsoc.org/doi/pdf/10.1175/JCLI3974.1

    In contrast, Table 1 of http://www.gfdl.noaa.gov/bibliography/related_files/bjs0601.pdf shows that the Planck feedback response has relatively very little spread amongst models (which also agrees well with simple back-of-the-envelope calculations), indicating a rather robust response for a no-feedback atmosphere. It follows from all of this that most of the uncertainty associated with a doubling of CO2 comes from the various feedback effects, especially low cloud distribution, and these feedbacks involve not just radiative transfer but also dynamics and microphysics.
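
    To put numbers on the ‘derivative of the Planck function’ point, here is a back-of-the-envelope sketch; the 255 K emission temperature and the ~3.2 W m-2 K-1 Planck feedback are the usual textbook values, not outputs of any particular model in the comparison above:

        SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
        T_E   = 255.0        # effective emission temperature of Earth, K
        RF    = 3.7          # radiative forcing for doubled CO2, W m^-2

        lambda_planck = 4.0 * SIGMA * T_E**3                 # ~3.8 W m^-2 K^-1, crude blackbody estimate
        print("dT0 (blackbody estimate):", round(RF / lambda_planck, 2), "K")
        print("dT0 (GCM Planck feedback ~3.2):", round(RF / 3.2, 2), "K")    # ~1.1-1.2 K, the no-feedback response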

  6. steven mosher

    Thanks Judith,

    I think it needs to be dumbed down even further at least for a start.

    RTE, I would hope, would be the one aspect of AGW science that all sides can come to agree upon. One thing I’ve tried (and failed, of course) to convey to my friends on the skeptical side is that RTE is really not up for serious debate. The angle I take is that this physics is actually used to design things that work. That might be an angle some skeptics would appreciate. How RTE get used in engineering. We used to not be able to talk about it (or get shot), but it would make a nice entry point for your readers to invite somebody who uses RTE in their current engineering job.

    So, I would start with observations ( measurements and experiments), then practical applications, then Theory. That fits the particular bent of thinking I see in the skeptical circles.

    • Maybe willis can have a go at this :)

    • randomengineer

      How RTE get used in engineering.

      Happens every day in semiconductor fabs. Automated critical dimension measurement and control equipment is always housed in temp controlled microenvironments.

      It’s also easy to demonstrate. Take any industrial microscope at high magnification and look at something (e.g. a feature of say 40 – 100 nanometers) and place your hand within 4 or so inches from the stage assembly. Your image feature will warp out of focus solely from the heat of your hand. Remove your hand, and focus eventually returns. The heat from your hand affects the microscope stage via RTE.

      Designing automation with this type of equipment requires tight control of the thermal environment for this reason (and probably spells out why it’s automated in the first place.)

      Yeah I know that by “engineering” you probably meant something a little fancier or more esoteric as opposed to everyday stuff… but this is, if nothing else, a simple yet practical demo of RTE knowledge being used in every day engineering.

      I hope I’ve contributed here as opposed to being intrusive.

      • thanks, this is the kind of example that people can relate to

      • steven mosher

        That’s exactly the kind of thing I am referring to.

        communications engineers, engineers in defense (I worked in stealth… broadband stealth), sensor specialists. We all know that RTE are accurate, they work, we rely on them day in and day out. We are not taking crazy pills and neither have we been subverted by some global watermelon conspiracy. For me it’s a threshold question in having discussions with skeptics. If they won’t learn enough to accept this, then we really can’t have a meaningful dialogue. So, I just ask them: can you at least accept physics that has worked for years and protected your freedom of speech?

        When a skeptic uses a chip (his computer chips) to deny a science that makes chips possible, then he’s got a tangible example of the problem with his position.

        So, Judith, how many industrial examples can we come up with?

      • randomengineer

        Since what I described is the human version of a basic apartment radiator I doubt you really need engineering examples. This is why I’d wondered if I was submitting TOO simple an example. Surely nobody really can question RTE physics ?!?

        I’m thinking you can use the temp measurement record to prove the exact same concept *and* prove that GW is real. In fact you and I discussed this (very very briefly) some time back elsewhere. All you have to do is look at the tMin from a handful of desert (no humidity) stations. If tMin rises over time and rh doesn’t, it ain’t water vapour doing it, it’s GHG. There aren’t any other factors. IIRC you said you had the data (I don’t.)

        If I’m wrong about this I’d appreciate knowing why.

        If not, I’d like you to consider this because everyone here seems to be in agreement that it’s the simple irrefutable stuff, like you’re wanting here, that gets the message across. We don’t need juvenile pictures, really, that’s too dumbed down.

      • I should actually go back and revisit that desert study with a full dataset and my improved R skills!

        Thanks.

        On the RTE thing I would just start with the basics of radiative transfer. I got my introduction on the job working with radar cross sections and then moved on to IR work. Just getting people to understand transmission windows and absorption and scattering and reflection would be a good start.
        We also did cool things to enable short range covert communications because of windows, or the lack of windows… to be more precise.

      • Yes Steve, you do need to understand reflection.

      • I have absolutely no problem with understanding how this works planet-wide.
        It is the actual physical changes that the planet is displaying, blamed on global warming, that I have a problem with.
        The evidence is far closer to pressure build-up than planetary warming. But again, that is not CO2 analysis, which is totally tied to temperature and not to physical changes.
        Ocean water does have defences it has built to prevent overheating and mass evaporation.

      • >Surely nobody really can question RTE physics ?!?<

        Well, no actually, but if it pushes the website closer to:

        >It follows from all of this that most of the uncertainty associated with a doubling of CO2 comes from the various feedback effects, especially low cloud distribution, and these feedbacks involve not just radiative transfer but also dynamics and microphysics.< [Colose, December 5th above]

        then it's worth its space here. I vaguely remember Judith promising a thread on feedbacks sometime

        I’m still interested in Dessler’s use of the equation:

        DTf = 1.2C/(1-f)

        where f = sum of feedbacks (both signs)

      • ian, what are you interested in? I could probably help you

      • Thanks Chris

        I put up a post on Dessler’s use of that equation some threads ago, but no takers then so I’m most happy for you to answer here

        So, DTf (sensitivity) = 1.2C/(1-f)

        1) 1.2C is Dessler’s preferred temperature rise from purely CO2 x 2 – I’ve found the range reported as 0.8C to 1.2C – how correct is this range?

        2) Dessler included four (4) feedback elements to sum to “f” –
        f(wv) H2O vapour = +0.6
        f(lr) lapse rate = -0.3
        f(ia) ice albedo = +0.1
        f(cl) clouds = +0.15
        (are there more than 4?)

        so “f” (summed) = +0.55

        so DTf = 1.2/(1-0.55) = 2.7

        but if 0.8C is used, DTf = 0.8/(1-0.55) ≈ 1.8, and roughly a 50% further increase in sensitivity would be required to reach 2.7

        and, what bothers me most:

        why cannot “f” (summed) = 1.0, in which case DTf is infinite (nonsensical)? Is there something wrong with the equation as laid out by Dessler?

        Last point: Dessler claimed (October 2010, MIT) that the various f values above are observed data. By nature and very long experience, I am more inclined to observed data than theory, no matter how elegant, so my obvious questions are on the reliability, range and methods for these observed data

        If you could kindly work through these questions, I am very interested

      • Ian,

        1) The temperature value of “1.2 C” is determined by the product of two things. First, it is dependent on the radiative forcing (RF) for CO2 (or the net change in irradiance at the TOA, or tropopause, depending on definition). It’s also dependent on the magnitude of the no-feedback sensitivity parameter (call it λo to be consistent with several literature sources). Therefore, dTo = RF*λo. So we can re-write the Dessler equation (although this goes back a while; Hansen, 1984 is one of the earliest papers I know of to talk about things in this context, and Lindzen has some other early ones) as dT = (RF*λo)/(1-f). (I will use the “o” when we are referring to no feedbacks.) It turns out virtually all uncertainty in dTo is due to the radiative forcing for CO2 (~+/- 10%; the best method to calculate this is from line-by-line radiative transfer codes, which give ~3.7 W/m2). The uncertainty in the no-feedback sensitivity is considerably less. The 1/(1-f) factor is really at the heart of the sensitivity problem (a small numerical sketch follows this reply).

        2) I’m not aware of any other radiative feedbacks than these

        3) The f–> 1 limit is a good question, and a large point of confusion, even for people who work on feedbacks. You are right that it is nonsensical for dT to go to infinity (or negative infinity if the forcing is causing a cooling) but there’s a couple things to keep in mind. First, the equation you cite is strictly an approximation. It neglects higher order derivative terms (think Taylor series) and thus one can add second order and third order powers to the equation, and solving for the new dT with these new terms will leave, say, a quadratic expression that needn’t blow up to infinity.

        Physically, f=1 does not necessitate a runaway snowball or runaway greenhouse, nor is ‘f’ a constant that stays put across all climate regimes. Rather, it corresponds to a local bifurcation point, and it is fully possible to equilibrate to a new position past the bifurcation (although you need further information than the linear Dessler equation to determine where). Think of a situation where you are transitioning into a snowball Earth, and the ice line reaches a critical (low) latitude that makes the ice-albedo feedback self-sustaining. In this case one needn’t create any new ‘forcing’ to complete the snowball transition. Rather, the previous state (with semi-covered ice) was unstable with respect to the perturbation, and the snowball Earth is a stable solution for the current solar and greenhouse parameters. Thus, just a little nudge in one of these forcings can tip the system past a bifurcation point, to end up in a new stable (say, ice-covered) regime. But once the planet is ice-covered, or ice-free, you can’t get any more ice-albedo feedback, so the temperature won’t change forever.

        4) On the observational side, the water vapor and lapse rate feedbacks are well diagnosed to be positive and negative, respectively. They are usually combined by experts in this area to be a WV+LR feedback, since they interplay with each other. Brian Soden or Isaac Held (along with Dessler, and others if you follow references around…the AR4 is a good start) have many publications on this. I’m not personally familiar with how well quantified the ice-albedo feedback at the surface is observationally (maybe an ice expert here can chime in). The sign is robust, but in any case as Dessler noted, it’s a pretty small feedback. It is important for the Arctic locally, but not very big on a global scale. There’s a lot of papers talking about the ice-albedo feedback though and its role especially in sea ice loss.

        Clouds are the big uncertainty, especially low clouds. The longwave component of cloud feedbacks is pretty much always positive in models, and pretty good theories have been put out to explain this (Dennis Hartmann especially), so the shortwave (albedo) component is the big player here. I’m not in a great position to talk about every new update to cloud observations, but I don’t really know how much value they have right now for diagnosing climate sensitivity. Even more so, we don’t know the 20th century radiative forcing with high confidence, mostly because of aerosols.
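
        To make points 1 and 3 above concrete, here is a minimal numerical sketch of dT = (RF*λo)/(1-f) using the feedback values quoted earlier in this exchange, together with a toy quadratic term illustrating why the response need not diverge as f approaches 1 (the coefficient c is purely illustrative):

            import numpy as np

            dT0 = 1.2                             # no-feedback response quoted above, K
            f   = 0.6 - 0.3 + 0.1 + 0.15          # water vapour + lapse rate + ice albedo + clouds = 0.55
            print("linear estimate:", round(dT0 / (1.0 - f), 2), "K")        # ~2.7 K

            def dT_quadratic(f, dT0=1.2, c=-0.05):
                # solves dT = dT0 + f*dT + c*dT**2, taking the root that tends to dT0/(1-f) as c -> 0
                return 2.0 * dT0 / ((1.0 - f) + np.sqrt((1.0 - f) ** 2 - 4.0 * c * dT0))

            for fv in (0.55, 0.9, 1.0):
                linear = "inf" if fv >= 1.0 else round(dT0 / (1.0 - fv), 1)
                print(f"f = {fv}: linearised -> {linear} K, with quadratic term -> {dT_quadratic(fv):.1f} K")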

      • Thank you for your reply – it’s sufficient for me to plough through for now

        As for f >= 1, I had suspected

        >First, the equation you cite is strictly an approximation<

        in any case. I rather thought the "=" in the equation should have been "proportional to" and we were missing some other factor(s)

      • David L. Hagen

        Chris Colose
        Thanks for the comments. Re: “Physically, f=1 does not necessitate a runaway snowball or runaway greenhouse, nor is ‘f’ a constant that stays put across all climate regimes. Rather, it corresponds to a local bifurcation point, and it is fully possible to equilibrate to a new position past the bifurcation (although you need further information than the linear Dessler equation to determine where).”

        Essenhigh explored the potential for a bifurcation point with increasing CO2. He concluded:

        More specifically, the outcome of the analysis does not support the concept of “forcing” or precipitation of bifurcation behavior because of increased CO2. Rather, although the evidence is clear that global warming is currently occurring as discussed elsewhere,37 it would appear, nevertheless, that it is not the rising carbon dioxide concentration that is driving up the temperature but, evidently, that it is the rising temperature that is driving up the carbon dioxide by reducing the temperature-dependent saturation level in the sea and thus increasing the natural emissions

        Energy Fuels, 2006, 20 (3), 1057-1067 • DOI: 10.1021/ef050276y

        Can you refer to any studies that do find a bifurcation point from increasing CO2?

      • By nature and very long experience, I am more inclined to observed data than theory, no matter how elegant,

        Funnily enough I’m completely that way myself when it comes to climate science, unlike computer science where I’m the total theorist.

        I think it must be because in a computer we can account for every last electron, whereas the climate is subject to many effects completely beyond our ability to see, such as magma sloshing around in the mantle, inscrutable ocean isotherms, rate of onset of permafrost melting radically changing the feedbacks, etc.

        Digital climate models have something to offer, but so do analog models. The mother of all analog models of climate is mother nature herself.

        We can learn a lot from closer studies of that ultimate analog model!

        (Apologies to all the pessimists here for my being so upbeat about that.)

      • Spacing went goofy on that post. I’d written:

        >>Surely nobody really can question RTE physics ?!?<

        Well, no actually, but if it pushes the website closer to:
        [Colose quote]

        then it's worth its space here … etc

      • Hi RandomE
        Is the following the type of study you’re looking for?

        “The arid environment of New Mexico is examined in an attempt to correlate increases in atmospheric CO2 with an increase in greenhouse effect. Changes in the greenhouse effect are estimated by using the ratio of the recorded annual high temperatures to the recorded annual low temperatures as a measure of heat retained (i.e. thermal inertia, TI). It is shown that the metric TI increases if a rise in mean temperature is due to heat retention (greenhouse) and decreases if due to heat gain (solar flux). Essentially no correlation was found between the assumed CO2 atmospheric concentrations and the observed greenhouse changes, whereas there was a strong correlation between TI and precipitation. Further it is shown that periods of increase in the mean temperature correspond to heat gain, not heat retention. It is concluded that either the assumed CO2 concentrations are incorrect or that they have no measurable greenhouse effect in these data.”

        The above is a paper by Slade Barker (24 July 2002)
        “A New Metric to Detect CO2 Greenhouse Effect
        Applied To Some New Mexico Weather Data”

        Here is the link http://www.john-daly.com/barker/index.htm

        hope this helps

        p.s. You say…”I’m thinking you can use the temp measurement record to prove the exact same concept *and* prove that GW is real.”

        Have I misunderstood that sentence, or is that what’s called confirmation bias?

        regards

      • randomengineer

        Is the following the type of study you’re looking for?

        No, I don’t think so, but an interesting find regardless. Barker appears to be doing something else entirely.

        However… thank you, anyway.

        In what I spoke to Mosher about, only the min temps are needed, and these need to be looked at from a variety of ultra-low-humidity stations around the world. Essentially the idea is to deal with only the coldest temps in super arid conditions, which would hopefully minimize the effect of water vapor. Rising min temps (which are expected, and from what I gather, what’s observed) would be attributable to GHG — I think, anyway. Maybe one of the experts here can say yea or nay before Mosh starts mucking about with data.

        Have I misunderstood that sentence or is that whats called confirmation bias?

        You understood it and no, it’s not confirmation bias. AGW is already proven. The goal is to help prove it to deniers.

        You’re well aware climate stuff is a political battle. Bad guys (and no, they’re not climate scientists) do exist, largely in the form of political interests, and the goal is to keep them from silly things like ruining the economy. Whether you presume ruination to be in the form of (left) socialist creep or (right) corporate sellout is none of my business, but regardless of which direction ruination approaches from, they’re going to be wielding knowledge as their weapon of choice. Read your Sun Tzu: you can choose to embrace the weapon yourself, or you can get clubbed with it. Denying the existence of the weapon guarantees the latter outcome.

      • Richard S Courtney

        Randomengineer:

        You say:
        “AGW is already proven.”

        Really? By whom, where and how?

        Or do you mean effects of UHI and land use changes?

        I and many others (including the IPCC) would be very grateful for your proof of AGW.

        Richard

      • randomengineer

        Or do you mean effects of UHI and land use changes?

        All of the above. Soot, pollution, land use, and yes, CO2 all play a role. Call it climate change if you prefer. 6 Billion souls with fire and technology and agriculture would be hard pressed to NOT change the environment.

        There’s plenty of room to be skeptical of magnitude and/or whether or not there’s a *problem* while still accepting the reality of the basic physics. For all we know the low feedback guys are correct and the effect of CO2 is minimal. That’s a great deal different however than claiming physics doesn’t work or that CO2 has no effect at all.

        It’s like Mosher says; let’s get past the silly unwinnable physics arguments and move on to what’s important.

      • Richard S Courtney

        Randomengineer:

        Please accept my sincere thanks for your good and clear answer to my question. Your response invites useful discussion, and it is a stark contrast to the typical response from AGW-supporters to such questions.

        As you say;
        “There’s plenty of room to be skeptical of magnitude and/or whether or not there’s a *problem* while still accepting the reality of the basic physics. For all we know the low feedback guys are correct and the effect of CO2 is minimal. That’s a great deal different however than claiming physics doesn’t work or that CO2 has no effect at all.”

        Absolutely right!
        If only more people would adopt this rational position then most of the unproductive ‘climate war’ could be avoided.

        The important issue to be resolved is whether or not AGW is likely to be a serious problem. You, Judith Curry and some others think that is likely, while I and a different group of others think it is very unlikely. Time will reveal which of us is right and to what degree because GHG emissions will continue to rise if only as a result of inevitable industrialisation in China, India, Brazil, etc..

        Without getting side-tracked into the importance of the land-use issues that interest Pielke, the real matters for discussion in the context of AGW are
        (a) to what degree anthropogenic GHG emissions contribute to atmospheric CO2 concentration
        and
        (b) how the total climate system responds to increased radiative forcing from e.g. a doubling of atmospheric CO2 concentration equivalent.

        So, in my opinion, the discussion in this thread needs to assess the possible effects on the climate system of changes to atmospheric GHG concentrations. And, as you say, this leads directly to the issue of climate sensitivity magnitude.

        I think most people would agree that doubling atmospheric CO2 concentration must have a direct warming effect that – of itself – implies a maximum of about 1.2 deg.C increase to mean global temperature (MGT). The matter to be resolved is how the climate responds to that warming effect; i.e. is the net resultant feedback positive or negative?

        If the net feedback is negative (as implied by Lindzen&Choi, Idso, Douglas, etc.) then those on my ‘side’ (please forgive the shorthand) of the debate are right because a maximum climate sensitivity of 1.2 deg.C for a doubling of CO2 would not provide a problem. But if your ‘side’ of the debate is right that the net feedback is positive then there could be a significant future problem.

        So, the behaviour of the climate system (in terms of changes to lapse rates, clouds and the hydrological cycle, etc.) needs to be debated with a view to discerning how these responses can be understood. And that is the debate I think we should be having.

        Again, thank you for your answer that I genuinely appreciate.

        Richard

      • randomengineer

        I’m glad we can talk, Richard. It beats name calling and such.

        So, in my opinion, the discussion in this thread needs to assess the possible effects on the climate system of changes to atmospheric GHG concentrations.

        You’re getting ahead of things just a bit. Let’s continue this on the next thread or so when our host starts discussing that part of things; I think this is planned already.

        For now, let’s concentrate on the topic du jour: we pretty much agree that humans change their environment in myriad ways. We know CO2 is a GHG and we know why. We also know that we put a lot of CO2 in the air, which according to basic equations ought to result in some warming. Whether this is a fraction of a degree or more isn’t relevant at this stage. What’s relevant is just knowing that the understanding is basically correct.

        Now, don’t tell your friends, but you’re in the “AGW is real” camp, same as me. That doesn’t mean we’re going to start campaigning for polar bear salvation or writing letters to the editor about imminent catastrophe. Dealing with reality and succumbing to fanaticism are different things.

        Is this a fair summary?

        If so, welcome to the dark side. We have cookies.

      • Richard S Courtney

        Randomengineer:

        I, too, am glad that we can talk. Indeed, I fail to understand why so many of your compatriots think name calling is appropriate interaction, especially when I have always found dialogue is useful to mutual understanding (although it usually fails to achieve agreement).

        You ask me if you have correctly summarised my position, and my answer is yes and no.

        My position is – and always has been – that it is self-evident that humans affect climate (e.g. it is warmer in cities than the surrounding countryside) but it seems extremely unlikely that GHG-induced AGW could have a discernible effect (e.g. the Earth has had a bi-stable climate for 2.5 billion years despite radiative forcing from the Sun having increased ~30% but the AGW conjecture is that 0.4% increase of radiative forcing from a doubling of CO2 would overcome this stability).

        So, I agree with you when you say,
        “we pretty much agree that humans change their environment in myriad ways. We know CO2 is a GHG and we know why. We also know that we put a lot of CO2 in the air, which according to basic equations ought to result in some warming.”

        But we part company when you say,
        “Whether this is a fraction of a degree or more isn’t relevant at this stage. What’s relevant is just knowing that the understanding is basically correct.”

        I think not. As I see it, at this stage what we need to know – but do not – is why the Earth’s climate seems to have been constrained to two narrow temperature bands despite immense tectonic changes and over the geological ages since the Earth’s atmosphere became oxygen-rich. This has to have been a result of the global climate system’s response to altered radiative forcing. So, as I said, we need to debate the response of the climate system to altered radiative forcing in terms of changes to lapse rates, clouds, the hydrological cycle, etc..

        Please note that I am on record as having repeatedly stated this view for decades.

        Hence, I am not and never have been on “the dark side”. But if adequate evidence were presented to me then I would alter my view. As the saying goes, if the evidence changes then I change my view.

        To date I have seen no evidence that warrants a change to my view. All I have seen are climate models that are so wrong I have published a refereed refutation of them, assertions of dead polar bears etc., ‘projections’ of future catastrophe that are as credible as astrology (I have published a refereed critique of the SRES), and personal lies and insults posted about me over the web because I do not buy into the catastrophism. And the fact of those attacks on me convinces me that everything said by the catastrophists should be distrusted.

        So, give me evidence that climate sensitivity is governed by feedbacks that are positive and not so strongly negative that they have maintained the observed bi-stability over geological ages despite ~30% increase in solar radiative forcing. At present I can see no reason of any kind to dispute the null hypothesis; viz. nothing has yet been observed which indicates recent climate changes have any cause(s) other than the cause(s) of similar previous climate changes in the Holocene.

        Richard

      • randomengineer

        Richard, I think I missed something.

        I’d said : “We also know that we put a lot of CO2 in the air, which according to basic equations ought to result in some warming. Whether this is a fraction of a degree or more isn’t relevant at this stage. “

        This isn’t a conclusion of rampant runaway warming, but a statement that merely adding GHG with all things being equal ought to raise the temp.

        Your position then switches to discussion of paleoclimate, and as I read things I think what you’re saying ultimately is that the GHGs are naturally suppressed such that the increase of GHG doesn’t cause warming due to damping.

        I don’t see a problem with asserting a natural state in real life working against a temp rise, but I find it somewhat unconvincing that the natural state that damps a temp rise is damping a temp rise that doesn’t happen in the first place.

        It seems to me that the logical position is that yes there SHOULD be a temp rise, BUT the temp rise isn’t happening due to [unspecified factors.]

        So I’m a bit confused regarding the position. Let me ask this then: should there be a temp rise with adding CO2 that doesn’t happen for [some reasons] or is adding CO2 something that never results in a temp rise?

      • Richard S Courtney

        Randomengineer:

        I apologise for my lack of clarity. It was not deliberate.

        I attempted – and I clearly failed – to state what I think to be where we agree and where we disagree. It seems that I gave an impression that I was trying to change the subject, and if I gave that erroneous impression then my language was a serious mistake.

        So, I will try to get us back to where we were.

        Please understand that I completely and unreservedly agree with you when you assert:

        “I’d said : “We also know that we put a lot of CO2 in the air, which according to basic equations ought to result in some warming. Whether this is a fraction of a degree or more isn’t relevant at this stage. “

        This isn’t a conclusion of rampant runaway warming, but a statement that merely adding GHG with all things being equal ought to raise the temp.”

        Yes, I agree with that. Indeed, I thought I had made this agreement clear, but it is now obvious that I was mistaken in that thought.

        And I was not trying to change the subject when I then explained why I think this matter on which we agree is of no consequence. The point of our departure is your statement – which, I stress, I agree with – saying,
        “merely adding GHG with all things being equal ought to raise the temp.”

        But, importantly, I do not think “all things being equal” is a valid consideration. When the temperature rises nothing remains “equal” because the temperature rise induces everything else to change. And it is the net result of all those changes that matters.

        As I said, a ~30% increase to radiative forcing from the Sun (over the 2.5 billion years since the Earth has had an oxygen-rich atmosphere) has had no discernible effect. The Earth has had liquid water on its surface throughout that time, but if radiative forcing had a direct effect on temperature the oceans would have boiled to steam long ago.

        So, it is an empirical fact that “merely adding GHG with all things being equal ought to raise the temp.” is meaningless because we know nothing will be equal: the climate system is seen to have adjusted to maintain global temperature within two narrow bands of temperature while radiative forcing increased by ~30%.

        Doubling atmospheric CO2 concentration will increase radiative forcing by ~0.4%. Knowing that 30% increase has had no discernible effect, I fail to understand why ~0.4% increase will have a discernible effect.

        I hope the above clarifies my view. But in an attempt to show I am genuinely trying to have a dialogue of the hearing, I will provide a specific answer to your concluding question, which was:

        “So I’m a bit confused regarding the position. Let me ask this then: should there be a temp rise with adding CO2 that doesn’t happen for [some reasons] or is adding CO2 something that never results in a temp rise?”

        Anything that increases radiative forcing (including additional atmospheric CO2 concentration) will induce a global temperature rise. But the empirical evidence indicates that the climate system responds to negate that rise. However, we have no method to determine the response time. Observation of temperature changes following a volcanic cooling event suggests that the response time is likely to be less than two years. If the response time is that short then we will never obtain a discernible (n.b. discernible) temperature rise from elevated atmospheric CO2. But if the response time is much longer than that then we would see a temperature rise before the climate system reacts to negate the temperature rise. And this is why I keep saying we need to determine the alterations to clouds, the hydrological cycle, lapse rates, etc. in response to increased radiative forcing. We need to know how the system responds and at what rate.

        Alternatively, of course, I may be completely wrong. The truth of that will become clear in future if atmospheric CO2 concentration continues to rise. (All scientists should remember Cromwell’s plea, “I beg ye in the bowels of Christ to consider that ye may be wrong”.)

        In conclusion, I repeat my gratitude for this conversation. Disagreement without being disagreeable is both pleasurable and productive.

        Richard

      • randomengineer

        All clear now, thanks. Summary: your position seems to be that RTE and GHE work as advertised (in lab conditions, anyway) but where it concerns the real world, there are factors changing the general rule:

        But, importantly, I do not think “all things being equal” is a valid consideration.

        …which I think is fair enough.

        I can further condense this.

        Napoleon was fond of the aphorism (stolen from a Roman general) — “no battle plan ever survives contact with the enemy.” There’s nothing wrong with noting that history is replete with examples of being utterly wrong and basing one’s starting position on this.

        I’m happy to see that where we part ways is where things get murky rather than where things are clearly visible via lab experiment and the like. I’ll see you on the next GHG forcing thread. Bring your A game. :-)

        Cheers.

      • This is in reply to your comment below.

        Napoleon? Unnamed Roman General? Build railroads son! Try von Moltke.

      • randomengineer, I’m one of those sticklers for protocol and, expressing the sentiment of Australian poet C. J. Dennis’s Sentimental Bloke, worry when people are “not intrajuiced” properly. I therefore wanted to introduce Mr Courtney to you, but I regret to say that googling your pseudonym turned up nothing useful.

        So I’m sorry but I can only help with the other direction. If you google “Richard S. Courtney” you can best meet the good gentleman by skipping over the first few pages and going straight to page 5.

        I hasten to point out the risk of confusing him with another Richard S. Courtney, of the Kutztown University Department of Geography in Pennsylvania. The former appears as the 20th name on the list of more than 100 scientists rebuking Obama as ‘simply incorrect’ on global warming. Unlike the latter Courtney, the former is listed among those 100 distinguished scientists as

        “Richard S. Courtney, Ph.D, Reviewer, Intergovernmental Panel On Climate Change”

        So you should realize you are dealing here with someone who knows what he’s talking about. To my knowledge no one has invited the Pennsylvania professor to serve as a reviewer for the IPCC.

        One way to keep these two gentlemen straight is that the former has a Diploma in Philosophy (presumably what the Ph.D. stands for) from some (thus far unnamed) institution in the UK city of Cambridge. In the normal scheme of things a Diploma in Philosophy would constitute progress towards a degree in divinity, while the Pennsylvania professor is a Doctor of Philosophy from Ohio State University, whose 1993 dissertation is mysteriously titled, “A Spatio-temporo-hierarchical Model of Urban System Population Dynamics: A Case Study of the Upper Midwest.” In his dissertation he employs Casetti’s Expansion Method to redefine a rank-size model into a 27-parameter equation capable of identifying spatial, temporal, and hierarchical dimensions of population redistribution that he uses to study the urban system of the Upper Midwest.

        All sounds like complete gibberish to me so I suggest you stick with the divine Mr Courtney, distinguished reviewer for the IPCC with a Diploma in Philosophy, and not let his namesake distract you.

        As I’ve never met either in person I’m afraid I have no other way of distinguishing them so you’re on your own there.

      • Willis Eschenbach

        randomengineer, you say:

        All you have to do is look at the tMin from a handful of desert (no humidity) stations. If tMin rises over time and rh doesn’t, it ain’t water vapour doing it, it’s GHG.

        You should turn in your engineer’s badge for this whopper. You are telling us that there is only one thing on the entire planet that affects minimum desert temperatures — GHGs. I don’t think even you believe that.

        You say “If I’m wrong about this I’d appreciate knowing why.” You are wrong because there isn’t a single place on earth where the climate is only affected by one single factor. Every part of the planet is affected by a variety of feedbacks and forcings and teleconnections. The desert may warm, for example, from an El Niño … and that fact alone is enough to destroy your theory that the desert temperatures are only affected by one isolated factor, GHGs.

      • randomengineer

        Rising over time = 120 years. An el nino isn’t going to change this. It certainly isn’t going to change this from a variety of stations from around the planet. Desert = low humidity. Antarctica counts.

        Yes there are always things that will affect temps, but factor out LOCAL humidity effects (over time in a desert) and you will see a better picture of the local temps unaffected by humidity.

        Land stations otherwise suffer from land use change; are they really reflecting a global temp or are they a proxy for land use? Rural stations will show temp increase once irrigation is used.

        Engineers do their best to control for the one variable they are interested in seeing. In my case, looking at temp without local land use interference. Factor out local humidity.

        If temp rises over time sure maybe there’s more water vapour (potent GHG) globally but this can be factored in/out via plenty of other studies. If temp rises and GLOBAL water vapour is factored out then what’s left is mostly effects from remaining GHG.

        I’m a “believer” and skeptic. A lukewarmer. I get physics. I’m skeptical that the alarmists are correct. My guess is what the results from the desert/low humidity study will show is ~0.2 degrees over the 120 years attributable to CO2. Others may expect higher. I don’t.

      • Rising over time = 120 years. An el nino isn’t going to change this.

        I was going to point out the same thing but you beat me to it.

        It’s even true of the Atlantic Multidecadal Oscillation, which is 10-20 times slower than El Nino events and episodes (with the variability being entirely in the latter) and therefore sneaks up on you, unlike ENNO and ENSO, but in a major way. So Willis could ding you on that one too except that I found a way to iron out the AMO as well.

        What you do is use exactly a 65.5 year moving average, and that minimizes the r² when fitting the composite Arrhenius-Hofmann Law, AHL, to the HADCRUT record.

        One might expect the r² to keep shrinking with yet wider smoothing, but instead it bottoms out at 65.5 years and starts climbing again.

        (Caveat: that level of smoothing impacts the CO2 rise itself a small amount, easily corrected for by applying it to both the HADCRUT data and the AHL when fitting the latter to the former to get the climate sensitivity. It doesn’t make the AHL any smoother, just distorts it in the same way it distorts the record.)

      • Stephen:

        Atmospheric RTE’s based on MODTRAN deal with relatively low levels of CO2 (if I’m not mistaken, in the order of 100 to 200 bar.cm for CO2). Combustion engineering deals with levels that can get much higher. The graph at google books here:

        http://tinyurl.com/2cgg6p6

        Page 618,

        does not fully reconcile with the MODTRAN reconstruction. Leckner’s curves for emissivity peak at a level of CO2 and the MODTRAN work seems to increase forever.

        So I’ll throw it back to you. Where is Leckner’s mistake?

        Cheers

        JE

      • Leckner’s curves for emissivity peak at a level of CO2 and the MODTRAN work seems to increase forever.

        So I’ll throw it back to you. Where is Leckner’s mistake?

        MODTRAN doesn’t increase forever. The upper limit is the emissivity of a black body at the same temperature when absorptivity = emissivity = 1. You can see that at the center of the CO2 band in atmospheric spectra which has an effective brightness temperature of about 220 K. I’ve calculated total emissivity for CO2 using SpectralCalc ( http://www.spectralcalc.com ) and the results agree quite well with both Leckner and MODTRAN. You just have to make sure you’re using the same mass path. The CO2 mass path for the atmosphere is ~300 bar.cm. The main difference between atmospheric emission using MODTRAN and using the Leckner model is that the atmosphere isn’t isothermal.

  7. Michael Larkin

    Sounds good to me, Steven, even though I’m not wholly sceptical in this area. I just wanna understand the basics so that I can start keying in better to the kinds of things being said earlier in the thread. If Willis felt tempted to have a bash, so much the better!

  8. This is a fascinating thread, including its historical elements. Among the various informative links, one that caught my eye was the first of the Science of Doom links provided by Dr. Curry, which references the following article by Dessler et al and reproduces a figure from that article showing the tight correlation between modeled and observed OLR values as a function of atmospheric temperature and water vapor –
    Dessler et al

    Because the radiative properties of water vapor are critical to an understanding of both greenhouse effects per se and positive feedbacks estimated from warming-mediated increases in water vapor overall and in critical regions of the upper troposphere, the concordance between predicted and observed relationships linking water vapor to OLR struck me as worthy of special notice.

  9. Four points:

    1: For an excellent layman’s introduction to radiative heat transfer in the atmosphere, I recommend the scienceofdoom blog.
    2: A question for Steve Mosher or Willis E et al. If one increases the partial pressure of CO2 in the atmosphere (300 ppm, 400ppm, etc), will the absorbance of EMR continue to increase up to the point where there is so much CO2 that dry ice forms?
    3: What is the difference between HITRAN and MODTRAN? (may not be relevant).
    4: In blast furnace calcs, etc., we use Leckner’s curves. These provide a graph of delta q versus [CO2] that is fairly similar to the IPCC version (5.35 ln[CO2]), but these do not continue exponentially beyond 200 ppm CO2. Why does climate atmospheric absorbance differ from engineering atmospheric absorbance? I realize there is a temperature and pressure gradient, but the explanations I’ve seen do not fully explain this disconnect. See “the path length approximation” post on my blog. Please feel free to tear it apart.

    • steven mosher

      I’ll take #3.

      HITRAN is the database.
      http://www.cfa.harvard.edu/hitran/

      “HITRAN is a database, not a “simulation” code. It provides a suite of parameters that are used as input to various modeling codes, either very high-resolution line-by-line codes, or moderate spectral resolution band-model codes. These codes are based on the Lambert-Beers law of attenuation and may include many more features of modeling, for example line shape, scattering, continuum absorption, atmospheric constituent profiles, etc. ”

      MODTRAN (and LOWTRAN) is a simulation code. It’s quick and dirty. You might use it to estimate, for example, the IR signature a plane would present to a sensor on the ground. Like this
      http://en.wikipedia.org/wiki/Northrop_YF-23 which we optimized for stealthiness using RTE.
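
      Since the HITRAN description above invokes the Lambert-Beers (Beer-Lambert) law, a toy sketch may help connect it to the band-saturation questions raised earlier in the thread: attenuation is exponential wavenumber by wavenumber, and the slower-than-linear growth of band absorption appears only after averaging across the band. The cross-sections and column amounts below are made-up round numbers, not HITRAN values:

          import numpy as np

          nu    = np.linspace(600.0, 750.0, 301)                        # cm^-1, around the 15 um CO2 band
          sigma = 1e-22 + 1e-18 * np.exp(-((nu - 667.0) / 15.0) ** 2)   # toy cross-sections, cm^2 per molecule

          N_AIR = 2.1e25                                                # rough total air column, molecules cm^-2
          for ppm in (300, 400, 600, 1200):
              u = ppm * 1e-6 * N_AIR                                    # CO2 column, molecules cm^-2
              band_absorption = 1.0 - np.mean(np.exp(-sigma * u))       # Beer-Lambert per wavenumber, then band average
              print(ppm, "ppm: band-average absorption =", round(band_absorption, 3))

      The band centre saturates quickly, so further increases come mostly from the wings, and the band-average absorption grows much more slowly than the concentration does; that is the qualitative origin of the square-root and logarithmic regimes discussed earlier in the thread.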

  10. Michael Larkin

    Going back to my sixth-form physics over forty years ago (just scraped a bare pass at A-level! :-)), I remember the two types of spectra: absorption and emission.

    WRT absorption spectra, IIRC, then when you shine full-frequency light through a gas, and at the other side analyse the emerging light using a prism, you find that, depending on what the gas was, there are black lines in the spectrum. What has happened is that certain photons of a particular frequency/wavelength have been absorbed by the gas, kicking electrons of its constituent atoms into higher-energy states. So those photons don’t get through the gas, accounting for the black lines. Where you get the lines is characteristic for a given gas. Hope I’m right so far.

    WRT emission spectra, you get these when you heat elements up. The extra energy input causes emission of photons of specific frequencies/wavelengths, and so what you get in the spectrum is mostly black, with bright lines due to these extra photons. The pattern you get is characteristic for a given element. I think I recall this is the way that one can determine the elements in distant stars. I also remember Doppler effects when an emitting source is in motion, which shifts the location of the lines and enables estimates of velocity to be made.

    When people talk about CO2 “trapping” energy, I have this idea of it being selectively absorptive of specific frequencies of electromagnetic radiation, and that that can be observed spectroscopically. I’m assuming absorption spectra apply here. But it’s a bit confused because an excited electron won’t stay excited forever and may drop back to a lower quantum energy, re-emitting the photon it received (at the same frequency/wavelength?).

    And that radiation may in its turn excite some other CO2 molecule, and so on. We can’t talk of a single photon fighting its way through countless CO2 molecules before it hits the ground or alternatively finds its way into space. To me, it seems like what makes it through is the energy that that single photon represents; there might have been millions of re-emissions of (same-energy?) photons in the interim.

    I get the impression that this relates to the term “optical density”, and intuitively, I’d guess that the greater the optical density, the more CO2 molecules per unit volume, and the more delay there is in the system. Moreover, it seems to be the delay which accounts for the rise in temperature of the atmosphere. So the chain of logic seems to be: more CO2/unit volume => greater optical density => greater delay in photons escaping the atmosphere => increase in temperature of the atmosphere.

    I know there are other things involved, too. Such as conduction (apparently not a big factor?), convection, reflection and refraction, and so on.

    Looking at the Wikipedia article, it is talking about spectral lines which I’m kind of guessing are for absorption spectra, and it’s also talking about “monochromatic” and “line-by-line”. So I’m getting this idea of cycling through different frequencies (each one being a “chrome” or colour, I suppose, though I realise we may not be talking about visible light, e.g. infra-red or ultra-violet) and in some way picking up the lines for all the different constituents of the atmosphere.

    I’m laying all this bare even if I might have it hopelessly wrong just so some kind soul can perceive how I’m thinking and intervene where it’s necessary, at the right sort of level for me, which I think will be about the same level for anyone with (somewhat sub-par) A-level knowledge of physics (for Americans, A-levels are qualifications you need to get into university; so substitute whatever qualifications you guys need).

    I’m trying to focus on a level that isn’t too high, but then again, not too low. All this talk of panes of glass or blankets is too low, and hopefully I will have indicated what is too high, although I definitely want to go higher if possible :-)

    • Long question, so I’ll just answer a bite-sized part. Absorption and emission as applied to the atmosphere.
      When you look up in clear sky in the IR you see emission lines of CO2. This is because the CO2 is warmer than the background, which is cold space (simplified a bit), so the CO2 emission is larger (brighter) than the background.
      If you look down from a satellite in the IR you see absorption lines of CO2. This is because the background blackbody radiation from the ground is warmer than the atmosphere, but in the CO2 bands you see only the last emission, which comes from higher, colder parts of the atmosphere, so it appears as a dark line in the spectrum.

      • Michael Larkin

        Thank you, Jim D.

        I found that a valuable comment; I hadn’t realised that we would be talking about absorption and emission spectra according to whether we were looking from the ground or from space.

        If I might ask a question relating to that, would I be correct to assume that the absorption/emission lines would be in the same location?

      • Michael Larkin

        As you were, Jim D. I infer from a reply I got from A Lacis that the two spectra, if overlapped, would eliminate any dark lines, so that they are presumably in the same location.

      • As you were, Jim D. I infer from a reply I got from A Lacis that the two spectra, if overlapped, would eliminate any dark lines, so that they are presumably in the same location.

        Did Andy take Stokes shift into account?

      • Yes, they are the same wavelengths which are an intrinsic property of the molecules.

      • I have yet to see any consistency between these discussions of emission by CO2 and Kasha’s rule that the only emitting transitions are those from the lowest excited state of any given multiplicity (transitions preserve multiplicity). Corollaries of the rule are the independence of emission from absorption, and Stokes shift quantifying this. This has been known for 60 years, why does it never come up in these radiation discussions?

        Using directly observed absorption spectra to indirectly infer emission spectra seems a bad idea. Emission spectra should either be observed directly or calculated, whichever is more convenient provided it yields reasonable answers.

      • Kasha’s rule has a large effect when the excitation is done by high-energy radiation (high compared to the temperature). I do not think it is important when the gas is in thermal equilibrium.

      • Ok, but is there any correlation between absorbed and emitted photons in CO2?

        And if not, can one assume that the emission probabilities are in proportion to the line strengths?

        I’d like to calculate the mean free path of a photon emitted from a CO2 molecule when the level is 390 ppmv, at STP. For photons emitted from the strongest line I’m getting a mean free path of around 4 cm based on the strengths and line widths at STP.

        For weaker lines the mfp will be longer, but the probability of such photons will be smaller, so it’s not obvious whether the sum over all 100,000 lines or so converges quickly or slowly.

        Kasha’s rule is particularly relevant to this since if it were applicable the series would converge faster.

        Has anyone done this calculation before?
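
        For concreteness, here is a minimal back-of-the-envelope sketch (in Python) of this kind of estimate. The line strength and half-width are illustrative placeholders, not actual HITRAN values, and only a single Lorentz line at its centre is considered:

        import math
        S = 3.0e-19      # line strength, cm^-1 / (molecule cm^-2) -- assumed placeholder
        gamma = 0.07     # air-broadened half-width at STP, cm^-1  -- assumed placeholder
        n_co2 = 390e-6 * 2.69e19          # CO2 number density at STP, molecules/cm^3
        sigma0 = S / (math.pi * gamma)    # peak (line-centre) Lorentz cross-section, cm^2
        mfp = 1.0 / (n_co2 * sigma0)      # mean free path of a line-centre photon, cm
        print(mfp)

        A real estimate would sum over the full line list and the off-centre parts of each line shape, which is essentially what a line-by-line code does.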

      • I do not think it is important when the gas is in thermal equilibrium.

        So are you claiming that the absorption lines work equally well as emission lines below 300 K?

        Kasha himself in his 1950 paper says nothing that would imply this. Do you have any source for your claim?

    • Michael
      Here is a nice graph from the tropopause looking down.
      It looks very much like an absorption graph to me.
      The large bite out of the blackbody envelope is caused by thermalisation of the 15 µm radiation.
      http://wattsupwiththat.files.wordpress.com/2010/12/ir_window_anesthetics.png

  11. David L. Hagen

    Radiative heat transfer is very important in modeling combustion.
    e.g. in modeling fires and developing fire fighting techniques; for boilers in electric power plants; and in modeling internal combustion in engines and gas turbines. Modeling “gray” atmospheres with combustion is particularly challenging and computationally intensive. e.g. See: CFD-Based Compartment Fire Modeling

    In gas turbines, design errors on temperature and noise can result in a combustor being “liberated”, causing a few million dollars of damage per turbine blade set downstream in the turbine.

  12. John Eggert said on December 5, 2010 at 11:44 pm:

    Atmospheric RTEs based on MODTRAN deal with relatively low levels of CO2 (if I’m not mistaken, on the order of 100 to 200 bar·cm for CO2). Combustion engineering deals with levels that can get much higher. The graph at google books here:

    http://tinyurl.com/2cgg6p6 – Page 618,

    does not fully reconcile with the MODTRAN reconstruction. Leckner’s curves for emissivity peak at a certain level of CO2, whereas the MODTRAN work seems to increase forever.

    So I’ll throw it back to you. Where is Leckner’s mistake?

    Well, the vertical axis of his graph is annotated wrongly. But on a serious note it’s important to understand the basics.

    What is emissivity?

    Emissivity is a value between 0 and 1 which describes how well a body (or a gas) emits compared with a “blackbody”. Emissivity is a material property. If emissivity = 1, it is a “blackbody”.

    The Planck law shows how spectral intensity (which is a continuously varying function of wavelength) of a blackbody changes with temperature.

    When you know the emissivity it allows you to calculate the actual value of spectral intensity for the body under consideration. Or the actual value of total flux.

    Emissivity is sometimes shown as a wavelength-dependent graph. In the Leckner curves the value is averaged across the relevant wavelengths for various temperatures. (This makes it easier to do the flux calculation).

    Now some examples:
    -A surface at 300K with an emissivity of 0.1 will radiate 46W/m^2.
    -A surface at 1000K with an emissivity of 0.1 will radiate 5,670 W/m^2.

    Same emissivity and yet the flux is much higher for increased temperatures.
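
    (For anyone who wants to check those two numbers, a two-line Python sketch of the Stefan-Boltzmann calculation, j = emissivity x sigma x T^4, reproduces them:)

    sigma = 5.67e-8                     # Stefan-Boltzmann constant, W m^-2 K^-4
    for T in (300.0, 1000.0):
        print(T, 0.1 * sigma * T**4)    # emissivity 0.1 -> about 46 and 5,670 W/m^2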

    Leckner wasn’t wrong. The question is confused.

    How come the government income tax rate reaches a maximum and yet the more I earn, the more the government takes from me in tax?

    I believe the question in your mind is about “saturation”. Maybe try CO2 – An Insignificant Trace Gas? – Part Eight – Saturation.

    • Michael Larkin

      Thanks for this, Scienceof doom. I am hoping to graduate from this thread so that I can launch into your site!

      You say:

      Now some examples:
      -A surface at 300K with an emissivity of 0.1 will radiate 46W/m^2.
      -A surface at 1000K with an emissivity of 0.1 will radiate 5,670 W/m^2.

      Okay. So does this relate to the Stefan-Boltzmann equation: j = eoT^4, where e is the emissivity, o the proportionality constant, and T the absolute temperature in kelvins?

      Anything less than perfect emissivity (where e = 1, so that we have a black body): would this be the “grey body” that I hear about so often? Is the effectiveness of a grey body quantified according to the value of e?

      Elementary questions, I know, but that is what this thread is about for me and so I hope you will indulge me.

      • You are correct, these are calculated by using the Stefan Boltzmann equation. Plug the numbers in and you will get the answers I did.

        You are mostly correct about “grey body” – although generally it is used for the special case where the body (or gas) is not radiating as a blackbody, yet the emissivity is constant across wavelength.

        This doesn’t really happen in practice but can be useful to calculate the results of simple examples.

        For a graph of how emissivity/absorptivity varies with wavelength see the comments in The Dull Case of Emissivity and Average Temperatures.

      • Michael Larkin

        Thanks once again, SoD. A valuable point about emissivity not necessarily being constant across wavelength. I wouldn’t have thought about that had you not mentioned it.

    • SOD:

      I’ve read your section a number of times over the last number of months. It is a good reference when I’m talking to people about these things. The fact remains. If you are measuring how much hotter a gas will get in a blast furnace off gas, there comes a point when increasing CO2 no longer increases the heat of the gas. What you are saying is this doesn’t happen. But it does. And there is no confusion in the question. Either the calculation of atmospheric absorbance in combustion engineering is fundamentally the same or it is fundamentally different from the calculation of atmospheric absorbance in climate. One curve has an asymptote and the other doesn’t. Otherwise, there is very little difference between the two.

      • I’ve read your section a number of times over the last number of months. It is a good reference when I’m talking to people about these things. The fact remains. If you are measuring how much hotter a gas will get in a blast furnace off gas, there comes a point when increasing CO2 no longer increases the heat of the gas. What you are saying is this doesn’t happen. But it does. And there is no confusion in the question. Either the calculation of atmospheric absorbance in combustion engineering is fundamentally the same or it is fundamentally different from the calculation of atmospheric absorbance in climate.

        I’ll take a stab at this one – I don’t think there is any fundamental difference here. If I understand that figure correctly (the earlier linked image from google books), I think those “effective emissivity” curves are integrated over all wavelengths (Eq. 8.76 in that book). This will weight the spectral emissivity of the gas by the blackbody (Planck) curve. So, what happens is when you get very high temperatures, the curve is peaking at shorter wavelengths and the CO2 absorption bands at 15 and 4 microns become less important. I don’t think there are any CO2 absorption bands at wavelengths shorter than 4 microns so it would just keep dropping for larger temperatures.

        I guess in climate applications those effects aren’t usually considered, since even for “huge” temperature increases (say 10K), there is no significant shift in the peak of the blackbody curve.
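
        A toy Python sketch of that Planck-weighting argument (the band limits and in-band emissivity below are made up purely for illustration; only the qualitative behaviour matters):

        import math
        h, c, k = 6.626e-34, 3.0e8, 1.381e-23
        def planck(lam, T):          # Planck spectral radiance as a function of wavelength (m)
            return (2*h*c**2 / lam**5) / (math.exp(h*c / (lam*k*T)) - 1.0)
        def eps(lam_um):             # toy "CO2" emissivity: 1 inside two bands, 0 elsewhere (assumed)
            return 1.0 if (13.0 <= lam_um <= 17.0) or (4.2 <= lam_um <= 4.4) else 0.0
        lams = [(0.5 + 0.01*i) * 1e-6 for i in range(10000)]     # 0.5 to ~100 microns
        for T in (300.0, 1000.0, 2000.0):
            e_eff = sum(eps(l*1e6)*planck(l, T) for l in lams) / sum(planck(l, T) for l in lams)
            print(T, round(e_eff, 3))

        As the temperature rises the Planck curve peaks at wavelengths shorter than the bands, so the Planck-weighted (effective) emissivity drops, which is qualitatively the behaviour of the Leckner curves discussed above.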

  13. Michael,
    You pretty well have the basics of absorption line formation and line emission. The detailed mechanics of how a molecule emits or absorbs a photon of a particular frequency can be a bit complicated. Any single (say CO2) molecule will be in some specific vibration-rotation energy state. If a photon comes by with just the right frequency to match an allowed transition to a higher vibration-rotation energy state, there is some specified probability that that photon will be absorbed, raising that molecule to its new vibration-rotation energy state. The molecule will sit in that state for an exceedingly brief time before a photon (of the same wavelength) is spontaneously emitted, and the molecule returns to its original energy level. But before it got a chance to radiate, that molecule might have undergone a collision with another molecule that might knock it into a different vibration-rotation energy state.
    Fortunately, all these complications do not directly factor into calculating radiative transfer. A single cubic inch of air contains close to a billion billion CO2 molecules, so a statistical approach can be taken to doing practical radiative transfer modeling.

    I like to use the example of an isothermal cavity (with a small pinhole to view the emerging radiation) to illustrate some basic principles of radiation. As you might expect, the radiation emerging from the pinhole will be continuous Planck radiation at temperature T (emitted by the back wall of the cavity). If we now place CO2 gas (also at temperature T) into the cavity, Kirchhoff’s radiative law states that the radiation emerging from the pinhole will still be continuous Planck radiation at temperature T. This is because in a strictly isothermal cavity, everything is in thermodynamic equilibrium, meaning that every emission-absorption transition and every collisional interaction must be in equilibrium (otherwise the temperature of some part of the cavity will change).

    If this parcel of CO2 gas is pulled from the cavity, it will continue to emit radiation representative of temperature T, which, if viewed against a very cold background, will appear as emission lines. If the background is heated to temperature T, the emission lines will still be there, but there will be superimposed absorption lines at the same spectral positions and of the same strength, yielding a featureless continuous Planck spectrum of temperature T just as in the isothermal cavity. If the background is now heated to a still hotter temperature, absorption will win out over emission, and the resulting spectrum will be a pure absorption spectrum.

    The line spectrum that CO2 exhibits will depend on the local pressure and temperature of the gas. Pressure generally only broadens the spectral lines, without shifting their spectral position. Temperature, on the other hand, changes the equilibrium collision vibrational-rotation energy state distribution, which can make some spectral lines be stronger, others weaker. Thus a flame spectrum of CO2 will be quite different from the ‘cold atmosphere’ spectrum that is relevant to current climate applications.

    The basic atmospheric spectral line compilation is the HITRAN data base that contains line spectral position, line strength, line width, and line energy level information for more than 2.7 million spectral lines for 39 molecular species. This is the information that goes into a line-by-line model such as LBLRTM or FASCODE, together with the pressure, temperature, and absorber amount profile information that describes the atmospheric structure for which the line-by-line spectrum is to be computed. The line-by-line codes require significant computer resources to operate, but they are the ones that give the most precise radiative transfer results.

    MODTRAN is a commercially available radiative transfer program that computes atmospheric radiation with moderate spectral resolution (one wavenumber) and with somewhat lesser accuracy. To assure maximum precision and accuracy, we use line-by-line modeling to test the performance of the radiation model used in climate modeling applications.
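
    To make the “line-by-line” idea a little more concrete, here is a heavily simplified Python sketch of what such a code does for a single homogeneous layer. The three-line “line list” and the column amount are invented for illustration only; a real calculation uses the millions of HITRAN lines plus temperature scaling, line mixing, continua, and layer-by-layer integration:

    import math
    # invented toy line list: (centre wavenumber cm^-1, strength cm^-1/(molec cm^-2), half-width cm^-1)
    lines = [(667.4, 3.0e-19, 0.07), (668.1, 1.0e-19, 0.07), (666.7, 5.0e-20, 0.07)]
    column = 8.0e21                     # absorber column amount, molecules/cm^2 (assumed)
    def optical_depth(nu):
        tau = 0.0
        for nu0, S, gamma in lines:
            lorentz = (gamma / math.pi) / ((nu - nu0)**2 + gamma**2)   # Lorentz line shape, per cm^-1
            tau += S * lorentz * column
        return tau
    for nu in (666.5, 667.0, 667.4, 668.0, 668.5):                     # sample wavenumbers, cm^-1
        print(nu, math.exp(-optical_depth(nu)))                        # monochromatic transmission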

    • Michael Larkin

      A Lacis,

      Thank you for your extensive post. It is so valuable to know I have the basics approximately right – that means I can build further on that.

      I understood your first two paras very well.

      Para 3:

      I had to look up “isothermal” and Kirchhoff’s law so I could catch your drift. It occurred to me that maybe others at my level are lurking and learning too, so:

      (from Wikipedia):

      “An isothermal process is a change of a system, in which the temperature remains constant: ΔT = 0. This typically occurs when a system is in contact with an outside thermal reservoir (heat bath), and the change occurs slowly enough to allow the system to continually adjust to the temperature of the reservoir through heat exchange. In contrast, an adiabatic process is where a system exchanges no heat with its surroundings (Q = 0). In other words, in an isothermal process, the value ΔT = 0 but Q ≠ 0, while in an adiabatic process, ΔT ≠ 0 but Q = 0.”

      Right. So it looks like we are talking about thermal equilibrium in your example of an isothermal cavity.

      (Kirchhoff’s law – from Wikipedia):

      At thermal equilibrium, the emissivity of a body (or surface) equals its absorptivity.

      Right. So as much energy is coming in as is going out; Delta T = 0. The inner surface of your isothermal cavity seems to be acting as a black body (“Planck radiation”).

      Para 4:

      I’m assuming that “pulling CO2” from the cavity isn’t meant literally. It’s a thought experiment, right?

      You seem to have answered my earlier question to Jim D about whether absorption and emission spectra for the same gas would be complementary WRT the position of their lines. At least, that’s what I thought, but…

      Para 5:

      Hmm. Pressure relates to the density of CO2, i.e. locally, the number of molecules per unit volume. The broadening of the lines where the pressure is greater – is that an intensity rather than frequency change?

      I can see that a flame (presumably emission?) spectrum would be quite different from a cold atmosphere spectrum, but what I am not sure about is whether you’re saying some lines might disappear and new ones appear according to circumstances. In view of what SoD told me above, I understand that emissivity isn’t necessarily constant across wavelength. Overall, I’m a little confused about this point (probably my fault more than anyone else’s).

      Paras 6 and 7: Thanks. Removes some of the mystery from what the heck HITRAN and MODTRAN are all about!

      • “The broadening of the lines where the pressure is greater – is that an intensity rather than frequency change?”

        Normally, according to quantum theory, for a molecule to absorb a photon, the photon’s energy must exactly match the energy involved in the transition from one energy level to another within the molecule – e.g., the excitation of a vibration mode in CO2. However, if an encounter of a CO2 molecule with a neighboring molecule (N2, O2, etc.) adds or subtracts a small amount of energy, that amount can make up for a difference between the energy of the incoming photon and the energy needed for a quantum transition. This permits the total incoming energy to match what is needed.

        The higher the density of molecules (i.e., the higher the atmospheric pressure), the greater will be the likelihood of an encounter that creates the needed energy adjustment. This means that at high pressure, photons slightly different in energy from the “exact match” energy will be more likely to be absorbed, so that an absorption line at a particular energy level will broaden to encompass these additional photons whose energy doesn’t quite match the line center.
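
        A small Python sketch of that picture, using the standard Lorentz line shape with a half-width that simply scales with pressure (the numbers are illustrative, not from HITRAN): lowering the pressure narrows and sharpens the line, but the area under it (set by the line strength) stays the same, so pressure broadening redistributes absorption in frequency rather than adding to it.

        import math
        def lorentz(dnu, gamma):                       # Lorentz line shape, normalised to unit area
            return (gamma / math.pi) / (dnu**2 + gamma**2)
        gamma_1atm = 0.07                              # half-width at 1 atm, cm^-1 (illustrative)
        for p in (1.0, 0.5, 0.1):                      # pressure in atmospheres
            gamma = gamma_1atm * p                     # simple pressure scaling of the half-width
            peak = lorentz(0.0, gamma)
            area = sum(lorentz(i*0.001, gamma)*0.001 for i in range(-20000, 20000))
            print(p, gamma, round(peak, 2), round(area, 3))   # peak changes, area stays ~1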

      • Michael Larkin

        Thanks for this, Fred – you put it beautifully and I can understand it very well. One more small piece of the jigsaw! :-)

  14. Miklos Zagoni

    Andy,
    May I ask: According to your data and calculations, how much is the annual global mean atmospheric longwave absorbed radiation?
    Thank you in advance

  15. Judith, you write “And finally, for calculations of the direct radiative forcing associated with doubling CO2, atmospheric radiative transfer models are more than capable of addressing this issue (this will be the topic of the next greenhouse post).”

    When I first read Myhre et al, it states that 3 radiative transfer models were used. I have read the definition of radiative forcing in the TAR, and I was surprised that Myhre did not seem to discuss WHY radiative transfer models were suitable to estimate change in radiative forcing. It has never been obvious to me that radiative transfer models ARE suitable to estimate change in radiative forcing. Can anyone direct me to a published discussion as to why radiative transfer models are suitable to estimate change in radiative forcing?

    • Jim, for clear sky radiative forcing, this thread just provided tons of documentation that the best radiation codes do a good job of simulating the spectral distribution and broadband radiative fluxes. In terms of forcing, the models have been validated from the tropics to the arctic, with over an order of magnitude difference in total water vapor content. While the models have not been validated observationally for a doubling of CO2, we infer from the above two tests that they should perform fine. The Collins et al. paper referenced here directly addresses this issue (it points out that some of the radiation transfer codes used in climate models do not perform well in this regard), but the best ones do.

      • Thank you Judith, but that is not my problem. My problem relates to the definition of radiative forcing in the TAR Chapter 6; viz

        “The radiative forcing of the surface-troposphere system due to the perturbation in or the introduction of an agent (say, a change in greenhouse gas concentrations) is the change in net (down minus up) irradiance (solar plus long-wave; in Wm-2) at the tropopause AFTER allowing for stratospheric temperatures to readjust to radiative equilibrium, but with surface and tropospheric temperatures and state held fixed at the unperturbed values”.

        As I see this, the atmosphere is in two different states; one part has adjusted to radiative equilibrium and one has not. I assume that radiative transfer models reproduce this difference, but I have not been able to find out how. Can you direct me to a publication which explains this please?

      • David L. Hagen

        Jim Cripwell
        For a discussion of various sensitivities using a 1D Line By Line radiation model based on radiosonde data, see Miskolczi (2010) Sect 2 pg 256-247 http://www.friendsofscience.org/assets/documents/E&E_21_4_2010_08-miskolczi.pdf

  16. Alexander Harvey

    I think there needs to be a few words of caution regarding visually interpreting spectra.

    I am not the best person to do this so I welcome correction.

    Check the coordinates:

    Are you looking at wavenumbers or wavelengths?

    Wavenumber is the reciprocal of wavelength, and so varies in proportion to frequency.

    More importantly are you looking at

    Transmission (%)
    Absorption (%)
    Cross-sections (cm^2/molecule)
    Line Strengths (cm/molecule)?

    It is the last one that can give rise to the most misleading of visual interpretations. Line Strength (Integrated Intensity) lacks the line shape component. It is a useful abstraction, as it gives a measure of the total “strength” (the area under the curve of the line shape), which is a measure of the dipole strength of the transition. It can be misleading because such a spectrum has pin-sharp (zero width) lines, which makes it look like there are big non-absorbing gaps between the lines, which is not the case.

    The actual units vary but line strengths (as in HITRAN line lists) should boil down to cm/molecule after manipulation and scaling.

    Alex

  17. The failure of catastrophic climate change as an idea is not in the basic physics.
    Just as the failure of eugenics was not in the theory of evolution.

  18. Dr. Curry said:
    “However, if you can specify the relevant conditions in the atmosphere that provide inputs to the radiative transfer model, you should be able to make accurate calculations using state-of-the art models. The challenge for climate models is in correctly simulating the variations in atmospheric profiles of temperature, water vapor, ozone (and other variable trace gases), clouds and aerosols.”

    What about ocean cycles?

    • Ocean cycles may influence what the clouds and temperatures actually are, but once you can specify (or predict) the clouds etc., the radiative transfer models are up to the job of predicting the radiative fluxes. Actually predicting the clouds, water vapor, etc. is at the heart of the problem (the radiation codes themselves are not the problem, which is what this thread is about).

      • Discussing those topics further will be very interesting. You are moving through this issue in an effective, fascinating way.
        Your students are very fortunate.
        Well, I guess many of us here are your students, in effect….

      • Dr. Curry
        Ocean cycle models using the past record are inherently flawed. This is applicable to the currents of both the Pacific and the Atlantic oceans.
        According to my findings there are short-term semi-periodic oscillations with an uncertain frequency (decades) and long-term (centuries) components which may or may not be reversible. None of these appear to be predictable.
        The North Pacific has a number of critical points; here I show what I think are the most important.
        http://www.vukcevic.talktalk.net/NPG.htm
        All of them may to a certain extent contribute to the PDO, with an unspecified time factor and weighting. You may indeed notice that one (red coloured) is possibly the major contributor to the PDO, some 10-12 years later.

      • i’m preparing a thread next week on decadal scale prediction, i will be referencing your site

      • Thanks; that’s fine as long as you think it merits mention. I am still working on the SSW; some interesting results there, and I may have a possible answer to Antarctica’s 2002 SSW riddle
        http://www.knmi.nl/~eskes/papers/srt2005/png/o3col20020926_sp.png
        that the papers on the subject missed.

  19. Anyone,
    Can I conclude from Dr. Curry’s post that the rise in temperature from radiative forcing is 1.2 C when the concentration of CO2 is doubled?
    I’m not asking about what the others have posted, just whether, if what Dr. Curry says is correct, this is the result.

    Thanks

  20. Dr. Curry, you make a very strong statement:
    “Atmospheric radiative transfer models rank among the most robust components of climate model, in terms of having a rigorous theoretical foundation”

    I am not sure that we already have such a theory. At least it is not used in GCMs. The “theoretical” models of spectral line shapes and their behaviour are just fitted semi-empirical analogy models. Closest to a theoretical model is the work of M. Chrysos et al.

    http://blogs.physicstoday.org/update/2008/07/collisions_between_carbon_diox.html
    http://prl.aps.org/abstract/PRL/v100/i13/e133007
    http://pra.aps.org/abstract/PRA/v80/i4/e042703

    Your reference gives a good agreement of 2% between models. The NBM and other simplified methods that I have seen have been satisfied with 10% accuracy compared to LBL. When we take into account that the HITRAN database claims 5% accuracy, is that accurate enough?

  21. This is really interesting. I am not an expert on the nuances of HITRAN or line-by-line codes, so I would like to learn more about how accurate you think these are. My statement was relative to other climate model components. What is accurate enough in terms of fitness for purpose? I would say calculation of the (clear sky, no clouds or aerosol) broadband IR flux to within 1-2 W m-2 (given perfect input variables, e.g. temperature profile, trace gas profiles, etc.). Also calculation of flux sensitivity to the range of CO2 and H2O variations of relevance, e.g. water vapor ranging from a tropical to a polar atmosphere, and doubling of CO2, within 1-2 W m-2.

    This is a very good topic for discussion, thank you for bringing it up.

  22. Michael,

    The isothermal cavity is basically an idealized thought experiment. In application to radiative transfer, it is not so much about heat transfer as it is about establishing the statistical population distribution of the molecular vibrational-rotational energy states under conditions of full thermodynamic equilibrium. When the absorption spectrum of a gas is measured in the laboratory, it is done under carefully controlled pressure and temperature conditions so that both the amount of gas and its thermodynamic state are accurately known.

    A similar gas parcel in the free atmosphere is said to be in local thermodynamic equilibrium (LTE) because conditions are not isothermal, there being typically a small temperature gradient. But the population of its energy states will be close enough to those under thermodynamic equilibrium conditions that the spectral absorption by the gas will be essentially what was measured in the laboratory. It is only at high altitudes (higher than 60 km) that molecular collisions become too infrequent to maintain LTE; then corrections have to be made for a different population of energy states under non-LTE conditions. Also, water vapor condensed into a cloud droplet, or CO2 in the form of dry ice, will have very different absorption characteristics compared to the gas under LTE conditions.

    The isothermal cavity along with Kirchhoff’s radiation law is a useful concept to demonstrate that emissivity must be equal to absorptivity, that only a totally absorbing (black) surface can emit at the full Planck function rate, and that the emissivity of a non-black surface will be one minus its albedo, in order to conserve energy.

    • Michael Larkin

      A Lacis,

      Thank you for this refinement of what you said earlier. It helps a lot in transitioning conceptually from “the ideal” to the real world. I really am most grateful for your help in improving my understanding.

  23. The stratosphere is predicted to cool with increased CO2 concentrations in the troposphere.

    Is this because less IR leaves the troposphere?

    I think this is wrong. Does anyone have a conceptual explanation for this?

    Thanks

    • CO2 will increase equally in the stratosphere, so it is a local effect there where it radiates heat more efficiently with more CO2. Heat there comes from ozone absorption of solar radiation, not surface convection.

    • Just to elaborate slightly on Jim D’s explanation, an increase in a particular greenhouse gas molecule such as CO2 will increase the ability of a given layer of the atmosphere to absorb infrared (IR) radiation – the layer’s “absorptivity” – and equally increase its ability to emit IR – its “emissivity”. If that type of molecule is the only factor operating, absorptivity and emissivity will increase commensurately, and the net effect turns out to be a slight warming. On the other hand, most of the absorptivity in the stratosphere derives from the ability of ozone to absorb solar radiation in the UV range, where CO2 does not absorb. This is responsible for most of the stratospheric heating, and so CO2 contributes little, because its absorption of IR from below is a lesser source of heating. In other words, additional CO2 does not increase stratospheric absorptivity substantially. On the other hand, most radiation from the stratosphere at the temperatures there is in IR wavelengths, where CO2 is a strong emitter. As a result, CO2 increases stratospheric emissivity more than absorptivity, with a resultant cooling effect.
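
      (A toy single-layer sketch, in Python, of the argument above; the numbers are purely illustrative assumptions, and this is not how a GCM treats the stratosphere. The layer is heated by a fixed amount of absorbed solar (ozone) energy plus a fraction eps of the upwelling IR from below, and it cools by emitting eps*sigma*T^4 both upward and downward.)

      sigma = 5.67e-8
      Q_solar = 12.0       # solar (ozone) heating absorbed by the layer, W/m^2  -- assumed
      F_up = 240.0         # upwelling IR from below, W/m^2                      -- assumed
      for eps in (0.10, 0.12):                     # IR emissivity before/after a CO2 increase -- assumed
          # energy balance: Q_solar + eps*F_up = 2*eps*sigma*T**4
          T = ((Q_solar/eps + F_up) / (2.0*sigma)) ** 0.25
          print(eps, round(T, 1))                  # higher emissivity -> lower equilibrium temperature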

  24. Miklos Zagoni

    Dr. Curry,

    May I address my question also to you: According to your data and calculations, how much is the annual global mean atmospheric longwave absorbed radiation?

    Thanks in advance

  25. I think we should welcome Miklos Zagoni to this website and I hope he gets a reply to his question from Andy Lacis and Judith Curry.

    Welcome Miklos, and thank you for joining this debate.

  26. “how much is the annual global mean atmospheric longwave absorbed radiation?” ? ?

    What exactly is this question really supposed to be asking? If we are talking about the outgoing LW radiation at the top of the atmosphere – all of that radiation is emitted radiation, some of it having been emitted by the ground surface, most of it having been emitted from some point in the atmosphere, depending on the wavelength and the atmospheric temperature and absorber distribution.

    If we are talking about the LW radiation emitted by the ground surface – some of that radiation will escape directly to space without being absorbed by the atmosphere, depending on the atmospheric absorber distribution. Under cloudy sky conditions, virtually all of the radiation emitted by the ground surface will be absorbed by the atmosphere, unless the clouds are optically thin cirrus clouds.

    LW radiation gets emitted and absorbed multiple times within the atmosphere. That is what radiative transfer is all about. What is important for climate applications is calculating the heating/cooling that takes place at the ground and within each atmospheric layer, and the total energy that is radiated out to space – all required to keep a detailed energy balance/budget at the ground surface, in each atmospheric layer, and for the atmospheric column as a whole. One can in addition keep some spectral information in the process of doing the atmospheric radiative transfer, which might be useful for diagnostic comparisons with observational data.

    Otherwise, the question by Miklos Zagoni makes no sense.

  27. chriscolose:

    Dear Chris,

    Thank you, but you must be referring to another quantity. LW atmospheric absorption, according to KT97 (= IPCC 2007 WG1 energy budget), is about 350 W/m2, while the updated distribution (TFK2009) gives 356 W/m2.

    My question is: are these values generally accepted here, amongst radiative transfer specialists?

    Thanks

  28. Dear Andy,

    We are talking about the greenhouse effect here (or, at least, we have it in mind in the background), so I think my question is about the quantification of the general (~global annual mean) effect of the presence of IR absorbers (=GHG’s) in the air…

    Thanks, Miklos

    • Miklos,
      I am sorry that I totally misunderstood what your question was about.

      With respect to the IPCC2007 KT97 figure with 350 W/m2 of atmospheric absorption versus 356 W/m2 in an updated TFK2009 version, I would say that both figures are there primarily for illustrative purposes, rather than presenting technical results.

      Note that the KT97 figure implies a planetary albedo of 0.313 = 107/342, as the ratio of reflected to incident solar energy, with 235 W/m2 as the corresponding LW outgoing flux. This figure also illustrates a somewhat stronger greenhouse effect of 390 – 235 = 155 W/m2. This compares to the often cited nominal greenhouse effect value of 390 – 240 = 150 W/m2, which corresponds to a planetary albedo of 0.3, with absorbed solar radiation of 240 W/m2. Both cases imply a global mean surface temperature of 288 K (390 W/m2).
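
      (For readers following the arithmetic, a short Python check of the numbers above:)

      sigma = 5.67e-8
      print(107.0/342.0)                 # planetary albedo implied by KT97, ~0.313
      print(390 - 235, 390 - 240)        # greenhouse effect G in W/m^2: 155 and 150
      print((390.0/sigma)**0.25)         # surface temperature implied by 390 W/m^2, ~288 K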

      In our recent Science paper using the GISS ModelE, we reported 152.6 W/m2 for the total atmospheric greenhouse effect. In the Schmidt et al. (2006) paper describing ModelE, three different model versions are shown with planetary albedos of 0.293, 0.296, and 0.297. These will produce slightly different outgoing LW fluxes. Observational evidence puts the likely planetary albedo of Earth between 0.29 and 0.31. This uncertainty exists because it is very difficult to make more precise measurements from satellite orbit with existing instrumentation.

      However, this uncertainty does not adversely affect climate modeling results and conclusions. But this is one reason why climate modeling studies are conducted by running a control run and an experiment run simultaneously, to subtract out biases and offsets that will be common to both runs.

      Similarly, these potential biases and uncertainties in planetary albedo will affect the values of LW fluxes, so the absolute values of model fluxes may differ. Accordingly, it does not make sense to compare the “accuracy” of atmospheric fluxes in an absolute sense between different models: the reasons for the differences may be complex, and they do not really have an impact on the conclusions drawn, since the effect of these differences will be largely subtracted out by differencing the experiment and control runs.

      Instead, the accuracy of atmospheric fluxes is better assessed by comparing model flux results with line-by-line calculations for the same atmospheric temperature-absorber structure.

      • David L. Hagen

        A. Lacis
        You note:

        All energy transports are properly being included in climate modeling calculations.

        Yet Kevin Trenberth notes 23%-69% discrepancy in energy budgets between observed and accounted for. See above.
        Is that level of discrepancy what is considered “properly” included?

        An increasing number of evaluations are finding climate models projecting temperatures substantially above observed global temperatures. e.g.
        McKitrick, Ross R., Stephen McIntyre and Chad Herman (2010) “Panel and Multivariate Methods for Tests of Trend Equivalence in Climate Data Sets”. Atmospheric Science Letters, DOI: 10.1002/asl.290.
        How are we to understand/explain these substantial divergences?
        Errors in modeling? In data? In statistics?

  29. I posted a comment on Dec 6 at 7.10 am. I have replied to my own second comment. Let me put it here at the end of the comments for emphasis.

    Let me put this a little more strongly. Radiative transfer models deal with a real atmosphere. According to the IPCC definition of radiative forcing, one deals with a hypothetical atmosphere. Therefore whatever Myhre et al in 1998 estimated, it was NOT radiative forcing. And the same applies to all the other estimates that have been done ever since. What the numbers are, I have no idea. All I know is that they are NOT radiative forcing.

  30. Radiative transfer is a standard consideration in design of microwave, millimeter wave, and infrared band systems. Fire control, missile, and communication systems’ performance hinges on the electromagnetic absorption along the path from transmitter to reflectors to receiver. Clear sky spectral absorption data from published and other sources has proven up to the task. Sometimes low absorption is needed, as in long range operations, and sometimes high absorption is sought, as when designing systems to operate concealed behind an atmospheric absorption curtain. Regardless, an accurate prediction is important.

    Curry writes, “The challenge for climate models is in correctly simulating the variations in atmosphere profiles of temperature, water vapor, ozone (and other variable trace gases), clouds and aerosols.” These confounding effects beyond a standard dry atmosphere were too much for military and communication system design and analysis. Perfection was hopeless, and more complicated situations utterly unpredictable.

    Curry then says, “And finally, for calculations of the direct radiative forcing associated with doubling CO2, atmospheric radiative transfer models are more than capable of addressing this issue”. Radiative forcing in a limited sense applies radiative transfer, but it is not the same. Radiative Forcing is the paradigm IPCC selected for its climate modeling, and staunchly defended against critics. In radiative transfer, a long path in the atmosphere, as in hundreds of kilometers horizontally, or from the surface to the top of the atmosphere, is modeled end-to-end by an absorption spectrum. It represents the entire path, one assumedly tolerant of the temperature lapse rate. It also avoids microscopic modeling of molecular absorption and radiation. IPCC models the atmosphere in multiple layers, each with its own peculiar temperature, and hence temperature lapse rate, with radiative forcing characteristics, and in some considerations, molecular physics.

    Radiative Forcing has severe limitations, some of which may be fatal to ever showing success at predicting climate. It doesn’t account for heat capacity, and therefore can provide no transient effects. A prime alternative to an RF model is a Heat Flow Model (a redundant term, though universally accepted — heat is already a flow). In a Heat Flow model, the environment is represented by nodes, each with its own heat capacity, and with flow resistance to every other node. A heat flow model can represent transient effects, and variable attenuation for cyclic phenomena, such as seasons, the solar cycle, and ocean currents.

    Radiative Forcing has no flow variable, no sinks, and no sources. A heat flow model does. Feedback is a sample of energy, displacement, mass, rate, or information from within a system that flows back to the driving signal to add to, or subtract from, it. Without a flow variable, the RF model must account for feedbacks without representing them. Consequently, IPCC redefined feedback, and produced a most confused explanation in TAR, Chapter 7. To IPCC, feedback loops are imaginary links between correlated variables. This is a severe restriction for the RF paradigm, especially because IPCC has yet to account for major feedbacks in the climate system, including the largest feedback in all of climate, the positive and negative feedback of cloud albedo, and the positive feedback of CO2 from the ocean that frustrates IPCC’s model for accumulating ACO2. It doesn’t have the carbon cycle or the hydrological cycle right. The RF model looks quite unrepairable.

    Curry talks about “doubling CO2”. This is an assumption by IPCC that greatly simplifies its modeling task, while simultaneously exalting the greenhouse effect. IPCC declares that infrared absorption is proportional to the logarithm of GHG concentration. It is not. A logarithm might be fit to the actual curve over a small region, but it is not valid for calculations much beyond that region like IPCC’s projections. The physics governing gas absorption is the Beer-Lambert Law, which IPCC never mentions nor uses. The Beer-Lambert Law provides saturation as the gas concentration increases. IPCC’s logarithmic relation never saturates, but quickly gets silly, going out of bounds as it begins its growth to infinity.

    Under the logarithm absorption model, an additional, constant amount of radiative forcing would occur for every doubling (or any other ratio) of CO2 or water vapor or any other GHG. Because the logarithm increases to infinity, the absorption never saturates. This is most beneficial to the scare tactic behind AGW. Secondly, the additional radiative forcing using the Beer-Lambert Law requires one to know where the atmosphere is on an absorption curve. This is an additional, major complexity IPCC doesn’t face.

    Judging from published spectral absorption data, CO2 appears to be in saturation in the atmosphere. These data are at the core of radiation transfer, and that the “doubling CO2” error slipped through is surprising.

    • The big guns are riding into town.

      Welcome Dr Jeff Glassman!

      Let the serious debate begin. :)

    • David L. Hagen

      Jeff Glassman
      Thanks for the physics/chemistry perspective:
      “The Beer-Lambert Law provides saturation as the gas concentration increases. . . .the additional radiative forcing using the Beer-Lambert Law requires one to know where the atmosphere is on an absorption curve.. . . CO2 appears to be in saturation in the atmosphere.”
      The quantitative Line By Line Planck weighted Global Optical Depth calculations by Miskolczi (2010) show remarkably low sensitivity to CO2, and even lower combined variability to both CO2 and H2O given the available 61 year TIGR radiosonde data and NOAA. See Fig. 10 sections 3, 4. I would welcome your evaluation of Miskolczi’s method and results.

    • Jeff – The images from Channels 1-7 in my Tyndall gas effect post illustrate directly that CO2 is not in saturation in the atmosphere.

    • I recommend the replies by Lacis and Moolten to you below. I will talk about radiative fluxes here, since your concern appears to be a lack of a flow variable. In fact radiation schemes do computations over multiple atmospheric layers, as you say, and what they compute for each level are upwards and downwards radiative fluxes (W/m2). It is the convergence or divergence of these fluxes that result in radiative heating or cooling in a layer, which also depends on the heat capacity of that layer. So in fact fluxes are central to these schemes, and their impact on the atmosphere.

    • Jeffrey,
      Why do you say “Radiative Forcing” doesn’t account for heat capacity? There’s an energy equation which enforces energy balance in each cell, including that which comes and goes via radiative transfer, and the internal energy is calculated via specific heat.

      In your proposed “Heat Flow Model”, do you really have flow resistance to every other node? Even between nodes far apart? With then a dense matrix to solve? What about transmission through the atmospheric window, say?

    • @glassman: Judging from published spectral absorption data, CO2 appears to be in saturation in the atmosphere. These data are at the core of radiation transfer, and that the “doubling CO2” error slipped through is surprising.

      When the same person posts ten paragraphs each eminently refutable, where do you begin? My theory, yet to be proved, is that the other nine paragraphs are best shot down by shooting down the tenth, which is the one quoted above.

      Since this paragraph is stated simply as a fact, that I know from the data to be blatantly false, let me simply ask Mr. Glassman to support his statement, which in the interests of decorum in this thread I’ve refrained from attaching any other epithet to than “false.”

  31. “Judging from published spectral absorption data, CO2 appears to be in saturation in the atmosphere. “

    Jeffrey – the misconception inherent in your comment dominated thinking about the role of CO2 until about 60 years ago, when geophysicists realized that the atmosphere could not be represented as a simple slab wherein a “saturating” concentration of CO2 precluded any further absorption and warming, but rather had a vertical structure. Within that structure, absorbed photon energy is subsequently re-emitted (up and down) until a level is reached permitting escape to space. For CO2, this is a high altitude in the center of the 15 um absorption band, but much lower as one moves into the wings, which are essentially unsaturable.

    This blog has a couple of informative threads on the greenhouse effect that address this phenomenon in some detail, and the links in the present thread are also valuable. I can see from your comment that you are well informed in some areas of energy transfer and radiation, but I suspect you have not had an opportunity to reconcile your knowledge with the principles of radiative transfer within the vertical profile of the atmosphere, and the resources I suggest may help. Others may be able to offer further suggestions.

  32. Jeff,

    I am sure that you will agree that Beer’s Law exponential absorption only applies to monochromatic radiation. When you have spectral absorption by atmospheric gases vary by many orders of magnitude at nearby wavelengths, you specifically have to take that fact into account. Line-by-line calculations do that. So do correlated k-distribution calculations (which is what is being used in many climate models). Calculating “greenhouse optical depths” averaged over the entire spectrum like Miskolczi does makes absolutely no sense at all.

    You should take the time to become better informed on how climate models handle energy transports – radiative, convective, advective, etc. There is no heat capacity, and there are no sinks, sources, flow variables, or feedbacks in radiative transfer calculations. It is only the temperature profile, surface temperature, atmospheric gas, aerosol, and cloud distributions (and their associated absorption and scattering parameters) that enter into radiative transfer calculations. Radiative energy transfer is incomparably faster than energy transported by the other physical processes. Radiative transfer calculations provide the instantaneous heating and cooling rates that the climate GCM takes into account as it models all of the hydrodynamic and thermodynamic energy transports in a time-marching fashion. All energy transports are properly being included in climate modeling calculations.

    IPCC does not assume that infrared absorption is proportional to the logarithm of GHG concentration. Radiative transfer is being calculated properly for all atmospheric gases. The absorption behavior by some of the gases, for example CO2, happens to be close to logarithmic under current climate conditions, but the absorption for GHGs is nowhere close to being saturated except for the central portions of the strongest absorption lines. There are many more weak than strong lines.
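
    A toy numerical illustration of that last point, in Python, with invented Lorentz line parameters spanning several orders of magnitude in strength: each individual wavenumber obeys Beer-Lambert exactly, yet the band-averaged absorption keeps growing as the absorber amount is repeatedly doubled, because the line wings and the weaker lines remain far from saturated even when the strongest line centres are completely opaque.

    import math
    # invented lines: (centre cm^-1, strength cm^-1/(molec cm^-2), half-width cm^-1)
    lines = [(667.0, 3.0e-19, 0.07), (668.5, 3.0e-20, 0.07), (670.0, 3.0e-21, 0.07), (672.0, 3.0e-22, 0.07)]
    def band_absorption(column):
        grid = [660.0 + 0.01*i for i in range(2000)]          # 660 to 680 cm^-1
        a = 0.0
        for nu in grid:
            tau = sum(S*(g/math.pi)/((nu - nu0)**2 + g**2)*column for nu0, S, g in lines)
            a += 1.0 - math.exp(-tau)                         # monochromatic Beer-Lambert absorption
        return a / len(grid)
    for column in (1e20, 2e20, 4e20, 8e20, 16e20):            # repeated doublings of absorber amount
        print(column, round(band_absorption(column), 4))      # keeps increasing, does not saturate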

    Radiative forcings need to be understood for what they are, and what they aren’t. A radiative forcing is simply the flux difference between two different states of the atmosphere. It helps if the atmospheric state that is used as the reference is taken to be an atmosphere that is in radiative/convective equilibrium. The second atmospheric state may be the same atmosphere but with the CO2 profile amount doubled. A comparison of radiative fluxes between the two atmospheric states will show flux differences from the ground on up to the top of the atmosphere. The flux difference at the tropopause level is typically identified as the “instantaneous radiative forcing” (which for doubled CO2 happens to be about 4 W/m2). Since doubled CO2 decreases the outgoing LW flux to space, this is deemed to be a positive radiative forcing, since the global surface temperature will need to increase to re-establish global energy balance. If no feedback effects were allowed, an increase of the global-mean surface temperature by about 1.2 C would re-establish global energy balance. In the presence of full atmospheric feedback effects, the global-mean surface temperature would need to increase by about 3 C before global energy balance was re-established. And by the way, the climate feedbacks are not prescribed; they are the result of the physical properties of water vapor, as they are modeled explicitly in a climate GCM.
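
    As a rough consistency check on those last two numbers (a back-of-the-envelope Python estimate, not a model result), the no-feedback response can be approximated by linearising the Stefan-Boltzmann law about the effective emitting temperature, dT ~ dF / (4*sigma*Te^3):

    sigma = 5.67e-8
    Te = 255.0        # effective emitting temperature of the Earth, K (assumed round number)
    dF = 4.0          # radiative forcing for doubled CO2, W/m^2 (value quoted above)
    print(dF / (4.0 * sigma * Te**3))   # roughly 1 C, of the order of the 1.2 C no-feedback figure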

    • David L. Hagen

      A. Lacis Re:

        Calculating “greenhouse optical depths” averaged over the entire spectrum like Miskolczi does makes absolutely no sense at all.

      Please read and understand Miskolczi before again mischaracterizing him.
      Miskolczi actually does the detailed LBL calculations that you advocate:

      When you have spectral absorption by atmospheric gases vary by many orders of magnitude at nearby wavelengths, you specifically have to take that fact into account. Line-by-line calculations do that.

      Miskolczi also performs the LBL calculations with much finer spatial and frequency resolution, and in much greater detail than you have described in posts here. After he calculates the detail, he then performs a Planck-weighted global integration to evaluate the global optical depth. That now gives a manageable parameter to quantitatively track global absorption over time.

      Did I understand your objections correctly, or what am I missing in what you so strongly disagree with and advocate?

  33. David,

    The detail of Miskolczi’s line-by-line results is not the issue. It is how he uses his line-by-line modeling results to come to the erroneous conclusions about the greenhouse effect and the global warming due to increasing CO2. That is where the problem is.

    Here is a simple test to see how really useful Miskolczi’s greenhouse method is. Can Miskolczi reproduce the well established 4 W/m2 radiative forcing for doubled CO2 with his methodology, and its equivalent 1.2 C global warming?

    • Years spent on case studies in the History of Science taught me that “well established” doesn’t provide a logical guarantee of being correct.

      That it isn’t seen as a possibility that it could be otherwise by a community of scientists is the whole reason it is “well established”.

      Yet the wailing and the gnashing of teeth over Trenberth’s “missing heat” could be an indication that the atmospheric window may be wider open than has been previously thought. Even that it might also vary, a la the Iris theory of Lindzen.

      Anyone looking up at the ever varying cloudscape would conclude that variation might be the rule rather than the exception.

        Yet the wailing and the gnashing of teeth over Trenberth’s “missing heat” could be an indication that the atmospheric window may be wider open than has been previously thought.

        This is an excellent point.

        Even that it might also vary, a la Iris theory of Lindzen.

        I like the way the Wikipedia article on the iris hypothesis says “However, there has been some relatively recent evidence potentially supporting the hypothesis” and then cites papers by Roy Spencer and Richard Lindzen.

        Very noble of the foxes to volunteer for sentry duty at the hen house.

      • Heh, given the presence of the gatekeepers’ guard dogs on Wikipedia’s global warming section I’m surprised the foxes managed to sneak it in. ;)

      • Right, I reckon Connolley either dozed off or neglected to put that article on his watch list. Even alarmists like me can’t get past Connolley and Arritt.

    • David L. Hagen

      Thanks for clarifying. As I understand your objection to Miskolczi, it is with the methodology and conclusions of his greenhouse planetary theory, not with the LBL evaluation of atmospheric profiles leading to the Planck-weighted Global Optical Depth.
      As I understand Miskolczi, his method of evaluating the Global Optical Depth can be applied to any atmospheric profile, including doubled CO2 etc from which you can prescribe insolation and other parameters to evaluate conventional forcing methodology.

      See my detailed response to you above. Please see Miskolczi’s (2010) detailed evaluation of numerous sensitivities. Of particular interest is his evaluation of actual CO2 and H2O sensitivities derived from the available 61 year empirical TIGR radiosonde data.

      As to his overall model, how do you evaluate how well he has fit the actual optical absorption measurements to various atmospheric fluxes?

      Are his simplified correlations between those fluxes reasonable approximations to the actual ratios of those flux curves?

      How do you evaluate Bejan’s constructal law approach to modeling climate with thermodynamic optimization? See:
      Thermodynamic optimization of global circulation and climate
      INTERNATIONAL JOURNAL OF ENERGY RESEARCH
      Int. J. Energy Res. 2005; 29:303–316 DOI: 10.1002/er.1058

  34. Andy,

    As I can see, your approach to the greenhouse problem is through Ramanathan’s G (= Su – OLR), or g (= G/Su) greenhouse functions. Empirically, it gives you the 396-239 = 157 (g=0.4) all-sky and 396-264=132 W/m2 (g=1/3) clear-sky factors, with about 33 (and 27) K greenhouse temperatures.

    The question is how you get these numbers for G, or g (with OLR given) from the measured amounts and distributions of GHG’s and temperature. This is the task of radiative transfer calculations. The result will depend on the global average atmospheric absorbed LW, or on the surface transmitted (St, “Atmospheric Window”) radiation. According to the (monochromatic) Beer law, the global average frequency-integrated tau is a given (logarithmic) function of St/Su.

    As we all want to have exact numbers for the greenhouse effect, we must calculate precisely the global average infrared absorption, or the “window”. Having this, one can establish a theoretically sound G(tau), or, if you like, a G(St) function.

    That’s why approximate flux estimations are not acceptable, and that’s why I ask the radiative experts here to present their most accepted numbers for LW absorption, window and downward radiation.

    When we agree on the actual fluxes, we can step forward to the possible effects of future composition changes.
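
    (As a quick illustration of the quantities being discussed in this comment, taking the quoted fluxes at face value, a short Python sketch:)

    sigma = 5.67e-8
    Su, OLR_all, OLR_clear = 396.0, 239.0, 264.0        # W/m^2, values quoted above
    for OLR in (OLR_all, OLR_clear):
        G = Su - OLR                                    # Ramanathan's greenhouse function, W/m^2
        g = G / Su                                      # normalised greenhouse function
        dT = (Su/sigma)**0.25 - (OLR/sigma)**0.25       # equivalent greenhouse temperature difference, K
        print(round(G), round(g, 2), round(dT))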

  35. A. Lacis wrote: “The line-by-line codes require significant computer resources to operate, but they are the ones that give the most precise radiative transfer results.”

    This sounds very reassuring. A few questions are, however, in order. From what I learned, if the atmosphere were hypothetically isothermal, what would be the “radiative forcing” from CO2 doubling? Zero, right?

    Now, if the atmosphere kept the same lapse rate for all 45 km up, what would be the radiative forcing from 2xCO2? Probably a lot; M. Allen says 80-98 W/m2 :-) Or 50 W/m2.

    Further, if certain strongly-absorbing band emits from stratosphere, wouldn’t the forcing be negative?

    What we see is that the forcing from CO2 doubling can be anything from negative to about 50 W/m2, which fundamentally depends on the shape of the vertical temperature profile. Therefore, the whole reassuring precision of RT codes comes down to how accurately we know (or represent) the “global temperature profile”. So, my question is: how accurately do you know it?

    Another question: is the resulting OLR linear with respect to T(z)?

    There are more questions…
    Cheers,
    – Al Tekhasski

  36. A. Lacis wrote: “… reproduce the well established 4 W/m2 radiative forcing for doubled CO2”

    That’s another good question. Do you mean it is well established by Myhre et al. (1998), where “three vertical profiles (a tropical profile and northern and southern hemisphere extratropical profiles) can represent global calculations sufficiently”?

  37. I gave up on Miskolczi when I saw him using the factor of 2 for the ratio of average potential energy to average kinetic energy of the molecules of a planetary atmosphere. That’s valid for gravitationally bound bodies that collide very infrequently. For air molecules that assumption is so far from true as to be a joke. The mean free path of air molecules is around 70 microns near the Earth’s surface, and with that value the ratio is not 2 but 0.4.

    Miskolczi is simply pulling the wool over people’s eyes by writing impressive-sounding rubbish.

    • And the mean free path at the altitude of the mid stratosphere is around 1 m or so? So Miskolczi’s average is calculated over what altitude range?

      You accuse Ferenc Miskolczi of deliberately trying to deceive us by “pulling the wool over people’s eyes”. That’s a very serious charge to level against a theoretical physicist. I hope you have good strong evidence, or you are going to look petty and vindictive in the eyes of many.

      • And the mean free path at the altitude of the mid stratosphere is around 1 m or so? So Miskolczi’s average is calculated over what altitude range?

        There are fewer joules in the mid stratosphere than in Queen Elizabeth’s crown.

    • David L. Hagen

      Vaughan Pratt
      On the virial theorem, see the references below for Toth, Pacheco and Essenhigh.

      Your problem is with classical thermodynamics. Search scholar.google.com for “virial theorem”.

      When you use ad hominem diatribe, you’ve lost credibility as a scientist. Please address the scientific issues here, not partisan politics or yellow journalism.

  38. You accuse Ferenc Miskolczi of deliberately trying to deceive us by “pulling the wool over people’s eyes”. That’s a very serious charge to level against a theoretical physicist. I hope you have good strong evidence, or you are going to look petty and vindictive in the eyes of many.

    If you think the ratio is 2 then you’re the one looking stupid.

    If I complained about your use of F = 3ma for Newton’s second law would you call me vindictive? You’re out to lunch, guy.

  39. I don’t know, which is why you see the question marks in my comment. This blog is a wonderful opportunity for nonviolent interaction between scientists on both sides of the debate. Why not ask Miklos Zagoni, who is intimately acquainted with Miskolczi’s work, if he can help explain the apparent problem, rather than jumping in with both feet making unwarranted accusations?

  40. Why not ask Miklos Zagoni, who is intimately acquainted with Miskolczi’s work, if he can help explain the apparent problem, rather than jumping in with both feet making unwarranted accusations?

    What’s your basis for “unwarranted?” I watched Zagoni’s video a couple of months ago. He was simply parroting Miskolczi. You seem to be doing the same. Stop parroting and start thinking. This is junk physics.

  41. Oh dear Vaughan.
    I’m not “parroting” anyone. As I pointed out above, I ask questions and think about replies. I recommended you ask Miklos how Ferenc arrived at the ratio of energy you had an issue with. But instead, you seem content to make unsupported assertions about the quality of his work. You fear the consequences of his theory, so you attack details without exploring how the whole fits together.

    Ah well. Be happy with whatever you believe.

    • I recommended you ask Miklos how Ferenc arrived at the ratio of energy you had an issue with. But instead, you seem content to make unsupported assertions about the quality of his work.

      There’s nothing “unsupported” about it; see e.g. this paper, which should confirm what I said (and there are even shorter proofs, applicable moreover to more general situations than those considered by Toth).

      Putting morons on pedestals like this only makes you a moron. Miskolczi and Zagoni are heroes only to climate deniers.

      • I’d be interested in a discussion of the virial theorem component to this. I’ve encountered another (unpublished) paper that addresses the virial theorem in the context of the earth’s atmosphere that I found intriguing. Don’t ask me to defend this (I’m not up on this at all), but I would be interested in a discussion on the relevance of the virial theorem.

      • David L. Hagen

        Judy
        On the virial theorem relative to Miskolczi & planetary greenhouse theories, see:
        Viktor T. Toth, The virial theorem and planetary atmospheres
        arXiv:1002.2980v2 [physics.gen-ph] 6 Mar 2010
        http://arxiv.org/PS_cache/arxiv/pdf/1002/1002.2980v2.pdf

        He derives the atmospheric virial theorem for diatomic molecules in a homogeneous gravitational field, valid for varying temperature, where the ratio of potential energy U to kinetic energy K equals the gas constant R divided by the product of the specific heat cv and the molar mass Mn:
        U/K = R/(cv*Mn) (his Equation 34).

        Hence we were able to demonstrate without having to invoke concepts such as “hard core” potentials or intramolecular forces that the virial theorem is indeed applicable to the case of an atmosphere in hydrostatic equilibrium. However, it must be “handled with care”: the nature of the atmosphere and the fact that the horizontal (translational) and internal (rotational) degrees of freedom of the gas molecules are unrelated to the gravitational potential cannot be ignored.

        See also:
        Pacheco, A. F. and Sanudo, J., The virial theorem and the atmosphere, Geophysics and Space Physics, ISSN 1124-1896, 2003, vol. 26, no. 3, pp. 311-316.

        In our atmosphere, most of the energy resides as internal energy, U, and gravitational energy, P, and the proportionality U/P = cv/R = 5/2 is maintained in an air column provided there is hydrostatic equilibrium. In this paper we show that this result is a consequence of the virial theorem.

        The most detailed extension of the virial theorem to a full thermodynamic model for a planetary atmosphere column with gas absorption that I know of is by:

        Robert H. Essenhigh, Prediction of the Standard Atmosphere Profiles of Temperature, Pressure, and Density with Height for the Lower Atmosphere by Solution of the (S−S) Integral Equations of Transfer and Evaluation of the Potential for Profile Perturbation by Combustion Emissions, Energy Fuels, 2006, 20 (3), 1057-1067 • DOI: 10.1021/ef050276y
        http://pubs.acs.org/doi/abs/10.1021/ef050276y

        These results provide a platform for future numerical determination of the influence on the T, P, and F profiles of perturbations in the gas concentrations of the two primary species, carbon dioxide and water, and it provides, specifically, the analytical basis needed for future analysis of the impact potential from increases in atmospheric carbon dioxide concentration, because of fossil fuel combustion, in relation to climate change.

        Essenhigh addresses the water and CO2 absorption bands as well.

        Miskolczi (2008) applied the classical virial theorem:

        Applying the virial theorem to the radiative balance equation we present a coherent picture of the planetary greenhouse effect. . . .
        (g) — The atmosphere is a gravitationally bounded system and constrained by the virial theorem: the total kinetic energy of the system must be half of the total gravitational potential energy.

        (Part of the difficulty some readers have with Miskolczi 2008 is his use of astronomical language etc. from other applications of the virial theorem. His statement “the radiation pressure of the thermalized photons is the real cause of the greenhouse effect” got bloggers off onto the force of photons and satellites, rather than recognizing Miskolczi’s effort to relate atmospheric pressure, by application of the virial theorem together with the absorption of solar radiation, to surface temperature. We need to check whether the virial coefficient differs slightly with the gas among Miskolczi, Toth, Pacheco, and Essenhigh.)

      • David L. Hagen

        Viktor Toth’s paper has been published:
        The virial theorem and planetary atmospheres
        Időjárás – Quarterly Journal of the Hungarian Meteorological Service (HMS), Vol. 114, No. 3, pp. 229-234
        http://www.met.hu/download.php?id=2&vol=114&no=3&a=6

      • David L. Hagen

        For virial connoisseurs see:
        Lambert M. Surhone, Miriam T. Timpledon, Susan F. Marseken, Virial Theorem,
        206 pages, Betascript Publishing (August 4, 2010) ISBN-10: 6131111472; ISBN-13: 978-6131111471

        In mechanics, the virial theorem provides a general equation relating the time average of the total kinetic energy of a bound system to that of its total potential energy. The significance of the virial theorem is that it allows the average total kinetic energy to be calculated even for very complicated systems that defy an exact solution, such as those considered in statistical mechanics; this average total kinetic energy is related to the temperature of the system by the equipartition theorem. However, the virial theorem does not depend on the notion of temperature and holds even for systems that are not in thermal equilibrium. The virial theorem has been generalized in various ways, most notably to a tensor form.

        It would help if some reader could check this out and review what it has to say on the application to a planetary atmosphere with diatomic and polyatomic gases, per the issues with Toth, Essenhigh & Miskolczi.

        For history buffs:
        Henry T Eddy, An extension of the theorem of the virial and its application to the kinetic theory of gases, 1883

        See a common lecture on the Virial theorem
        http://burro.cwru.edu/Academics/Astr221/LifeCycle/jeans.html

      • David L. Hagen

        For the astronomic applications of the virial theorem, see:
        James Lequeux, The Interstellar Medium
        ISBN 3-540-21326-0 Springer Berlin Heidelberg NewYork
        http://astronomy.nju.edu.cn/~ygchen/courses/ISM/Lequeux2003.pdf

        14.1.1 A Simple Form of the Virial Theorem
        with No Magnetic Field nor External Pressure p 330 – p 323
        Equations (14.1) to (14.11)

        14.1.3 The General Form of the Virial Theorem
        (This includes bulk velocity, density, pressure and gravitational potential as pertinent to a planetary atmosphere, -as well as magnetic field which may not be significant for planets.)

        14.1.4 The Stability of the Virial Equilibrium
        (Note the use of a polytropic equation of state.)

        14.1.5 The Density Distribution in a Spherical Cloud
        at Equilibrium
        (Adapt for a gas around a planet with a given radius and mass.)

      • @curryja: I’d be interested in a discussion of the virial theorem component to this. I’ve encountered another (unpublished) paper that addresses the virial theorem in the context of the earth’s atmosphere that I found intriguing. Don’t ask me to defend this (I’m not up on this at all), but I would be interested in a discussion on the relevance of the virial theorem.

        Would it be interesting enough to start up a separate thread on the virial theorem on your blog? Although I’ve been reluctant to be a guest on any other topics, that’s because the ratio of my time required for a guest post, divided by the expertise of others on that topic, has not been large enough so far.

        In the case of the virial theorem, from what I’ve read so far in the literature my impression is that no one alive on the planet really understands it. A guest post in which I pretend to explain it might be the most effective way of pulling the real experts on the virial theorem out of the woodwork, if there are any. Viktor Toth could do this ok, but I imagine I could do it at least as well.

        I would be delighted to be let off that hook if Viktor volunteered for that duty (I love nothing more than being let off hooks).

      • YES!!! Please send me an email; let’s start a thread on the virial theorem. I will send you an email also.

      • David L. Hagen

        Judy, Nick Stokes, & Vaughan Pratt
        Regarding the virial coefficient for the atmosphere
        (2 vs 3/2 vs 5/2 for Kinetic Energy/Potential Energy) See the following stating another coefficient of 6/5 for hydrogen as a diatomic gas:
        “The virial theorem, which applies to a self-gravitating gas sphere in hydrostatic equilibrium, relates the thermal energy of a planet (or star) to its gravitational energy as follows:
        alpha * Ei + Eg = 0
        with alpha = 2 for a monoatomic ideal gas or a fully non-relativistic degenerate gas, and alpha = 6/5 for an ideal diatomic gas. Contributions arising from interactions between particles yield corrections to the ideal EOS (see Guillot 2005). The case of alpha = 6/5 applies to the molecular hydrogen outer region of a giant planet.”
        Jonathan J. Fortney et al., Giant Planet Interior Structure and Thermal Evolution, invited chapter, in press for the Arizona Space Science Series book “Exoplanets”, Ed. S. Seager
        arXiv:0911.3154v1
        http://arxiv.org/PS_cache/arxiv/pdf/0911/0911.3154v1.pdf

        Ref Guillot 2005 Annual Review of Earth and Planetary Sciences, 33, 493

        How did they calculate their 6/5 for a diatomic gas? What steps and assumptions did they use?

      • David,
        The first thing to note there is that the factor 2 carries the opposite sign. That is significant, because the gravitational energy referred to is the energy relative to infinity, not ground.

        The factor 6/5 arises through the same logic as in Toth’s paper. Monatomic gases have just translational KE with 3 degrees of freedom. Diatomic gases have two extra dof of rotational KE, making 5. The ratio of PE to translational KE is still -2, but with equipartitioning, there is thus 5/3 times as much KE in total, and the ratio is -2 * 3/5= -6/5.
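
        A sketch of the bookkeeping in the comment above for the self-gravitating case: |PE| equals twice the translational kinetic energy (3 degrees of freedom), so the magnitude of alpha shrinks as more degrees of freedom share the equipartitioned kinetic energy. This only illustrates the argument as stated here, not Fortney et al.’s own derivation:

```python
# |PE| = 2 * translational KE (3 dof); total KE scales with the number of
# equipartitioned degrees of freedom, so |alpha| = 2 * 3 / dof.
def alpha_magnitude(dof):
    return 2.0 * 3.0 / dof

print(alpha_magnitude(3))  # monatomic ideal gas: 2
print(alpha_magnitude(5))  # diatomic ideal gas:  1.2, i.e. the 6/5 of Fortney et al.
```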

      • Ferenc Miskolczi

        Poor Pratt, it seems you do not know much about the virial concept. For the global average TIGR2 atmosphere the P and K totals (sums over about 2000 layers) are P = 75.4*10^7 and K = 37.6*10^7 J/m2; the ratio is close to two (2.00). You may compute it yourself if you are able to, or you may ask Viktor Toth about the outcome of our extended discussions on the topic (after his paper).

      • Welcome Ferenc – I very much look forward to Mr Pratt engaging you directly on the substance of his remarks.

      • Um, I’ve just re-read this. Vaughan, I apologise if I used the wrong term to address you.

      • Welcome Ferenc – I very much look forward to Mr Pratt engaging you directly on the substance of his remarks.

        Um, I’ve just re-read this. Vaughan, I apologise if I used the wrong term to address you.

        Not a problem, at Harvard they would call me “Mr Pratt,” at MIT and Stanford “Vaughan.” So your first address would be fine for Harvard, while on your second you’ve inadvertently used the correct form of address for the only two institutions I’ve taught at for more than a decade each.

        However I recently took a CPR course as part of the autonomous vehicle project I’m the Stanford faculty member on (Stanford has liability worries about the car lifts and heavy machinery we use), so feel free to call me “Dr Pratt” in case you need the Heimlich maneuver or cardiopulmonary resuscitation. (Both my parents were medical doctors. If you’re lucky that’s hereditary, if not then the Good Samaritan law kicks in to render me innocent of your premature demise, so either way I’m safe even if you aren’t.)

      • Ferenc, welcome to the blog. Please could you answer a question for me?
        Do you think that the convergence of the results on your stable value for Tau confirms the validity of the empirical radiosonde data? If your theory is correct, would it enable you to correct or assign error bars to the empirical data?

        Thanks

      • Ferenc, thank you very much for stopping by to discuss your work.

      • My goodness Judith, you are attracting some top people here.

        Long may it continue.

      • seems you do not know much about the virial concept.

        Clearly one of us doesn’t, since we get such wildly different results.

        For the global average TIGR2 atmosphere the P and K totals (sums over about 2000 layers) are P = 75.4*10^7 and K = 37.6*10^7 J/m2; the ratio is close to two (2.00)

        Neglecting mountains (which increases K very slightly by postulating atmosphere in place of the mountains) your value for P is easily computed from the center of mass of a column of atmosphere, which on any planet is at the scale height of that planet’s atmosphere, suitably averaged as a function of temperature. If the centroid is at height h then P for that column is mgh where m = 10130 kg, g = 9.81, and h starts at 8500 at sea level and drops to 7000 or less at the troposphere depending on the temperature. A good average figure for h is around 7600, and so P = mgh = 10130*9.81*7600 = 755.3 megajoules. This is essentially what you got so we’re in excellent agreement there.

        Now if Toth and I are getting 0.4 to your 2 then your figure for K must be about 20% of what we’d imagine it to be. Now the specific heat of air at constant pressure is 0.718 kJ/K/kg. Our column has mass m = 10130 kg/m2 as noted above so we have .718*10130 = 7.27 MJ/K/m2. In order to reach your figure of 376 MJ/m2 you would need the temperature of the atmosphere (suitably averaged) to be 376/7.27 = 51.7 K (°K, not Kinetic energy of course).

        I don’t know how you calculated the KE of the Earth’s atmosphere, but at that temperature every constituent of it would be solid. Check your math. I would be more comfortable (literally!) with a KE of 1.885 GJ/m^2 corresponding to a typical atmospheric temperature of 250 K and then you’d get the 0.4 that Toth and I got.
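
        For readers who want to reproduce the arithmetic in this comment, here is a short Python sketch; the column mass, centroid height, heat capacity figure and 250 K mean temperature are the round values used above, not independently derived numbers:

```python
# Back-of-envelope column energetics, using the figures quoted in the comment.
m = 10130.0   # mass of a 1 m2 column of air, kg
g = 9.81      # gravitational acceleration, m/s2
h = 7600.0    # assumed mean height of the column's centre of mass, m
c = 718.0     # heat capacity figure used above, J/(kg K)
T = 250.0     # representative mean atmospheric temperature, K

P = m * g * h   # potential energy of the column, ~7.55e8 J/m2
K = m * c * T   # thermal ("kinetic") energy of the column, ~1.8e9 J/m2
print(P, K, P / K)   # the ratio comes out near 0.4, not 2
```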

        The virial theorem’s ratio of 2 is exact for any collision-free gravitationally bound system of point particles. When collisions are frequent, as in the atmosphere where the typical mean free path at sea level is 70 microns, or when the particles are large, as with a satellite orbiting Earth, the ratio changes significantly. With frequent collisions, air molecules quickly lose track of whatever orbit they were briefly on, and their dynamics is completely different from that of a solitary air molecule orbiting an otherwise airless planet. And with a big particle like Earth one must define potential energy with respect to the Earth’s surface, otherwise particles in a ground energy state have absurd PEs; but in that case the KE of a satellite in orbit is a great many times its PE (imagine it in an orbit 1 meter above the surface of an airless spherical planet).

        You now have me wondering what you think the virial theorem means.

      • David L. Hagen

        Vaughan Pratt
        Kindly do us all a favor by reading the actual papers detailing the virial theorem by Toth etc. in the references cited.
        Please apply the Virial Theorem as derived by Toth.

      • No, I think you should read Toth’s paper. Vaughan’s arithmetic agrees with it, and not with Ferenc. The point of Toth’s paper is that the ratio PE/KE, where KE includes rotational energy, is 0.4, not 2 as Ferenc claims. If you restricted to translational KE, the ratio would be 2/3 (still not 2).

        The paper you quote by Pacheco gives the same ratio as Toth. I’m surprised that you quote these results without noticing that they contradict FM’s claim.

        You might also like to note Toth’s tactful aside
        “whether or not the postulate was correctly applied in [1] is a question beyond the scope of the present paper”.
        Indeed the biggest mystery is what the PE/KE ratio has to do with IR fluxes. This has never been explained.

      • Indeed the biggest mystery is what the PE/KE ratio has to do with IR fluxes. This has never been explained.

        I’d formed the impression that FM was trying to gradually back away from that mystery. His problem is how to do so in a no-fault way. He’s not handling this very well on this blog.

      • Stop ascribing motive and insinuating unscientific behaviour!

        We’ve seen more than enough of it over the last 20 years. Pack it in!

      • Stop ascribing motive and insinuating unscientific behaviour!

        I was ascribing motive to FM? All I said was that he was trying to back away from his claimed applicability of the virial theorem, which even Zagoni has not been able to apply.

        We’ve seen more than enough of it over the last 20 years. Pack it in!

        What happened 20 years ago? I can only think of the George C. Marshall Institute starting up. Did you have something else in mind?

      • David L. Hagen

        Nick Stokes & Vaughan Pratt
        Thanks Nick for clarifying the issue. Mea culpa. I was reacting to language, not checking the substance.

        In my post above giving references in response to Curry’s query on the virial theorem, I noted:
        “Need to check if there is a small difference in the virial coefficient versus the gas between Miskolczi, Toth, Pacheco, and Essenhigh.”

        Thanks for raising the issue of the PE/KE coefficient: “The point of Toth’s paper is that the ratio PE/KE, where KE includes rotational energy, is 0.4, not 2 as Ferenc claims.”

        Re: “biggest mystery is what the PE/KE ratio has to do with IR fluxes. ”
        I assume that may affect the atmospheric profile of pressure, temperature, density and composition. See Essenhigh above where he shows differences between relative pressure and relative density with elevation.

        A full analysis to <0.01% variation would need to account for variations of heat capacity with temperature, with corresponding variations in composition, pressure, temperature, and gravity with elevation.

      • David L. Hagen

        Nick Stokes & Vaughan Pratt
        See my note above regarding a coefficient of 6/5 stated for diatomic hydrogen. http://judithcurry.com/2010/12/05/confidence-in-radiative-transfer-models/#comment-20326

      • Now the specific heat of air at constant pressure is 0.718 kJ/K/kg.

        Sorry, I meant constant volume (the number I gave is correct). Constant pressure measures just kinetic energy, constant volume measures kinetic and potential energy. This is because work is done in the constant pressure case which puts the potential energy to work, but not in the constant volume case which leaves it in the system, like a wound-up spring. In the case of the atmosphere the work done at constant pressure is used to raise the air above, which then becomes potential energy mgh again.

      • Christopher Game

        Christopher Game posting about the virial theorem. It seems I am a bit late to post the following, but here it is.

        We are here interested in Miskolczi’s ratio between the time averages of potential energy and of kinetic energy, namely
        2 × average(kinetic energy) = average(potential energy).

        The principle of equipartition of energy can be stated
        The average kinetic energy to be associated with each degree of freedom for a system in thermodynamic equilibrium is kT / 2 per molecule.

        The virial theorem of Clausius (1870, “On a mechanical theorem applicable to heat”, translated in Phil. Mag. Series 4, volume 40: 122-127) states on page 124 that
        The mean vis viva of the system is equal to its virial.

        The virial theorem is about a spatially bounded system, that is to say, a system for which all particles of matter will stay forever within some specified finite region of space.

        Clausius allows a diversity of definitions of kinetic energy. For him, it was allowable to define a kinetic energy for any specified set of degrees of freedom of the system. We are here acutely aware that various writers use the permitted diversity of definitions of kinetic energy, and get a diversity of statements as a result.

        The virial theorem of Clausius makes no mention of potential energy. Potential energy is about forces. Under certain conditions, the virial of Clausius turns out to be very simply related to a potential energy.

        Clausius (1870) makes it clear that the terms of his proof may refer to all or to selected degrees of freedom of the system, as defined by Cartesian coordinates.

        Because of its generality, the virial theorem can relate to a theoretical atmosphere of fixed constitution sitting on the surface of a planet, and how this is so is indicated in the original paper of Clausius (1870).

        Remembering to be careful about appropriately specifying the degree of freedom of interest, and the potential energy of interest, the reader of this blog will find that Miskolczi’s formula for the atmosphere,
        2 × average(kinetic energy) = average(potential energy),
        is correctly derivable from the virial theorem of Clausius. Much of the physical content of the formula can be seen in a simple model, to be found in various books, papers, and on the internet, as follows.

        An elastic ball of mass m is dropped from rest at an altitude h, with g being constant over the altitude up to h (near enough). It will bounce on a perfectly elastic floor at altitude 0 at time T.
        The ball’s velocity (positive upwards) at time t in [0,T] is −gt.
        Its kinetic energy at time t is m g^2 t^2 / 2.
        Its average kinetic energy over the time [0,T], the time it takes to fall from altitude h to altitude 0, is
        average(KE) = Integral(0,T) (m g^2 t^2 / 2) dt / T = m g^2 T^2 / 6.
        We recall that T = sqrt(2h / g).
        Thus the average kinetic energy over time [0,T] is
        average(KE) = m g^2 (2h / g) / 6 = m g h / 3.
        Referred to altitude zero for the zero of potential energy,
        the ball’s potential energy at time t is m g h − m g^2 t^2 / 2.
        Its average potential energy over time [0,T] is
        average(PE) = m g h − Integral(0,T) (m g^2 t^2 / 2) dt / T = 2 m g h / 3.
        Thus we have 2 × average(KE) = average(PE).
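
        A quick numerical check of the bouncing-ball example above (a sketch only; m, g and h are arbitrary illustrative values):

```python
import numpy as np

# Time-average KE and PE over one fall from rest at height h.
m, g, h = 1.0, 9.81, 10.0
T = np.sqrt(2.0 * h / g)
t = np.linspace(0.0, T, 100001)

ke = 0.5 * m * (g * t) ** 2      # kinetic energy at time t
pe = m * g * h - ke              # potential energy, zero at the floor

print(np.trapz(ke, t) / T)       # ~ m*g*h/3  (about 32.7 J here)
print(np.trapz(pe, t) / T)       # ~ 2*m*g*h/3, i.e. twice the average KE
```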

      • Christopher Game

        Christopher Game about the virial theorem again. Typographical problem. Please excuse.

        I didn’t use the angle brackets safely, and the formula was lost.
        I meant to write

        2 average( kinetic energy) = average(potential energy)

        I think it will be obvious how to put in the proper terms.

        Christopher Game

      • Christopher,
        Toth, in the paper cited frequently here, deals with exactly this example. But as he says, in a real gas, there are three degrees of translational motion, and the KE is equipartitioned. Your argument still works for the vertical component, but now the total KE is three times larger, and the ratio is 2:3, not 2:1.

        He goes on to argue that for diatomic molecules, rotational KE should be counted, so the ratio is 2:5.

      • Christopher Game

        Christopher Game replying to Nick Stokes’ post of December 8, 2010 at 7:34 am, about the virial theorem.

        Dear Nick,

        Thank you for your comment.

        With all due respect, I am referring to the virial theorem of Clausius.

        Clausius is quite clear, and one can check his proof, that for the purposes of his definition of vis viva, one can work with one degree of freedom at a time, in Cartesian coordinates. What we are here calling ‘the potential energy’ is dealt with by Clausius in terms of components of force, and that is to be matched against the degree(s) of freedom chosen for the vis viva. I think the proof used by Clausius is appropriate for the present problem. Clausius does not require the use of the total kinetic energy.

        For his version of “the virial theorem”, Viktor Toth cites, not Clausius, but Landau and Lifshitz. They give only a very brief account of the theorem and their proof is less general than the one that Clausius offered.

        As I noted in my previous post, Clausius allows diverse choices for the definition of the vis viva, appropriate to the problem. And various choices of definition lead to various results. I think the choice made by Miskolczi, though different from the one you are considering in your comment, is appropriate for the present problem. Your choice might perhaps be relevant to a different problem.

        Here we are interested only in the gravitational degree of freedom, and the appropriate component of vis viva has also just one degree of freedom. As allowed by Clausius, we are not interested in the other degrees of freedom. The vertically falling bouncing ball really does tell the main physics here.

        I think you will agree with me about this when you check the method used by Clausius.

        Yours sincerely,

        Christopher Game

      • Christopher,

        Before starting to discuss which form is appropriate “for this problem”, somebody should first give a reason why some form of the virial theorem is appropriate at all. As far as I can see nobody has ever presented any reason for that, including Miskolczi.

      • David L. Hagen

        To evaluate the radiation fluxes, we need to know the atmospheric profile of temperature, pressure and composition. The virial theorem gives a basis for modeling temperature and pressure vs elevation with gravity. Anyone have a better explanation?

        No, but I don’t like that one. You’d have to explain how “the virial theorem gives a basis for modeling temperature and pressure vs elevation with gravity”. I can’t see it. And as for “the atmospheric profile of temperature, pressure and composition”, that’s what this radiosonde database is supposed to tell you.

        But the 2007 FM paper just plucks numbers out of the virial theory and puts them into IR flux equations. There’s nothing about the results being mediated through atmospheric variables. But no other explanation either.

      • David is correct about the relevance of the virial theorem. Here’s the explanation he asked for.

        In the search for Trenberth’s missing heat, one wants to know how much of the total energy absorbed by the planet goes into heating the ocean, the land, and the atmosphere.

        For the atmosphere, if you assume that all the heating energy is converted into kinetic energy of the molecules, which are moving at some 450 m/sec, and calculate this energy from the elevation in temperature of the atmosphere, you will get an answer that is only 5/7 of the correct answer. This is because you neglected the increase in potential energy resulting from raising the center of mass of the atmosphere when it expands due to the warming.

        Since this is a significant error, one might wonder why everyone neglects this increase in potential energy. The answer is that they don’t, it is simply hidden in the difference between the specific heat capacity of air at constant volume, vs. that at constant pressure. The first line of this table gives the former as 20.7643 joules per mole per degree K, and the latter as 29.07. Notice that 29.07*5/7 = 20.7643 in the table. (It is most likely that the latter was simply computed in this way from the former; one would hardly expect agreement with theory to six decimal places from observation, particularly since the former is given only to four places, and the composition of air is more variable than that of any particular constituent such as nitrogen or argon.)

        Heating any region of the atmosphere, whether a cubic meter or a cubic kilometer, is done at constant pressure because the pressure is determined by the mass of air above the region, which is unchanged by heating, whether or not the heating is applied to the whole atmosphere or just the region in question. Hence the applicable heat capacity is the one for constant pressure.

        But this choice automatically factors in the elevation in altitude of the air, since a gas heated at constant pressure must increase its volume. The work done in moving the boundary against the equal and opposite pressure outside the heated region is in this case converted to the potential energy of the raised atmosphere.

        So a virial theorem is at work here, namely the one in Toth’s paper that gives the ratio of PE to KE as 2/5. But this is ordinarily buried in the distinction between constant pressure and constant volume, and so passes unnoticed. Toth makes this connection at the end of his paper.
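
        A two-line check of the heat-capacity bookkeeping described above, using the molar values quoted in this comment:

```python
# The "hidden" virial ratio in the two molar heat capacities of air quoted above.
cp = 29.07     # at constant pressure, J/(mol K)
cv = 20.7643   # at constant volume,   J/(mol K)

print(cp * 5.0 / 7.0)     # ~20.7643: cv is 5/7 of cp for a diatomic gas
print((cp - cv) / cv)     # ~0.4: the PE-to-KE ratio of Toth's paper
```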

        But even if one were to neglect the potential energy and only account for the increase in kinetic energy of the atmosphere, the heat capacity of the atmosphere is equivalent to only 3 meters of ocean, whence the error from omitting PE is tiny and cannot possibly make a dent in the amount of missing heat.

        I know of no other relevance of Toth’s virial theorem to the climate. In particular I cannot imagine any role for it in Miskolczi’s claimed effect that CO2-induced warming is offset by water-vapor-induced cooling.

      • Christopher Game

        Dear Pekka Pirilä,
        I am addressing the problem of calculating the ratio of a kinetic to potential energy.
        Yours sincerely, Christopher

      • Hi Christopher,

        It has been a while. It seems to me that, according to some of these analyses (Toth’s included), a 5 kg parcel moving along a certain trajectory at a given velocity will have a different kinetic energy than a cannon ball of the same mass, trajectory and temperature.

        Of course it’s harder to convince oneself that the kinetic energy in lattice vibrations of the cannon ball is in any way relevant to the kinetic energy that we are interested in.

      • @CG: Of course it’s harder to convince oneself that the kinetic energy in lattice vibrations of the cannon ball is in any way relevant to the kinetic energy that we are interested in.

        The joules coming from the Sun wind up in the cannon ball. However the “cannon ball” is really an N-atom molecule. Each atom has 3 DOF (degrees of freedom), whence the molecule in principle has 3N DOF. However quantum mechanics forbids certain DOFs as having too low energy, so for example the diatomic O2, with 6 DOFs in principle, only has 5 DOFs below around 500 K.

        The relevance of the non-translational DOFs is that n watts of heat from the Sun distributes itself equally between all non-forbidden DOFs, whence the specific heat capacity of any given gas allows it to absorb more watts than one would expect from the translational DOFs alone.

        In particular if it has 2 non-translational or bound DOFs then 5 watts from the Sun will distribute itself 1 watt to each of those 5 DOFs.

        If the applicable virial theorem promises p joules of PE to every 1 joule of KE then you need to supply 1+p joules of energy to the system in order to raise its time-averaged kinetic energy by 1 joule. In the case of the atmosphere Toth has shown p = 0.4 (which I noticed independently of Toth but several months later so there is no question as to his priority).

        The significance of this for global warming is that if the atmosphere gains K joules of kinetic energy when its temperature is raised 1 °C, 1.4K joules must have been supplied to achieve that effect since potential energy must consume the other 0.4K joules, namely by raising the average altitude of the atmosphere as a consequence of expanding due to the temperature rise.

      • Christopher Game

        Christopher Game replying to Vaughan Pratt’s post of December 9, 2010 at 3:31 am.
        Dear Vaughan Pratt,
        Thank you for your comment. I was referring specifically to the virial theorem of Clausius (1870). It is a rather general theorem, and can be used for many purposes. It makes no mention of potential energy, and uses a particular definition of vis viva for a specified degree of freedom, not quite the same as the modern definition of total kinetic energy. You are interested in its “significance for global warming”. I was referring to its use for the gravitational degree of freedom as allowed by Clausius.
        Yours sincerely,
        Christopher Game

      • I understand that the virial theorem relates one component of kinetic energy to the same component of potential energy. Thus one can e.g. relate the height (or density profile) of the atmosphere to its temperature profile.

        I fail, however, to see any such connection between radiation and the other variables that would justify the equations that Miskolczi has presented. For me those equations are just some random formulae without any given justification. (Years ago I was doing research in theoretical physics and I have also taught some aspects of thermodynamics. Thus a good justification should be understandable to me.)

      • Christopher Game

        Christopher Game replying to the post of Pekka Pirilä of December 9, 2010 at 2:41 pm.
        Dear Pekka Pirilä,
        Thank you for your comment.
        Again I say that it would be a pity if the observation of empirical fact were ignored for lack of an accepted theoretical explanation.
        Yours sincerely, Christopher Game

      • Christopher Game

        Christopher Game replying to Pekka Pirilä’s post of December 9, 2010 at 8:29am.
        Dear Pekka Pirilä,
        Thank you for your comment. I am glad it seems that we agree that one may consider one degree of freedom at a time. We are not alone in this. Chandrasekhar (1939) in “An Introduction to the Study of Stellar Structure” on page 51 writes as his equation (92)

        2 T + Ω = 0 ,

        where T denotes the kinetic energy of the particles and Ω denotes their potential energy. He is referring the potential energy to a zero at infinite diffusion, and is using Newton’s law of gravity, and this accounts for the sign difference of the ratio. He writes: “Equation (92) expresses what is generally called the ‘virial theorem’ “.

        This is as far as my post went.

        You and Nick Stokes, and I think others, also raise the further question of the connection between this and Miskolczi’s empirical observation that Su = 2 Eu. This observation was made in figure 20 of Miskolczi and Mlynczak 2004. The fit between
        the curve y(tG) = σ tG^4 / 2 = Su(tG) / 2, and the data
        points plotted as y(tG) = Eu(tG) is not too bad, as you may see by looking at the figure. The data points are each from a single radiosonde ascent, from 228 sites scattered around the earth. Perhaps a little surprisingly, Miskolczi was not the first to notice a relationship like this. Balfour Stewart’s 1866 steel shell atmosphere gave the same result. It is simple to explain Balfour Stewart’s result, but not so simple to explain Miskolczi’s observation. I think that Miskolczi noticed the likeness of Chandrasekhar’s theoretical result (92) to the phenomenological formula Su = 2 Eu that was empirically indicated by the radiosonde data that he had analyzed.
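
        For orientation, a small Python sketch of the Su/2 curve being described (the temperatures are arbitrary illustrative values; sigma is the Stefan-Boltzmann constant):

```python
sigma = 5.67e-8   # Stefan-Boltzmann constant, W/(m2 K4)

def Su(tG):
    """Surface emission for ground temperature tG in kelvin."""
    return sigma * tG ** 4

# Illustrative temperatures only; the last column is the Su/2 curve against
# which the radiosonde-derived Eu points are plotted in the cited figure.
for tG in (280.0, 288.0, 300.0):
    print(tG, round(Su(tG), 1), round(Su(tG) / 2.0, 1))
```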

        One may distinguish between enquiry as to factual question of the goodness of fit between the data and Miskolczi’s empirical phenomenological formula, and as to the theoretical question of its physical explanation. It is not clear to me how far people distinguish these two questions. I have not seen an explicit attempt, based on other data, or another analysis of the same dataset, to challenge the empirical fact of goodness of fit. Perhaps you can enlighten me on that point?

        The earth has only about half cloud cover, so Balfour Stewart’s theoretical explanation will hardly work for the earth. Perhaps you know of others who have done better? Perhaps your expertise in theoretical physics will enable you to do better? It would be a pity to see the observation of empirical fact ignored for lack of an accepted theoretical explanation.

        Yours sincerely, Christopher Game

        There exist all kinds of empirical regularities. Some of them are precise and some approximate. For me it is incomprehensible that somebody picks one formula from a completely different physical context and proposes, without good justification, that it provides the reason for the observation. All the present discussion of the virial theorem has been related to kinetic energy and potential energy in a gravitational field. It is for me completely obvious that these theories have absolutely no bearing on the behaviour of electromagnetic radiation or radiative energy transfer. These are obviously very different physical processes that follow their own physical laws. The fact that both have their connections to the thermal energy of the atmosphere does not change this.

        Until somebody presents me a valid reason to reconsider the issue, my judgment is that all these formulae of Miskolczi are complete nonsense and lack all physical basis. The fact that Miskolczi’s theory has been used in deriving results that contradict well known physics does not help.

      • Andy Lacis, Pekka Pirilä and I all view Miskolczi’s paper along the lines of Pekka’s succinct summary:

        all these formulae of Miskolczi are complete nonsense and lack all physical basis. The fact that Miskolczi’s theory has been used in deriving results that contradict well known physics does not help.

        But we’re obviously biased by virtue of being fans of the global warming team. Judging by those commenting here on Judith’s somewhat nicely organized website, the other team has at least as many fans.

        Now when attending a football game or ice hockey match or cricket tournament, it has never occurred to me to question the enthusiasm of the two teams’ respective supporters in terms of their understanding of the principles of the sport in question. The only thing that matters is how strongly they feel about their adoptive teams.

        By the same reasoning I see no need to do so here. We should judge the merits of the respective sides on the basis of their enthusiasm, not on whether they have even half a clue about the science, which is nothing more than a feeble attempt at social climbing by those with Asperger’s syndrome.

        Henceforth anyone bringing up scientific rationales for anything should be judged as simply not getting it.

        I may do so myself, which will make it clear I don’t get it.

      • David L. Hagen

        Vaughan Pratt
        “Putting morons on pedestals like this only makes you a moron.”
        Please desist from ad hominem attacks and raise your performance to professional scientific conduct. You demean both yourself and science by such diatribe. Address the substance; don’t attack the messenger. Otherwise you are not welcome, for you demean the discussion and waste our time.

      • Happy to oblige. (I was trying to adapt the adage “arguing with idiots makes you an idiot” by replacing “arguing with” with “put on pedestal” but misremembered the epithet. Dern crickets.)

        In less inflammatory language, my point in case you missed it was that whoever you put on a pedestal says something about you regardless of their ability. If they’re capable then you deserve some credit for recognizing and acknowledging that. If they’re not then you’ve shown yourself to be a poor judge of ability.

        I’ll leave it to others to judge whether my point constitutes an ad hominem attack (Baa Humbug seemed to think so). If Miskolczi has indeed come up with an impossibly low kinetic energy for the atmosphere, by a factor of 5 as I claim, my point would then seem to apply, and I would then be disinclined to take seriously anyone who takes Miskolczi seriously. One could predict remarkable properties of any atmosphere with that little energy. If that counts as an ad hominem attack then that’s what I’ve just gone and done.

      • Vaughan,
        As I said to Fred Moolten, I keep an open mind on all the competing theories, because they all have their assumptions, uncertainties, lacunae and insights. Also, because I’m not an expert in the field, I’m cautious about throwing in my lot with anyone’s particular theory, including my own.

        My higher academic qualification and training is in assessing these theories by looking at their conceptual basis, methodology, data gathering practice, logical consistency and a number of other factors. That is why I’m interested in delving into the work of Ferenc Miskolczi, along with a specific interest in trying to find out if the radiosonde data might be better than previously thought. It could be, for example, that something valuable can come from Ferenc’s work, even if he turns out to be wrong in some specific detail.

      • My higher academic qualification and training is in assessing these theories by looking at their conceptual basis, methodology, data gathering practice, logical consistency and a number of other factors

        Oh, well then we’re on level ground here since that’s my area too. Physics is just something I used to do many decades ago before I found these other things more interesting.

        It could be for example, that something valuable can come from ferenc’s work, even if he turns out to be wrong in some specific detail.

        Lots of luck with that. Usually this only happens when the worker can keep the details straight. As Flaubert said long ago, “Le bon Dieu est dans le détail” (the good Lord is in the detail).

      • True, but nonetheless, even if a fatal flaw is found which falsifies a theory, and the jury is still hearing evidence in Miskolczi’s case as far as I can tell, it is always possible that a new technique or method used in some supporting aspect of a paper can contribute something which may be valuable elsewhere.

        Which is why I wasn’t impressed by your;
        “I gave up once I spotted what I thought was an error”

        By the way, my previous vocation was engineering in the fluid dynamics field, so I’m sure we’ll be able to argue well into the future. ;)

      • Which is why I wasn’t impressed by your;
        “I gave up once I spotted what I thought was an error”

        I wasn’t either, which is why I returned to Miskolczi’s paper to see whether that was the only problem and whether there might be something more fundamentally wrong with his conclusion, that more CO2 is offset by less water vapor, whence global warming couldn’t happen on account of rising CO2. The problem with this line of reasoning is that if there is also less flow of water vapor into the atmosphere, that reduces the 80 W/m² of evaporative cooling, raising the temperature. Although Miskolczi mentions this, he does nothing to show the flow is not reduced. That seems to me a more substantive flaw in the paper than quibbling over potential energy.

        Of Flaubert’s œuvre, I’m afraid I missed an enormous part. Moreover, my skills in the field of radiative physics are so low that I’m ashamed to send a public letter here (not to mention Judith’s demand for technical relevance…). Yet I’m quite sure Dr. Vaughan Pratt made at least one basic mistake here, for we Frenchies say:

        “Le diable est dans les détails” (the devil is in the details).

        Laissons donc le bon dieu à sa place (so let’s leave the good Lord where he belongs). ;)

      • The quote is only attributed to him. (Mises said it for sure.)

        What Flaubert is sure to have said (to George Sand) is that

        > Est-ce que le bon Dieu l’a jamais dite, son opinion ?

        which means, roughly, what Sam just said.

        Interestingly, Flaubert said that to justify his realism, i.e. his urge to describe without judging.

        May we all have Flaubert’s wisdom.

      • For all I know the French may have gotten this from the English, who in a moment of devout atheism may have paraphrased “God is in the details,” which came from von Mises, who may have gotten the idea from Flaubert.

        Or it may have been “in the air” at the time, which is not unheard of.

        What goes around comes back with someone else’s lipstick on its collar.

      • Vaughan,
        Do you really mean the Austrian economist von Mises or the German born architect Mies van der Rohe mentioned in your link?

      • Interesting sequence there: willard writes “Mises” and without thinking my subconscious puts “von” in front instead of correcting it to “Mies” and putting “van” after. In my original post I was going to attribute it to Ludwig Mies (Mies van der Rohe) but then it occurred to me to ask the Oracle at Google whether it was original with him, and I learned for the first time about Flaubert’s alleged priority.

        Sam’s four-word conclusion, “probably we’ll never know” follows the old style of giving the proof before enunciating that which is to be proved. The new style, theorem before proof, saves the bother of reading the proof when you feel confident you could easily prove it yourself should the need ever arise, and more succinctly to boot.

        The question of the number of gods seems uppermost on many people’s minds. From what I’ve read it appears to be generally believed to be a very small perfect square, but more than that seems hard to say. In future I’ll punt on the question and say “Sam is in the details,” of which there can be no doubt.

      • Sam is in the details

        I love that one! Thanks a lot.
        Sounds like a welcome. :) Isn’t it a nice compliment to get from a scientist? By the way, I just had a quick look at your CV; really impressed. Particularly the logical stuff. Wow.

        Now, if you please, the conclusion was mainly meant as humour, despite my wish to keep coherent with cool logic here… You’re right, though: apart from that gentle story about an imaginary quotation, the object partly remains to be made explicit. That’s why you can consider the four words a foreword.

      • Dr. Pratt, you wrote:

        “God is in the details,” which came from von Mises, who may have gotten the idea from Flaubert

        Seems a remarkable piece of historical research… Rarely have I heard of such rigorous work.

        For a start, talking about details, I would say that, in your formula, both commas are superfluous.

        Secondly, which von Mises are you talking about? Richard Edler von Mises (1883 – 1953), scientist (fluid mechanics, aerodynamics, statistics)? Ludwig von Mises (1881 – 1973), economist? Another one? Anyway, it’s a shame I couldn’t find any von Mises who happened to write down “God is in the details”, “the Devil is in the details” or anything of the kind, nor even one reported to have said so.
        Besides, the very source you quoted only tells us about a certain Ludwig Mies van der Rohe (1886–1969). Is it worth noting that the difference also makes a pair of details?…

        As for Flaubert’s words, you too have noticed that nobody has ever been able to find the sentence in his writings. The hell… (or that version supposed to get inspiration from Heaven).

        Moreover, had Flaubert ever pronounced the mysterious formula, it seems to me highly doubtful that he’d have chosen the word “God” instead of “the Devil”. As for Flaubert’s use of the “good God” term in a similar sentence, that would have been even more amazing, knowing what his views were. Flaubert’s opinions regarding religions, “God” and “the Devil” were expressed at length in his writings, and have been widely commented on since. You’ll quickly find a lot of delectable ones on the web.
        In the first place, I thought better of disturbing this scientific debate much longer with that – we all know it could last forever (“Les voies du seigneur sont impénétrables” – the Lord’s ways are inscrutable… aren’t they?).
        But next I thought it was probably worth drawing at least a coarse snapshot here.

        I’d say the very idea of a “good God” was simply absurd to Flaubert. And so was that of a God being up to no good.
        Any possible God would have no imperfection Himself, the simple fact of being swayed by desire being one, of course.
        Nor is God playing with men. See what Flaubert wrote when he was 17 (my own translations, very sorry): “One often spoke about Providence and celestial kindness; I hardly see any reason to believe in it. A God who would have fun tempting men to see how far they can suffer, wouldn’t He be as cruelly stupid as a child who, knowing that the cockchafer will die some day, tears off its wings first, then the legs, then the head?”

        So you can be sure Flaubert’s thought is merely ironical, even at the first degree, whenever he used the “good God” term (like in the sentence Willard quoted, which I indeed think is the most relevant, “tout bien pesé” – all things considered). In other words: in Flaubert’s mind, “good God” precisely indicates… a non-existent god (“This word which one calls God, what a highly inane and cruelly buffoonish thing!” – 1838).
        It could be one of those many idols that the detested religions intend to serve with one of those hateful dogmas: “I’ll soon have finished with my readings on magnetism, philosophy and religion. What a heap of silly things! […] Such pride in any dogma!” (1879); “Priests: Castration recommended. Sleep with their housemaid and have children with them, whom they call their nephews. Now, some of them are good, all the same.” (Dictionary of Received Ideas). See also the many scenes of horror prevailing in Salammbô (crucifixion of men and animals, terrible diseases, carnages, and in particular the atrocious one Flaubert pleasantly called “the grill of the kids”, where the Carthaginians throw their small children into the burning belly of Moloch)…

        As for whatever might deserve the name of God in Flaubert’s mind, it would precisely be invisible in the details… to men.
        The following is a larger quotation (for precision’s sake) than the one usually made of a famous passage (correspondence with George Sand): “As for my ‘lacks of conviction’, alas! the convictions choke me. I burst with angers and sunken indignations. But in the ideal view I have of Art, I believe that the artist should show nothing of his person and no more appear in his work than God does in nature.”
        Anyway, wouldn’t it seem amazing that God was said to be in the details, if not in a context putting God in the whole in the first place? I’d add: one needn’t be Flaubert to avoid such a strange position.

        There remains the Devil (who will of course finish the story with a great laugh…). And of him, contrary to God, I’m quite sure Flaubert would have expected to be in the details. Yet we’re still waiting for the evidence… so here we could finish by saying: probably we’ll never know. ;)

  42. Can I interest anyone in a knock-down argument that global warming is happening?

    The reason we can’t see it happening is that there are many contributors to global temperature besides CO2. Obviously there are the seasons on their highly predictable 12-month basis, but there are also the much less predictable El Nino events and episodes sporadically spaced anywhere from 2 to 7 years apart. Then there are the solar cycles on more like a 10-year period, also quite variable though not as much as El Nino.

    There are furthermore the completely unpredictable major volcanoes of the past couple of centuries, a couple of dozen at least, each of which tends to depress the temperature by up to 0.4 °C for 2 to 10 years depending on its VEI number.

    Last but not least, there is the Atlantic Multidecadal Oscillation or AMO. This appears to be running on a relatively regular period of around 65 years, and can be seen centuries back by looking for example at tree rings.

    Now anthropogenic CO2 supposedly began in earnest back in the 18th century. Today we’re putting around 30 Gt of CO2 (roughly 8 GtC, gigatonnes of carbon) into the atmosphere each year, around 40% of which remains there, with nature sucking back the remaining 60%, a fraction that has not changed in over a century. This amount has been increasing in proportion to the population, which as Malthus pointed out a while back is growing exponentially.

    But so is our technology. Hence even if it takes 60-80 years to double our population, it takes something like half that to double our CO2 output, or around 30-40 years.

    Ok, so let’s look at all these time constants. The 65 years for the AM oscillation dwarfs everything except the CO2 growth, which has been going on for centuries.

    Let’s now digress into mathematics land for a bit. If you smooth a sine wave of period p with a moving average of exactly width p, you flatten it absolutely perfectly (try it). If it’s not exact then traces remain.

    Furthermore all frequency components of period less than p/4 or so also vanish almost entirely, whatever their exact period.
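
    To make the smoothing claim concrete, here is a minimal numerical sketch (my own, in Python; it is not part of the original argument): a boxcar average whose width equals the sine’s period flattens it to rounding error, while a width that is off by a few years leaves a visible residue.

        # Minimal check: a moving average of width equal to a sine's period
        # annihilates it; a mismatched width leaves traces.
        import numpy as np

        def boxcar_smooth(x, width):
            """Centered moving average over `width` samples."""
            return np.convolve(x, np.ones(width) / width, mode="valid")

        t = np.arange(12000)                       # time in months (1000 years)
        period = 785                               # ~65-year AMO period, in months
        wave = np.sin(2 * np.pi * t / period)

        exact = boxcar_smooth(wave, period)        # window width equals the period
        mismatch = boxcar_smooth(wave, period - 60)

        print(np.abs(exact).max())     # essentially zero (rounding error only)
        print(np.abs(mismatch).max())  # roughly 0.08: traces remain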

    So: here’s how to look at the climate. Smooth it with a moving average as close to the period of the AMO as you can get. This will kill the AMO as argued above. And it will also kill all those other contributors to variations in global temperature, none of which have a time constant more than a quarter of the AMO. (Solar cycles are about 1/6, ENSO events and episodes even less, and of course the seasonal variations are 1/65 of that.)

    This would be heavy lifting were it not for UK programmer Paul Clark’s wonderful website Wood for Trees. When you click on this link, the 785-month period of the AMO will have been filled in for you, and you can see for yourself what happens to the global climate when smoothed in a way calculated to kill not only the AMO but everything else below it, by the above reasoning. (The phrase “killAMO” is all you need to remember for this URL, which can be reached as http://tinyurl.com/killAMO .)

    Now I claim that this curve very closely follows Arrhenius’s logarithmic dependency of Earth’s surface temperature on CO2 level, under the assumption that nature is contributing a fixed 280 ppmv and we’re adding an amount that was 1 ppmv back in 1790 and has been doubling every 32.5 years. (These numbers are not mine but those of Hofmann et al. in a recent paper; if that is behind a paywall for you, you can more easily access Hofmann’s earlier poster-session presentation.)

    All that remains is to specify the climate sensitivity, and I’m willing to go out on a limb and say that for the instantaneous observed flavor of climate sensitivity (out of the very wide range of possibilities here), it is 1.95 °C per doubling of CO2. (The IPCC has other definitions involving waiting 20 years in the case of transient climate response, or infinity years in the case of equilibrium climate sensitivity, etc. Here we’re not even waiting one nanosecond. The IPCC definitions get you closer to the magic number of 3 degrees per doubling of CO2 depending on which definition you go for.)
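
    For readers who want to reproduce the curve being described, here is a short sketch of the calculation as I read it from this comment (the function names are mine; the constants 280 ppmv, 1 ppmv in 1790, a 32.5-year doubling time and 1.95 °C per doubling are the ones stated above):

        # Arrhenius-Hofmann temperature curve as described in this comment.
        from math import log2

        def hofmann_co2(year):
            """Total CO2 in ppmv: a fixed natural 280 plus an anthropogenic
            term of 1 ppmv in 1790 doubling every 32.5 years."""
            return 280.0 + 2.0 ** ((year - 1790.0) / 32.5)

        def arrhenius_hofmann_temp(year, sensitivity=1.95):
            """Warming in degrees C relative to the 280 ppmv baseline,
            using Arrhenius's logarithmic law."""
            return sensitivity * log2(hofmann_co2(year) / 280.0)

        for y in (1900, 1950, 2000, 2010, 2100):
            print(y, round(hofmann_co2(y), 1), round(arrhenius_hofmann_temp(y), 2))

    Plotting arrhenius_hofmann_temp over 1850-2010 and overlaying it on the 785-month-smoothed HADCRUT anomalies (with the additive offset chosen to line the anomalies up) reproduces the comparison being described.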

    Unfortunately Paul Clark’s website is not flexible enough to let you enter a formula, which is a pity because my own website shows that theory and practice are incredibly close. This is a far more accurate fit than one is accustomed to seeing in climate science, but this is how science makes progress.

    The key to this knockdown argument is the regularity of the 65-year-period AMO cycle and the much shorter time constants of all other relevant factors. Any doubt about this can be dispelled by using some other number than 785 for the WoodForTrees smoothing at http://tinyurl.com/killamo.

    • From IPCC AR4

      “For example, in a model comparison study of six climate models of intermediate complexity, Brovkin et al. (2006) concluded that land-use changes contributed to a decrease in global mean annual temperature in the range of 0.13–0.25°C, mainly during the 19th century and the first half of the 20th century, which is in line with conclusions from other studies, such as Matthews et al. (2003). ”

      Deforestation in tropical zones, the most common form of deforestation recently, does not have a cooling effect but rather a warming effect. https://e-reports-ext.llnl.gov/pdf/324200.pdf

      There is also urbanization to take into account.

      Altogether, it is quite possible that by correcting early land use changes and later land use changes you can change the difference in temperature from the 19th century to the present by up to 0.3 °C. Chronology matters.

      If the proper adjustments have been made for the known land use changes, this argument would be invalid. I am not aware of a study that would indicate, from the proportion of upward versus downward adjustments, that they have been properly accounted for.

      I understand this is off topic and so leave it to the moderator if she would like to eliminate the comment or not.

      • I should clarify that this is only true regarding hypotheses involving a very long time period to reach equilibrium. The equilibrium temperature would be above the current temperature so you affect the rates of warming much more than the actual temperature achieved.

    • Vaughan, I assume that ‘lb’ is a typo for ‘ln’?

      At first glance the fit looks impressive but you have to be careful about fitting.
      The smoothed Hadcrut curve is only changing slowly so in terms of its Taylor expansion it can be described by a quadratic term (it is easy to check that it can be fitted just as well by a quadratic as by your function) so it has 3 ‘degrees of freedom’. One of these you have fixed with your choice of 1.95, which as you admit comes from a wide range of sensitivity values. Also you have fitted the additive constant in your graph to get the curves to match up (fair enough, it is a plot of anomalies). So of the three parameters in the smoothed data, you have chosen two of them to fit, which makes the fit not quite so impressive.

      • I assume that ‘lb’ is a typo for ‘ln’?

        Two lines before the formula I wrote “using binary rather than natural logs, lb rather than ln”. This has been a standard abbreviation for log base 2 for, I think, at least a couple of decades. If I used ln I would then have to multiply the result by ln(2) to convert to units of degrees per doubling instead of degrees per multiplication by e, which is what ln gives. The latter is of course more natural mathematically but it’s not what people use when talking about climate sensitivity.

        The smoothed Hadcrut curve is only changing slowly so in terms of its Taylor expansion it can be described by a quadratic term (it is easy to check that it can be fitted just as well by a quadratic as by your function) so it has 3 ‘degrees of freedom’

        Ah, thanks for that, I realize now that I should have drawn more attention to what happens with even more aggressive smoothing. If you replace 65-year smoothing with 80-year smoothing you get a curve that would require a fourth-degree polynomial to get as good a fit as I got with 65 years.

        So of the three parameters in the smoothed data, you have chosen two of them to fit, which makes the fit not quite so impressive.

        Not true. First, I had no control over the choice of curve, which composes the Arrhenius logarithmic dependency of temperature on CO2 with the Hofmann dependency of CO2 on year, call this the Arrhenius-Hofmann law. (My syntax for Hofmann’s function slightly simplifies how Hofmann wrote it, but semantically, i.e. numerically, it is the same function.)

        If I’d had the freedom to pick a polynomial or a Bessel function or something then your point about having 3 degrees of freedom would be a good one. However both these functions are in the literature for exactly this purpose, namely modeling CO2 and temperature. I did not choose them because they gave a good fit, I chose them because theory says that’s what these dependencies should be.

        Since we agree about anomalies the one remaining parameter I had was the multiplicative constant corresponding to “instantaneous observed climate sensitivity” which can be expected to be on the low side compared to either equilibrium climate sensitivity or transient climate response as defined by the IPCC.

        Now I had previously determined that 1.8 gave the best fit of the Arrhenius-Hofmann law to the unsmoothed HADCRUT data, with of course a horrible fit because the latter fluctuates so crazily, but it is the best fit and so I’ve been going with it.

        65-year smoothing completely obliterates all the other contributors to climate change (though 80-year smoothing puts back a chunk of the AMO as I said earlier, though nothing else), but it also transfers some of the high second derivative on the right of the temperature curve over to the left, which the slight increase from 1.8 to 1.95 was meant to compensate for.

        So I really didn’t have any free parameters, other than the exact amount needed to compensate for the smoothing that artificially makes the left of the curve seem to increase faster than it actually does.

        If I’d had not only two free parameters but also the freedom to pick any type of curve I wanted, such as a polynomial, then as you say it wouldn’t be so impressive. However that would miss the further point that with 80-year smoothing you don’t get anywhere near as good a match to the Arrhenius-Hofmann law. That there exists any size window yielding a log-of-raised-exponential curve can be seen to be something of a surprise when you consider that neither 80-year nor 50-year smoothing does so.

    • I stupidly wrote: major volcanoes of the past couple of centuries, a couple of dozen at least, each of which tends to depress the temperature by up to 0.4 °C

      Another darned cricket behind the chair keeping me awake all night, causing me to slip a decimal point. Should have been 0.04 °C. (Krakatoa and Mt Pelee seemed to be closer to 0.06 but plenty of volcanoes can easily cool the climate by one or two hundredths of a degree, easily observable in the HADCRUT temperature record for most significant volcanoes after subtracting the AMO, global warming, solar cycles, and everything shorter.)

    • randomengineer

      Hi Professor Pratt, I’m putting on my skeptic hat for this, mainly due to being unconvinced that +2C is necessarily a *bad* thing. Please be kind.

      Your graph would seem to show that GHE works as theorized. What it doesn’t show is the human footprint.

      If you were to correlate your graph with human population and/or acreage under plow or some other *reliable* historical data then it could or might or maybe show anthropogenic cause. The idea that there’s a smooth upswing when human technology runs in fits and starts is interesting, especially when the count of anthros at the left side of the graph is N and exponentially higher at the other. How exactly DOES one impute anthropogenic cause again?

      Moreover, your graph doesn’t say much about technology, which is always the bogeyman. Data on coal fires? Trains? Anything? One could just as easily claim human population exhalation and every other guy has a fire and the numbers would look the same. To prove that this is a clean anthropogenic signal, wouldn’t we need to see the corresponding tech and outputs in GTonne, etc.?

      Would you mind please explaining how the human footprint part works?

      Thanks

      • Would you mind please explaining how the human footprint part works?

        That’s in Hofmann’s papers (the singly-authored poster session version and the journal version with two coauthors). Hofmann explains that his formula for CO2, which replaces the quadratics and cubics that NOAA had previously used, was based on considerations of exponentially exploding population deploying exponentially improving technology. His doubling period of 32.5 years is a reasonable match to a doubling period of 65 years for each of population growth and technology growth.
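
        A one-line check of that composition (my arithmetic, not a quotation from Hofmann): if population and per-capita emissions technology each double every 65 years, their product doubles every 32.5 years, since

            2^(t/65) * 2^(t/65) = 2^(2t/65) = 2^(t/32.5).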

        We are currently adding 30 gigatonnes of CO2 to the atmosphere each year, of which nature is removing some 17-18 gigatonnes, kind of a leaky-bucket effect. The rest stays up there, which we know because we have monitoring stations like the one at Mauna Loa that keep tabs on the CO2 level. All these numbers are known fairly accurately.

        The logarithmic dependence of temperature on CO2 has been known since Arrhenius first derived it in 1896.

        One could just as easily claim human population exhalation

        Good point. Around 1700 the human population had reached 1/10 of what it is today, a level at which human exhalation was breaking even with volcanic CO2. Human exhalation is now an order of magnitude more than volcanic CO2. However that’s less than ten percent of all human CO2 production from power stations, vehicles, etc.

        mainly due to being unconvinced that +2C is necessarily a *bad* thing.

        I’m not objecting to the rise, merely interested in improving our ability to predict it. I think +2C would be a neat experiment, the last time temperatures were that high was many millions of years ago. If it does really bad things to the environment we (that is, our great-grandchildren) might need to get really creative really fast, but meanwhile let’s do the experiment. Also drop ocean pH to remove Antarctic krill and copepods, who ordered them?

        Capitalism and communism were neat experiments, today we know capitalism works better than communism. This was somewhat less obvious in the 19th century. At least Russian-style communism; it doesn’t look like China-style communism has been disproven yet, but then is it really communism?

      • Posted by TRC Curtin on another thread:

        “What is missing from all statements on growth rates of CO2 emissions (by the IEA et al. and the IPCC) is any statistical analysis of the uptakes of those large gross increases in CO2 emissions (other than the misleading and error strewn section in AR4 WG1 Ch.7:516). These provide a large positive social externality by fuelling the concomitant increases in world food production over the period since 1960 when the growth in CO2 emissions began to rise.
        Thus although gross emissions have grown by over 3% pa since 2000, the growth of the atmospheric concentration of CO2 has been only 0.29% p.a. (1959 to 2009) and still only 0.29% p.a. between October 1958 and October 2010 (these growth rates are absent from WG1 Ch.7). The growth rates of [CO2] from October 1990 to now and from October 2000 to now are 0.2961 and 0.2966 respectively, not a very terrifying increase when one has to go to the 4th decimal point to find it, but to acknowledge this would not have got one an air fare to Cancun.
        These are hardly runaway growth rates, and fall well (by a factor of 3) below the IPCC projections of 1% p.a. for this century. The fortunate truth is that higher growth of emissions is matched almost exactly by higher growth of uptakes of [CO2] emissions, because of the benevolent effect of the rising partial pressure of [CO2] on biospheric uptakes thereof (never mentioned by AR4).
        You will of course NEVER see these LN growth rates of [CO2] in any IPCC report or in any other work by climate scientists.”

      • These are hardly runaway growth rates, and fall well (by a factor of 3) below the IPCC projections of 1% p.a. for this century. The fortunate truth is that higher growth of emissions is matched almost exactly by higher growth of uptakes of [CO2] emissions, because of the benevolent effect of the rising partial pressure of [CO2] on biospheric uptakes thereof (never mentioned by AR4).

        Actually nature was taking back 60% a century ago and is still doing so today. The remaining 40% continues to accumulate in the atmosphere as it has been doing all along, easily confirmed by the Keeling curve data, which shows an exponentially growing rise superimposed on the natural level of 280 ppmv. 60% is not “almost exactly” in my book, and the estimate of 40% remaining in the atmosphere is well borne out by the data.

      • randomengineer

        Thanks, Professor. I have a few comments to make, after which I may have a followup question.

        Hofmann explains that his formula for CO2, which replaces the quadratics and cubics that NOAA had previously used, was based on considerations of exponentially exploding population deploying exponentially improving technology.

        This is the interesting part, which is essentially an equation based on correlation and an assumption, not necessarily something determined via historical data, i.e. hard data evidence, etc. It would seem to also have some presumptions about how long CO2 hangs in the atmosphere.

        During the early 1900s there really wasn’t much in the way of carbon emitting tech in the world. The Occident was using trains and burning coal, sure. In the Orient though… not so much. Man didn’t really have widespread worldwide adoption of major carbon tech (read: enough cars to even cause a blip) though until nearly 1930, and this was still mostly a western world phenomenon. (And I’d argue that you’d at least need 1950’s levels of cars to be able to have any effect at all, only due to the numbers.)

        And yet temps rose, as did presumably CO2. Now what’s interesting here is that this rise was already occurring BEFORE the adoption of automobiles as we see today, and so on. If you look at this particular hockey stick —

        http://en.wikipedia.org/wiki/File:Extrapolated_world_population_history.png

        It shows what I’m referring to. There’s a correlation of human population to temp rise to CO2. And yet… the technology we have to emit carbon didn’t really start ratcheting exponentially until after WWII. A tech explosion, if you will. There’s no “knee” that one would expect to see on the graphical representation of CO2 as the result of all of this rapid carbon release. It just keeps rising, nice and slow.

        What I get from looking at the big picture here is that the correlation of CO2 and temp pre-1930 doesn’t imply anthropogenic cause based solely on CO2 release given the lack of a knee when human CO2 emissions from (energy and vehicle) technology adoption really kicked in. It could be argued that pre-1930 anthropogenic influence includes land use and deforestation, i.e. changing the nature of CO2 sinks, but it doesn’t seem that correlating hypothetical formula derived CO2 emissions and temp records is worth much prior to the modern era. In fact the data by itself is ambiguous; you could just as easily use it to say that pre-modern era CO2 rise was reaction to temp rise.

        I’m very much interested in the notion of the A in AGW being solely tied to CO2 emission when clearly CO2 and temp was rising before CO2 emission was at a level that could be detectable. I find this doubly interesting given the data showing that MWP temps were close to that of today (or above, depending on whose work you trust.) Clearly there’s a lot that isn’t understood.

        Yes, professor, I know this is very un-PC for one who gets GHE and thinks that we’re adding to GHG’s. I get the physics part. And I’m aware that we’re running an open ended experiment re emission. Yes I agree that we need to e.g. adopt nuclear energy etc and decommission coal plants.

        So, my followup query is as follows: it seems that Hofmann’s formulae are incorrect and imply causation unsupported by fact — there’s no slow adoption of technology as implied. There ought to be a modern era CO2 knee *if* the A in AGW is based on the modern era explosion in emissions, and there’s no knee. Why?

      • randomengineer

        Quick followup note:

        Just to be ridiculously clear, the assertion that the climate was sensitive enough in 1890 to show temp rise following CO2 also says there was no sink of the extra CO2 (otherwise, why would it rise at all?)

        At the time of massive CO2 belching starting in the 1940s-’50s this same “sensitive” climate unable to sink CO2 in 1900 would still be unable to sink it. CO2 should have gone right through the roof, as would the temp.

        Didn’t happen. Tres confusing. The climate is sensitive or it is not. But there was no linear rise in human tech and emission of CO2.

      • Just to be ridiculously clear, the assertion that the climate was sensitive enough in 1890 to show temp rise following CO2 also says there was no sink of the extra CO2 (otherwise, why would it rise at all?)

        (Can’t remember if I replied to this.) Who’s asserting that? Although CO2 was raising the temperature in 1890, by an amount computable from Hofmann’s formula, it was not doing so discernibly because the swings due to natural causes such as the AMO, solar cycles, and volcanoes were much larger. Today CO2 has outpaced all these natural causes.

      • During the early 1900s there really wasn’t much in the way of carbon emitting tech in the world. The Occident was using trains and burning coal, sure. In the Orient though… not so much. Man didn’t really have widespread worldwide adoption of major carbon tech (read: enough cars to even cause a blip) though until nearly 1930, and this was still mostly a western world phenomenon. (And I’d argue that you’d at least need 1950′s levels of cars to be able to have any effect at all, only due to the numbers.)

        I fully agree. How much less CO2 did you have in mind for 1900? 10% of what it is today? That’s what Hofmann’s formula gives.
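
        For concreteness, here is the arithmetic behind that figure (my own, using the constants quoted earlier for Hofmann’s formula):

            2^((1900 - 1790)/32.5) ≈ 10 ppmv of anthropogenic CO2 in 1900, versus 2^((2010 - 1790)/32.5) ≈ 109 ppmv today,

        a ratio of roughly 10%.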

        If you think it was even less than 10% then you may be underestimating the impact of coal-burning steam locomotives, which were popular world-wide: the first locomotive in India was in 1853, in Brazil 1881 or earlier, Australia 1854, South Africa 1858, etc. etc.. Everyone used them: without automobiles, rail was king. The transcontinental railroad was the Internet of 1869, connecting the two coasts of the US and paying for the school I teach at. My wife’s book club is reading Michael Crichton’s The Great Train Robbery about the theft in 1855 of £12,000 worth of gold bars from a train operated by an English train company founded in 1836.

        You also didn’t mention electric power stations, which were largely coal powered in those days and were introduced by Edison and Johnson in 1882. By the early 20th century electricity had caught on big time around the world. Today electricity accounts for some 5 times the CO2 contributed by automobiles. In 1900 it was obviously much more than automobiles.

        And you didn’t mention the transition from sail to steam, which happened early in the 19th century. Ships today produce twice the CO2 of airplanes, and obviously far more than that ratio in 1920 and infinitely more in 1902.

        There is also human-exhaled CO2, which worldwide today is only about 60% of automobile-exhaled CO2 but in 1900 was obviously a far bigger percentage.

        And there is one cow for every five people, and cows belch a lot of methane, which has many times the global warming potential of CO2.

        Rice paddies produce even more methane, and predate even steam ships.

        So I don’t think 10% of today’s greenhouse-induced warming is an unreasonable figure for what humans were responsible for in 1900.

        And yet temps rose, as did presumably CO2. Now what’s interesting here is that this rise was already occurring BEFORE the adoption of automobiles as we see today, and so on. If you look at this particular hockey stick –

        What you’re seeing there is a correlate of the Atlantic Multidecadal Oscillation, which dwarfed global warming prior to mid-20th-century. In 1860 it was raising global temperature 8 times as fast as CO2. In 1892 CO2 had grown very slightly but the AMO meanwhile was on a downward swing that the CO2 reduced by only about 20%. In 1925 CO2 warming was at twice the rate it had been in 1860, which therefore added 25% to the AMO rise.

        Not until 1995 did CO2 raise the temperature at the same rate as the AMO. From now on CO2 is going to dominate the AMO swings assuming business as usual.

        It is interesting to consider what would happen if we continued to add 30 gigatonnes of CO2 a year to the atmosphere. The CO2-induced rise in temperature would slow down and eventually become stationary, with the CO2 stopping at perhaps 500 ppmv. That’s because the system would then be in equilibrium. Well before then the AMO would have regained its title as world champion multidecadal temperature changer.

        Fortunately for those of us interested in seeing the outcome of this very interesting experiment, this isn’t going to happen. With business as usual CO2 will continue to be added to the atmosphere at the same exponentially increasing rate as over the last two centuries, pushing it to 60 gigatonnes of CO2 a year by 2045. (In 1975 we were only adding around 15 gigatonnes of CO2 a year.)

      • Would add that by 1900, two of the largest fossil-fuel-based fortunes, Carnegie’s and Rockefeller’s, were solidly in place. Those homes that were not being illuminated with natural gas lighting (made from coal,) were being illuminated with Rockefeller’s kerosene. He had a booming global lighting business by the 1870s. As for Carnegie’s steel, how did he make it? So there was a pretty significant bloom in fossil-fuel CO2 before 1900.

      • According to Encyclopedia Britannica (link in Wikipedia) (based on 1911 data) the world coal production was very close to 1000 million tons in 1905. In 2009 the world coal production was 6940 million tons, oil production 3820 million tons, and natural gas production 2700 mtoe (million tons oil equivalent).

        In 1905 oil and gas were negligible compared to coal. Thus the annual CO2 emissions from fossil fuels were in 1905 8-9% of their present level.

      • In 1905 oil and gas were negligible compared to coal. Thus the annual CO2 emissions from fossil fuels were in 1905 8-9% of their present level.

        Add some for slash and burn, exhalation from humans and their livestock, methane from rice paddies which degrades to CO2, and that should get us reasonably close to Hofmann’s formula, which gives total anthropogenic CO2 as being 10.7% of its level today.

      • ” At least communism Russian-style, it doesn’t look like China-style communism has been disproven yet, but then is it really communism”

        Who controls and uses the guns?

    • Vaughan, firstly I am on the side of AGW and certainly have also long supported the idea of the log CO2 temperature rise. My only thought about this knock-down AMO argument is that you give the AMO too much credit. My own sense of things is that 1910-1940 rises faster than this log because of a solar increase at that time, and the 1950-1975 flattening is due to global dimming (aka aerosol haze expansion near industrial/urbanized areas due to increasing oil/fossil burning). I don’t think the AMO amplitude is much compared to these other factors that give the illusion of a 60-year cycle in the last century. The last 30 years is behaving free of these influences and is parallel to a growth given by 2.5 degrees per doubling.

      • Very much appreciate your feedback, Jim, as it will steer me towards what needs more emphasis or more clarification if and when I come to write up some of these ideas.

        My own sense of things is that 1910-1940 rises faster than this log because of a solar increase at that time

        Can you estimate the contribution of this increase to global warming over that period? That would be interesting to see. If it’s large enough I need to take it into account. It does seem to be sufficiently sustained that 12-year smoothing isn’t enough to erase it.

        and the 1950-1975 flattening is due to global dimming (aka aerosol haze expansion near industrial/urbanized areas due to increasing oil/fossil burning).

        This question of whether it’s aerosols or the AMO downswing would make a fascinating panel session. I would enthusiastically promote the latter. (But I’m always enthusiastic so one would have to apply the applicable discount.) I just recently bought J. Robert Mondt’s “The History and Technology of Emission Control Since the 1960s” to get better calibrated on this.

        I estimate that the RAF’s putting Dresden, Pforzheim, and Hamburg into the tropopause, plus those interminable air sorties by all sides during WW2, had the cooling power of three Krakatoas. One Krakatoa per German city perhaps.

        I don’t think the AMO amplitude is much compared to these other factors that give the illusion of a 60-year cycle in the last century.

        I don’t put much trust in estimates based on 30-year windows of the temperature record. I much prefer every parameter to be estimated from the full 160 year HADCRUT global history. The NH record goes back a couple of centuries further and it would be interesting to coordinate that with the 160 year global record for an even more robust estimate.

        Currently I estimate the AMO amplitude in 1925 at around 0.125 °C, meaning a range of 0.25 °C, and rolling off gradually on either side, being around 0.08 in 1870 and 1980. The r^2 for this fit is a compelling 0.02, rising to 0.024 if you don’t let it roll off, suggesting the roll-off is significant. Not only is the roll-off a better fit but it’s consistent with the disappearance of the AMO signal in the 17th century inferred from tree ring data.

        The last 30 years is behaving free of these influences

        That’s the CO2 speaking. ;)

        You have to remove the CO2 to see it because the CO2 is so steep by then.

        and is parallel to a growth given by 2.5 degrees per doubling.

        Using Hofmann’s doubling time of 32.5 years for anthropogenic CO2, from a base of 280, the current 390 ppm level should be over 1000 ppmv by 2100, which is lb(1000/390) ≈ 1.4 doublings above the current level. 2.5 × 1.4 is a rise of about 3.5 degrees over this century. Is that what you’re expecting, or do you expect less CO2 than that in 2100?

        I’m projecting +2C in 2100 but considerably more if this warming releases a significant amount of methane before then. I’m neither a pessimist nor an optimist when it comes to global warming, I’m just an uncertaintist.

        I’m not wedded to any of this, if my perspectives shift so may my estimates of these parameters.

        I’m not enthusiastic about introducing more parameters, but methane considerations will certainly force at least one more. Anyone here with a model that has a clue about likely methane emissions in 2030? (Just asking, I’d love it if there were.)

      • If you gents are interested in the solar contribution, you might consider the cumulative nature of the retention of solar energy in and dissipation of energy from the oceans (which controls atmospheric temperature in the main), and have a look at this post on my blog.

        http://tallbloke.wordpress.com/2010/07/21/nailing-the-solar-activity-global-temperature-divergence-lie/

      • have a look at this post on my blog.

        On your indicated blog, tallbloke, you write “what a load of rubbish.”

        Had I written that, global warming deniers would be all over it in a flash and Willard would be agonizing about how I’ll never live that down.

        Something should be done about the hypocrisy in this debate.

        At the very least you could have written “what a load of rubbish (pardon my French)” as an exculpatory measure.

      • > Something should be done about the hypocrisy in this debate.

        Speaking of which:

        > I wonder why tallbloke is commenting on this blog, after accusing me of dishonesty on his own.

        Source: http://scienceofdoom.com/2010/12/04/do-trenberth-and-kiehl-understand-the-first-law-of-thermodynamics-part-three-and-a-half-%E2%80%93-the-creation-of-energy/#comment-8015

      • Vaughan, I agree with your CO2 formula. Mine goes 280+90*exp[(year-2000)/48] which also gets to 1000 ppm at 2100. Using 2.5 C per doubling this gives 2100 warmer than 2000 by 3.5 C. Like I said, the gradient fits the last 30 years very well.
        Regarding solar effects in 1910-1940, I estimated this is +0.2 C, and for aerosols 1950-75 -0.4 C. This gives 20th century attribution as 0.7 C total = 0.9 C due to CO2 + 0.2 C due to solar – 0.4 due to aerosols.
        The solar estimate comes from the TSI estimate on climate4you, but has to assume a reasonable positive feedback to get from 0.2 W/m2 to 0.2 C, but somewhat similar to what is required to explain the Little Ice Age with the same TSI estimate.

      • Vaughan, I agree with your CO2 formula. Mine goes 280+90*exp[(year-2000)/48]

        Ah, excellent, thanks for that! (Actually it’s not mine, it’s David Hofmann’s, of NOAA ESRL in Boulder, who came up with it I think a couple of years ago.) Your formula is extremely close to his; here’s yours minus his at the quarter-century marks.

        Year   Yours − Hofmann (ppmv)
        1950     1.235
        1975     1.446
        2000     1.355
        2025     0.4444
        2050    -2.38
        2075    -9.35
        2100   -24.8   (Hofmann is 1027.65 then)

        Those differences are insignificant for predictive purposes, and moreover are a fine fit to past history. But as a purely academic question the differences disappear essentially completely when you change your 90 to 89 and 48 to 47.

        As it happens I do have a formula for CO2 of my own, namely 260 + exp((y − 1718.5)/60). I came up with this before seeing Hofmann’s and switching to his. Mine fits the Keeling curve more exactly, especially in the middle. Its derivative is also a better fit to the slope; in particular the derivative of your and Hofmann’s formulas at 2010 predicts a rise of 2.3 ppmv between July 2009 and July 2010 while mine predicts only 2.1 ppmv. The latter is much closer to what we actually observed.
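
        To let readers compare the three CO2 curves being discussed, here is a small sketch (my own code; the formulas and constants are transcribed from the comments above, and the rounded constants used here for the Hofmann curve may differ by a few ppmv from Hofmann's published fit):

            # Three CO2 curves from this thread, ppmv as a function of calendar year.
            from math import exp

            def co2_hofmann(y):    # 280 ppmv plus 1 ppmv in 1790 doubling every 32.5 yr
                return 280.0 + 2.0 ** ((y - 1790.0) / 32.5)

            def co2_jim_d(y):      # 280 + 90*exp((y - 2000)/48)
                return 280.0 + 90.0 * exp((y - 2000.0) / 48.0)

            def co2_pratt(y):      # 260 + exp((y - 1718.5)/60)
                return 260.0 + exp((y - 1718.5) / 60.0)

            for y in (1950, 2000, 2010, 2050, 2100):
                print(y, *(round(f(y), 1) for f in (co2_hofmann, co2_jim_d, co2_pratt)))

            # Annual growth rate at 2010 (central difference), ppmv per year:
            for f in (co2_hofmann, co2_jim_d, co2_pratt):
                print(f.__name__, round(f(2010.5) - f(2009.5), 2))

        The growth rates come out near 2.3 ppmv/yr for the first two curves and about 2.1 ppmv/yr for the 260-based one, and the 2100 values land close to 1020, 1000 and 840 ppmv respectively, in line with the figures quoted in this exchange.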

        I have no evidence for CO2 being 260 in July 1718 (the meaning of those numbers) other than the goodness of fit to the Keeling curve, which when extrapolated backwards as an exponential seems to asymptote more to 260 than 280. Absent any more compelling reason for 260 I figured I’d just switch to Hofmann’s formula, since I didn’t want to undermine my uses for the formula with questionable choices of parameters.

        Mine incidentally predicts only 840 ppmv for 2100, which I suppose makes me an AGW denier in the same sense that reducing the ocean’s pH from 8.1 to 8.0 makes it acidic.

      • I think 840 is close to the A2 scenario used by IPCC which is their most extreme one. Our others are pessimistic compared to this.

      • I think 840 is close to the A2 scenario used by IPCC which is their most extreme one. Our others are pessimistic compared to this.

        The IPCC is in an unenviable position. The slightest error brings a hail of protest. The science and politics of global warming live on opposite sides of a widening ice crevasse, while the IPCC stands awkwardly with one foot on each side. The scientists can afford to err on the side of pessimism, the politicians optimism.

        One wants to throw the IPCC a skyhook. They cope by hedging their bets. Don’t expect the IPCC to pick the scientifically most probable number when there’s a wide selection, they will prefer the most expedient for the circumstances.

        The only way they can make everyone happy is to make no one happy.

      • We can only hope that the running down of oil burning due to reducing resources is not compensated by an increase in coal, gas, shale oil, etc., fossil-fuel burning, otherwise we are headed for 1000 ppm by 2100.

    • Allow me to throw one small fly in the ointment. If we had data and could do a similar temp series as killAMO from 1780-1870 I suspect we would get a very similar slope. This is simply based on historical, geological, and archeological evidence that NH glaciers and polar ice were retreating faster during the 1800s (ref: John Muir) than they are today. This period was well before significant influence from anthropogenic CO2.

      Here is my question: Given similar warming trends, What caused the rapid warming of the 1800s? If not CO2, then what?

      • If we had data and could do a similar temp series as killAMO from 1780-1870 I suspect we would get a very similar slope. This is simply based on historical, geological, and archeological evidence that NH glaciers and polar ice were retreating faster during the 1800s (ref: John Muir) than they are today

        (killAMO is what to append to tinyurl.com/ to see the graph in question.)

        But the curve that the smoothed fits so well is not merely a slope, it bends up, and moreover in a way consistent with it having the form log(1+exp(t)) for suitable scales of time and temperature. This curve asymptotes to a straight line of slope 1, which in more familiar units corresponds a few centuries hence to a steady rise of 1 °C every 18 years (assuming business as usual meaning unlimited oil and continued population growth).
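
        To spell out the asymptote (my notation, not a quotation): with anthropogenic CO2 doubling every τ years on top of a fixed natural level c (in units of the 1790 increment), the curve is

            S * lb(1 + 2^(t/τ)/c)  →  S * (t/τ − lb c)   for large t,

        i.e. a straight line gaining S degrees every τ years. With S in the 1.8-1.95 range and τ = 32.5 years that works out to roughly 1 °C every 17-18 years, consistent with the “1 °C every 18 years” figure above.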

        The odds of the 60-year-smoothed record for 1780-1870 fitting log(1+exp(t)) that well are zip. If it did it would strongly imply an exponential mechanism of comparable magnitude to global warming, which would be extraordinary.

      • Nice dance around the question. A key question it is when attempting to determine if our current warming is “unprecedented” and if indeed atmospheric CO2 is the primary culprit. Allow me to restate for clarification:

        So if the observed NH ice meltdown was indeed faster in the 1800s than current NH ice meltdown both in terms of volume and extent, as indicated by historical record, why the rapid meltdown then? Do those same unknown climate forcings exist today? If not CO2, then what?

      • A key question it is when attempting to determine if our current warming is “unprecedented”

        By “unprecedented” I mean hotter. The 10-year-smoothed HADCRUT record shows no date prior to 1930 that was warmer than any date after 1930. Furthermore the temperature today is 0.65 °C hotter than in 1880, which was the hottest temperature reached at any time prior to 1930.

        So if the observed NH ice meltdown was indeed faster in the 1800s than current NH ice meltdown both in terms of volume and extent, as indicated by historical record, why the rapid meltdown then? Do those same unknown climate forcings exist today? If not CO2, then what?

        What are you talking about? The Northwest Passage has been famously impassable for 500 years, ever since Henry VII sent John Cabot in search of a way through it in 1497.

        Here’s some relevant reading. If you have equally reputable sources that contradict these stories as flatly as you claim I’d be very interested in seeing them.


        European Space Agency News 2007

        BBC News 2007

        Kathryn Westcott, BBC News, 2007

        Joshua Keating, Foreign Policy (need to register but the account is free)

      • David L. Hagen

        Vaughan Pratt
        1) Re: “By “unprecedented” I mean hotter.”
        Yet the caveat: “hotter than any prior temperature in the 10-year-smoothed HADCRUT record”
        See definitions for: unprecedented
        e.g. Webster’s 1913:

        Un*prec”e*dent*ed (?), a. Having no precedent or example; not preceded by a like case; not having the authority of prior example; novel; new; unexampled. — Un*prec”e*dent*ed*ly, adv.

        Macmillan

        “never having happened or existed before”

        Unprecedented does not have the same scientific meaning as your caveat.

        The global Medieval Warm Period would qualify as a precedent.
        See: The Medieval Warm Period – a global phenomenon, unprecedented warming, or unprecedented data manipulation?

        See also the Vikings settling in Greenland.
        “By the year 1300 more than 3,000 colonists lived on 300 farms scattered along the west coast of Greenland (Schaefer, 1997.)”

        2) Re:”The Northwest Passage has been famously impassable for 500 years, ever since Henry VII sent John Cabot in search of a way through it in 1497.”
        False!
        e.g., see articles on “Northwest passage” at WUWT
        It was sailed in 1853 by Robert McClure, the HMS Investigator

        Norwegian explorer Roald Amundsen traversed the NW passage between 1903 to 1906.
        etc.

        Regards

      • Unprecedented does not have the same scientific meaning as your caveat.

        Quite right, David, I freely admit that I was merely following the example set for us by the president two weeks ago. Mea culpa, henceforth I vow to faithfully serve the dictionary instead of the president. You have your priorities right.

        But with the scientific definition why should you and I have to quarrel over whether Easterbrook was telling the truth when we can go further back to a time where we can amicably agree over a beer that the present level is far from unprecedented?

        Rejoice, we are on our way back to the ice-free temperatures that obtained before the Azolla event 49 million years ago. During the 800,000 years of that event, CO2 plummeted from a tropical 3.5‰ (3500 ppmv) to a bone-chilling 0.65‰, a decline of 3.6 ppmv per millennium.

        To the best of our understanding of geology, that sustained rate of decline was unprecedented in your—sorry, the dictionary’s—scientific sense.

        Today CO2 is rising at a rate of 2100 ppmv per millennium. Comparing that with the scientifically unprecedented 3.6 ppmv per millennium of 49 million year ago, I call for scientific recognition of a new world record for planet Earth, of the fastest rate of climb of CO2 in the planet’s 4.5 billion year history.

        Ferns were the previous record holder. Humans have proved themselves superior to ferns. And it only took us 49 million years.

        My wife the botanist has been using the Internet to monitor the fern traffic. She suspects they’re plotting a rematch. If they can break our record she’s figured that any such whiplash event will break our collective necks.

      • During the 800,000 years of that event, CO2 plummeted from a tropical 3.5‰ (3500 ppmv) to a bone-chilling 0.65‰, a decline of 3.6 ppmv per millennium.

        Those numbers don’t look right I think you might have dropped a zero somewhere

      • Those numbers don’t look right I think you might have dropped a zero somewhere

        1% = .01, 1‰ = .001, i.e. parts per thousand. I prefer ‰ to % because it groups digits in threes. I’ve often seen people convert 389 ppmv to .389%; that mistake is harder to make when using ‰.

        The decline was indeed from 3500 ppmv to 650 ppmv, look it up.

      • I didn’t notice you were using parts/thousand

      • Still dancing I see. Perhaps we should clarify a few terms. Warming implies rate of temp increase. Hotter implies higher measured temperature. Rapid meltdown implies rate of ice mass loss.
        Clearly we have been in a step warming trend since emerging from the LIA circa 1800. One does not need advanced degrees in climatology to understand that in a step warming trend over a period of 200 years it will likely be “hotter” towards the end of the warming period. Very little of this trend has been attributed to AGW. What exactly did cause that first 150 years of warming if not CO2? We also expect there to be much less NH ice mass after this 200 year warming trend. As the earth warms, ice melts. No surprises.

        Re: RATE of ice mass loss… Your links regarding the current cryosphere simply do not address the question of the 19th century’s rapid rate of ice mass loss, or its causes, at all. It was greater between 1850-1890 and briefly 1940-1950 than it is today. Relevant reading as to 1800s ice mass loss and the historical temp record? Why yes I believe we do:

        Historical evidence:
        http://www.nps.gov/glba/historyculture/people.htm

        Paleo evidence:
        http://westinstenv.org/wp-content/postimage/Lappi_Greenland_ice_core_10000yrs.jpg

        Re: The Northwest passage… It has been choked with ice from the LIA for the last 600 years. Of course no one has been sailing through there. Perhaps this year, after 200 years of melting ice, we will discover additional archeological evidence of Viking explorers who were navigating these high arctic waters 1000 years ago during the MWP.

        Yes indeed. I do understand and agree with the physics of radiative transfer but there are still many flies in the AGW ointment.

      • Still dancing I see. […] Clearly we have been in a step warming trend since emerging from the LIA circa 1800. One does not need advanced degrees in climatology to understand that in a step warming trend over a period of 200 years it will likely be “hotter” towards the end of the warming period. Very little of this trend has been attributed to AGW. What exactly did cause that first 150 years of warming if not CO2?

        What are you talking about, ivpo? One glance at the gold standard for global land-sea temperature, the HADCRUT3 record, for the 45-year period 1875-1920 with 16-year smoothing, shows that the temperature was plummeting during the period CO2 was having no influence.

        Seems to me you’re the one who’s dancing fancy numbers in front of us that don’t hold up under examination.

        (Only those who’ve been following my killAMO stuff will see the trick I’m pulling here. Fight fire with fire.)

      • Oops, sorry, forgot to give the link to the HADCRUT3 record for 1875-1920.

      • So at the end of all that dancing, all those scientific links, you still have no explanation for the extraordinary NH ice melt off during the 1800s. Nor can you differentiate the cause of the 1800s melt off from our current ice melt off. We don’t really know why. And there is no sound evidence that those same forcings are not in effect today. I think you made my point, Vaughan.

        I actually believe your killAMO smoothing has merit but it is still very much open to misinterpretation. It does demonstrate long term warming. It does not tie that warming to CO2 until we can isolate and quantify all other causes for long term warming (such as the rapid NH ice melt off during the 1800s).

      • you still have no explanation for the extraordinary NH ice melt off during the 1800s.

        You may have missed my answer the other day to this. I cannot explain what did not happen.

    • It was the quality of the hindcast that got me over the line on this one :)

  43. Hi Vaughan,
    fresh start for us?

    A few observations about your ‘knockdown argument’, in no particular order:

    1) Human produced emissions of co2 didn’t make much difference to atmospheric levels before 1870.

    2) The recovery of global temperature from the little ice age started around 1700

    3) Even if the match between co2 and temperature were good (it isn’t). Correlation is not causation.

    4) Changes in co2 level lag behind changes in temperature at all timescales. You can prove this to yourself on woodfortrees too.

    5) Because gases follow the Beer Lambert law not a logarithmic scale, co2 does not keep adding the same amount of extra warming per doubling.

    6) My own solar model matches the temperature data better, without the need for heavy smoothing of data.

    7) You haven’t considered longer term cycles such as the ones producing the Holocene climatic optimum, the Mycean, Roman, Medieval and Modern warm periods. These give you the real reason for the rise in temperature from the low point of the little ice age to now, though we don’t yet fully understand the mechanism, but we’re working on it.

    8) I’ll stop here, because I reached the smiley number.

    • Hi Vaughan,
      fresh start for us?

      Deal.

      1) Human produced emissions of co2 didn’t make much difference to atmospheric levels before 1870.

      Since 1870 is 13 years before my graph begins, how is this relevant here?

      3) Even if the match between co2 and temperature were good (it isn’t).

      Define “good.” Are you looking for an accuracy of one part in a thousand, or what? I would have thought any normal person would have seen my match as fantastic. I could hardly believe it myself when I saw it.

      Correlation is not causation.

      I never claimed otherwise. Maybe the temperature is driving up the CO2. Or maybe Leibniz’s monads are at work here. (Remember them?)

      4) Changes in co2 level lag behind changes in temperature at all timescales. You can prove this to yourself on woodfortrees too.

      What are you talking about? You seem wedded to the concept that CO2 cannot raise temperature. Do you imagine either Miskolczi or Zagoni believes that?

      5) Because gases follow the Beer Lambert law not a logarithmic scale, co2 does not keep adding the same amount of extra warming per doubling.

      I have two problems with this. You can fix the first one by correcting the Wikipedia article, which says that the law “states that there is a logarithmic dependence between the transmission (or transmissivity), T, of light through a substance and the product of the absorption coefficient of the substance, α, and the distance the light travels through the material (i.e. the path length), ℓ.”

      For the second one, gases don’t follow Beer Lambert, radiation does. Beer Lambert is applicable to any material through which radiation might pass, whether solid, liquid, gas, or plasma.

      6) My own solar model matches the temperature data better, without the need for heavy smoothing of data.

      Fantastic. Your point?

      7) You haven’t considered longer term cycles such as the ones producing the Holocene climatic optimum, the Mycean, Roman, Medieval and Modern warm periods.

      Excellent point. Which of these are hotter than today?

      These give you the real reason for the rise in temperature from the low point of the little ice age to now, though we don’t yet fully understand the mechanism, but we’re working on it.

      Good to know you’re working on it. Let me know how it turns out. (I’m not holding my breath.)

      • Ferenc Miskolczi

        Pratt, you did not answer tallbloke’s question 4. Why do not you try to come up with some scientific explanation? By the way, I do not believe, but I know, and I can prove that under the conditions on the Earth the atmospheric absorption of the IR radiation is not increasing. The CO2 greenhouse effect does not operate the way you (or the IPCC) thinks.

      • Pleasure meeting you here on JC’s blog, Ferenc. Hopefully you’re sufficiently used to unkind words from others as not to mind mine, for which I can offer condolences, though the only apology I can offer is that we Australians can be disconcertingly blunt at times.

        you did not answer tallbloke’s question 4. Why do not you try to come up with some scientific explanation?

        Your “try to come up with” implies that the world is waiting with bated breath for someone to show that CO2 can elevate global temperature. I’d explain it except Tyndall already did so a century and a half ago and it would be presumptuous of me to try to add anything to Tyndall’s explanation at this late date.

        Those who’ve used the HITRAN data to estimate the impact of CO2 on climate more accurately, taking pressure broadening as a function of altitude into account, have added something worthwhile. If I think of something equally worthwhile at some point I’ll write it up. (I’ve been meaning to write up my thoughts on TOA equilibrium for some time now, and was pleased to see Chris Colose expressing similar thoughts along those lines on Judith’s blog, though I was bothered by his reluctance to take any credit for them, instead attributing them to “every good textbook” which AFAIK is not the case.)

        Now, I have a question for you. Regarding your argument that the heating effect of increasing CO2 is offset by a decrease in the flow of water vapor into the atmosphere, I would be very interested in two things.

        1. An argument worded for those like me with only half Einstein’s IQ as to how that would happen.

        2. An explanation of why reduced water vapor flow would cool rather than heat. Figure 7 of the famous 1997 paper by Kiehl and Trenberth shows more loss of heat by “evapotranspiration” than by net radiation, namely 78 vs. 390-324 = 66, in units of W/m^2. In other words the same mechanism by which a laptop’s heatpipe carries heat from its CPU to its enclosure is used by evaporation to carry heat from Earth’s surface and dump it in clouds, thereby bypassing the considerable amount of CO2 between the ground and the clouds, and this mechanism removes even more heat from the Earth’s surface than net radiation! Any impairment of that flow will tend to heat the surface (but cool the clouds).

        It is certainly the case that the *level* of atmospheric water vapor regulates heat, by virtue of water vapor consisting of triatomic molecules and therefore being a greenhouse gas. However flux and level are to a certain extent independent: you can lower the flow of water vapor from the ground to the clouds without changing the level of atmospheric water vapor simply by raining less. The water vapor then continues to heat the Earth as before, but now you’ve lost the cooling benefit of the heat pipe.

        Your problem, Ferenc, is that you have very complicated explanations that no one can follow, perhaps not even you. Granted climate science is rocket science, but rocket science starts with the idea that you can increase the momentum and hence velocity of a rocket by ejecting a little burnt fuel with equal and opposite momentum. Climate science needs to start with equally simple ideas, and you’re not helping.

      • though the only apology I can offer is that we Australians can be disconcertingly blunt at times.

        I’m an Australian and I can distinguish between being blunt and being rude. We especially frown upon playing the man and not the ball, and pushing in the back is always penalized heavily.

        Lift your game mate

      • Should that be lift your game Pontin!!

      • lol The 2 decade domination was bound to end

      • Since we’re into concern troll territory, it would be interesting to know how to interpret this one:

        > Poor Pratt […]

        Let’s not forget this one too:

        > You may compute it yourself **if you are able to** […]

        Source: http://judithcurry.com/2010/12/05/confidence-in-radiative-transfer-models/#comment-19575

        Since these two come from a short post with three sentences or so, and the longest one is an appeal to authority, that’s not a bad ratio, don’t you think?

        Being coy does not help to sound anything else but rude, here. Quite the contrary.

        ***

        In any case, it’s not about lifting the game, but picking an efficient strategy. Vaughan Pratt chose to be open and state his feelings candidly. This is fair game when speaking among friends. This is fairly normal considering the scientific background.

        This strategy will play against him here. It’s not a friendly discussion. It’s not even a scientific one, at least not for most readers, I surmise. Let’s not forget that these words might get read and repeated for ever and ever.

        Decorum matters. The strategy to pick should be a little more closed. For the chessplayers: think 1. Nf3 with Kramnik’s repertoire, not 1. e4 with Shirov’s.

      • @willard This strategy will play against him here. It’s not a friendly discussion. It’s not even a scientific one, at least not for most readers, I surmise.

        David Hagen astutely observed (and BH complained in like vein) that I had “attacked the messenger” (by whom I suppose DH could have meant either himself or Miskolczi, or both) when I objected to his putting Miskolczi on a pedestal with the reasoning that doing so dragged him down to Miskolczi’s level. While not pleading either guilty or not guilty, I interpreted Andy Lacis’s comment to DH,

        Your unmitigated faith in Ferenc nonsense does not speak well of your own understanding of radiative transfer.

        as essentially the same “attack,” minus my gratuitous “morons” epithet. Since no such objection was raised to Andy’s comment, I am led to wonder whether it was really my trash-talking that bothered them rather than this alleged ad hominem attack.

        But just because Andy and I are in agreement on the quality of Miskolczi’s work doesn’t make us right. For unassailable evidence of “Ferenc nonsense” we need go no further than the two numbers Miskolczi offered yesterday.

        For the global average TIGR2 atmosphere the
        P and K totals (sum of about 2000 layers) P=75.4*10^7 and K=37.6*10^7 J/m2, the ratio is close to two (2.00). You may compute it yourself if you are able to

        (These two numbers are of course PE = 754 and KE = 376 whether expressed in megajoules per square meter or joules per square millimeter. I tend to think in terms of the latter, and to include “E” for disambiguation when using ASCII. Had Unicode been an option I’d have used the respective Khmer symbols ព and គ for these two consonants if the morons — sorry, programmers — at Redmond hadn’t made them so ridiculously tiny.)

        One infers from Miskolczi’s second sentence a commendable passion for the sort of attention to detail that analyzing 2000 layers of atmosphere must entail. God is in the details, after all. Miskolczi’s thought that I might be incapable of mustering such passion is right on the money: my brain scrambles at the mere mention of even 1000 layers.

        But unless you belong like me to the small sect that worships the back of an envelope, God is not where I scribbled PE = mgh = 10130*9.81*7600 = 755.3 MJ/m2 where m = 10130 kg is the mass of a 1 m^2 air column, g = 9.81 is the acceleration due to gravity, and h = 7600 is a reasonable estimate of the altitude of the center of mass of a column of air, which a little calculus shows is the same thing as the scale height of the atmosphere (integrate x exp(-x) from 0 to infinity and beyond). While I may well be unable to duplicate the many thousands of calculations Miskolczi must have needed to arrive at PE = 754 megajoules from 2000 layers of TIGR2 atmosphere, third grade must have been the last time I could not multiply three numbers together, and the outcome in this case gave me no cause to doubt Miskolczi’s imputed Herculean labors in his computation of PE.

        Room remained on the envelope for two more multiplications: KE = 0.718*10.13*250 = 1818 MJ/m2 where 0.718 is the constant-volume specific heat capacity of air, 10.13 is the mass in tonnes of a square meter column of air, and 250 K (see why we needed KE?) is a rough guess at the average temperature of the atmosphere.

        This is about five times what Miskolczi calculated.

        Multiplying my figure by the 510 square megameters of the planet’s surface gives 927 zettajoules, just under one yottajoule as the total kinetic energy of Earth’s atmosphere.

        With Miskolczi’s number we get only 192 zettajoules.

        Hmm, maybe there’s an error in my math. Let’s try a completely different back-of-the-envelope way. At a room temperature of 300 K, air molecules move at an RMS velocity of 517 m/s (and a mean velocity of 422 m/s but for energy we want RMS, and the Maxwell-Boltzmann distribution makes this quite different). Since much of the atmosphere is colder than this, a more likely RMS velocity averaged over the whole atmosphere would be something like 465 m/s or 1040 mph, twice the speed of a jet plane. The atmosphere has mass m = 5140 teratonnes, allowing us to compute the translational portion of KE as ½mv² = 0.5*5140*465² = 557 zettajoules. But translation is only 3 DOF, air molecules have two more DOFs so we should scale this up by 5/3 giving 5*557/3 = 928 zettajoules.
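        A minimal Python sketch of this back-of-the-envelope arithmetic, using only the round numbers quoted above (column mass, scale height, specific heat, the rough 250 K mean temperature, and the 465 m/s RMS guess); treat it as an illustration of the envelope, not a calculation of record:

        ```python
        # Back-of-the-envelope energies of the atmosphere, using the constants quoted above.
        g = 9.81          # m/s^2, gravitational acceleration
        m_col = 10130     # kg, mass of a 1 m^2 column of air
        h = 7600          # m, scale height ~ altitude of the column's centre of mass
        cv = 718          # J/(kg K), constant-volume specific heat of air
        T_mean = 250      # K, rough mean temperature of the atmosphere
        area = 510e12     # m^2, surface area of the Earth
        m_atm = 5.14e18   # kg, total mass of the atmosphere (5140 teratonnes)
        v_rms = 465       # m/s, rough whole-atmosphere RMS molecular speed

        PE = m_col * g * h                            # ~755 MJ/m^2, close to Miskolczi's 754
        KE_col = cv * m_col * T_mean                  # ~1818 MJ/m^2, ~5x Miskolczi's 376
        KE_total = (0.5 * m_atm * v_rms**2) * 5 / 3   # translational KE scaled from 3 DOF to 5

        print(round(PE / 1e6), "MJ/m^2 potential energy")
        print(round(KE_col / 1e6), "MJ/m^2 kinetic (internal) energy")
        print(round(KE_col * area / 1e21), "ZJ from the column figure")
        print(round(KE_total / 1e21), "ZJ from the molecular-speed figure")
        ```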

        Ok, I admit it, I cheated when I “guessed” 465 m/s for the typical RMS velocity of air molecules. But to get Miskolczi’s 192 zettajoules the velocity would have to be 212 m/s or 474 mph. If air molecules slowed to that speed they’d be overtaken by jet planes and curl up on the ground in a pile of frozen oxygen, nitrogen, and humiliation.

        What kind of climate scientist would discount the energy of the Earth’s atmosphere to that extent?

        Ok, you say, so Miskolczi merely overlooked a factor of 5 in some tremendously complicated calculation of the kinetic energy of the atmosphere. So what else is new, people make these sorts of mistakes all the time in complicated calculations. If that’s the only mistake Miskolczi ever made he’s well ahead of the game.

        Except that (i) a climatologist really does need to be able to calculate the energy of the atmosphere more accurately than that, and (ii) Miskolczi has been claiming this for years, even when the mistake is pointed out to him. Instead he has been trying to convince Toth that the error is on Toth’s side, not Miskolczi’s. And that for a paper that Toth wrote eight months ago and that has now been accepted for publication.

        By my standards I think Andy was very kind to limit himself to “Ferenc nonsense.” Being me I would have used stronger language like “idiot” or “moron.” (Hmm, come to think of it, I did.)

        Let’s not forget that these words might get read and repeated for ever and ever.

        I wish. Just so long as their meaning is not changed by misquoting them or taking them out of context. ;)

        I like learning new stuff, and for that reason I prefer being proved wrong over right. One learns nothing from being proved right, I’m not invincible and am always happy to be vinced. On the other hand being contradicted is not the same thing as being proved wrong. But one also learns nothing from being proved wrong when one is deliberately wrong. (“I’m shocked, shocked to find that lying about climate science is going on in here.” “Your Heartland grant, sir.”)

        Decorum matters. The strategy to pick should be a little more closed. For the chessplayers: think 1. Nf3 with Kramnik’s repertoire, not 1. e4 with Shirov’s.

        Can’t be chess or we could have ended this vicious little game long ago with either one of the threefold repetition rule or the fifty-move rule (no material progress after fifty moves).

      • Unfortunately wordpress turns out not to offer the overstrike capability. Please read the first word of “morons programmers” in my preceding comment as having been overstruck.

      • Vaughan,

        Just look at what you wrote!

        > While I may well be unable to duplicate the many thousands of calculations Miskolczi must have needed to arrive at PE = 754 megajoules from 2000 layers of TIGR2 atmosphere, third grade must have been the last time I could not multiply three numbers together, and the outcome in this case gave me no cause to doubt Miskolczi’s imputed Herculean labors in his computation of PE.

        This is WAY better than saying that FM is a moron, don’t you think? Style! Zest! Gusto! A really entertaining rejoinder to his low jab, in my most humble opinion.

        If only I had had a professor like that when I was younger, I too would worship the back of the envelope! Too late for me, I now prefer the back of a napkin:

        http://www.thebackofthenapkin.com/

        More seriously, here is how your trash-talking gets recycled into editorials:

        http://judithcurry.com/2010/12/05/confidence-in-radiative-transfer-models/#comment-19979

        So now “scientists are mean.” This is good news, if we’re to compare it to “scientists are not even wrong” or “scientists are indoctrinators.” Still, this means that your back of the envelope shows numbers that can’t be contested. The only way out is to play the victim: see how scientists treat us, mere mortals!

        Please do not give them that way out.

        Hoping to see more and more back-of-the-envelope doodles,

        Best regards,

        w

        PS: The chess analogy works better if we separate the bouts. It’s not impossible to make a career out of repeating the same openings, over and over again. Imagine a tournament, or a series of tournaments, not a single game of chess… Besides, if one repeats oneself too much, one becomes predictable and loses, unless one’s simply driving by to cheerlead and leave one’s business card with Apollo on it ;-)

      • Vaughan Pratt is nothing but fun. It would be an honor to be called a moron by Vaughan Pratt. If only I could rise to that level.

        Willard, why do people harp on Aristotle’s fallacies? To me they’re quaint and all, but just the cowboy in me, I’m bringin’ my ad hominem attacks and my tu quoques to a bar fight. This appears to be a dust up.

      • > It would be an honor to be called a moron by Vaughan Pratt.

        Agreed.

        Nonetheless, one must then pick up on the editorializing that ensues. Just below here, for instance:

        http://judithcurry.com/2010/12/05/confidence-in-radiative-transfer-models/#comment-20271

        Or elsewhere, not far from here:

        http://judithcurry.com/2010/12/06/education-versus-indoctrination-part-ii/#comment-20019

        There are many other places. In fact, simply put this into your G spot:

        site: judithcurry.com arrogance

        Yes, even Judith is using that trick.

        Damn rhetorics!

      • David L. Hagen

        Vaughan Pratt
        Re: “as essentially the same “attack,” minus my gratuitous “morons” epithet. Since no such objection was raised to Andy’s comment, I am led to wonder whether it was really my trash-talking that bothered them rather than this alleged ad hominem attack.”
        I did raise an objection to A. Lacis.
        BOTH your “trash-talking” AND your ad hominem attacks are unbefitting of professional scientific conduct. I address to you again what I said to A. Lacis:

        Will you rise to the level of professional scientific conduct?
        Or need we treat your comments as partisan gutter diatribe?
        Your diatribes (against Miskolczi) do not speak well of your professional scientific abilities or demeanor.

        Please raise your conduct to professional scientific discourse rather than waste our time and distort science.

        I acknowledged I misunderstood the core of your concerns over the coefficient in the virial theorem.
        Please read up on how astronomy applies the classic virial theorem to calculate gas pressure, temperature and density versus radius, e.g., Advanced Astrophysics, by Nebojsa Duric, Section 2.4.2, p. 35.

      • > Please raise your conduct to professional scientific discourse **rather than waste our time and distort science**. [Our emphasis]

        How professional and befitting.

      • Vaughan Pratt

        Re: “I like learning new stuff,” – My compliments.
        Re: “But just because Andy and I are in agreement on the quality of Miskolczi’s work doesn’t make us right.” Agreed:
        The one who states his case first seems right,
        until the other comes and examines him.
        Proverbs 18:17 ESV

        Re: “ and for that reason I prefer being proved wrong over right.”
        OK per your desire:
        Re: “Unless you belong like me to the small sect that worships the back of an envelope,” That is too small an object to worship.

        The danger of worshiping your envelope is in missing critical big picture details. You observe: “This is about five times what Miskolczi calculated.”

        Your error appears to be in calculating the conventional TOTAL thermal energy that you thought Miskolczi had calculated – RATHER than the single-degree-of-freedom vertical component of the kinetic thermal internal energy that Miskolczi had actually calculated.
        See further comments on my post of Aug 16, 2011.

        Professional courtesy includes first asking the author to see if I made an error, before trumpeting his “error”. Miskolczi and Toth communicated back and forth with each other and colleagues and concluded that they agreed with each other’s virial theorem for a diatomic gas within an algebraic conversion factor. So if I have made an error, please clarify and link to the evidence & solution.

        Best wishes on our continued learning.

      • “However flux and level are to a certain extent independent: you can lower the flow of water vapor from the ground to the clouds without changing the level of atmospheric water vapor simply by raining less.”

        And this works the other way too. It’s possible to have less water vapour resident in the atmosphere without lowering the flow or precipitation.

        And your point about Miskolczi’s theory was?

      • And this works the other way too. It’s possible to have less water vapour resident in the atmosphere without lowering the flow or precipitation. And your point about Miskolczi’s theory was?

        That he didn’t say which.

        (Don’t complain that I set you up, you did it to yourself.)

      • Given the context of his theory, he doesn’t need to spell out which.

        Except to you apparently. ;)

        You are the one claiming that your observation constitutes disproof, I am merely pointing out that it doesn’t.

      • Given the context of his theory, he doesn’t need to spell out which.

        It would appear you’re not following. If it’s one then Earth’s surface cools, if it’s the other it warms. Why do you believe he doesn’t need to spell out which?

  44. 1) Human produced emissions of co2 didn’t make much difference to atmospheric levels before 1870.

    Since 1870 is 13 years before my graph begins, how is this relevant here?

    In your original post you said:
    “Now anthropogenic CO2 supposedly began in earnest back in the 18th century.”
    This is potentially misleading. You could say that human emission of co2 began in earnest with the start of the industrial revolution in C18th Europe, but it’s not thought the atmospheric level was much affected by humans until the late C19th or even early C20th. So the problem for your explanation of temperature rise is accounting for the rising temperature from circa 1700 to circa 1880. What do you propose? Longer cycles with as yet unknown causation? (I won’t hold my breath for your explanation), or solar activity? Or something else?

    Let’s do these one at a time so the posts don’t get too long.

    • the problem for your explanation of temperature rise is accounting for the rising temperature from circa 1700 to circa 1880.

      How is this a problem? If you believe Arrhenius’s logarithmic dependence of the temperature of the Earth’s surface on atmospheric CO2, and Hofmann’s raised-exponential dependence of atmospheric CO2 on year, then a simple calculation at a couple of years, say 1800 and 1900, confirms the impression of those who doubt, as you correctly say, that “the atmospheric level was much affected by humans until the late C19th or even early C20th”.

      Using n = 280 ppmv for the natural base (the part of Hofmann’s formula that raises the exponential), o = 1790 as the onset year in which Hofmann says human CO2 remaining in the atmosphere reached 1 ppmv, and d = 32.5 as the number of years it then took to double to 2 ppmv, all due to Hofmann, and using binary rather than natural logs (lb rather than ln) for Arrhenius’s formula so that the answer comes out in units of degrees per doubling rather than degrees per increase by a factor of e, we have lb(n + 2^((y-o)/d)) = 8.136 and 8.182 for y = 1800 and 1900 respectively. That’s an increase of 0.046 during the whole of the 19th century. If we use a climate sensitivity of 1.8, which is what’s needed for this formula to account for the temperature rise during the 20th century, then the rise during the 19th century would have been 1.8*0.046 = 0.083 °C, of which 0.021 °C would (assuming this formula) have been in the first half of that century and 0.062 °C in the second half.
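      For anyone who wants to check these numbers, a minimal Python sketch of the same arithmetic, assuming only Hofmann’s constants and the 1.8 °C-per-doubling sensitivity just quoted (the later period comparisons below can be reproduced the same way):

      ```python
      import math

      n, o, d = 280.0, 1790.0, 32.5   # Hofmann: natural base (ppmv), onset year, doubling time (yr)
      S = 1.8                          # assumed sensitivity, degrees C per doubling of CO2

      def co2(year):
          """Hofmann-style raised exponential for atmospheric CO2, in ppmv."""
          return n + 2 ** ((year - o) / d)

      def doublings(year):
          """Arrhenius-style binary log of CO2; differences, times S, give degrees C."""
          return math.log2(co2(year))

      print(round(doublings(1800), 3), round(doublings(1900), 3))   # 8.136, 8.182
      print(round(S * (doublings(1900) - doublings(1800)), 3), "C rise over the 19th century")
      for y1, y2 in ((1892, 1925), (1925, 1957), (1957, 1990)):
          print(y1, y2, round(S * (doublings(y2) - doublings(y1)), 2), "C")
      ```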

      Your impression of what people either observed or believed is confirmed by the theory.

      The same formula applied to the period from 1957 to 1990 predicts a rise in temperature of 0.28 °C. Consulting the 12-year (144-month) smoothed HADCRUT3 temperature curve at WoodForTrees.org, we see a rise of exactly 0.28 °C.

      Coincidence? Well, let’s back up to an intermediate period: 1892 to 1925. The formula promises a rise of 0.08 °C. Consulting the same smoothed data confirms that the rise was exactly that.

      One more try: 1957 should be 0.15 °C hotter than 1925. Bingo! On the nose yet again.

      Caveat: these dates are at (or near, pace Judy) zero crossings of the Atlantic Multidecadal Oscillation or AMO. Other dates don’t match the formula as well unless the AMO is incorporated into the formula.

      One should also take into account the larger volcanoes, along with the extensive aerosols created when the RAF transplanted entire cities like Hamburg, Pforzheim, and Dresden into the atmosphere during World War II, not to mention the emissions from the interminable air sorties, tanks, etc., in order to get truly accurate results.

      World War I on the other hand consisted largely of 70 million infantry running around and a few bricks being thrown from planes, whose aerosols had no impact on climate whatsoever while the combined heavy breathing of the infantry may have raised the CO2 a tiny amount. (World War II might be described as World War I “done right” in the view of its instigators, with the benefit of the great advances in military technology in the intervening quarter century.)

      El Nino, solar cycles, seasonal fluctuations and other short term events, episodes, and phenomena can also be neglected because the 144-month smoothing completely removes their influence from the temperature record. This is not to understate their influence, which is large, and noticeably more traumatic by virtue of happening more suddenly!

      • All very interesting, so CO2 rise does follow temperature rise quite closely, once we remove the rising and falling of the AMO, the solar cycles and other natural oscillations. Presumably its residence time flattens out these shorter fluctuations?

        Ok onto number two.
        2) The recovery of global temperature from the little ice age started around 1700

        No reply.

        Number three:
        3) Even if the match between co2 and temperature were good (it isn’t). Correlation is not causation.

        You agreed to this, which is great.

        4) Changes in co2 level lag behind changes in temperature at all timescales. You can prove this to yourself on woodfortrees too.

        No reply, as Ferenc Miskolczi so kindly pointed out to you.

        5) Because gases follow the Beer Lambert law not a logarithmic scale, co2 does not keep adding the same amount of extra warming per doubling.

        I see S.o.D. has refuted this claim, which I picked up off Jeff Glassman. No doubt there are gritty details tucked in here, but I’m happy to leave it for now.

      • Look at:

        Vaughan Pratt | December 7, 2010 at 12:48 pm

        you did not answer tallbloke’s question 4. Why don’t you try to come up with a scientific explanation?

  45. May I point out the obvious: radiative forcing cannot be measured with current technology. So a Michelson/Morley-type event has not occurred, and is unlikely to occur into the indefinite future. Neither side of this theoretical argument can prove its case.

    However, the IPCC cannot use the “scientific method” to prove it is right. By the same token, the opposing arguments cannot prove that the IPCC is wrong. It is just that, with billions of dollars at stake, it seems to me that we need to wait for proof that the IPCC is right. Which is what most of our politicians have NOT done.

    • David L. Hagen

      Jim Cripwell
      Regarding “proving”, there are methods to check. Scientists are now checking how well IPCC projections match subsequent temperatures etc.; they show increasing divergence.
      Miskolczi (2010) above is a method of evaluating whether the global optical depth is increasing as expected from IPCC models. His results suggest not.

      It will also help to move IPCC to adopt stringent Principles for Scientific Forecasting for Public Policy. See
      http://www.forecastingprinciples.com/index.php?option=com_content&task=view&id=26&Itemid=129/index.html

      • David, Thank you for the support. The reason I think that this is important is that the next discussion is with respect to the rise in global temperature as a result of the change of radiative forcing, without feedbacks. Here the lack of experimental data is even more definite; it is impossible to measure this temperature rise with our atmosphere. The whole IPCC estimation of climate sensitivity has no experimental basis whatsoever. Yet, somehow, we are supposed to genuflect and pretend that the science is solid.

  46. People wondering whether climate science just doesn’t understand the basics might wonder whether Jeff Glassman on December 6, 2010 at 6:37 pm is pointing out some clear flaws.

    I’ll pick one claim which is easily tested:

    IPCC declares that infrared absorption is proportional to the logarithm of GHG concentration. It is not. A logarithm might be fit to the actual curve over a small region, but it is not valid for calculations much beyond that region like IPCC’s projections. The physics governing gas absorption is the Beer-Lambert Law, which IPCC never mentions nor uses. The Beer-Lambert Law provides saturation as the gas concentration increases. IPCC’s logarithmic relation never saturates, but quickly gets silly, going out of bounds as it begins its growth to infinity.

    Have a read of 6.3.4 – 6.3.5 of the IPCC Third Assessment Report (2001) – downloadable from http://www.ipcc.ch:

    Here is the start of 6.3.5:

    IPCC (1990) used simplified analytical expressions for the well mixed greenhouse gases based in part on Hansen et al. (1988). With updates of the radiative forcing, the simplified expressions need to be reconsidered, especially for CO2 and N2O. Shi (1992) investigated simplified expressions for the well-mixed greenhouse gases and Hansen et al. (1988, 1998) presented a simplified expression for CO2. Myhre et al. (1998b) used the previous IPCC expressions with new constants, finding good agreement (within 5%) with high spectral resolution radiative transfer calculations. The already well established and simple functional forms of the expressions used in IPCC (1990), and their excellent agreement with explicit radiative transfer calculations, are strong bases for their continued usage, albeit with revised values of the constants, as listed in Table 6.2.

    The paper that the IPCC refers to – New Estimates of Radiative Forcing due to well-mixed Greenhouse Gases by Myhre et al, GRL (1998) – has the same graph and the logarithmic expression – you can see these in CO2 – An Insignificant Trace Gas? Part Seven – The Boring Numbers.

    And you can read the whole paper for yourself.

    Myhre comments on the method used to calculate the values that appear on the graph: “Three radiative transfer schemes are used, a line by line model, a narrow band model and a broadband model… New coefficients are suggested based on the model results.”

    The IPCC curve in the 2001 TAR report follows the values established by Myhre et al. Myhre et al simply use the radiative transfer equations to calculate the difference between 300ppm and 1000ppm in CO2. Then they plot these values on a graph. They make no claim that this represents an equation which can be extended from zero to infinite concentrations.

    Myhre et al and the IPCC following them are not making some blanket claim about logarithmic forcing. They are doing calculations with the radiative transfer equations over specific concentrations of CO2 and plotting the numbers on a graph.

    Beer-Lambert law of absorption:

    The radiative transfer equations do use the well-known Beer-Lambert law of absorption along with the well-known equations of emission. You can see this explained in basics and with some maths in Theory and Experiment – Atmospheric Radiation.

    So the claim: “the Beer-Lambert Law, which IPCC never mentions nor uses” is missing the point of what the IPCC does. You won’t find the equations of radiative transfer there either. You will hardly find an equation at all. But you will find many papers cited which use the relevant equations.
    In fact, in publishing results from people who use this Beer-Lambert law, the IPCC does use it.

    So when people write comments like this it indicates a sad disinterest in understanding the subject they appear so passionate about.

    I recommend that people read the relevant sections of the IPCC report and the Myhre paper. Well worth the effort.

  47. Perhaps the most critical scientific question relevant to future climate behavior as a function of CO2 concentration involves “climate sensitivity” – the temperature response to changing CO2, typically expressed in terms of a CO2 doubling. This combines both the initial forcing, as determined with the aid of the radiative transfer principles discussed in this thread, and the feedbacks initiated by the temperature response to the forcing. In addition to the Planck response (a negative feedback dictated by the Stefan-Boltzmann equation but part of the calculations implicit in the response to the forcing alone), the most salient, at least on relatively short timescales, are the water vapor, lapse rate, ice-albedo, and cloud feedbacks. The last of these has been a subject of some controversy but has generally been estimated as positive, the ice-albedo feedback is generally considered positive but quantitatively modest, the lapse rate feedback is negative, and the water vapor feedback is generally thought to be the dominant positive feedback responsible for significant amplification of the temperature response to CO2. This positive response of water vapor is expected based on the increase in water vapor generated by the warming of liquid water.
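    As a schematic of how forcing and feedbacks combine, the textbook bookkeeping looks like the sketch below; every number in it is an illustrative placeholder, not a value asserted in this comment:

    ```python
    # Schematic: dT = F / (lambda_Planck - sum of feedback terms), feedbacks in W/m^2 per K.
    # All numbers below are illustrative placeholders, not values claimed in this thread.
    F = 3.7                  # W/m^2, commonly quoted forcing for a CO2 doubling
    planck = 3.2             # W/m^2 per K, Planck (blackbody) restoring response
    feedbacks = {"water vapour": 1.6, "lapse rate": -0.6, "ice albedo": 0.3, "clouds": 0.5}

    print(round(F / planck, 2), "K per doubling, no feedbacks")
    print(round(F / (planck - sum(feedbacks.values())), 2), "K per doubling, with toy feedbacks")
    ```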

    Expectations aside, and despite the above, the sign of the water vapor feedback has been challenged. A negative feedback due to declining water vapor in response to CO2-mediated warming would have major implications for climate behavior. Dr. Curry has stated that she plans a new thread on climate sensitivity, and so extensive discussion might be withheld until that appears. However, water vapor is relevant to some upthread commentary asserting a high degree of climate self-stabilization based on negative water vapor feedbacks. In anticipation of the future, I think it’s worth pointing out here that substantial observational data bear on this question. These data include satellite measurements demonstrating that tropospheric water vapor at all levels is increasing in relation to increasing temperatures. In addition, longer term data are available from radiosonde measurements. Despite some conflicting results (at times cited selectively), these too indicate that as temperatures rise, atmospheric water vapor content increases, including increases in the upper troposphere where the greenhouse effect of water vapor is most powerful. At this point, the convergence of evidence strongly supports a positive water vapor feedback capable of amplifying rather than diminishing the initial effects of CO2 changes alone. The validity of theories dependent on a countervailing negative response (a decline in atmospheric water) cannot be excluded with absolute certainty, but appears to be highly improbable. I’ll provide more data and extensive references in the relevant upcoming thread.

    • Hi Fred,
      I see plenty of uncertainty so I keep an open mind on all theories in play, plus I have one of my own, which is that humidity might not be dancing to CO2’s tune either positively or negatively, but might be dancing to the beat of a different drum.

      Nothing conclusive yet, but I made this plot of the NCEP reanalysis of the radiosonde data for specific humidity at the 300mb level, around the height where most radiation to space occurs, and solar activity.

      http://tallbloke.files.wordpress.com/2010/08/shumidity-ssn96.png

      I’ve been touting it around in the hope someone might have something worthwhile to say about it, so please do.

      • I’ll discuss the NCEP-NCAR reanalysis in the upcoming thread. It’s an outlier, with the other reanalyses, plus the satellite data, all showing increasing specific humidity. A major problem was changing instrumentation, which improved (i.e., shortened) the response time of the instruments, so that more recent data based on measurements at a high altitude were increasingly less contaminated with residual measurements from lower, wetter altitudes.

        I think we can agree that the issue isn’t settled with absolute certainty, but attempts to conclude that humidity has not been increasing will have to overcome a rather large body of evidence to the contrary, particularly as satellite-based trends have begun to supplement the radiosonde data.

      • Thanks Fred,
        My feeling is we should try to salvage what we can from the radiosonde data since it goes back twice as far as satellite data. Speaking of which, can you point me to any nice sites with easily accessible satellite data for such things as specific humidity?

        Thanks again.

      • You’ll have to forgive me for not replying in detail, but I will try to review more of the references a bit later. In the meantime, you might check out some of the references in AR4 WG1, Chapter 3. There’s nothing there since early 2007, but the chapter does include some interesting text and references, including the brightness temperature data comparing O2 signals with water vapor signals, based on the relatively unchanging atmospheric O2 concentration as a baseline.

        Any time data on this or any other unsettled topic are cited, it’s important to ask whether the data cited are inclusive, or whether they omit important sources that imply a different interpretation. To the best of my knowledge, the NCEP/NCAR reanalysis is the only source of extensively sampled data that conflicts with the other sources (I’m referring to observational data, not theoretical arguments, although these of course tend to go mainly with increasing humidity). If there are other important sources of observational data reinforcing the NCEP/NCAR conclusions, I hope someone will call attention to them.

      • Fred,
        This is why I asked Ferenc Miskolczi whether he believed his analysis of the radiosonde data which found a constant value for Tau confirmed the validity of the data. I hope he calls back to reply.

        Global rainfall records are hard to come by, but Australia has seen a decline in precip since 1970.

      • My feeling is we should try to salvage what we can from the radiosonde data since it goes back twice as far as satellite data.

        I agree with tallbloke on this point.

    • David L. Hagen

      Fred Moulton
      Thanks for the overview. You note:

      longer term data are available from radiosonde measurements. Despite some conflicting results (at times cited selectively), these too indicate that as temperatures rise, atmospheric water vapor content increases, including increases in the upper troposphere where the greenhouse effect of water vapor is most powerful.

      1) Per this thread, any comments on the confidence/uncertainty on those radiation evaluations?
      2) You note increasing water vapor. Yet Miskolczi (2010) above applies the available radiosonde data showing decreasing water vapor. That appears to be one major issue over his finding of stable global optical depth. Similar declining humidity trends were reported by:
      Garth Paltridge, Albert Arking & Michael Pook, “Trends in middle- and upper-level tropospheric humidity from NCEP reanalysis data”, Theor. Appl. Climatol., 4 February 2009, DOI 10.1007/s00704-009-0117-x

      Gilbert gives a theoretical basis to support this:
      William C. Gilbert, “The Thermodynamic Relationship Between Surface Temperature and Water Vapor Concentration in the Troposphere”, Energy & Environment, Vol. 21, No. 4, 2010
      http://www.eike-klima-energie.eu/uploads/media/EE_21-4_paradigm_shift_output_limited_3_Mb.pdf#page=105

      The theoretical and empirical physics/thermodynamics outlined in this paper predict that systems having higher surface temperatures will show higher humidity levels at lower elevations but lower humidity levels at higher elevations. This is demonstrated in the empirical decadal observational data outlined in the Introduction, in the daily radiosonde data analysis discussed above and explained by classical thermodynamics/meteorology relationships. . . .The key to understanding the actual observed atmospheric humidity profile is to properly take into account the physics/thermodynamics of PV work energy in the atmosphere resulting from the release of latent heat of condensation under the influence of the gravitational field. . . .
      The contents of this paper may also have relevance for the work of Ferenc
      Miskolczi [6, 7] in that his radiosonde analysis shows that over the decades, as CO2 concentrations have increased, water vapor concentrations at higher elevations have decreased yielding offsetting IR absorbance properties for the two greenhouse gases. That offset results in a constant Optical Depth in the troposphere.

      Appreciate any references addressing those differences. (With the main discussion for Curry’s future post on water vapor.)
      PS Judy – feel free to move these last two to a different post.

      • Dear David and Fred,

        Let me repeat here: It is often said that with increasing CO2 the water vapor amount must decrease to support Ferenc’s constant. No, not the amount of GHGs but their global absorbing capacity determines the greenhouse effect. The stability of tau can be maintained by changes in the distribution of water vapor and temperatures too. If there is a physical constraint on it (as Ferenc states), the system has just enough degrees of freedom to accommodate itself to this limit, and will ‘know’ which one is the energetically more ‘reasonable’ way (satisfying all the necessary minimum and maximum principles) to choose.

        Thanks,
        Miklos

      • Theoretically, I agree that water vapor could be redistributed so as to reduce optical thickness despite rising overall levels. However, the distribution of increased humidity includes the mid to upper troposphere where the greenhouse effect is most powerful, and the increased optical thickness (tau) is already evident from direct measurements relating water vapor to OLR. There will be more discussion of this in the upcoming climate sensitivity thread, but the link I cited in my first comment in this thread (not quite halfway down from the top) provides one piece of evidence.

        At this point, the climate sensitivity issue revolves mainly around the cloud feedback values (generally thought to be positive, but controversial). The positive water vapor feedback appears to be on solid ground and unlikely to be overturned, in my view, for reasons I’ve stated above.

  48. SoD wrote:
    “Myhre comments on the method used to calculate the values that appear on the graph: “Three radiative transfer schemes are used, a line by line model, a narrow band model and a broadband model.. New coefficients are suggested based on the model results.”

    Actually, you are talking about the wrong paper; it contains hardly any information or details. Better look at their previous paper, which has some details on the method:

    http://folk.uio.no/gunnarmy/paper/myhre_jgr97.pdf

    Then pay attention to Figure 2. I don’t know where they got the idea that the “choice of tropopause level” makes only a 10% difference. What I see from their Figure 2 is that depending on where you stop calculating (and which profile is used, polar or tropical), the CO2 forcing may differ by 300%. That’s why I asked the question: to what extent has it been “established” that global CO2 forcing is 3.7 W/m2 (which, BTW, implies a determination accuracy of better than 3%)?

  49. Al – Not sure where you’re getting your 300%. For example, the OBIR mls curve in Fig. 2 of Myhre and Stordal (1997) has an irradiance increase of 0.11 W/m2 at an altitude of 8 km for a 5 ppm increase in CO2, and an irradiance increase of 0.10 W/m2 at an altitude of 20 km, a difference of 10% rather than 300%. 8 km is a minimum tropopause height, and 20 km is a maximum tropopause height, so values within that altitude range are the only ones that matter.

    In practice, plausible choices for global-mean tropopause height do not have nearly so broad a range, so the sensitivity of CO2 radiative forcing to tropopause height choice is much less than 10%. Myhre and Stordal’s 10% refers to the sensitivity for CFCs and other low-concentration Tyndall gases, not for CO2.

    Finally, this is not a physical uncertainty, but an uncertainty associated with an arbitrary definitional choice. The “tropopause” is an arbitrary concept with multiple definitions, so the precise value of radiative forcing at the tropopause will depend upon what definition one chooses for the tropopause. None of this affects the vertical profile of altered irradiance, the physical response of the atmosphere to the altered irradiance, nor any model-simulated response to the altered irradiance.

  50. ??? For the polar profile, the “forcing” (difference between the two OLRs) is less than 0.035 W/m2 for a distant observer. If you stop calculations at 10 km for the tropics, you have 0.115 W/m2. This is a ratio of 3.28, or 328%. Or 228%, whatever.

    What do you mean, “not a physical uncertainty”? The planet gets some SW radiation, then it emits some LW to outer space, as a distant observer would measure. To get a steady state, the OLR must have a certain magnitude, regardless of your definitions. If the composition is altered, a new OLR will be established, and the system must react to re-establish the balance. How can it be an arbitrary concept if the system needs to react either to 0.035 W/m2 or to 0.115 W/m2? I understand that you can throw in various mixes of profiles (three of them :-)), and it will lead to a narrower range of possible forcings, but stopping calculations at 8 km is not justifiable when it is known that the rest of the atmosphere would cut this imbalance in half, as Figure 2 suggests.

  51. Al Tekhasski said on December 7, 2010 at 6:14 pm

    “SoD wrote:
    “Myhre comments on the method used to calculate the values that appear on the graph: “Three radiative transfer schemes are used, a line by line model, a narrow band model and a broadband model.. New coefficients are suggested based on the model results.”

    Actually, you are talking about the wrong paper; it contains hardly any information or details. Better look at their previous paper, which has some details on the method…

    How can it be the “wrong paper”?

    It is the paper cited by the IPCC for the update of the “famous” logarithmic expression. It doesn’t explain the radiative transfer equations because these are so well-known that it is not necessary to repeat them. The paper does contain references for the line by line and band models.

    Imagine coming across a paper about gravitation calculations 50 years and 5,000 papers after the gravitational formula was discovered. People in the field don’t need to derive the formula, or even repeat it.

    Where did they get that the choice of tropopause definition makes only a 10% difference?

    From Greenhouse gas radiative forcing: Effects of averaging and inhomogeneities in trace gas distribution Freckleton et al, Q. J. R. Meteorol. Soc. (1998).

    And as John N-G correctly says on December 8, 2010 at 1:52 am:

    “The “tropopause” is an arbitrary concept with multiple definitions, so the precise value of radiative forcing at the tropopause will depend upon what definition one chooses for the tropopause.”

    Picture it like this.

    Suppose you live some distance north of New York City. Some people define the center of New York City as the Empire State Building. Some define it as the location of City Hall.

    Suppose the distance from your house to New York City is 50 miles with the center defined as the Empire State Building. If someone wants to know how far from your house to New York City with the center defined as City Hall, they just have to add 3 miles.

    The choice of “center of NY” is arbitrary. The distance from your house to the Empire State Building is always the same. The distance from your house to the City Hall is always the same.

  52. I don’t know if others feel this way, but I cannot see how these theoretical discussions can ever get us anywhere. The proponents of CAGW will always want to believe that there are sound theoretical reasons for believing that a high value for a change of radiative forcing exists when CO2 doubles. Skeptics will want to believe the opposite. Without any observed data, I cannot see how the argument can be resolved. And this, of course, is the weakness of the IPCC approach. Without hard measured data, they can never use the “scientific method” to show that CAGW is real.

    • I don’t know if others feel this way, but I cannot see how these theoretical discussions can ever get us anywhere. The proponents of CAGW will always want to believe that there are sound theoretical reasons for believing that a high value for a change of radiative forcing exists when CO2 doubles. Skeptics will want to believe the opposite. Without any observed data, I cannot see how the argument can be resolved. And this, of course, is the weakness of the IPCC approach. Without hard measured data, they can never use the “scientific method” to show that CAGW is real.

      My feeling exactly, as I’ve said repeatedly on Judy’s blog.

      Only when one sees the temperature increasing logarithmically with CO2 level can one possibly begin to believe all these cockamamie theories that it “ought to.”

      What impresses me is the number of people who will deny seeing exactly that when it’s pointed out to them. Their response is, “Oh, that could be any curve,” without actually offering an alternative curve to the logarithmic one.

      Deniers are wedged into denier mode, data will not change their minds no matter how good the fit to theory.

      • Richard S Courtney

        Vaughan Pratt:

        You assert:
        “Deniers are wedged into denier mode, data will not change their minds no matter how good the fit to theory”.

        But
        Catastrophists are wedged into catastrophist mode, data will not change their minds no matter how bad the fit to theory.

        So, your point is?

        Richard

      • Nothing you will read in this thread will change your opinion one jot.

  53. When I was a young staff engineer, long before personal computers or even remote terminals, I worked on a large campus of Hughes Aircraft Company. We had a wildly popular football pool of a dozen or so games per week, handicapped with point spreads. I had a computer in my office that I used for my picks, which I posted on my office door midweek. People from different buildings would gather at my door, pads and pencils in hand, to get the computer picks. I didn’t tell them that I had used a random number generator.

    Myhre, et al. (1998) did the same thing, making the picks look more genuine by graphing some with lines and some as if they were data points for the lines. To make it more convincing, they labeled a couple of curves “approximation” and a couple of them “fit”, as if to say approximation to data or fit to data. Gunnar Myhre was an IPCC Lead Author for both the TAR and AR4.

    The equation scienceofdoom gives on his blog is ΔF = K*ln(C/C0), where K = 5.35 W/m^2 and C0 = 278 ppm. It is the bottom curve labeled “IPCC type fit to BBM results” with an rms fit of 1.08E-14 W/m^2, digitized uniformly from end-to-end with 15 points.

    The curve “IPCC type fit to NBM results” is for all practical purposes the same as the NBM results, which is logarithmic with K = 5.744 and C0 = 280.02, with an rms error = 2.46E-16 over 13 points.

    The curve “IPCC (1990) approximation” has K = 6.338, C0 = 278.555, with an rms error = 8.88E-16 over 16 points.

    The curve “Hansen (1998) approximation” has K = 6.165, C0 = 288.328, with an rms error = 7.74E-15 over 19 points.

    The data set “BBM results” has K = 5.541, C0 = 281.537, with an rms error of 6.427E-15 over all 11 legible points.

    The data set “NBM TREX atmosphere” has K = 5.744, C0 = 280.02, with an rms error of 2.463E-16 over all 13 legible points.
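    A small Python sketch evaluating these fits at a doubling of concentration, using only the (K, C0) constants digitized above (the digitization itself is not reproduced here):

    ```python
    import math

    def delta_F(C, K, C0):
        """Logarithmic forcing expression Delta-F = K * ln(C/C0), in W/m^2."""
        return K * math.log(C / C0)

    fits = {                      # (K, C0) pairs as digitized above
        "IPCC fit to BBM":       (5.35,  278.0),
        "IPCC fit to NBM":       (5.744, 280.02),
        "IPCC (1990) approx.":   (6.338, 278.555),
        "Hansen (1998) approx.": (6.165, 288.328),
    }

    for name, (K, C0) in fits.items():
        print(name, round(delta_F(2 * C0, K, C0), 2), "W/m^2 per doubling")
    ```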

    In summary, these are all models of the failed conjecture that RF depends on the logarithm of the CO2 concentration. Who among the authors stands first to take the credit? Hansen? Scienceofdoom says,

    “Myhre et al and the IPCC following them are not making some blanket claim about logarithmic forcing.”

    Yes, they are. The message is just disguised with ornaments.

    Sod says,

    “They make no claim that this represents an equation which can be extended from zero to infinite concentrations.”

    Of course not — not explicitly. The result is obvious, and besides that observation makes the claim of the logarithm quite silly.

    Using the least IPCC curve, the one for which sod gives an equation, when the concentration gets to 1.21E16, the RF is equal to the entire radiation absorbed on the surface, 168 W/m^2. IPCC, AR4, FAQ 1.1, Figure 1.1. When the concentration gets to 1.61E30, the RF is 342 W/m^2, IPCC’s TSI estimate. Id.
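    Those two concentrations follow directly from inverting the logarithmic fit; a one-line check, assuming only the 5.35 and 278 constants quoted above:

    ```python
    import math
    K, C0 = 5.35, 278.0           # the logarithmic fit quoted above
    for target in (168.0, 342.0):  # W/m^2
        # Invert Delta-F = K*ln(C/C0) to find the concentration giving each target forcing.
        print(target, "W/m^2 at", f"{C0 * math.exp(target / K):.2e}", "ppm")
    ```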

    Of course, the most the CO2 concentration can be is 100%, or 1E6 parts per million, but the log equation doesn’t know that! And furthermore, the most the RF from CO2 can be is less than the ratio of the total of all CO2 absorption bands to the total band of blackbody radiation, and much less than one. But the log equation doesn’t know that either. Clearly the logarithm has limited value. How should the relation behave?

    Beer and Lambert took these and other considerations into account when they developed their separate theories that jointly proved to be a law. The band ratio is but one conservative, upper bound to the saturation effect, which can be calculated from the Beer-Lambert Law, line-by-line or band-by-band, as one might have the need and patience for accuracy and resolution.

    The logarithm model makes CO2 concentration proportional to the exponent of absorption. The Beer-Lambert Law makes absorption proportional to the exponent of CO2 concentration.
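    To see the contrast being drawn here in the simplest possible terms, consider a toy single grey band with a hypothetical absorption coefficient; this is an illustration of the two functional forms only, not a radiative transfer calculation, and the coefficient is made up:

    ```python
    import math

    k = 0.004            # per ppm, hypothetical absorption coefficient for a single grey band
    K, C0 = 5.35, 278.0  # the logarithmic fit quoted above

    def beer_lambert(C):
        """Fraction absorbed in one pass: 1 - exp(-k*C); saturates toward 1."""
        return 1.0 - math.exp(-k * C)

    def log_fit(C):
        """Logarithmic expression K*ln(C/C0); grows without bound."""
        return K * math.log(C / C0)

    for C in (278, 556, 1112, 1e4, 1e6):
        print(f"{C:>9.0f} ppm   absorbed fraction {beer_lambert(C):.3f}   log fit {log_fit(C):8.1f} W/m^2")
    ```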

    None of Myhre’s traces comprise measurements, not even the output of the computer models, which will surprise some. You can depend on these models, the NBM and BBM, to have been tuned to produce satisfactory results in the eyes of the modeler, and in no way double blind simulations. Just like GCMs.

    Certainly, no tests have ever been conducted over the range of the graphs, 300 to 1000 ppmv. No theory supports the logarithm function, the long held, essential but false conjecture of AGW proponents. The applicable physics, the Beer-Lambert Law, is not shown by Myhre, et al., of course. And if any investigator ever made estimates qualitatively different than the logarithm, IPCC would not have selected his results for publication, nor referenced a work that did. He probably couldn’t have been published anyway, but if he did, the folks at IPCC would deem the journal not to have been peer-reviewed so the journal and its papers could be shunned.

    Likewise, when challenged with physics, sod turns ad hominem:

    “So when people write comments like this it indicates a sad disinterest in understanding the subject they appear so passionate about.”

    “scienceofdoom, thanks for this lucid clarification.”

    • Jeff – You are arguing the equivalent of “the global temperature cannot possibly be increasing linearly at the rate of 0.1 C/decade, because that would mean that 2881 decades ago the mean global temperature would have to have been below absolute zero“.

    • Jeff,
      Scientistofdoom, Chris Colose, Vaughan Pratt and several other ‘proper scientists’ on this blog use petty insult and high-handed sarcasm quite freely, as if it were somehow their right as ‘proper scientists’ when talking to ‘mere sceptics’. Many of whom are equally well or better qualified to discuss the topics Judith has been posting about…

      • Here is Jeff Glassman’s first paragraph:

        > When I was a young staff engineer, long before personal computers or even remote terminals, I worked on a large campus of Hughes Aircraft Company. We had a wildly popular football pool of a dozen or so games per week, handicapped with point spreads. I had a computer in my office that I used for my picks, which I posted on my office door midweek. People from different buildings would gather at my door, pads and pencils in hand, to get the computer picks. I didn’t tell them that I had used a random number generator.

        Let’s wonder what is the purpose of this story.

        Scientistofdoom, Chris Colose, Vaughan Pratt and several other ‘proper scientists’ on this blog use petty insult and high-handed sarcasm quite freely, as if it were somehow their right as ‘proper scientists’ when talking to ‘mere sceptics’

        I’ll never live that down. ;)

        But do I want to? I try my derndest to treat “mere sceptics” with the utmost kindness and discretion. But we’re not dealing with mere sceptics here, but bad ones.

        There are two ways to be a bad sceptic. One is to insist that scientific facts are not subject to democratic voting since they are immutable. The other is to acknowledge the need for scientific consensus but to insist that facts run for re-election periodically, just as we make senators run for office periodically, as a measure of protection against their becoming “inconvenient.”

        Scientific facts are neither of these, they are more like Supreme Court justices who must run the gauntlet of Congress but are appointed if at all for life. Theories are proposed, sceptically examined, and eventually rejected or accepted.

        Any subsequent successful impeachment is likely to lead to a Nobel prize.

        The former kind of sceptic states their version of the facts and when it is pointed out that this is not the scientific consensus objects that science is not a voting matter.

        The latter kind forces re-elections in order to vote on the matter so they can “kick the bums out,” namely the incorrect laws of science, and replace them with what they insist are the true laws. To this end they form their own contrarian party which they populate somehow with enough scientists proclaiming the new order to create the appearance of a majority opinion. There is a wide range of types of such scientists: TV weathermen, the nonexpert, Wall Street Journal subscribers, those who haven’t kept up with the literature, the confused, the willfully wrong, the uncredentialed, the dead, the pretend scientists, the fictional, and so on. And some (only a few I hope) have discovered that the trick that worked so well for getting good grades, to give the answer the professor wanted in preference to the one in the text when there’s a conflict, generalizes to research funding. (I should stress that the converse is definitely false: there are those who take oil industry money who nevertheless insist that anthropogenic global warming is a serious concern—but which of the 84 on that last list claims this and if so how did they end up on that list?)

        I view both kinds as being mean to science. I therefore have no compunctions about being mean to them.

        But not everyone who does not believe in global warming is necessarily a sceptic. They might simply not understand the material and would like it explained to them. Scientists (those in the teaching profession anyway) should not be mean to students having difficulties, they should be friendly and helpful and insightful and patient and understanding and give everyone A’s (erm, not the last)

        But students and others who willfully feign ignorance, or insist on contradicting you with data you know from years of experience is completely wrong, count for me as sceptics. They are obstructing science, and then scientists start to act more like the police when dealing with someone who’s obstructing justice. The more egregious the offence, the meaner a scientist can get in defending what that scientist perceives to be the truth. Not all scientists perhaps, but certainly me, I can get quite upset about that sort of behaviour.

        I do try to be tolerant when I’m not 100% sure which of these two types I’m dealing with, whether interested students having difficulties or willful obstructers of science. In the case of Arfur Bryant, with whom I had many exchanges elsewhere on Judy’s blog, I tried to remain patient with what I came to suspect was his pretense at being the former kind. We reciprocated each other’s politeness, but eventually PDA suggested I was wasting my time on him, just about the time I’d come to that conclusion myself.

        Ferenc Miskolczi is a very different case from Arfur Bryant. Unlike Bryant, who claimed ignorance in the subject, Miskolczi claimed competence when his paper suggested the opposite. His misapplication of the virial theorem was (for me) only the first hint of this.

        Andy Lacis found enough errors to satisfy himself that FM’s paper did not hold water. The big problem I had was Miskolczi’s failure to address the independence of the rate of the water cycle and the amount of water vapor held in the atmosphere. You might have a huge amount of vapor going from the ground to the clouds and immediately precipitating back down leaving virtually no water vapor up there, or it might hang around up there for a long time piling up like cars in a massive traffic jam and seriously blocking infrared. Or only a small trickle might go up, but again you have the choice of how long it hangs around.

        The flow is relevant because it acts like a laptop’s heat pipe, which transfers heat from the CPU to the enclosure via evaporation at the CPU which passes along the pipe to the enclosure where it condenses. In the atmosphere the roles of CPU and enclosure are played by the Earth’s surface and the clouds respectively. Less flow means less cooling.

        But the level of water vapor in the atmosphere is also relevant because it’s a greenhouse gas. A lower level means more cooling.

        Miskolczi claimed to have shown that more CO2 would be offset by less water vapor. But without also calculating how this would impact the rate of flow no conclusion can be drawn about the impact of the “Miskolczi effect” on global warming. This is because if the flow is also reduced then you lose the 78 W/m^2 heat pipe labeled “evapotranspiration” in the famous Figure 7 of Kiehl and Trenberth’s 1997 paper. That’s the biggest loss of heat from the Earth’s surface, the second biggest being the net 390 – 324 = 66 watts of difference between direct radiation up and back radiation down.

        A different kind of problem is that the claimed effect was simply not believable, which neither the paper nor Miklos Zagoni’s YouTube explanation (http://www.youtube.com/watch?v=Ykgg9m-7FK4) addresses. Usually when you have an unbelievable result you have an obligation to offer a digestible reason to believe it.

        Unfortunately climate science is not (in my view) able to raise that kind of objection to Miskolczi’s paper without being accused of the same thing. If the point of hours or months of computer modelling is to increase confidence that global warming is happening, it’s not working for those who look at the last 160 years of crazily fluctuating temperature and say “seeing is believing.” The significant fluctuations in 19th century temperature appear to them inconsistent with the offered explanations of global warming. The only ones not put off by those fluctuations are those already convinced of global warming. This needs to be fixed.

        But I digress. Getting back to the main point, after this I did not feel inclined to treat Miskolczi kindly. But I see in retrospect that when I wrote that “putting morons on pedestals makes you a moron” (paraphrasing “arguing with idiots makes you an idiot”) I should have expanded on it with “and putting obstructionists on pedestals makes you an obstructionist” so as to offer a wider selection. I didn’t actually commit to either of these.

        Admittedly this is a bit like Dr. Kevorkian loaning out the choice of his Thanatron or a loaded revolver. But then Kevorkian did not go to jail until he was caught injecting someone himself. I did not actually say either Miskolczi or Hagen was a moron, I left it up to them to complete the reasoning as they saw fit.

        One reasonable completion would be “putting someone who believes the Clausius virial theorem applies here on a pedestal makes you such a believer.”

      • This comment (with some background) would make an interesting guest post!

        All we need is to find the appropriate blog for that…

        B-)

  54. Al – You seem to be combining a misinterpretation with an exaggeration.

    Fig. 2 in Myhre and Stordal shows three sets of curves, one for a standard tropical sounding, one for a standard midlatitude sounding, and one for a standard polar sounding. If you want to see how the CO2 irradiance change differs across the globe, you might compare the result from the polar sounding with the result from the tropical sounding. This is the calculation you’ve done, except you’ve exaggerated the difference by comparing the irradiance at one altitude (60 km) for the pole with a different altitude (10 km) for the tropics. In reality, the tropopause in the tropics is higher than the tropopause at the pole, so a more apt comparison would be the irradiance at 20 km in the tropics (0.10 W/m2) with the irradiance at 8 km at the pole (0.072 W/m2), a difference of 30-50%.

    But that’s not the issue, and that’s where the misinterpretation comes in. Nobody is going to compute the radiative forcing using the coldest temperature profile possible, and nobody is going to compute the radiative forcing using the warmest temperature profile possible. Instead, you’re going to use the global mean temperature profile (if you’re being crude), or use a range of temperature profiles that together come reasonably close to the actual temperature structure. Freckleton et al. (1998) (their Fig. 2) showed that a weighted average of three profiles gets you to within 1-2% of what you’d get with the full range of atmospheric conditions. So that’s what Myhre et al. (1998) did.

    • There is indeed an exaggeration, to draw your attention. I was trying to illustrate the possible range of magnitudes of the “forcing” effect. As I already said, you can use a mix of profiles, and the range of forcings can be smaller.

      Yet you need to explain your “unphysical” hint (slip?). I assert that the integration must always be done from surface to infinity, because this is the actual range where the final balance (steady state) is achieved for the system. There is no “definitional” choice. Granted, you can stop calculations at a certain height if you can show that the curve reaches some asymptote and does not change anymore. Figure 2 of Myhre and Stordal (1997) shows that if you continue to calculate way past the tropopause (wherever it may be), the calculated “forcing” gets smaller by about half. The effect of your “selection of tropopause” is an exaggeration by 100%. Yet AGW proponents keep saying that 3.7 W/m2 is “well established”. Figure 2 shows that this is baloney.

      Please explain why you stop calculations at the “tropopause” while Fig. 2 shows that it leads to a 2X inflation of the estimate (even if we assume that the RT calculations were done correctly, which also can be questioned).

      • Al – Indeed such a seemingly strange choice requires justification. Here’s the scoop:

        From the stratosphere on up, atmospheric temperatures are pretty well determined by the combination of radiation absorption/emission and the horizontal and vertical motions of the air. In contrast, the troposphere’s temperature structure is pretty strongly determined by exchange of heat with the ground and oceans in combination with the small-scale and large-scale motions that redistribute heat under the constraints of dry and moist adiabatic lapse rates.

        Because of this difference, everything we know, observe, and simulate about the stratosphere shows that it adjusts fairly quickly to a radiative perturbation (on the order of a few months). However, the troposphere takes much longer, in particular because the oceans are very slow to respond to radiative forcing changes.

        Myhre and Stordal (1997) in Fig. 2 show the instantaneous calculated changes in downward irradiance due to a CO2 change. As you’ve pointed out, the irradiance change is largest at the tropopause and is smaller higher up in the atmosphere. This means that the stratosphere would have a radiative imbalance, radiating away more energy than it’s absorbing. As a result, it will cool quickly, eventually equilibrating over a few months when the decrease of upward irradiance at the top of the atmosphere has become equal to the decrease of upward irradiance at the tropopause.

        So, all in all, the IPCC figured it would be simpler to consider the radiative forcing after the stratosphere equilibrates rather than before the stratosphere equilibrates, since that’s what’s ultimately going to determine what happens in the troposphere. Here’s what they say about it.

        Footnote 1: Myhre et al. (1998) refer to the instantaneous radiative change at the tropopause as “instantaneous” and the radiative change after the stratosphere equilibrates as “adjusted”.

        Footnote 2: The substantial decrease in instantaneous irradiance change from the tropopause to the top of the atmosphere due to a change in CO2, which is what caused all this discussion, is, as far as I know, shared only by O3 among Tyndall gases, and is thus one of the fingerprint elements for climate change attribution.

      • Thanks for the reply, I missed it in the noise. I am certainly familiar with IPCC/Hansen’s definition of “radiative forcings”, and I expected AGW proponents to bring it in. This definition begs a few additional questions.

        (1) You said that the stratosphere will “cool quickly, eventually equilibrating over a few months”. Given the results of some radiative models I have seen, the stratosphere has a “cooling rate” of about one degree C per day. Therefore, without some compensating heat fluxes it would completely cool down to absolute zero in a “few months”. It does not. Therefore, the concept of the stratosphere “cooling quickly” and “readjusting to radiative equilibrium” does not exactly fit observations, would you agree?

        (2) When the IPCC says “stratospheric temperatures to readjust to radiative equilibrium”, does it mean literally that they assume no substantial convection up there? This would be very odd, because we know that the CO2 got “well mixed” everywhere, more or less. The question would be, how could CO2 ever get into the stratosphere if molecular diffusion would take about 100,000 years to get the CO2 across a 20 km layer of motionless air? Would you agree that the time and mechanism of temperature adjustment would require some substantial accounting for “stratospheric dynamics” that has been neglected so far?

        (3) You say, “the troposphere’s temperature structure is pretty strongly determined by exchange of heat with the ground and oceans”. While it sounds very reasonable on the surface, experience shows that the surface-atmosphere system has a pretty fast response to direct radiative imbalances, say when clouds come by, or when the seasons change. Therefore, the concept of an extremely slow response of the surface-troposphere system (“typically decades”) must also be quite a stretch, would you agree?

        (4) IPCC defines: “In the context of climate change, the term forcing is restricted to changes in the radiation balance of the surface-troposphere system imposed by external factors, with no changes in stratospheric dynamics, without any surface and tropospheric feedbacks in operation … , and with no dynamically-induced changes in the amount and distribution of atmospheric water.”

        So, everything is fixed in troposphere (including temperature) except CO2 concentration. The forcing therefore is a discrepancy between instant change in concentration and underlying temperature. This forcing is expected to last “typically decades”, correct?

        Now, how would you physically bring a CO2 jump into the entire atmosphere? One would assume that turbulent mixing of convectively-stirred air is an essential means of propagating the surface-injected CO2 to the tropopause and above. Fine. This means that a new state of the system has been created, in which the temperature now deviates from the new equilibrium and must adjust. The temperature at the emission height is now a perturbation, and must last “typically decades” to force and sustain the process of global warming. Is this a correct description?

    • From the Freckleton (1998) abstract:
      “By comparison with calculations at a high horizontal resolution, it is shown that the use of a single global mean profile results in global mean radiative forcing errors of several percent for CO2 and chlorofluorocarbon CFC-12 (CCl2F2); the error is reduced by an order of magnitude or more if three profiles are used, one each representing the tropics and the southern and northern extratropics.” (Sorry, the article is behind a paywall)

      So, they had ONE “high horizontal resolution” MODEL of the atmosphere, and they calculated the “forcing from 2xCO2”, which is ONE NUMBER. Then they have THREE numbers from three standardized atmospheric MODELS. Then, by mixing the three numbers with three fudge coefficients, they got to within a percent of the “high resolution” number. Fantastic. (Sorry, the article is behind a paywall, so I can’t do a more detailed “review”.)

      I think I can do better than that, I could mix the three numbers to match the “high-resolution” forcing number to zero percent, with 20 zeros after the decimal point. [ I am sure they tried to fit to several GH gases at once, but the whole idea of parametric fudging does not fly in first place].

      Is this how the entire radiative forcing science operates, and how the confidence was “established” and the AGW foundation was built? Who said that their first model has the “actual temperature structure”? A few radiosondes at a handful of convenient locations launched twice a day? Sorry, this doesn’t sound serious.

  55. I guess the HITRAN database does not handle continuum and far wing absorption particularly well, as it must have problems with weak absorption lines as well. Typical path lengths in the real atmosphere can be as long as several kilometers. On the other hand the important frequency bands from a climatological point of view are the ones where optical depth is close to unity. Absorption at frequencies in these bands (like the so called atmospheric window) is not easily measured in the lab, because cells used in spectroscopy have limited path lengths (several dozen meters at most). Therefore the database containing values derived from actual measurements is insufficient for algorithmic determination of atmospheric absorption/emission; one also needs extrapolation based on poorly understood models of continuum and far wing behavior. It is not a straightforward task to verify these models with in-situ atmospheric measurements, as the trace gas content of the atmosphere is highly variable and is neither controlled nor measured with sufficient spatio-temporal resolution.

    Even some of the so called “well mixed” gases (like carbon dioxide) are not really well mixed. In the boundary layer, close to the vegetation canopy CO2 concentration can be anywhere between 300 and 600 ppm, depending on season, time of day, insolation, etc. Active plant life continuously recreates this irregularity, which is then carried away by convection and winds. Turbulent mixing and diffusion needs considerable time and distance to smooth it out and bring concentration back to its average value.

    In the case of water content, the humidity of an air parcel is even more dependent on its history (time and temperature of last saturation). There is strong indication atmospheric distribution of water is fractal-like over a scale of many orders of magnitude (from meters to thousands of kilometers). Fractal dimension of this distribution along isentropic surfaces tends to decrease with increasing latitude. It is close to 2 in the tropics, but drops well below 1 in polar regions. In other words it is transformed by advection from an almost space-filling tropical distribution through a stringy one at mid-latitudes to granulous patches at poles.

    As dependence of transmittance on concentration of absorber is highly non-linear, average concentration alone does not determine atmospheric absorptivity at a particular wavelength, finer details of the distribution (like higher moments) have to be given as well. However, these are neither measured nor modeled (because spatial resolution of computational climate models is far too coarse). Even with an absorber of high average atmospheric concentration, if there are see-through holes in its distribution, average optical depth can be rather low (you can see through a wire fence easily, while a thin metal plate made of the same amount of stuff blocks view entirely).
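
    To make the wire-fence point concrete, here is a minimal Python sketch (purely illustrative numbers, not a real atmosphere): because exp(-tau) is convex, a patchy absorber with see-through holes transmits more than a uniform absorber with the same average optical depth.

    import numpy as np

    mean_tau = 2.0                     # hypothetical band-average optical depth
    t_uniform = np.exp(-mean_tau)      # every line of sight sees tau = 2

    taus = np.array([4.0, 0.0])        # patchy sky: half tau = 4, half see-through holes
    t_patchy = np.exp(-taus).mean()    # same average tau, very different transmission

    print(f"uniform transmission: {t_uniform:.3f}")   # ~0.135
    print(f"patchy  transmission: {t_patchy:.3f}")    # ~0.509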

    So no, I do not have much confidence in radiative transfer models. The principles behind them are sound, but the application is lacking.

    • On the other hand the important frequency bands from a climatological point of view are the ones where optical depth is close to unity. Absorption at frequencies in these bands (like the so called atmospheric window) is not easily measured in the lab,

      That’s very interesting, Berényi. What would you estimate as the uncertainty in total CO2 forcing as a function of this uncertainty?

      In the boundary layer, close to the vegetation canopy CO2 concentration can be anywhere between 300 and 600 ppm, depending on season, time of day, insolation, etc.

      What should we infer from this? That the Keeling curve underestimates the impact of CO2 on global warming? Are you trying to scare us?

      There is strong indication atmospheric distribution of water is fractal-like over a scale of many orders of magnitude (from meters to thousands of kilometers). Fractal dimension of this distribution along isentropic surfaces tends to decrease with increasing latitude.

      Also very interesting. How much of the impact of this effect on global temperature would you attribute to human influence? Global climate and anthropogenic global climate are not the same thing. Global climate has been going on for, what, 4.5 billion years? Anthropogenic global climate can only be compared with that on a log scale: roughly ten orders of magnitude.

      People, please get some perspective here.

    • You write “On the other hand the important frequency bands from a climatological point of view are the ones where optical depth is close to unity.”

      I think you mean “close to zero” or “not close to unity”. Optical depth is unity, when the radiation is fully absorbed or scattered.

      • (I was waiting for Berényi Péter to answer this and then forgot all about it until just now.)

        My own answer would be that unit optical depth is where the OLR is changing most rapidly as a function of the number of doublings of the absorber (e.g. CO2), and is therefore the most important depth.

        To see this, let n be the number of doublings with n = 0 chosen arbitrarily (e.g. for the CO2 level in 1915 say). Hence for general n, optical depth τ = k*2^n where k is whatever the optical thickness is at 0 doublings. Hence (assuming unit surface radiation) OLR = exp(-τ) (definition of optical depth) = exp(-k*2^n).

        We want to know for what value of n (and hence optical depth) the OLR is changing most quickly. So we take the derivative of this twice and obtain −k*ln(2)*(ln(2) − k*ln(2)*2^n)*exp(n*ln(2) − k*2^n). But this vanishes when 1 − k*2^n = 0, i.e. k*2^n = 1 (and hence n = lb(1/k)). But τ = k*2^n, so the second derivative vanishes when τ = 1, the desired result.

        The same result would have obtained had we counted triplings of absorber instead of doublings: we would instead have 3^n and ln(3) in place of 2^n and ln(2) everywhere, but in the end we would obtain τ = k*3^n = 1.

        (The naive thing would have been to do this more directly as a function of optical depth itself, but one would then find that the OLR changes most quickly when the optical depth is zero. This is ok when considering absolute levels of CO2, but not when considering dependence of temperature on optical depth, the “climatological point of view” Berényi referred to, which calls for a second exponential.)
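
        For readers who prefer a numerical check to the calculus, here is a minimal Python sketch (with an arbitrary value of k) confirming that the OLR changes fastest per doubling where the optical depth equals 1:

        import numpy as np

        # OLR(n) = exp(-k * 2**n), where n counts doublings of the absorber.
        k = 0.15                                # optical depth at n = 0 (arbitrary choice)
        n = np.linspace(-5, 10, 20001)
        olr = np.exp(-k * 2.0**n)

        dolr_dn = np.gradient(olr, n)           # numerical derivative with respect to n
        n_star = n[np.argmax(np.abs(dolr_dn))]  # doubling count where OLR changes fastest
        print(f"optical depth at steepest OLR change per doubling: {k * 2.0**n_star:.3f}")  # ~1.0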

        I’m not sure where Pekka is getting his definition of optical depth from, but ordinarily it is synonymous with optical thickness and is defined with natural logs, as distinct from optical density which is the same thing but customarily with decimal logs. Absorbance is yet another name for the concept, preferred over optical density by the IUPAC, and does not commit to a particular base of log, so one says (or at least the IUPAC recommends) decadic or Napierian absorbance when not clear from context.

        A material that is fully absorbing or scattering the radiation has infinite optical depth, one that allows it all to pass has zero optical depth. Optical depth is additive, so that radiation passing through depth d1 and then d2 is said to have passed through depth d1 + d2. It is a dimensionless quantity.

        For a given wavelength ν of radiation and absorbance of the atmosphere at that wavelength, an optical depth of 1 means that the fraction of photons of that wavelength leaving Earth’s surface vertically and reaching outer space is 1/e. As we saw above this is the depth where the number of escaping photons of that wavelength is most sensitive to changes in the logarithm of absorbance.

        Decreasing absorbance drives this fraction from 1/e up to 1, where the optical depth vanishes. The closer optical depth gets to 0, the less the impact of a given percentage change in the logarithm of absorbance (but the more the impact when working directly with absorbance itself).

        Conversely increasing absorbance drives this fraction from 1/e down to 0, where the optical depth tends to infinity. The closer optical depth gets to infinity, again the less the impact of a given percentage change in absorptivity, but simply because the change in number of escaping photons is negligible, it makes no difference at this end whether we’re working directly with absorbance or with its logarithm.

  56. Jeff,

    You are barking up the wrong tree. Radiative forcings are well defined by both off-line radiative transfer models and by those that are used in climate GCMs. Radiation is being computed explicitly, and does not rely on logarithmic formulas. Any logarithmic behavior that you see in model output is the result of what the radiation models produce in response to changing greenhouse gas amounts, not a constraint.

    Take a look at some examples that I posted earlier on Roger Pielke Sr’s blog

    http://pielkeclimatesci.wordpress.com/2010/11/23/atmospheric-co2-thermostat-continued-dialog-by-andy-lacis/

    We also include multiple scattering effects in our GCM radiation modeling. These go beyond what Beer-Lambert exponential extinction is designed to represent.

    • Dr. Lacis
      Not being a scientist but a casual observer, I wonder how you would reconcile, within the CO2 hypothesis:
      – 1920-45 gentle rise in CO2 with sharpest rise in temperature
      – 1945-80 cooling period at the time of the steepest rise of CO2 in the recorded history.
      As far as I can see, it is not possible for the CO2 hypothesis to become an accepted theory if, out of the 150 years of reliable records, the hypothesis is not supported 30% of the time.
      http://www.vukcevic.talktalk.net/CO2-Arc.htm
      re. above linked graph: Any geomagnetic hypothesis, despite its good correlation, is for the time being not a contender without a viable quantifiable mechanism.

      • Define “the CO2 hypothesis”.

      • Hi Dr. Nielsen-Gammon
        That would be something (axiom possible, but hypothesis or even a theory not so sure). I did say : ‘Not being a scientist but a casual observer’. If I were to do that, I would be compounding one possible error with a much greater one.
        I may have naively assumed that Dr. Lacis might be able to deal with my less than precisely articulated question, but your opinion would also be welcome and appreciated.

      • In comparing the time periods 1920-45 with 1950-2000, you should take a look at Fig. 5 and Fig. 8 of Hansen et al. (2007) “Climate simulations for 1880-2003 with GISS ModelE”. The pdf is available from the GISS webpage http://pubs.giss.nasa.gov/abstracts/2007/

        There are other forcings besides the GHG increases that need to be included, especially the strong volcanic eruptions of Agung, El Chichón, and Pinatubo that provided strong negative forcing in the 1960-1990 time period.

      • Dr. Lacis
        Thank you for your prompt reply. I will certainly look into the suggested alternatives again. For a scientific theory to stand the rigorous tests of time, when data of sufficient quantity and quality are available, those formulating it must be explicitly precise about every single exception, however irritating they may be.
        Else, a hypothesis is just that, a hypothesis, and it will be deprived of the respect and acceptance accorded to a theory.
        I am also looking forward to possible further clarification by Dr. Nielsen-Gammon.
        Thanks again.
        p.s. It was 1945-1980 I referred to, which is a very different proposition from the 1950-2000 period you quoted; as a scientist aware of the exactness required for the case you present, I am sure you would agree.

      • Dr. Lacis
        By the time the Agung volcano erupted in 1963, temperature had been falling for 15 years or longer and was already at its trough, while you would agree that El Chichón (1982) and Pinatubo (1991) were outside the period I referred to (1945-1980), when the temperature was already rising.
        The period I referred to is clearly marked on the graph
        http://www.vukcevic.talktalk.net/CO2-Arc.htm
        which you may not have taken the opportunity to look at.
        Dr. Lacis, if we (i.e. our generation) are to build a credible climate science, then its foundations must be solid and indisputable.
        Thank you for your time; I can assure you it was not wasted, as in my case it has widened my perspective on the soundness of the arguments presented.

      • By the time the Agung volcano erupted in 1963, temperature had been falling for 15 years or longer and was already at its trough,

        And your point?

        The Atlantic Multidecadal Oscillation explains this very well, Milivoje.

      • I suppose my statement of the CO2 hypothesis would be something like:

        CO2 and other non-condensing Tyndall gases, whose concentrations are increasing rapidly due to man’s influence, have become one of the strongest forcing agents on the global climate, and further increases in concentration will be large enough to cause a further increase of several degrees C within the century.

      • CO2 and other non-condensing Tyndall gases,

        Here is why I believe “greenhouse gas” is the correct name.

        1. It’s the standard name today.

        2. Greenhouses perform two functions: retaining the contained air, and trapping outgoing longwave radiation. Earth does the same thing, using gravity in place of walls and greenhouse gases in place of glass. The analogy for the former should be obvious (without gravity Earth would be a lot colder); for the latter, glass is at least triatomic (SiO2 for example), like greenhouse gases (H2O, CO2, O3, CH4, etc.). Salt windows do not trap infrared (shorter than 17 microns), being diatomic (NaCl), like O2 and N2.

        The question of whether greenhouses exhibit the greenhouse effect was answered in the negative by R.W. Wood in the Feb. 1909 issue of Phil. Mag., and responded to in the July issue by Charles Greeley Abbot, director of the Smithsonian Observatory, whose ox Wood was goring. 65 years later the same question was debated strenuously in two journals in 1974, with the outcome being more or less consistent with what I wrote above minus my point about gravity. More on this by googling “The atmospheric science community seems to be divided” (must be in quotes) for Craig Bohren’s perspective on this debate.

      • It is misleading to call solid NaCl diatomic. It is an ionic crystal formed from individual atoms (or ions) without any grouping to diatomic molecules.

        In ionic crystals larger scale excitations – phonons – control the interaction with infrared radiation. It is, however, true that ionic crystals are transparent to a part of the IR radiation. NaCl absorbs strongly above 17 um and the lighter LiF above 7 um. The reflection gets strong at even longer wavelengths (>30 um) and is therefore not so important.

        Normal glass is also transparent to shorter wavelengths of infrared, but the limiting wavelength is typically 2-4 um and varies depending on the type of the glass. This limit is so low that glass is indeed an efficient barrier for IR radiation.

      • Thanks for clarifying that, Pekka. Sounds like we’re in perfect agreement.

        I have a couple of NaCl windows at home that I picked up for fifty bucks each, they’re great fun to play with. If you put a patch of black insulating tape on a saucepan (without which the silvery surface doesn’t radiate much) and boil water in it, a $10 infrared thermometer (way cheaper than the salt windows) will register close to 100 °C. When you put a sheet of glass between the thermometer and the saucepan the temperature plummets 60-70 degrees. But when you put a salt window between them there is hardly any difference.

        Hey, I’m just a retired computer scientist having fun doing the stuff I was trained for in college half a century ago before I discovered the joy of computing.

      • Incidentally, in connection with the molecular structure of NaCl, Arrhenius’s logarithmic dependency of the Earth’s surface temperature on atmospheric CO2 level was not his only “disruptive” contribution. Back when it was assumed that NaCl in solution consisted of diatomic molecules floating around among the water molecules, Arrhenius argued that they dissociated into Na⁺ and Cl⁻ ions, as an explanation of why salt lowers the freezing point of water. British chemist Henry Armstrong disagreed, arguing instead that NaCl associated with water to form more complex molecules. That’s the simplified version, the longer version is more complicated.

        As Pekka points out, the NaCl molecules lose their identity as such in the crystalline form. One can still pair them up, but not uniquely: there are six possible (global) pairings, one for each of the six Cl neighbors of each Na atom (or vice versa), since rock salt forms a face-centered cubic lattice. Pairing one Na with one Cl determines all remaining pairings (assuming no dislocations).

    • A. Lacis wrote: “Radiative forcings are well defined by both off-line radiative transfer models, an by those that are used in climate GCMs. Radiation is being computed explicitly”

      It is like saying that since my calculators have very accurate algorithms to calculate exponential and logarithmic functions explicitly with 20-digit accuracy, I can now calculate anything with the same accuracy, be it ocean heat content, or annually-averaged CO2 flux across the ocean surface, etc. Or global OLR. Don’t you see a big lapse of logic in your statements?

  57. David Hagen, 12.6.10 7:29 pm, 7:31 pm

    Miskolczi (2010) is about tau_sub_a, the Greenhouse-Gas Optical Thickness. He says,

    >> The relevant physical quantity necessary for the computation of the accurate atmospheric absorption is the true greenhouse-gas optical thickness. The definition and the numerical computation of this quantity for a layered spherical refractive atmosphere may be found in Miskolczi [4]. P. 244.

    Miskolczi [4] is Miskolczi, F.M. “Greenhouse effect in semi-transparent planetary atmospheres”, J. Hungarian Met. Serv., v. 111, no. 1, 2007, pp. 1-40. Miskolczi (2010) relies on [4] on pp. 244, 248 (2), 253 (2), and 259. He also includes as [11], Miskolczi, F.M. and M.G. Mlynczak, “The greenhouse effect and the spectral decomposition of the clear-sky terrestrial radiation”, J. Hungarian Met. Serv., v. 108, no. 4, 2004, pp. 209-251, but with no citations in the paper.

    In response to a reader’s invitation, I recently reviewed [4] and [11] jointly. The review can be read at IPCC’s Fatal Errors in response to a comment on 1/14/10. Google for Miskolczi at http://www.rocketscientistsjournal.com. My conclusions include that the author used a definition of greenhouse effect that was different than IPCC’s, that he tried to fit data from closed-loop real world records in an open-loop model, that he used satellite radiation measurements mistakenly as a transfer function, and that he forced his transfer function arranged inside a control loop to do the work of the entire control loop. He concludes,

    >>>> The theoretically predicted greenhouse effect in the clear atmosphere is in perfect agreement with simulation results and measurements. [11], Miskolczi (2004), p. 209.

    To which I responded,

    >>Just as a matter of science, Miskolczi goes too far. An axiom of science in my schema is that every measurement has an error. A more concrete observation is that his greenhouse effect is for a clear atmosphere, meaning cloudless, but he cannot possibly have had such measurements.

    The concluding exchange reads as follows:

    >>>>[I]t is difficult to imagine any water vapor feedback mechanism to operate on global scale. [4], Miskolczi 2007, p. 23.

    >>>>On global scale, however, there can not be any direct water vapor feedback mechanism, working against the total energy balance requirement of the system. [4], Miskolczi 2007, p. 35.

    >>There is precisely a water vapor feedback mechanism in the real climate. Miskolczi’s work has been productive. It has discovered the existence of the powerful, negative, water vapor feedback. Specific humidity is proportional to surface temperature, and cloud cover is proportional to water vapor and CCN (cloud condensation nuclei) density, which has to be in superabundance in a conditionally stable atmosphere, but which is modulated by solar activity. In the end, Miskolczi, and hence Zágoni, share a fatal error with IPCC. The fatal result is independent of the mathematics. One cannot accurately fit an open loop model to closed loop data.

    Water vapor feedback works through cloud albedo to be the most powerful feedback in climate. It is positive and fast because of the burn off effect to amplify solar variations. It is negative because warming increases humidity, and slow because of the high heat capacity of surface waters. This negative feedback regulates the global average surface temperature to mitigate warming from any cause. It has not been discovered by IPCC.

    I conclude that [4], Miskolczi (2007), is an essential foundation of Miskolczi (2010), so the latter inherits fatal errors from its parent.

    Specifically you invited me to examine Figure 10 and sections 3 and 4 (probably pp. 257-260) in Miskolczi (2010). Miskolczi is here testing a model that says greenhouse absorption should be proportional to layer thickness, and that layer thickness should increase optical thickness. He says,

    >>To investigate the proposed constancy with time of the true greenhouse gas optical thickness, we now simply compute tau_sub_a every year and check the annual variation for possible trends. In Fig. 10 we present the variation in the optical thickness and in the atmospheric flux absorption coefficient in the last 61 years.

    From which he observes,

    >>The correlation between tau_sub_a and the top altitude is rather weak.

    He leaves to the reader the task of visual correlation. I would not venture a guess about the correlation between the two records in Figure 10. However, they could be made to appear much more correlated by adjusting the vertical scales to emphasize that effect. This is what IPCC does. Correlation is mathematical, and he should compute it.

    Miskolczi concludes

    >> In other word, GCMs or other climate models, using a no-feedback optical thickness change for their initial CO2 sensitivity estimates, they already start with a minimum of 200% error (overestimate) just in Δtau_sub_a.

    Besides visual correlation, a candidate for the most common error in data analysis is to take differences of a noisy signal, then attempt to fit a function to the differences. Correlating one record with another is wholly analogous to fitting a function to a record. This error is found everyday in economics, and in fields like drug and food studies where the investigator attempts to find a best fit probability density. Engineers quickly learn not to design a circuit to differentiate a noisy signal. Taking differences amplifies the noise, and attenuates the signal. One mathematical manifestation of the problem is that a sample probability density, the density histogram, is not guaranteed to converge to the true population density as the number of samples or cells increases. Not so, the probability distribution! Spectral densities will not converge, but spectra will. A well-behaved spectrum often has an impossible spectral density, as, for example, whenever line spectral components are involved. The better technique then is to fit a function to the total signal, to the cumulative probability, or to the spectrum, and then if needed or for human consumption, to differentiate (take differences of) the best fit function.
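
    As a generic illustration of that point (synthetic data, nothing to do with Miskolczi’s records), here is a minimal Python sketch showing how first-differencing a slow trend plus white noise attenuates the signal and amplifies the noise:

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.arange(1000)
    signal = 0.001 * t                    # slow linear trend
    noise = rng.normal(0, 0.1, t.size)    # white noise, sigma = 0.1
    series = signal + noise

    diff = np.diff(series)                # first differences of the noisy series

    snr_before = (signal.max() - signal.min()) / noise.std()   # rough SNR: trend range vs noise sigma, ~10
    snr_after = 0.001 / diff.std()        # rough SNR: per-step trend vs differenced noise, ~0.007
    print(f"rough SNR before differencing: {snr_before:.2f}")
    print(f"rough SNR after  differencing: {snr_after:.4f}")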

    My advice to all is always be suspicious of analysis from data that are differences, anomalies, or densities.

    By taking differences of the optical thickness and of atmospheric absorption data, Miskolczi is differentiating noisy signals. As discussed above, he should first fit analytical functions to each data record, such as power series or orthogonal series, such as Fourier series. In the best of worlds, these might reveal an analytic relation between the signals. Regardless, he can next detrend the signals with parts of his analytical functions to perform a full correlation analysis, providing a scatter diagram and graphical and numerical demonstrations of linear predictors and of data correlation. After this is done, his paper might be ripe for conclusions.

    • David L. Hagen

      Jeff Glassman
      re: “By taking differences of the optical thickness and of atmospheric absorption data, Miskolczi is differentiating noisy signals. As discussed above, he should first fit analytical functions to each data record,”

      Please clarify where you see Miskolczi “taking differences in optical thickness” or “differentiating noisy signals.”
      I think you have misinterpreted his papers.
      I understood him to actually have “first fit analytical functions to each data record,” – e.g. of the atmospheric profile based on TIGR radiosonde data.
      Then he calculates the optical absorption for each of 150 layers.
      Next he INTEGRATES this (adds), not differentiates (subtracts) – to get the global optical depth.

      His correlations are fitting parameters to the observed radiosonde data processed to give optical absorption. The differences he takes are after taking these parameters, or after finding and rounding the correlations between the fluxes.

    • David L. Hagen

      Jeff Glassman
      re Cloud feedback. You note: “Specific humidity is proportional to surface temperature, and cloud cover is proportional to water vapor and CCN (cloud condensation nuclei) density, which has to be in superabundance in a conditionally stable atmosphere, but which is modulated by solar activity.”

      Do you have any way to clearly quantify this? Or papers supporting it?
      e.g. Roy Spencer critiques conventional assumptions that clouds dissipate with warming giving a positive feedback.

  58. A Lacis, 12/8/10, 11:49 am

    Wrong tree?

    What I said was “Radiative forcing in a limited sense applies radiative transfer, but it is not the same.” How can I parse what you have written to see what the wrong tree is?

    The core, the heart, the essence of the AGW model is the existence of a climate sensitivity parameter, in one form or another. It is the Holy Grail of AGW. Sometimes it’s the transient CSP, sometimes the equilibrium CSP, and sometimes just the vanilla CSP. Sometimes it is represented by λ, and sometimes not. IPCC says,

    >>The equilibrium climate sensitivity is a measure of the climate system response to sustained radiative forcing. It is not a projection but is defined as the global average surface warming following a doubling of carbon dioxide concentrations. It is likely to be in the range 2ºC to 4.5ºC with a best estimate of about 3ºC, and is very unlikely to be less than 1.5ºC. Values substantially higher than 4.5ºC cannot be excluded, but agreement of models with observations is not as good for those values. Water vapour changes represent the largest feedback affecting climate sensitivity and are now better understood than in the TAR. Cloud feedbacks remain the largest source of uncertainty. {8.6, 9.6, Box 10.2} AR4, SPM, p. 12.

    This puts the parameter as a response to an RF. In another expression, IPCC turns the relation around a bit, saying

    >>The simple formulae for RF of the LLGHG quoted in Ramaswamy et al. (2001) are still valid. These formulae are based on global RF calculations where clouds, stratospheric adjustment and solar absorption are included, and give an RF of +3.7 W m–2 for a doubling in the CO2 mixing ratio. (The formula used for the CO2 RF calculation in this chapter is the IPCC (1990) expression as revised in the TAR. Note that for CO2, RF increases logarithmically with mixing ratio.) 4AR ¶2.3.1 Atmospheric Carbon Dioxide, p. 140.

    Of course RF increases logarithmically with mixing ratio! The concept that RF(2C) = a constant + RF(C) is a functional equation, and its unique solution is the logarithm function, and the base is irrelevant. Generally, the solution to y(kx) = constant + y(x) is the logarithm, and in the AGW world, the standard k is a doubling of x, the concentration of CO2, almost always. The constant is the climate sensitivity parameter when k = 2.
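
    For completeness, here is a short sketch of the functional-equation step being asserted (the standard Cauchy-type argument, assuming y is continuous and defined for x > 0):

    % Assume y(kx) = c + y(x) for all x > 0, with y continuous.
    % Iterating gives y(k^n x_0) = n*c + y(x_0): y grows by a fixed increment c
    % per multiplicative step k. Writing u = log_k(x/x_0), continuity then gives
    \[
      y(x) \;=\; y(x_0) + c\,\log_k\!\left(\frac{x}{x_0}\right)
            \;=\; y(x_0) + \frac{c}{\ln k}\,\ln\!\left(\frac{x}{x_0}\right),
    \]
    % i.e. the logarithmic dependence; changing the base of the logarithm only rescales c.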

    Safe to say, all the major computer models used by IPCC produce a constant climate sensitivity parameter. A table of the models studied in the Coupled Carbon Cycle-Climate Model Intercomparison Project (C4MIP) with their transient climate sensitivity parameter is AR4, Table 7.4, p. 535. IPCC says,

    >>The equilibrium climate sensitivity estimates from the latest model version used by modelling groups have increased (e.g., CCSM3 vs CSM1.0, ECHAM5/MPI-OM vs ECHAM3/LSG, IPSL-CM4 vs IPSL-CM2, MRI-CGCM2.3.2 vs MRI2, UKMO-HadGEM1 vs UKMO-HadCM3), decreased (e.g., CSIRO-MK3.0 vs CSIRO-MK2, GFDL-CM2.0 vs GFDL_ R30_c, GISS-EH and GISS-ER vs GISS2, MIROC3.2(hires) and MIROC3.2(medres) vs CCSR/NIES2) or remained roughly unchanged (e.g., CGCM3.1(T47) vs CGCM1, GFDLCM2.1 vs GFDL_R30_c) compared to the TAR. In some models, changes in climate sensitivity are primarily ascribed to changes in the cloud parametrization or in the representation of cloud-radiative properties (e.g., CCSM3, MRI-CGCM2.3.2, MIROC3.2(medres) and MIROC3.2(hires)). However, in most models the change in climate sensitivity cannot be attributed to a specific change in the model. 4AR, ¶8.6.2.2, p. 630.

    Safe to say, every climate model produces a climate sensitivity parameter. You don’t have to read esoteric papers on absorption to find the logarithm dependence. The very notion that such a thing as this constant exists is the same as assuming that radiative forcing is proportional to the logarithm of the gas concentration. Further, recognizing that the effect of CO2 is the absorption of infrared lost from the surface, we have the key underlying assumption to all of AGW: that the absorption of IR by CO2 is proportional to the logarithm of the CO2 concentration.

    No matter how these models might have been mechanized, whether computing a radiation transfer or not, whether mechanizing the atmosphere as one or many layers, whether making actual computations or parameterizing, they produce the logarithm dependence. Like Captain Kirk, IPCC said, “Make it so.”

    The assumption is false. That is the wrong tree up which I am barking.

    • Jeff,

      I think you would get to understand radiative transfer, and radiative transfer effects and issues, a whole lot better if you took the time to read radiative transfer papers from the published literature (e.g., JGR, GRL, J Climate, or IPCC), or checked out the posted background information on blogs like Real Climate, Chris Colose, or Roger Pielke, Sr., instead of spending time perusing such papers as those by Miskolczi dealing with his mistaken interpretation of the greenhouse effect.

      There is good reason why Miskolczi’s papers are not getting published in the mainstream scientific literature. These journals try very hard not to publish material that they know to be erroneous.

    • No matter how these models might have been mechanized, whether computing a radiation transfer or not, whether mechanizing the atmosphere as one or many layers, whether making actual computations or parameterizing, they produce the logarithm dependence.

      Jeff, you can forget completely about models. Observation of the temperature after subtracting the 65-year AMO shows an impressively accurate fit to the logarithm dependence when the Hofmann formula 280 + 2^((y-1790)/32.5) is used for CO2.

      Not only does CO2 have a measurable effect on climate, that effect is without any shadow of doubt logarithmic. The fit is far too good to have any other explanation.
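
      To show the shape of that curve (no temperature data are fitted here; this only evaluates the Hofmann form quoted above), a minimal Python sketch:

      import numpy as np

      # Hofmann form quoted above: C(y) = 280 + 2**((y - 1790) / 32.5) ppm,
      # and the corresponding number of doublings relative to 280 ppm.
      years = np.arange(1850, 2011, 20)
      co2 = 280.0 + 2.0 ** ((years - 1790) / 32.5)
      doublings = np.log2(co2 / 280.0)

      for y, c, d in zip(years, co2, doublings):
          print(f"{y}: CO2 ~ {c:6.1f} ppm, log2(C/280) = {d:.3f}")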

      If you don’t believe this, how do you account for the fact that the HADCRUT temperature record with 10-year smoothing is now 0.65 °C above any temperature attained prior to 1930? This even applies to 1880, the highest temperature between 1850 and 1930. Is God holding a soldering iron over us, has the Sun very suddenly gotten far hotter than in the last 3 million years, has the Devil decided the Apocalypse is nigh and is slowly boiling us like frogs in a pot, or what?

  59. Willard 12/8/10 12:07

    You need wonder no more! The answer to the purpose of the (true) little parable is in the very next paragraph. Do read on!

    Myhre, et al. added labels and symbols to their graph, the one that charmed scienceofdoom, to give the appearance that climate researchers were approximating data and fitting curves to data. They represented the output of a couple of computer models as data. They did this because people with under-developed science literacy put great stock in the output of computers. This bridges from football poolers to IPCC’s audience.

    Computer models are grand and essential. What is missing in these examples is the notion that computer models, like all other scientific models, must make significant predictions which then survive the tests of validation. This is the scientific process of advancing from hypothesis to theory.

    My contention is that only theories can be used ethically for public policy. IPCC fails this test.

    • Jeff Glassman,

      The answer to what you do with your paragraph lies in the sentence that immediately follows it:

      > Myhre, et al. (1998) did the same thing, making the picks look more genuine by graphing some with lines and some as if they were data points for the lines.

      So Myhre’s “pick” amounts to using a number generator. As far as I can see, there are two ways to interpret this. Either it’s meant literally, in which case it really looks like a caricature. Or it’s meant as a way to express sarcasm, i.e. Myhre’s pick is no better than a random choice. My personal impression is that you are expressing sarcasm, the figure of speech that Tallbloke was condemning.

      I underlined this story of yours (which I liked, btw) to show that caricature or sarcasm is common down here. Complaining that scientists are the ones who indulge in that kind of trick, here and in general, amounts to cherry-picking. The habit is more general than that. Style matters, mostly, as far as I am concerned. As long as one is willing to invest some time to entertain the gallery in a most pleasant way, I don’t mind much.

  60. Hi Jeff,
    A fair bit of this analysis is over my head, but I wondered if you could just clarify this statement for me:

    “Miskolczi’s work has been productive. It has discovered the existence of the powerful, negative, water vapor feedback. Specific humidity is proportional to surface temperature”

    Proportional to surface temperature at which pressure level?
    I assume we are talking about Miskolczi’s analysis of the radiosonde data?

  61. Rather than respond individually, I’d like to make a few points relevant to several comments above.

    1. One of the striking elements of Judith Curry’s post is the linked references to multiple sources demonstrating the excellent correspondence between radiative transfer calculations and actual observations of IR flux as seen from both ground-based and TOA vantage points. This correspondence is based initially on line-by-line calculations utilizing the HITRAN database. Models based on band-averaging are less accurate but still perform well. Empirically, therefore, the radiative transfer equations have served an important purpose in representing the actual responses of upward and downward IR to real world conditions.

    2. It is universally acknowledged that the roughly logarithmic relationship between changes in CO2 concentration and forcing applies only within a range of concentrations, but that range encompasses the concentrations of relevance to past and projected future CO2 scenarios. It does not necessarily apply to other greenhouse gases, although water appears to behave in a roughly similar manner.

    As far as I know, that logarithmic relationship can’t be deduced via any simple formula. Rather, it represents the shape of the curve of absorption coefficients as they decline from the center of the 15 um absorption maximum into the wings. As CO2 concentrations rise, the maximum change in absorption moves further and further into the wings, and since the absorption coefficients there are less than at the center, the effect of rising CO2 assumes a roughly logarithmic rather than linear curve.
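
    A toy numerical sketch of that mechanism (not a real CO2 band; an assumed absorption coefficient that decays exponentially away from the band centre) shows the band-integrated absorption growing by a nearly constant increment per doubling of the absorber, i.e. roughly logarithmically:

    import numpy as np

    nu = np.linspace(-50.0, 50.0, 20001)   # wavenumber offset from band centre (arbitrary units)
    k = np.exp(-np.abs(nu) / 5.0)          # exponentially decaying wing coefficient (toy model)

    prev = None
    for doublings in range(11):
        u = 2.0 ** doublings                           # absorber amount
        a = np.trapz(1.0 - np.exp(-k * u), nu)         # band-integrated absorption
        step = "" if prev is None else f"  (+{a - prev:.2f} per doubling)"
        print(f"{doublings:2d} doublings: band absorption = {a:7.2f}{step}")
        prev = a
    # The increment per doubling settles near a constant, i.e. absorption ~ log(amount).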

    3. A point was made earlier about the difficulty of laboratory determination of absorption coefficients relevant to atmospheric concentrations where the tau=1 relationship holds. Not being a spectroscopist, I can’t give an informed opinion on this, but I wonder whether this couldn’t be approached by measuring absorption in the laboratory in the relevant frequency as a function of concentration, pressure, and temperature, so as to derive a useful extrapolation. If someone here has spectroscopic expertise, he or she should comment.

    • Fred,

      Concerning your point 2. One example that leads to the logarithmic relationship is a strong absorption line with exponential tail. For this example it is possible to derive the result analytically.

      I do not try to claim that this is a correct model, but the derivation may help in understanding how broadening of a saturating absorption peak leads to the logarithmic relationship.

      • Pekka Pirilla – Maybe I’m misinterpreting your point, but the main CO2 absorption band centered around 15 um contains hundreds of individual lines, representing the multitude of quantum transitions singly and in combination that CO2 can undergo. The 15 um line represents a vibration/rotation transition. As one moves in either direction away from 15 um, the lines are weaker, because the probability of a match between photon energy and the energy needed for that transition declines. As a result, IR in those wavelengths must encounter more CO2 molecules in order to find a match. Absorption is so efficient at 15 um that more CO2 makes little difference at that wavelength (surface warming is a function of lapse rate, but the lapse rate at the high altitude for 15 um emissions is close to zero). In the wings, however (e.g., 13 um or 17 um), more CO2 means more absorption and greater warming. The logarithmic relationship appears to reflect the fact that increasing CO2 more and more involves absorption wavelengths of lower and lower efficiency – those further and further from 15 um.

        Note that we are talking about the breadth of the absorption band with its many lines. The term “broadening” generally refers to the increasing width of individual lines in response to increases in pressure or temperature.

        Finally, the absorption within a single line (i.e., monochromatic absorption) follows the Beer-Lambert law of exponential decay as a function of path length, but this is not the source of the logarithmic relationship we are discussing. Indeed, in the atmosphere, absorption is followed by emission (up, down, or sideways), followed by further absorption and so on, which is why the radiative transfer differential equations rather than a simple absorption-only paradigm must be used.

        I may not have addressed your point, but I’m hoping to clarify what happens in the atmosphere for individuals unfamiliar with the spectral range of absorption or emission involving greenhouse gas molecules.

      • Fred Moolten,
        The simple mathematical example that I was referring to applies to a situation where the absorption is fully saturated in the center of the band and the tails have an exponential form. For this kind of absorption peak, applying the Beer-Lambert law to the tails gives, as an analytical result, the logarithmic relationship between concentration and transmission through the atmosphere.

        The fact that the logarithmic relationship is approximately valid also in the LBL models may be interpreted as telling us that weaker and weaker absorption becomes effective at approximately the same relative rate as in exponential tails.
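
        A one-line sketch of that analytic result, assuming a wing absorption coefficient k(ν) = k0*exp(−|ν|/γ) and absorber amount u: the band is effectively opaque out to the wavenumber where k(ν)*u is about 1, so

        % Width of the (nearly) saturated interval for an exponential wing:
        \[
          k_0\,e^{-\nu_1/\gamma}\,u = 1
          \quad\Longrightarrow\quad
          \nu_1 = \gamma \ln(k_0 u),
        \]
        % so the effectively opaque width, and hence the band-integrated absorption,
        % grows in proportion to ln(u), which is the logarithmic dependence on concentration.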

      • Alexander Harvey

        Pekka,

        I believe that the existence of an abundance of lines distributed widely across a range of many orders of magnitude in strength would by itself tend to give rise to a logarithmic-type response to increasing concentrations over a wide range. This may be similar in importance to the side-band effect, or more or less important; I simply do not know.

        Alex

      • Alex,
        The result should be the same if the strength distribution of the lines is suitable. My intuition tells me that it should be such that the PDF of the logarithm of the line strengths is flat over the relevant range.

      • Absorption is so efficient at 15 um that more CO2 makes little difference at that wavelength (surface warming is a function of lapse rate, but the lapse rate at the high altitude for 15 um emissions is close to zero).

        If you led the world’s theoretical physicists out to a courtyard and shot them all, I believe you would seriously set physics back.

        I cannot say the same for climate science. The ratio of theorizing to observation is totally out of hand.

        Admittedly Fred Moolten’s theorizing is bordering on the crackpot. However it seems to me that even highly respected theoretical climate scientists are undermining the credibility of their field with calculations that underestimate the environment’s complexity.

        Theoretical economists have a similar problem. It’s a good question whether the economy or the climate is computationally more intractable in that regard. They’re both incredibly complicated systems that theorists love to oversimplify.

    • David L. Hagen

      Fred Moolten
      Appreciate your clarifications. You note above:
      “Despite some conflicting results (at times cited selectively), these too indicate that as temperatures rise, atmospheric water vapor content increases, including increases in the upper troposphere where the greenhouse effect of water vapor is most powerful.”
      I can see how absorption can vary with altitude as concentration changes, e.g. Essenhigh calculates for 2.5% water vapor vs 0.04%. (I can see how altitude variations would affect the relative absorption heating and the temperature lapse rate – and in turn adjust clouds.)

      However, the Beer-Lambert absorption shows the log of Io/I to change as the product of concentration and depth.

      Question:
      As long as that total concentration x depth remains constant, does the total absorption change depending on how the concentration is distributed?

      • Actually, for a single absorber and no emission, very little. I published a proof of that once. The total Beer’s Law absorption is pretty much dependent on the mass of absorbing material along the ray, however distributed.

        However, it’s different with competing absorbers and thermal emission happening as well.
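
        A minimal Python sketch of that point (toy numbers): multiplying the Beer-Lambert transmittances of many thin layers gives a result that depends only on the total column amount, however the absorber is arranged along the ray.

        import numpy as np

        rng = np.random.default_rng(1)
        k = 0.3                                      # mass absorption coefficient (arbitrary units)
        dz = 0.01
        n_layers = 1000

        rho_uniform = np.full(n_layers, 1.0)         # uniform density along the ray
        rho_lumpy = rng.uniform(0.0, 2.0, n_layers)  # lumpy density along the same ray
        rho_lumpy *= rho_uniform.sum() / rho_lumpy.sum()   # force the same column mass

        for name, rho in [("uniform", rho_uniform), ("lumpy", rho_lumpy)]:
            total_t = np.exp(-k * rho * dz).prod()   # product of thin-layer transmittances
            print(f"{name:8s}: transmission = {total_t:.4f}")
        # Both give exp(-k * column mass); with emission or competing absorbers this collapse fails.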

      • David,

        Water vapor absorption is line absorption with the water vapor line widths being linearly proportional to atmospheric pressure (P/Po). This makes water vapor a less efficient absorber with decreasing pressure. Thus, the same amount of water vapor near the tropopause will absorb a lot less solar radiation than if that same amount of water vapor was at ground level.

        Also, since water vapor absorption is line absorption, it therefore does not follow the Beer-Lambert law, except on a monochromatic basis. To get the atmospheric absorption right in the presence of strongly varying absorption with wavelength, you need to either do line by line calculations, or use a correlated k-distribution approach.
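
        A minimal Python sketch of why that is (a toy spread of line strengths, not real spectroscopy): averaging the monochromatic transmittances over a band is not the same as applying Beer-Lambert with the band-mean coefficient, while integrating over the sorted (cumulative) distribution of k, the single-layer idea behind the k-distribution method, recovers the band average with only a handful of points.

        import numpy as np

        rng = np.random.default_rng(2)
        k_lines = rng.lognormal(mean=0.0, sigma=2.0, size=10000)  # toy line-strength spread
        u = 1.0                                                   # absorber amount

        t_band = np.exp(-k_lines * u).mean()      # average of monochromatic transmittances (reference)
        t_mean_k = np.exp(-k_lines.mean() * u)    # Beer-Lambert with the band-mean coefficient (wrong)

        g = np.linspace(0.05, 0.95, 10)           # 10 quadrature points in cumulative-probability space
        t_kdist = np.exp(-np.quantile(k_lines, g) * u).mean()

        print(f"line-by-line band mean:   {t_band:.3f}")
        print(f"Beer-Lambert with mean k: {t_mean_k:.4f}")
        print(f"10-point k-distribution:  {t_kdist:.3f}")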

      • David L. Hagen

        A. Lacis
        “This makes water vapor a less efficient absorber with decreasing pressure.”
        Thanks Andy, that is a clear physical reason for the difference.

        Re: “To get the atmospheric absorption right . . . you need to either do line by line calculations, or use a correlated k-distribution approach.”

        1) I would welcome any comments/references you might have as to the relative accuracy of LBL vs k-distribution calculations, especially any good reviews.

        2) I found:
        Intercomparison of radiative forcing calculations of stratospheric water vapour and contrails
        GUNNAR MYHRE et al. Meteorologische Zeitschrift, Vol. 18, No. 6, 585-596 (December 2009) DOI 10.1127/0941-2948/2009/0411
        http://www.igf.fuw.edu.pl/meteo/stacja/publikacje/Myhre2009.pdf

        Detailed line-by-line codes agree within about 15 % for longwave (LW) and shortwave (SW) RF, except in one case where the difference is 30 %. Since the LW and SW RF due to contrails and SWV changes are of opposite sign, the differences between the models seen in the individual LW and SW components can be either compensated or strengthened in the net RF, and thus in relative terms uncertainties are much larger for the net RF.

        These differences are much larger than the 1% level agreement you noted above. Is this primarily due to trying to evaluate contrails?

        Are these reflective of the difficulty in modeling clouds absorption/ reflection vs water vapor?

        3) By contrast I found:
        An improved treatment of overlapping absorption bands based on the correlated k distribution model for thermal infrared radiative transfer calculations, Shi et al. Journal of Quantitative Spectroscopy and Radiative Transfer
        Volume 110, Issue 8, May 2009, Pages 435-451

        This paper discusses several schemes for handling gaseous overlapping bands in the context of the correlated k distribution model (CKD). . . . flux differences did not exceed 0.8 W/m2 at any altitude. . . .

        Compare: Chou, Ming-Dah, Kyu-Tae Lee, Si-Chee Tsay, Qiang Fu, 1999: Parameterization for Cloud Longwave Scattering for Use in Atmospheric Models. J. Climate, 12, 159–169.

        A parameterization for the scattering of thermal infrared (longwave) radiation by clouds has been developed based on discrete-ordinate multiple-scattering calculations. . . .
        For wide ranges of cloud particle size, optical thickness, height, and atmospheric conditions, flux errors induced by the parameterization are small. They are <4 W m−2 (2%) in the upward flux at the top of the atmosphere and <2 W m−2 (1%) in the downward flux at the surface.

        That appears much better than a 1% error.

        4) Would you consider Miskolczi’s LBL using 3459 spectral ranges to provide sufficient resolution “to get atmospheric absorption right” for water vapor absorption in each of his 150 vertical segments, assuming the HITRAN data base etc.?

    >but I wonder whether this couldn’t be approached by measuring absorption in the laboratory in the relevant frequency as a function of concentration, pressure, and temperature, so as to derive a useful extrapolation. If someone here has spectroscopic expertise, he or she should comment.<

      Are you suggesting this has NOT been done ? If it hasn't, which I really doubt, then I am dumbfounded – another one of my assumptions shot to pieces

  62. tallbloke 12/8/10 1:58 pm

    Miskolczi (2007) said in his abstract,

    >>Simulation results show that the Earth maintains a controlled greenhouse effect with a global average optical depth kept close to this critical value.

    For this, and without validating his analysis, I applaud him. He claims to support his results with radiosonde data.

    As a matter of philosophy, climatologists should assume that the climate is in a conditionally stable state, because the probability is zero that what we observe is a transient path between stable states. Then they should set about to estimate what controls that state, and the depth or dynamic range of the controls. From this modeling, they could determine how Earth might experience a significant state change, and it would help them distinguish between trivial and important variations.

    Instead, the GCMs model Earth as balanced on a knife edge, ready to topple into oblivion with the slightest disturbance. This is analogous to finding round boulders perched on the sides of hills, and cones balanced on their apexes. This is Hansen’s Tipping Points. This modeling is intended to achieve notoriety and to frighten governments.

    Of course Earth’s greenhouse effect is controlled. Earth has two stable states, a warm state like the present, and a cold or snowball state. Temperature is determined by the Sun, but controlled in the warm state by cloud albedo, the strongest feedback in climate, positive to amplify solar variations, and negative to mitigate warming. In the cold state, surface albedo takes over as the sky goes dry and cloudless, the greenhouse effect is miniscule, and white covers the surface. The cold state is more locked-in than controlled.

    The regulating negative feedback is proportional to humidity, which is proportional to surface temperature. IPCC admits the humidity effect, but doesn’t make cloud cover proportional to it. Remember, proportional only means global average cloud cover increases with global average surface temperature, not that they occur in some neat linear relationship. And to answer your question, this all occurs at a pressure of one atmosphere.

    • Thanks Jeff.
      The reason I asked is that I noticed this curious apparent relationship between specific humidity at the 300mb level, and solar activity, and I wondered how this might fit with Miskolczi’s scheme:

      http://tallbloke.files.wordpress.com/2010/08/shumidity-ssn96.png

      This is up around the altitude the Earth mainly radiates to space from, and I was wondering if it might indicate that the humidity there is proportional both to the temperature at that altitude and to the solar irradiance received at that altitude. The interface…

    • Jeff – I find numerous errors in your statement, which I believe could be rectified if you reviewed the two threads in this blog addressing the greenhouse effect, as well as other sources (you could start with Hartmann’s climatology text and then graduate to Pierrehumbert’s “Principles of Planetary Climate”, due out very shortly).

      The reason I don’t address them here is that I find myself unable to provide an adequate response without consuming many pages, and so I would simply end up listing the errors without explaining what the correct answers are. There are probably others who can be more succinct, and I hope they may respond.

      • I would be glad to try to respond to individual points, however, if you bring them up singly (e.g., the “knife-edge” fallacy).

      • There are probably others who can be more succinct, and I hope they may respond.
        The “big gun” is just making things up.

    • For this, and without validating his analysis, I applaud him.

      Join the crowd. All we need now is a reputable validator of his analysis.

      But even if his analysis survives this validation, what good is that if it doesn’t refute global warming? Reducing flow of water vapor into the atmosphere could well increase global warming instead of cooling it as FM claims.

      In any event this is simply yet another model. Some of us out there, on both sides of the debate, don’t trust complex models that we have no way of verifying or validating ourselves. Until easily understandable specifications are written for these models, and the models have been shown to meet those specifications, they can’t be trusted.

      In the meantime simply looking at the temperature and CO2 records is a lot more convincing.

      • In the meantime simply looking at the temperature and CO2 records is a lot more convincing.

        I couldn’t agree more – let’s look at the Vostok ice core data, nice long-term stuff. I did check this, and I know the person who compiled it was a bit naughty in adding a flask (dare one hope) data point for the recent CO2 level. Nevertheless CO2 has had a positive trend for ~7,500 years, and for the same period the temperature trend has been negative. This raises a question about how Arrhenius’ logarithmic relation is validated over the long term in the real atmosphere rather than in a laboratory.

      • Which side of that graph would you say is the “present day”?

      • Yes, it’s frustrating when graphs are inadequately labelled.
        The x-axis is missing the BP acronym.

      • I expected most people would recognise the Younger Dryas on the right of the graph.

    • As a matter of philosophy, climatologists should assume that the climate is in a conditionally stable state, because the probability is zero that what we observe is a transient path between stable states.

      Yeah, right.

      As soon as we stop doubling the CO2 we pump into the atmosphere every third of a century we can say something like this.

  63. Tomas Milanovic

    I have the same issue as Al Thekasski.
    There is no dynamics in the line by line radiation transfer models as far as I know.
    Given a ground temperature, it needs a fixed atmospheric profile (temperature, concentrations and pressure) to run.
    As the units used in the posts are W/m² (i.e. ONE number!), there is some averaging going on.
    I am not a specialist in radiative transfer, but it seems impossible to run, at every time step of the model, as many individual radiative transfer calculations as there are horizontal cells.
    Especially when convection is involved, which massively changes the temperature and humidity profiles at every instant.

    If it is true that only a few “standard” profiles are considered, without spatial coupling, I do not believe the 1% accuracy of the radiation flows that has been thrown around here.
    So the question: how many profiles (temperature, pressure, concentration) are considered for every time step of the model?
    Besides, it is also not true that for any column with a 1 m² base “radiation in = radiation out”, as the temperature variations readily show.

    • Tomas – the modelers are the ones who should probably be addressing your comment. However, you may be confusing climate modeling with the radiative transfer equations as a means of assessing climate forcing from changes in CO2 or other moieties. Forcing is calculated by assuming everything remains constant in the troposphere and on the surface except for a change in radiative balance (typically at the tropopause). That means an assumption of unchanging temperature, humidity, pressure, etc. Convection is a response to changes induced by forcing and is excluded from the forcing calculations themselves.

      The models then attempt to incorporate the other variables over the course of time and grid space (certainly including convection), but that is a separate issue from the radiative transfer equations as a means of determining the effects of forcing.

      • I should add that the models don’t assume, even with forcing calculations, that pressure, temperature, humidity, etc., are the same all over the globe or at different seasons. This is one reason why their estimate of the temperature change from CO2 doubling (without additional feedbacks) is 1.2 C instead of the 1 C change estimated simply by differentiating the Stefan-Boltzmann equation and assuming a single mean radiating altitude and lapse rate.
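
        For reference, the 1 C figure mentioned here is the back-of-envelope Planck response obtained by differentiating the Stefan-Boltzmann law; a minimal sketch, assuming a single effective radiating temperature of about 255 K and a doubling forcing of about 3.7 W/m2 (both assumed values for illustration):

          SIGMA = 5.670e-8       # Stefan-Boltzmann constant, W m^-2 K^-4
          T_EFF = 255.0          # assumed single effective radiating temperature, K
          D_F = 3.7              # commonly quoted forcing for doubled CO2, W m^-2

          # F = sigma * T^4, so dT = dF / (4 * sigma * T^3)
          dT = D_F / (4.0 * SIGMA * T_EFF ** 3)
          print(f"no-feedback estimate: {dT:.2f} K")       # about 1.0 K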

      • With regard to convection, I was referring to a change in convection. Lapse rates themselves reflect the effects of convection, but forcing calculations assume no change in these, but only a radiative imbalance, and it is the latter that provides the basis for applying the radiative transfer equations.

      • Tomas Milanovic

        Convection is a response to changes induced by forcing and is excluded from the forcing calculations themselves.

        Well, as I said, I am not a specialist in radiative transfer, but this is certainly either not true or does not mean what it seems to say.
        Convection, or more generally fluid flow, is in no way a “response” to radiation, and even less to some “forcing”.
        One could as well say that the radiation is the response to the particular properties of the fluid flow (like the temperature and pressure fields).

        The right expression is that both are coupled, so that neither can be considered independently of the other.
        So considering radiative transfer uncoupled from the fluid dynamics is a nice theoretical exercise but has nothing to do with reality.
        Hence my question.

      • We may not disagree as much as it seems. Forcing (at least on planet Earth) is a hypothetical concept based on the assumption of unchanging conditions outside of the radiative imbalance. It has utility, however, as the basis for adding feedbacks.

        On a planet without water, it might be possible to measure forcing directly.

    • Tomas

      As I have seen the units used in the posts W/m² (e.g ONE number!), there is some averaging going on.

      A single value is the usual outcome of integration, whether over the spectrum, as is the case for line-by-line integrators like HARTCODE and FASCODE for example; and since the output is a value for energy, an extensive property, it’s perfectly valid to average it over time and space.

      The line-by-line integrators mentioned work with atmospheric profiles taken from radiosonde ascents from all over the world, though in the TIGR 1 data set the tropics are under-represented.

    • Actually Tomas, I am not yet into the question of dynamics and the associated possibility of substantial errors due to the order of averaging of fluctuating functions. My concern was about the validity of static calculations under realistic atmospheric conditions, where the vertical gradient of air temperature changes sign partway up.

      Consider the following example. Assume that we have a standard atmospheric profile — temperature decreases for the first 11 km, then comes a 1–2 km tropopause, and then the stratosphere, where temperature increases with height. Assume that the absorption spectrum consists of only two bands. Let a narrow band (say, 14–16 um) have quite strong absorption, while another, wider region (say, 3x wider than the strong band) has very weak absorption (aka a “transparency window”). Assume the average “emission height” of the whole range to be 6 km. According to the standard approach of averages, the “average emission height” will go up with an increase of CO2, where higher = colder. A colder layer emits less, and therefore a global imbalance of OLR would occur and force the climate to warm up. This is the standard AGW concept.

      However, under a more careful consideration, the average (“effective emission height”) of our hypothetical spectrum, 6 km, is made up of 0 km for 3/4 of the IR range and of 24 km for the remaining 1/4 of the band. If we add more CO2, the increase in the 0 km band gives you zero change in OLR, while the increase in the 24 km emission height will give you MORE OLR, because the temperature gradient in the stratosphere is opposite to the one in the troposphere, so higher = warmer. As a result, the warmer layer would emit more, and the overall energy imbalance would be POSITIVE. This implies climate COOLING, or just exactly the opposite of what the standard “averaging” theory says.

      In reality the spectrum is more complex, and the edges of absorption bands are not abrupt, so many different trends would coexist. But the above example suggests that warming and cooling effects may well cancel each other in the first approximation. Therefore, the sensitivity of OLR to a CO2 increase is a second-order effect, and must be much harder to calculate accurately. Hence my question to one of the fathers of the “forcing” concept.
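
      A toy numerical version of that two-band thought experiment (all numbers are made up for illustration and say nothing about the real spectrum): with three quarters of the emission taken from the surface and one quarter from a stratospheric level where temperature increases with height, raising the stratospheric emission height increases the outgoing flux in this contrived case.

        SIGMA = 5.670e-8                                   # Stefan-Boltzmann constant, W m^-2 K^-4

        def temperature(z_km, t_surf=288.0):
            # Crude illustrative profile: -6.5 K/km to 11 km, isothermal to 13 km,
            # then +2 K/km in the stratosphere (assumed for this sketch only)
            if z_km <= 11.0:
                return t_surf - 6.5 * z_km
            if z_km <= 13.0:
                return t_surf - 6.5 * 11.0
            return t_surf - 6.5 * 11.0 + 2.0 * (z_km - 13.0)

        def toy_olr(bands):
            # bands: list of (fraction of spectrum, emission height in km)
            return sum(frac * SIGMA * temperature(z) ** 4 for frac, z in bands)

        before = toy_olr([(0.75, 0.0), (0.25, 24.0)])      # weak band from surface, strong band from stratosphere
        after = toy_olr([(0.75, 0.0), (0.25, 26.0)])       # more CO2: strong band emits from a higher (warmer) level
        print(f"change in toy OLR: {after - before:+.1f} W/m2")   # positive in this contrived example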

  64. Tomas Milanovic

    Earth has two stable states, a warm state like the present, and a cold or snowball state.

    Earth can’t have any stable states because, if it could, it would already have found one in the last 4.5 billion years and stayed there forever.
    At least if the word “stable” is to be understood as “a fixed point in the phase space”.
    Earth as a dissipative system has never been in any kind of equilibrium and never will be – its trajectory in the phase space is dynamical, wandering between an infinity of possible regions in the parameter space.
    It is precisely because the energy supplied and the energy dissipated are not stationary at any time scale that there can’t be any stable point.

    The Earth can only be understood in terms of its dynamical orbits in the parameter space, not in terms of some nonexistent “equilibrium” or “stable” states.

    • Tomas,

      The radiation is indeed being calculated at every gridbox of the model for every physics or radiation time step (more than 3300 times for each time step for a moderately coarse spatial resolution of 4 x 5 lat-lon degrees). At each grid box, the radiative heating and cooling rates are calculated for the temperature, water vapor, aerosol and cloud distributions that happen to exist at that grid box at that specific time. The (instantaneous) radiative heating and cooling information is then passed on to the hydrodynamic and thermodynamic parts of the climate model to calculate changes in ground and atmospheric temperature, changes in water vapor, clouds, and wind, as well as changes in latent heat, sensible heat, and geopotential energy transports. All these changes in atmospheric structure are then in place for the next time step to calculate a new set of radiative transfer inputs to keep the time marching cycle going.
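
      In outline, that time-marching cycle looks roughly like the sketch below; it is a toy stand-in, not any model's actual code, and the heating rule and temperatures are placeholders chosen only to make it run.

        def compute_radiation(column):
            # toy placeholder: heating rate proportional to departure from 250 K
            return -0.01 * (column["T"] - 250.0)

        def step(state, dt):
            for column in state:                      # one entry per grid box (~3300 for a 4 x 5 degree grid)
                column["heating"] = compute_radiation(column)
            for column in state:                      # toy "dynamics/thermodynamics" update for the next step
                column["T"] += column["heating"] * dt
            return state

        state = [{"T": 288.0 - 0.01 * i} for i in range(3312)]   # 72 x 46 columns, made-up temperatures
        for _ in range(10):                                       # ten steps of the marching cycle
            state = step(state, dt=1.0)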

    • Tomas

      Earth can’t have any stable states because if it could , it would have already found it in the last 4.5 billions years and stayed there forever.

      I do believe there is sufficient evidence that the earth has been in a snowball state (an example), and also evidence for the current state it’s in.

      • My theory about snowball states of the Earth is that life adapts to it and fills every niche until the snow is black (blue, green, brown, anything but white) with life. This reverses the albedo and the snow then melts.

        The obvious objection is that snow should still be black today, teeming with the same life, instead of white.

        I have the following suggestions.

        1. The snow/ice species were completely killed off by global warming. There was no snow at all in the Cambrian.

        Counter-objection: we’ve had 49 million years since the Azolla event for those species to regroup.

        Counter-counter-objections:

        (a) Snow species evolve so slowly (because of the temperature) that we’re not there yet.

        (b) Snowball Earth was a much more stable place for snow species than today’s ice caps, which are subject to storms too violent for species to evolve comfortably on account of the huge difference between tropical and polar temperatures. When the equator was icebound it would have been a far calmer place than today’s ice caps.

      • (b) Snowball Earth was a much more stable place for snow species than today’s ice caps, which are subject to storms too violent for species to evolve comfortably on account of the huge difference between tropical and polar temperatures. When the equator was icebound it would have been a far calmer place than today’s ice caps.

        That sounds plausible enough for me to accept it until something better comes along.
        “Life” is very resilient, as we have seen over and over.

      • That sounds plausible enough for me to accept it until something better comes along.
        “Life” is very resilient, as we have seen over and over.

        So what do you think of JoNova?

      • So what do you think of JoNova?

        Less plausible than a supernova

        http://www.nature.com/nature/journal/v275/n5680/abs/275489a0.html

      • Maksimovich
        Could the rapid growth in industrial activity since the 1920s have changed (increased) the ionisation of the stratosphere?

      • The reason we have a comprehensive test ban is obvious. Industrial activity is less than the effects of the decrease in modulation of the earth’s geomagnetic field over the last 100 years, e.g. Svalgaard.

        As a thought experiment, i.e. as an inverse model: are the temperature excursions in the latter part of the last century due to an increase in forcing (say GHGs etc.) or to a decrease in the efficiency of dissipation, such as heat transport to the poles?

      • I don’t understand the segue but I’m not surprised.

        I don’t know Jo Nova personally. I do however like what she does and how she does it on her blog. I participate often and help out as much as I can.
        Anything else you’d like to know Vaughan?

      • Anything else you’d like to know Vaughan?

        Sure. I’d been trying to decide between JoNova, Ann Coulter, and Michelle Malkin as to who was the most vicious. Currently I’m shooting for JoNova. If you disagree I’d love to know why.

        Another dimension is intelligence. I put Ann Coulter miles ahead of the other two. Love to hear your arguments on that one too. Reckon JoNova is a genius?

        I’d also be interested to know whether Michelle Malkin wins in any category. Feel free to be creative in making up categories.

        Should I start a pool on these three?

        I left out Leona Helmsley because no one likes the Queen of Mean, and also Martha Stewart because everyone likes her after what the system put her through and she took it all with her usual consummate grace. Not unlike Bill Gates, who also came out smelling like a rose after the wringer we technocrats put him through.

        I say this as the designer of the Sun logo, which ought to incline me to a more cynical view of Gates but it doesn’t, perhaps because my logo has been appearing on the back cover of the Economist about every third week since we were acquired by Oracle and Scott McNealy was fired. The Wikipedia article attributes the Sun name to my Ph.D. student Andy Bechtolsheim but it originated with Forest Baskett, Andy’s previous advisor and my predecessor on the Sun project.

        Like God, capitalism works in mysterious ways.

      • Glad to see you’ve opened yourself up nice and wide for all to see.
        I won’t be getting down into this trash but feel free to wallow all by yourself.

      • I won’t be getting down into this trash

        Looks like we understand each other’s position.

        Staying above the fray is a good idea for those working in environmental science. Since I don’t do the latter I don’t find a need to do the former.

      • “I say this as the designer of the Sun logo”

        I’m impressed. :-)

        I got a SPARCstation IPC in 1995 with a Sony 1152×900 monitor.
        My PC-owning friends were very jealous.

      • The logo was a stroke of brilliance, apropos of nothing much.

  65. @Fred Moolten

    I asked this question as a reply to your post upthread a bit, but it’s a long thread and you may miss the question

    >but I wonder whether this couldn’t be approached by measuring absorption in the laboratory in the relevant frequency as a function of concentration, pressure, and temperature, so as to derive a useful extrapolation. If someone here has spectroscopic expertise, he or she should comment.<

    Are you suggesting that this has NOT been done? If it hasn't, and I really doubt that, then I am dumbfounded – another of my assumptions shot to pieces.

    • I do assume it has been done. My uncertainty relates to how extrapolable the results are to atmospheric conditions involving far lower gas concentrations and far longer path lengths. In any case, I’m encouraged by the fact that observational and modeled radiative transfer data seem to correlate well, at least in the troposphere under clear sky conditions.

    • Absorption, or actually spectral transmission, has been measured in the laboratory by spectroscopists for practically every atmospheric gas that matters. The line-by-line absorption coefficients (for more than 2.7 million lines) are tabulated in the HITRAN data base.

      Actually, it is not measured laboratory spectra that are being tabulated, but a combination of theoretical quantum mechanical calculations and analyses (which can be done very precisely and very effectively), normalized and validated by the laboratory spectra, that enables modern-day spectroscopists to define the absorption line spectral positions, line strengths, line widths, and line energy levels along with all their quantum numbers – everything that is needed to perform accurate line-by-line radiative transfer calculations for any combination, amount, and vertical distribution of atmospheric greenhouse gases.
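
      As a concrete illustration of what “line-by-line” means here, the sketch below sums pressure-broadened (Lorentz) line shapes over a tiny made-up line list; the two lines and the absorber column are illustrative placeholders, not HITRAN entries, but the structure is the same when looping over the millions of tabulated lines with temperature-dependent strengths and widths.

        import numpy as np

        # (center wavenumber cm^-1, line strength, half-width cm^-1) -- illustrative values only
        lines = [(667.4, 1.0e-19, 0.07),
                 (668.1, 3.0e-20, 0.07)]

        def absorption_coefficient(nu, lines):
            k = np.zeros_like(nu)
            for nu0, strength, gamma in lines:
                # Lorentz profile, normalized to unit area over wavenumber
                k += strength * (gamma / np.pi) / ((nu - nu0) ** 2 + gamma ** 2)
            return k

        nu = np.linspace(666.0, 670.0, 4001)       # fine spectral grid, cm^-1
        k = absorption_coefficient(nu, lines)      # absorption cross-section per molecule
        column = 8.0e21                            # assumed absorber column, molecules/cm^2
        transmission = np.exp(-k * column)         # Beer-Lambert transmission for this path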

      • Thank you for your reply. I am slowly garnering the hard, basic data against which to test my doubts on the significance of AGW. This thread has been very useful, primarily because you and Chris Colose have been honest in your replies to honest questions

        The to-and-fro of actual technical debate on this thread has been the most comprehensive I have seen for 4–5 years. Treating people such as myself as suitable fodder for press releases is guaranteed to aggravate the polarisation of debate. You and Colose have slowly moved away from that too – perhaps Judith C was right to try this blog experiment :)

  66. David Hagen, 12.8.10 8:14 pm

    You inquired about Figure 10 in Miskolczi (2010). He says,

    >>we now simply compute τ_sub_a every year and check the annual variation for possible trends. In Fig. 10 we present the variation in the optical thickness and in the atmospheric flux absorption coefficient in the last 61 years.

    The two ordinates in Figure 10 are both in percent, indicating the relative change year by year, confirming what he said in the text. The text and chart are clear that these are differences, the discrete analog of differentiating.
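
    A minimal sketch of that “discrete analog of differentiating” – the year-over-year relative change in percent – using a made-up series in place of the annual optical depths:

      tau_annual = [1.87, 1.86, 1.88, 1.87, 1.89]        # hypothetical annual optical depths, not real data
      pct_change = [100.0 * (b - a) / a for a, b in zip(tau_annual, tau_annual[1:])]
      print(pct_change)                                  # percent change from each year to the next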

  67. David Hagen, 12.8.10 8:35 pm

    You asked about my statement,

    >> Specific humidity is proportional to surface temperature, and cloud cover is proportional to water vapor and CCN (cloud condensation nuclei) density, which has to be in superabundance in a conditionally stable atmosphere, but which is modulated by solar activity.

    And then refer to an article by Roy Spencer questioning the nature of cloud feedback, and especially questioning whether “clouds dissipate with warming giving a positive feedback”.

    The argument for my position is qualitative, not quantitative. It is not complex, and it involves physical phenomena admitted by the IPCC or which are everyday occurrences.

    Spencer discusses a regional phenomenon since the 1950s to say,

    >> These problems have to do with (1) the regional character of the study, and (2) the issue of causation when analyzing cloud and temperature changes.

    And later,

    >>I am now convinced that the causation issue is at the heart of most misunderstandings over feedbacks.

    I agree.

    Spencer concludes,

    >> The bottom line is that it is very difficult to infer positive cloud feedback from observations of warming accompanying a decrease in clouds, because a decrease in clouds causing warming will always “look like” positive feedback.

    Spencer’s inference is a posteriori modeling, developing a model to fit the data. A more satisfying method is to create an a priori model, one which relies on physical reasoning first. The a priori model must contain a cause & effect. The a posteriori model may, depending on modeler skill. The main difference is that the a priori model is rational in the physics, while the a posteriori model is a rationalization of physics to fit data.

    I would not make the inference Spencer finds objectionable. On this topic of cloud feedbacks, I argue from causation first, based on physics, leading to a model that can be validated against data.

    First I note that cloud albedo is a powerful feedback. It gates solar radiation, so it has the greatest potential among Earthly parameters to be a feedback. It is a quick, positive feedback because of the burn-off effect witnessed by everyone. Cloud cover dissipates upon exposure to the Sun, so when the Sun’s output is temporarily stronger, the effects on Earth are increased in proportion to the TSI increase, but more than that – magnified, because burn-off occurs sooner. The reverse holds as well. The effect is to make solar variations a predictor of Earth’s global average surface temperature, as shown in my paper SGW. http://www.rocketscientistsjournal.com . IPCC denied this relationship exists.

    At the same time, cloud albedo is a slow, negative feedback with respect to climate warming. It is slow because ocean heat capacity makes ocean temperature changes slow. Next, I assume that the climate throughout recorded history has been in a conditionally stable state. Hansen’s tipping points never occur. The cause of this negative feedback I attribute to increased warming causing increased humidity, resulting in increased cloud cover. A little calculation shows that this effect could be as large as reducing the instant climate sensitivity parameter by a factor of 10 without being detectable within the state of the art for measuring cloud albedo. Recent work by Lindzen (“On the determination of climate feedbacks from ERBE data”, Geophysical Res. Ltrs., 7/14/09) shows climate sensitivity is about 0.5ºC instead of a nominal figure of about 3.5ºC. That’s a reduction by a factor of 7, an empirical validation.

    Now cloud cover is the result of humidity condensing around CCNs. The probability that humidity and the concentration of CCNs are exactly in balance has to be zero. So one or the other must be in superabundance, leaving cloud cover dependent on the other parameter. In order for cloud albedo to be the regulator stabilizing Earth in the warm state, it must be able to respond directly to changes in humidity. That means that the CCN must be in superabundance. The alternative is that cloud cover could not respond to warming or cooling, meaning we would have to find an alternative, powerful mechanism. The candidate set seems to have one member: cloud cover.

    • David L. Hagen

      Thanks Jeff for expanding on your cloud perspective.
      You may find interesting the work by Willis Eschenbach on clouds acting as a global thermostat. See WUWT: Willis publishes his thermostat hypothesis paper

      See also Spencer at WUWT Dec. 9, 2010
      The Dessler Cloud Feedback Paper in Science: A Step Backward for Climate Research

      “What we demonstrated in our JGR paper earlier this year is that when cloud changes cause temperature changes, it gives the illusion of positive cloud feedback – even if strongly negative cloud feedback is really operating! . . . We used essentially the same satellite dataset Dessler uses, but we analyzed those data with something called ‘phase space analysis’. Phase space analysis allows us to “see” behaviors in the climate system that would not be apparent with traditional methods of data analysis.”

      Spencer’s phase space approach appears key to distinguishing cause and effect.

    • *****
      First I note that cloud albedo is a powerful feedback. It gates solar radiation, so has the greatest potential among Earthly parameters to be a feedback. It is a quick, positive feedback because of the burn-off effect witnessed by everyone. Cloud cover dissipates upon exposure to the Sun, so when the Sun output is temporarily stronger, the effects on Earth are increased proportional to the TSI increase, but more, magnified because burn-off occurs sooner.
      *****
      So this is an observation rather than a derivation from first principles? Do you have studies that would lend a lot of confidence to this assertion?

  68. I picked out one element of Jeff Glassman’s original claims (from December 7, 2010 at 9:31 am).

    He originally stated this:

    IPCC declares that infrared absorption is proportional to the logarithm of GHG concentration. It is not. A logarithm might be fit to the actual curve over a small region, but it is not valid for calculations much beyond that region like IPCC’s projections. The physics governing gas absorption is the Beer-Lambert Law, which IPCC never mentions nor uses. The Beer-Lambert Law provides saturation as the gas concentration increases. IPCC’s logarithmic relation never saturates, but quickly gets silly, going out of bounds as it begins its growth to infinity.

    I challenged these claims on December 7, 2010 at 9:31 am.

    In his long response of December 8, 2010 at 10:40 am he says:

    The applicable physics, the Beer-Lambert Law, is not shown by Myhre, et al., of course.

    Of course!

    “Of course” seems to mean here “they didn’t use it, they made stuff up instead”.

    Many early papers from the 60s and 70s do include all of the equations and the derivations – and the simplifications – necessary to solve the RTE (radiative transfer equations).

    It might seem incredible that hundreds of papers that follow – which include results from the RTE – don’t show the equations or re-derive them.

    Of course, these authors probably also made up all their results and didn’t use the Beer-Lambert law…

    Well, I might seem like a naive starry-eyed optimist here, but I’ll go out on a limb and say that if someone uses the RTE they do use the physics of absorption – the Beer-Lambert law. And they do use the physics of emission – the Planck law modified by the wavelength dependent emissivity.
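
    To make that concrete, here is a minimal single-layer sketch of how absorption (Beer-Lambert) and emission (Planck) combine in the RTE; the optical depth, temperatures and wavenumber below are arbitrary illustrative values, and a real calculation repeats this step layer by layer and wavenumber by wavenumber.

      import math

      H, C, KB = 6.626e-34, 2.998e8, 1.381e-23          # Planck, speed of light, Boltzmann (SI)

      def planck(nu_cm, temp):
          # Planck spectral radiance at wavenumber nu_cm (cm^-1), expressed per cm^-1
          nu_hz = nu_cm * 100.0 * C
          per_hz = 2 * H * nu_hz ** 3 / C ** 2 / (math.exp(H * nu_hz / (KB * temp)) - 1.0)
          return per_hz * 100.0 * C

      def layer_step(incoming, tau, temp_layer, nu_cm):
          # Schwarzschild step: attenuate incoming radiance (Beer-Lambert)
          # and add the layer's own thermal emission
          transmission = math.exp(-tau)
          return incoming * transmission + planck(nu_cm, temp_layer) * (1.0 - transmission)

      surface = planck(667.0, 288.0)                                  # surface treated as a ~288 K blackbody (assumed)
      upwelling = layer_step(surface, tau=2.0, temp_layer=250.0, nu_cm=667.0)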

    Then in his followup claims, Glassman says:

    None of Myhre’s traces comprise measurements, not even the output of the computer models, which will surprise some. You can depend on these models, the NBM and BBM, to have been tuned to produce satisfactory results in the eyes of the modeler, and in no way double blind simulations. Just like GCMs.

    What’s the claim?

    Is Glassman’s problem that Myhre doesn’t use the Beer-Lambert law, OR that Myhre hasn’t got a pyrgeometer out to measure the DLR? And how would Myhre measure the radiative effect of 1000 ppm CO2 in the real atmosphere?

    I believe Glassman’s real issue is not understanding the solution to the RTE in the real atmosphere.

    The RTE include emission as well as absorption. The absorption characteristics of CO2 and water vapor change with pressure and temperature. Pressure varies by a factor of 5 through the troposphere. Water vapor concentrations change strongly with altitude. These factors result in significant non-linearities.

    The results for the RTE through the complete atmosphere vs concentration changes are not going to look like the Beer Lambert law against concentration changes.

    Glassman says:

    The logarithm model makes CO2 concentration proportional to the exponent of absorption. The Beer-Lambert Law makes absorption proportional to the exponent of CO2 concentration.

    Perhaps this is Glassman’s issue – he doesn’t believe that the results presented are correct because he thinks that doubling CO2 should result in a change in proportion to the Beer Lambert absorption change? That would only happen in an atmosphere of constant pressure, constant temperature and a constant concentration vs altitude.

    When he doesn’t see this he imagines that these climate scientists have been making it up to fit their pre-conceived agendas?

    Well, despite many claims and accusations about climate scientists it is still true that when Myhre et al did their work they used the Beer Lambert law but not just the Beer Lambert law. And it is still true that the IPCC relied on Myhre’s work and therefore the IPCC also “used” the Beer Lambert law.

    Glassman’s original claim is still not true.

    But for the many who know that we can’t trust these climate scientists who just “make stuff up” – anyone can calculate “the real solution” to the RTE vs increasing concentrations of CO2.

    The RTE are not secret. Anyone with a powerful computer and the HITRAN database can do the calculations and publish the results on their blog.

  69. A Lacis, 12/9/10, 12:31 am

    If Miskolczi’s papers had been published in the mainstream scientific literature, we could be sure of just one thing: they conformed to the AGW dogma. I critiqued them only on request, partly in the hope of finding a gem, but always to hone the science.

    I disagree with you that my education in radiative transfer is in need of augmentation. As I said before, no matter how the climate models might calculate radiation, or how you might think they do, in the end they produce a radiative forcing dependent on the logarithm of CO2 concentration. That might be valid in a narrow region, but for climate projections over a doubling of CO2 concentration, the models violate physics. That is something you might want to study.

    You suggested IPCC as a source for my education on radiative transfer. The Fourth Assessment Report contains this revelation:

    >> The results from RTMIP imply that the spread in climate response discussed in this chapter is due in part to the diverse representations of radiative transfer among the members of the multi-model ensemble. Even if the concentrations of LLGHGs were identical across the ensemble, differences in radiative transfer parametrizations among the ensemble members would lead to different estimates of radiative forcing by these species. Many of the climate responses (e.g., global mean temperature) scale linearly with the radiative forcing to first approximation. AR4, §10.2.1.5, p. 759.

    RTMIP was the Radiative-Transfer Model Intercomparison Project, a response to a chronic problem with radiative transfer modeling. The models didn’t agree, and still don’t. Furthermore, the modelers reduce the problem to parametrization, putting in a statistical estimate for a process too complex or too poorly understood to emulate. So, regardless of what you perceive as my needs in the theory of radiative transfer, in the last analysis radiative transfer is pretty much a failure and irrelevant in IPCC climate models.

    It is a failure because, in the end, the modeled climate responses scale linearly with the radiative forcing to a first approximation. And if climate models could get the climate right in the first order, we would have a scientific breakthrough. As I wrote to you above re barking up the wrong tree, the fact that global mean temperature turns out to a first approximation to be proportional to radiative forcing, means that the models to a first approximation are producing a dependence on the logarithm of CO2 concentration. I have no doubt that that could be true and valid, all in the first order, over the domain of CO2 concentration seen at Mauna Loa. I also have no doubt that it is not valid for a doubling of CO2.

    Radiative forcing will follow a form like F0 + ΔF*(1-exp(-kC)), where C is the gas concentration. That is an S-shaped curve in the logarithm of C, showing a saturation effect. It is not a straight line. You may apply this function in bulk, a first order approximation, or in spectral regions as you have the time and patience to do. However, knowing the greenhouse effect of CO2 requires knowing where you are on the S-shaped curve. This is just one of many fatal errors in IPCC modeling.
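
    For readers who want to see the two functional forms being contrasted here side by side, the sketch below plots the saturating form proposed above against the simplified logarithmic fit of Myhre et al. (1998); the constants in the saturating form are placeholders chosen only so the curves can share an axis, so this compares curve shapes, not physics.

      import numpy as np

      conc = np.linspace(100.0, 1600.0, 400)             # CO2 concentration, ppm (arbitrary range)

      forcing_log = 5.35 * np.log(conc / 280.0)          # Myhre et al. (1998) simplified logarithmic fit, W/m2
      F0, dF, k = 0.0, 30.0, 1.0e-3                      # placeholder constants for the saturating form
      forcing_sat = F0 + dF * (1.0 - np.exp(-k * conc))  # the S-shaped form proposed in the comment above

      # The two can be made to track each other over a modest range of concentration,
      # but they diverge as concentration grows -- which is the disagreement in this exchange.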

    Why would anyone be motivated to master radiative transfer for arbitrary failed models? They are initialized in the 18th Century by zeroing out on-going warming from the last several cold epochs, causing modelers to attribute normal warming to man. The models place the surface layer of the ocean in thermodynamic equilibrium. They assume CO2 is well-mixed and long-lived in the atmosphere. They make the solubility of natural and anthropogenic CO2 different. These IPCC models are open loop with respect to the positive feedback of CO2, and open loop with respect to the positive and negative feedback of cloud albedo.

    Perfecting radiative transfer will have no effect on this sorry excuse for science.

  70. I am dumbfounded. Since I put up my contribution at December 7, 2010 at 9:12 am, there have been numerous posts with claims and counter claims. This reminds me of the ancient philosophers arguing as to how many angels can dance on the head of a pin. I cannot see how these differences can be easily reconciled, and I come back to my main point.

    It is impossible with current technology to MEASURE the change in radiative forcing for a doubling of CO2. So it will never be possible to find out who is right and who is wrong. And the IPCC can never establish that CAGW is correct using the “scientific method”. We can never know what the true value is for the change in radiative forcing for a doubling of CO2, nor whether such a number actually means anything.

    • Jim,

      Radiative transfer is based directly on laboratory measurement results, and as such, has been tested, verified, and validated countless times. John Tyndall in 1863 was one of the first to measure quantitatively the ability of CO2 to absorb thermal radiation. Since then spectroscopists have identified literally thousands of absorption lines in the CO2 spectra, and have tabulated the radiative properties of these lines in the HITRAN data base. They have full understanding of why each of the CO2 absorption lines is there, and why it has the spectral position, line strength, and line shape that is measured by high resolution spectrometers for a sample of CO2 in an absorption cell for any pressure, temperature, and absorber amount conditions.

      It is more a matter of engineering than science to calculate by radiative transfer methodology how much radiation a given amount of CO2 will absorb. Just as there is no need to throw someone off a ten story building to verify how hard they will hit the pavement, there is no real need to measure directly how much radiative forcing doubled CO2 will produce. Nevertheless, the actual experiment to do this (doubling of CO2) is well underway. In the mid 1800s, atmospheric CO2 was at the 280 ppm level. Today it is close to 390 ppm, and increasing at the rate of about 2 ppm per year. At that rate, before the end of this century we will have surpassed the doubling of CO2 since pre-industrial levels.
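
      As a quick arithmetic check of that last statement, assuming the rate stays at a constant 2 ppm per year (the observed rate has actually been rising, so this is on the slow side):

        ppm_preindustrial, ppm_now, rate = 280.0, 390.0, 2.0       # ppm, ppm, ppm per year
        years_to_doubling = (2 * ppm_preindustrial - ppm_now) / rate
        print(f"about {years_to_doubling:.0f} more years to reach {2 * ppm_preindustrial:.0f} ppm")   # ~85 years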

      The radiative forcing for doubled CO2 is about 4 W/m2. The precise value depends on the atmospheric temperature profile, the amount of water vapor in the atmosphere, and also on the details of the cloud cover. The 4 W/m2 is a reasonable global average for current climate conditions. The direct warming of the global temperature in response to the 4 W/m2 radiative forcing is about 1.2 C.

      Best estimates for water vapor, cloud, and surface albedo feedbacks increase the global mean temperature response to about 3 C for doubled CO2. Because of the large heat capacity of the ocean, the global temperature response takes time to materialize, but that is the global equilibrium temperature that the global climate system is being driven toward.

      Current technology measurement capabilities are not adequate to measure directly the radiative forcing of the GHG increases. But current measurement techniques do measure very precisely the ongoing increases in atmospheric greenhouse gases, and radiative transfer calculations (using the laboratory measured properties of these GHGs) provide an accurate accounting of the radiative forcing that is driving climate change.

      • I fail to see how this discussion can advance the case for CO2-induced GW if a simple question related to reconciling:
        – 1920–45: a modest rise in CO2 with the sharpest rise in temperature
        – 1945–80: a cooling period with the steepest rise of CO2 in recorded history

        http://www.vukcevic.talktalk.net/CO2-Arc.htm
        has no clear answer.

      • Andy Lacis has stated that it’s something to do with oceanic cycles, but has declined to tell us if or how these oceanic cycles have been incorporated into his model, or what the magnitude of their contribution to the recent warming was.

        This is unsurprising, because if he did, there would have to be a reassessment of the modeled climate sensitivity to CO2, which would inevitably drop.

      • Let’s assume Dr. Lacis is correct.
        If there had been no increase in CO2 (providing CO2 does as suggested), the inevitable conclusion is that the temperature in the 1960s would have been much lower, by about 0.8 C, approximately as in the 1810s (Dalton minimum).
        I think the IPCC needs to do some work on that one.

      • Best estimates for water vapor, cloud, and surface albedo feedbacks increase the global mean temperature response to about 3 C for doubled CO2.

        That’s the only bit that’s relevant, and also the only bit which can’t be determined by your RTMs.

        The radiative forcing of 4 W/m2 may be correct and measurable (I have no argument with it), but it’s largely irrelevant. The relevant bit is the sensitivity, expressed as degrees per CO2 doubling. And that, as I see it, is little more than guesswork.

      • A. Lacis writes “It is more a matter of engineering than science to calculate by radiative transfer methodology how much radiation a given amount of CO2 will absorb. Just as there is no need to throw someone off a ten story building to verify how hard they will hit the pavement, there is no real need to measure directly how much radiative forcing doubled CO2 will produce.”

        This is complete and utter garbage. There is lots of experimental data that if someone falls off a 10 story building they will hit the pavement hard. However, it is still IMPOSSIBLE to measure radiative forcing directly. If it was possible to measure radiative forcing directly, then there would be no need to rely on estimates from radiative transfer models.

        Can you describe the way that radiative forcing would actually be measured?

      • A. Lacis writes “Current technology measurement capabilities are not adequate to measure directly the radiative forcing of the GHG increases. But current measurement techniques do measure very precisely the ongoing increases in atmospheric greenhouse gases, and radiative transfer calculations (using the laboratory measured properties of these GHGs) provide an accurate accounting of the radiative forcing that is driving climate change.”

        You seem to agree that radiative forcing cannot be measured directly. That is all that matters. I agree that when you add CO2 to the atmosphere, it disturbs the radiative balance of the atmosphere. CO2 is a GHG. But this still does not alter the fact that radiative forcing cannot be MEASURED.

        This will be a much more important issue when Judith introduces how much a change in radiative forcing affects global temperatures.

      • Radiative forcing can never be measured. No improvement in empirical capabilities can make it possible, because it is a concept that refers to a modification of the atmosphere that cannot actually occur. It is defined as the net flow of energy in a situation where the radiative transfer changes without any change in the temperature profile, but in all real changes the temperature profile will also change.

        Thus the radiative forcing will always remain a parameter calculated using some theoretical model.

        The related concept of climate sensitivity (including feedbacks) can be measured at some future time, as it refers to a change that can occur on the real earth. The accuracy of this measurement may remain low, but some direct empirical measurements will be possible.

      • Pekka you write “The related concept of climate sensitivity (including feedbacks) can be measured at some future time as it refers a change that can occur on the real earth.”

        Absolutely correct. HOWEVER, and it is a big however, the rise in temperature for a doubling of CO2 WITHOUT feedbacks CANNOT be measured. That will be the issue as we discuss this in detail, when Judith introduces the subject.

      • Many concepts discussed in connection with climate change are artificial. They can be considered as intermediate quantities in model analysis. They have no real meaning outside the world of models, but they may be useful concepts in the interaction between modelers, whether the models are simple and very approximative conceptual models or complicated models like AOGCM’s.

        For some people such artificial concepts are worthless. This is not my view as I consider many of them to be very useful, but only provided that their meaning is understood.

      • Quinn the Eskimo

        Pekka Pirilä at 3:43 PM

        That radiative forcing can never be measured seems an important caveat for a scientific enterprise of this importance.

        But let not such small things be the hobgoblins of our minds. There are more significant caveats still to go.

        The actual real world effects depend on climate sensitivity, which depends on the cumulative effect of all relevant feedbacks.

        Per the IPCC there is a low level of scientific understanding of the influence of the sun and of clouds, to pick two out of many important issues adrift in the same rudderless boat.

        The low level of scientific understanding of cloud feedbacks precludes determination of a valid, reliable or verifiable climate sensitivity.

        It is therefore logically impossible to claim 90%+ confidence in the understanding of the effects of GHGs, or projections of the climate models.

        But the IPCC does anyway.

        Conversely, however, we can say with high confidence as a matter of logic that until the feedbacks are very well understood, which they aren’t, the IPCC won’t know jack about the net effect on climate of increased CO2 concentrations, or any other climate driver for that matter.

        This conceptual confusion is confirmed by recalcitrant, ill-fitting facts. Vukcevic keeps asking about, but not getting sufficient answers to:
        – 1920–45: a gentle rise in CO2 with the sharpest rise in temperature
        – 1945–80: a cooling period at the time of the steepest rise of CO2 in recorded history.

        To which we might add the recent 15 years of no statistically significant warming despite continuously increasing levels of CO2.

        Or the studies showing that the model projections are not accurate.

        That CO2 just doesn’t seem like much of a control knob, does it?

        It’s not just despicable dumb old me. Kevin Trenberth said:

        How come you do not agree with a statement that says we are no where close to knowing where energy is going or whether clouds are changing to make the planet brighter. We are not close to balancing the energy budget. The fact that we can not account for what is happening in the climate system makes any consideration of geoengineering quite hopeless as we will never be able to tell if it is successful or not! It is a travesty!

        Reducing human emissions of GHGs – geoengineering- is “hopeless” because “we can not account for what is happening in the climate system.” The emperor has no clothes. Are we now to be told that Trenberth is a moron, or an idiot, or an ignoramus on RTE, or a recipient of Heartland Institute grant, or that it depends on what the meaning of “travesty” is?

        We certainly live in interesting times!

        Cheers!

  71. Vaughan Pratt, 12.9.10 2:42 am

    So my ten paragraphs, each eminently refutable, can be refuted by just “shooting down the tenth”. Pratt commands the firing squad where one man has a bullet, and everyone else a blank.

    So Pratt knows that no published data exists in which CO2 appears to be in saturation.

    I invite him to open the radiation and absorption spectra published in Wikipedia. http://en.wikipedia.org/wiki/File:Atmospheric_Transmission.png. Now check the CO2 absorption spectra. All these bands will move up or down as CO2 concentration increases or decreases.

    The band at about 13.1–17.2 mu (3 dB points) appears to be saturated, i.e., about 100%, between 13.8 and 16.0 mu for whatever conditions applied for the chart. (Of course it can never be 100%, but that’s the best that can be resolved on the unfortunate linear scale.) This band is visible between about 12.1 and 18.8 mu (of course, it could be much more, but that is again a limitation of the linear scale.)

    Ignoring absorption unresolvable on this scale, added CO2 can decrease OLR, which runs from about 8.1 to 13.2 mu, only in the region of 12.1 to 13.8 mu, or 33%. This is a first order limitation or saturation effect.

    CO2 also has absorption bands between about 9.2 and 9.7 mu and between about 10.0 and 10.7 mu. These are in the OLR band, and are just barely visible along the base line. Again ignoring unresolvable absorption, these bands could absorb about 1.2 mu, or 23.5% of the OLR.

    The logarithm of concentration model has no limitations. It doesn’t care if CO2 concentration exceeds 100% of the atmosphere (more than a million parts per million), much less whether the gas has absorption band limitations. I prefer to work around unresolvable absorption bands than resort to a silly function.

    Now, I assume, all ten paragraphs are vindicated.

    • You misinterpreted the diagram, Jeff. It shows that only about 15-30% of IR emitted from the surface escapes to space uninterrupted, but of course, 100 % must escape eventually for energy balance. The remainder is intercepted by greenhouse gas molecules (CO2, water, etc.), and eventually re-emitted. At 15 um, the emission to space is mainly at high cold altitudes such that further increases in height won’t change the temperature (because the lapse rate at these altitudes is about zero). In the wings of the CO2 band, however, absorbed photon energy is re-emitted at lower altitudes that must rise with increasing CO2 to overcome the increased opacity. This results in the warming effect. It is nowhere near saturated, and can’t be at any conceivable CO2 concentration compatible with human existence, much less the concentrations estimated for coming decades and centuries. I urge you to read more on the greenhouse effect in order to understand this, because you appear to be repeating the same misconceptions in your various commentaries.

      • For clarity, 100 percent of absorbed radiation must escape for balance. As shown in the wikipedia diagram, a fraction escapes directly via “window” regions. The rest escapes after absorption by CO2, H2O, etc., and subsequent re-emission. The total actually emitted from the surface exceeds 100 percent because of back radiation.

    • The absorption bands that you list block efficiently the direct IR radiation from the earth surface at wavelengths well inside the bands, but at the edge of the band there are always wavelengths where the blocking is incomplete. Increasing the CO2 concentration will always increase absorption at the edge of the band. In other words the band will always get wider with increasing concentration over the full range even remotely possible for the atmosphere.

      Widening the band requires progressively more CO2 when the concentration is increasing. This is the reason for the approximately logarithmic relationship between concentration and radiative forcing. Logarithmic relationship means that the influence gets weaker and weaker but does not saturate.
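
      A toy numerical sketch of that band-widening argument, using a schematic band whose absorption falls off exponentially in the wings (all numbers are illustrative, not spectroscopic data): once the core is black, each doubling of the absorber adds a roughly constant increment of band-integrated absorption, which is the approximately logarithmic behavior described above.

        import numpy as np

        nu = np.linspace(-50.0, 50.0, 20001)              # wavenumber offset from band centre, cm^-1
        dnu = nu[1] - nu[0]
        k = np.exp(-np.abs(nu) / 5.0)                     # schematic band: saturated core, decaying wings

        for doubling in (1, 2, 4, 8, 16):                 # successive doublings of absorber amount
            absorbed = np.sum(1.0 - np.exp(-doubling * 20.0 * k)) * dnu
            print(f"{doubling:>2}x absorber: integrated absorption = {absorbed:6.1f} cm^-1")
        # Each doubling adds a nearly constant increment once the band core is saturated.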

  72. Gordon Robertson

    Judith wrote “While the models have not been validated observationally for a doubling of CO2, we infer from the above two tests that they should perform fine”.

    Judith, the IPCC has itself admitted that the models have NOT been validated.

    http://www.climategatecountryclub.com/profiles/blogs/the-ipccs-climate-models-are

    When Vincent Gray, an expert reviewer for the IPCC, pointed out to the IPCC that their models were not validated, the IPCC changed the term validation to ‘evaluation’. They also changed the term prediction to projection, presumably because of their TAR claim that future climate states could not be predicted.

    Also, if the models are validated, why did they fail to predict the El Nino extremes of 1998 and 2010? It’s pretty obvious, is it not, that models cannot predict future events because they don’t have the programming to do that.

    James Hansen was forced to admit that the predictions of his models between 1989 and 1998 had been wrong. Pat Michaels covers some of that here:

    http://www.sepp.org/Archive/reality/michreviews.html

    There have been claims that satellite data is now in step with model prediction. It seems odd to put the cart before the horse in that manner. A look at the UAH satellite data from Roy Spencer’s site reveals the lie in that assertion. Whereas the trends may be similar they mean entirely different things.

    http://www.drroyspencer.com/latest-global-temperatures/

    The zero reference on the graph is the 1979 – 1998 mean. Spencer has pointed out that the recent Tropical mean is now below that average as of November 2010. It seems the entire global mean is headed there as well. His partner, John Christy, has been trying to point out for some time, that the global mean was also below that zero anomaly from 1979 till about 1995.

    True warming did not occur till after the 1998 El Nino anomaly, whereby it jumped about 0.2 C, with no further warming trend till the 2010 ENSO anomaly. That warming in no way resembles the surface record, especially that of GISS, which has shown a steady warming trend since 1980.

    In the climategate emails, Kevin Trenberth admitted that the warming has stopped. He claimed later to have been taken out of context. From what I gather, he is now claiming that the warming is being hidden by ENSO activity.

    Models don’t do ENSO, at least not very well. With ENSO able to swing the global mean over 0.8 C in one year, I hardly think the CO2 programmed into a climate model as a fudge factor is significant, especially if its theoretical trend can be so easily obliterated.

    • Gordon, my original comment was made in regard specifically to the radiative transfer models, not to the global climate models. As per the citations made in my original post, I think substantial support is provided for the validation of the clear sky atmospheric radiative transfer codes.

      • Rattus Norvegicus

        Judith,

        Maybe you should make a post taking your denialist commenters to task. They are just wrong as Moolten and Lacis have pointed out.

      • Gordon Robertson

        Rattus Norvegicus…”Judith, Maybe you should make a post taking your denialist commenters to task. They are just wrong as Moolten and Lacis have pointed out”.

        I have not read your experts, but I have spent the last couple of years reading what experts like Lindzen, G&T, Spencer and Christy have to say about the theories of believers. I prefer their objectivity to the virtual science of modeling and the probability theories of the IPCC.

        The one thing believers cannot explain is the objective data from satellites, which is a sampling of billions of emissions from atmospheric oxygen and covers 95% of the troposphere. The data reveals that models are wrong, that the major warming is quite localized to the Arctic, in winter, and that the warming trend ended a decade ago.

        Believers can offer all the thought experiments they like based on atomic theory. The macro evidence shows clearly that laboratory experiments on high densities of CO2 cannot be transferred to the atmosphere, where CO2 is a rare gas.

        Furthermore, the radiative theory upon which the models are based is a minor player in the atmosphere. It is plain, according to G&T, that the surface is heated by solar energy, and that the surface warms atmospheric gases by conduction. That brings O2 and N2 into the equation and they account for 98% of atmospheric gases.

        We were taught in high school that hot air rises. Heat is also transferred by convection, as Lindzen explains, and is released from high in the atmosphere, not from the surface per se. Without convection, according to Lindzen, the mean surface temperature would be 72 C. It is convection that carries off most of the heat, not radiation. The models are based on faulty physics.

      • But the empirical observations disagree.

        Willis over on WUWT observes:

        Evans and Puckrin 2006
        http://ams.confex.com/ams/Annual2006/techprogram/paper_100737.htm

        found that in the presence of DLR from H2O of more than 200 W/m2, the DLR attributable to CO2 was dramatically suppressed.

        Lacis claims that

        “In round numbers, water vapor accounts for about 50% of Earth’s greenhouse effect, with clouds contributing 25%, CO2 20%, and the minor GHGs and aerosols accounting for the remaining 5%. Because CO2, O3, N2O, CH4, and chlorofluorocarbons (CFCs) do not condense and precipitate, noncondensing GHGs constitute the key 25% of the radiative forcing that supports and sustains the entire terrestrial greenhouse effect …”

        In the Evans clear-sky observations (no clouds) CO2 was measured at 11% of the total downwelling radiation from water vapor, CO2, and minor GHGs.

        In the Lacis10 study CO2 was 27% of the total of water vapor, CO2, and minor GHGs. In other words, the Lacis10 computer results show about two and a half times more radiation from CO2 (and minor GHGs) than Evans’ observations.

        Alternatively, we could approximate the clouds. In Lacis10 clouds have half the forcing of water vapor. We can apply this same percentage to the Evans observations, and assume that the clouds are half of the water vapor forcing. This increases the total forcing by that amount.

        This (naturally) reduces CO2 forcing as a percent of the now-larger total. In this case, which is comparable to the Lacis formulation above, CO2 is 8% of the total, with the minor GHGs at 2%.

        Once again, the Lacis10 results are about two and a half times larger than the Evans observations.
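
        The share arithmetic above is easy to check directly. A minimal sketch in Python – the 86/11/3 clear-sky split between H2O, CO2 and the minor GHGs is my own assumption, chosen only because it is consistent with the 11%, 8% and 2% figures quoted here; the actual Evans numbers are W/m2 values in the linked abstract:

            h2o, co2, minor = 0.86, 0.11, 0.03      # assumed clear-sky shares of downwelling LW

            # Lacis10 attribution (H2O 50%, clouds 25%, CO2 20%, minor GHGs 5%), restated
            # over the H2O + CO2 + minor subtotal so it is comparable to the clear-sky case
            lacis_co2 = 0.20 / (0.50 + 0.20 + 0.05)
            print(lacis_co2, lacis_co2 / co2)       # ~0.27, i.e. ~2.5x the Evans clear-sky share

            # Add clouds at half the water-vapor forcing, as described above
            total = h2o + co2 + minor + 0.5 * h2o
            print(co2 / total, minor / total)       # ~0.08 and ~0.02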

        The other finding of Evans was that more water vapor in the air means less CO2 absorption in both absolute and relative terms. In the winter it was measured at 105 W/m2 for water vapor versus 35 W/m2 for carbon dioxide (33%). In summer CO2 radiation goes down to 10 W/m2 versus 255 W/m2 for water vapor (4%), due to increased absorption by water vapor.

        This means that CO2 will make less difference in the tropics, with its uniformly high humidity, than in the extratropics where the Evans observations were taken.

        And:

        The Evans paper is important in this regard. It says that when relative humidity is high (most of the time in the tropics) most of the radiation is from water vapor.

        In the Evans paper, GHGnc were the source of 6% of the summer clear-sky radiation (with high humidity). If that were the case here, the clear-sky radiation is 300 W/m2, so the split would be 282 W/m2 from H2O, and 18 W/m2 from GHGnc. This is physically quite possible, since H2O alone adds 291 W/m2.

        Even using the straight MODTRAN calculations, however, shows that the Lacis10 claim is unlikely. The loss of the GHGnc gases gives only a 9 W/m2 change in downwelling radiation, according to MODTRAN. While this is a significant change, it is far from enough to send the planet spiraling into a snowball.

        The surface is currently at 390 W/m2. For radiation balance, MODTRAN says the surface would cool by about 3°C (including water vapor feedback, but without cloud or other feedbacks).
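
        The no-feedback part of that last number can be sanity-checked with nothing more than the Stefan–Boltzmann law; a rough sketch, treating the surface as a blackbody (an approximation) and using only the figures quoted above:

            sigma = 5.67e-8                      # Stefan-Boltzmann constant, W m-2 K-4
            T = (390.0 / sigma) ** 0.25          # temperature implied by 390 W/m2 of surface emission, ~288 K
            dT = 9.0 / (4 * sigma * T ** 3)      # cooling needed to offset a 9 W/m2 deficit
            print(round(T, 1), round(dT, 2))     # ~288 K and ~1.7 K

        That is roughly 1.7 K with no feedbacks at all; the ~3°C quoted above presumably reflects the water vapor feedback included in the MODTRAN exercise, per the parenthetical.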

  73. Gordon Robertson

    Judith “Gordon, my original comment was made in regard specifically to the radiative transfer models, not to the global climate models”.

    Judith…I respect your work and I am not trying to create an issue with you over this, but I think all model theory has a shaky basis with regard to modeler interpretation of physics. Lindzen has pointed out clearly that radiative transfer is but one part of the atmospheric equation, and that the Earth is never in balance with respect to energy. Gerlich and Tscheuschner pretty much agree with Lindzen and have put the radiative model to bed.

    https://www.cfa.harvard.edu/~wsoon/ArmstrongGreenSoon08-Anatomy-d/Lindzen07-EnE-warm-lindz07.pdf

    http://gazettextra.com/news/2010/apr/08/con-earth-never-equilibrium/

    http://icecap.us/images/uploads/Falsification_of_CO2.pdf

    I know the alarmists will be at me for G&T and they will claim the Smith and Halpern papers have disproved it. However, neither Smith nor Halpern seems able to distinguish the 1st law of thermodynamics from the 2nd law. Halpern et al talk around the 2nd law and seem lost when it comes to explaining it.

    If the clear sky models are accurate, they should be able to tell us exactly how much ACO2 energy is being returned to the surface as back-radiation. They can’t. Furthermore, they can’t explain how ACO2, which accounts for 1/1000th percent of atmospheric gases can block enough surface radiation or back-radiate enough to super-warm the surface.

    Most importantly, the basis of AGW is twofold: that ACO2 is helping trap heat, and/or, that ACO2 backradiates sufficient energy to super-warm the surface beyond the temperature it is heated by solar energy. Craig Bohren, a physicist/meteorologist calls the trapping effect a metaphor at best, and at worst, plain silly. He refers to the second theory as a model.

    The problem with the model is that it blatantly disregards the 2nd law. Clausius put it in words when he asserted that a cooler body warmed by a warmer body cannot warm the warmer body to a temperature above what it was when it warmed the cooler body. That is basically common sense. There are losses in the system, and you cannot increase energy through positive feedback when those losses are in place and there is no independent energy source to achieve the heat gain required.

    That is basic thermodynamics but the AGWers have brought in the 1st law through an obscure net energy balance (see G&T paper). Clausius created the 2nd law for that reason. Under certain conditions, the 1st law permits a perpetual motion machine, in which ACO2 can take energy from the surface and behave as an independent heat source. It is not possible to add energy from a dependent source to increase energy in a system.

    I noticed a post from Jeff Glassman. If he is the same one who had an interchange with Gavin Schmidt, perhaps he would be good enough to explain mathematically how feedback works in the atmosphere and why it cannot exist without an independent energy source.

    Halpern et al are thoroughly confused about back-radiation as explained by G&T. Whatever the modelers think they are modeling is apparently wrong. How they extract the effect of ACO2 from background CO2, of which ACO2 is a small fraction (IPCC), is beyond me. Roy Spencer has already pointed out that we don’t have the instrumentation to do that. So, what are modelers seeing?

    If you read the G&T paper, which I hope you do, you will see them discuss Planck, Boltzmann et al. In this thread I have seen assumptions made about energy flow in the atmosphere which are pretty basic. One poster discusses a cavity resonator, which is a theoretical concept. Blackbody radiation a la Boltzmann is not directly transferable to the atmosphere according to G&T.

    One of the problems is a condition Planck put on his formula. It applies only when the wavelength of the radiation is much shorter than the surface dimensions upon which the radiation is incident. G&T describe a bulk problem with CO2 at 300 ppmv, where 80 million CO2 molecules sit in a cube with sides of 10 microns, a typical wavelength for infrared radiation. They claim that applying the formulas for cavity radiation in such a cube is sheer nonsense.

    If they are correct, then the application of Boltzmann and Planck in a model is wrong. Both G&T and Bohren agree that energy exchange in the atmosphere cannot be described by simple vectors. Many people think that photons can be exchanged as discrete particles, but that is still a theory. A photon is a definition, and we have no idea what EM looks like or how it operates.

    When you claim that the models accurately represent clear sky radiative transfer, I wonder what you base that on.

    • Here we have again full proof that science can never convince people who do not want to accept any evidence, but prefer all kinds of unscientific claims.

      What is the accuracy of experimental evidence that would be sufficient, when the 8 significant figures of the experiments that confirm our understanding of electromagnetism and photons are not enough?

      Good try, Judith, but ..

  74. Tomas Milanovic

    A.Lacis
    The radiation is indeed being calculated at every gridbox of the model for every physics or radiation time step (more than 3300 times for each time step for a moderately coarse spatial resolution of 4 x 5 lat-lon degrees).

    Thanks, that answered the question.

    Jan Pompe

    I do believe there is sufficient evidence that the earth has been in a snowball state (an example), and also for the current state it’s in.

    This might be true, and the Earth was also in countless other states during the last 4.5 billion years.
    Trivially, none of them was or is stable, because the Earth never stayed in any of them – which is what being a “stable state” would mean.
    Clearly the Earth we observe is no longer in a “snowball” state, if it ever was.
    That’s what I have been saying – there was and is no stable state for the Earth as long as it has a fluid (liquid/gas) envelope.

  75. Tomas Milanovic

    As per the citations made in my original post, I think substantial support is provided for the validation of the clear sky atmospheric radiative transfer codes.

    I also think that.
    An important caveat, however, is that this verification is done for a given vertical profile (temperature, concentration).
    In a gas column in equilibrium where everything is constant, there is no reason why the radiative transfer equations should not give a rather correct answer at least as far as the spectrum is concerned.
    Of course, like Judith rightly says, this doesn’t allow any conclusion about the validity of the calculation once the whole system becomes dynamical.

    For instance, if the temperature and concentration fields in the model (we won’t talk clouds and particles here) don’t evolve like they should, the radiation transfer will still be computed correctly but give a wrong result.
    But even in this case it would not be the radiation module that would be at fault, but the dynamical module and/or the radiation–fluid flow coupling.

  76. The clear-sky atmospheric radiative energy transport models and codes and application procedures are very likely to have been subjected to adequate testing for verification and validation.
    The primary focus, in my opinion, needs to be on the calculations by the GCMs, or other models and codes that are applied to the Earth’s climate systems. The primary question within this arena is, Do these models / codes / application procedures sufficiently resolve the radiative energy transport to a level of fidelity that is necessary to account for the few W/m^2 change in radiative energy transport that is due to the activities of humans?
    The initial approach to the question can be within the same spherical-cow version of the problem that has been used in this thread.
    The real-world problem of the Earth’s climate systems, and its rough process-model approach through the critical parameterizations, very likely introduces so much fuzziness that the question can’t be answered.

  77. Gordon,

    It would help a lot if you tried to be a bit more objective. I don’t know this for a fact, but I would surmise that you have never read any of my published papers on radiative transfer, or those by Jim Hansen. All of these papers are freely available through the GISS webpage: http://pubs.giss.nasa.gov/

    You should also read some of the papers on radiative transfer by Michael Mishchenko. He has published two technical books on radiative transfer, on which I am also a co-author. To my knowledge, Lindzen, Spencer, and Christy have never published any papers on climate related radiative transfer. Lindzen is an expert in atmospheric dynamics, and Spencer and Christy are experts in satellite microwave measurements. The G&T paper is so far off the wall that it is simply beyond just being wrong. If you have a dental problem, clearly, you would consult a dentist. Why on earth would you want to go and consult a gastroenterologist for a dental problem?

    Climate is what physicists call a boundary value problem. And the important boundary value for climate is the solar-thermal energy balance at the top of the atmosphere, which is 100% by radiative means. Atmospheric dynamics is certainly important in climate, but dynamics is much more important in the case of weather forecasting applications.

    If you were to take the time to read more of the main stream published papers on radiative transfer, you would develop a clearer and more objective perspective to interpret the issues that affect global climate change.

    It is radiation that sets the global energy balance of Earth, and radiation is fortunately the one physical process that is understood best and is amenable to accurate modeling. There are plenty of other dynamic and thermodynamic processes in climate modeling where significant uncertainty exists, and which do affect the accuracy of climate model forecasting capability.

    • It is radiation that sets the global energy balance of Earth, and radiation is fortunately the one physical process that is understood best and is amenable to accurate modeling.

      So long as there are parameterizations available for changing the results of the model, characterization as “accurate modeling” does not apply. Never has, never will.

      Applications of radiative-energy transport within the Earth’s climate systems critically rely on ad hoc, heuristic, somewhat empirical parameterizations for all the aspects of the systems that interact with radiative energy transport. There’s very little that’s based on fundamental first-principles.

      Not to mention that there’s never been a time when the Earth’s radiative energy budget is in balance. And never will be.

      • oops, the quote got dropped from the first paragraph:

        It is radiation that sets the global energy balance of Earth, and radiation is fortunately the one physical process that is understood best and is amenable to accurate modeling.

    • Andy,

      It would help a lot if you tried to listen to what people are saying, and thought about it, and spent less time repeating the same dogma about radiative transfer. You could learn a lot from Judith.

      Here is a selection of comments:

      Gordon –

      Heat is also transferred by convection, as Lindzen explains, and is released from high in the atmosphere, not from the surface per se. Without convection, according to Lindzen, the mean surface temperature would be 72 C. It is convection that carries off most of the heat, not radiation.

      Tomas Milanovic –

      So considering a radiative transfer uncoupled from the fluid dynamics is a nice theoretical exercise but has nothing to do with the reality.

      Peter317 (previous greenhouse thread)-

      Yes, a body will radiate ~3.7 W/m2 more energy if its temperature is increased by ~1°C. However, that does not mean that increasing the energy flow to the body by that amount is going to increase its temperature by ~1°C. Any increase in the temperature has to increase the energy flow away from the body by all means it can, i.e. conduction, convection and evaporation as well as radiation. When the outflow of energy equals the inflow then thermal equilibrium exists with the body at a higher temperature. But that temperature cannot be as much as ~1°C higher, as the radiative outflow has to be less than the total outflow.

      You should also look at the thread at WUWT (title “knobs”), if you can bring yourself to do so, which points out the erroneous claim in your recent paper.

      Your statement in the paper,
      “Because the solar-thermal energy balance of Earth [at the top of the atmosphere (TOA)] is maintained by radiative processes only, and because all the global net advective energy transports must equal zero, it follows that the global average surface temperature must be determined in full by the radiative fluxes arising from the patterns of temperature and absorption of radiation. ”
      is untrue – see comments above and my comment at WUWT.

      • Paul,

        Climate GCMs include a full accounting of the atmospheric dynamic and thermodynamic processes. That is what determines the atmospheric temperature profile as well as the distribution of atmospheric water vapor at each model grid box and time step. It is by radiative transfer means that the relationship between the surface temperature and the outgoing LW flux to space is established – quite contrary to what your cited commentators understand or believe.

    • Gordon Robertson

      A Lacis | December 10, 2010 at 10:51 am | Reply

      Gordon,

      It would help a lot if you tried to be a bit more objective. I don’t know this for a fact, but I would surmise that you have never read any of my published papers on radiative transfer, or those by Jim Hansen. All of these papers are freely available through the GISS webpage: http://pubs.giss.nasa.gov/

      I have no respect for Jim Hansen whatsoever. He is a physicist whose background has been mainly in astronomy, and he moved into climate modeling without an adequate background in meteorology or climate science itself.

      He fell under the influence of astronomer Carl Sagan, who had a theory that the atmosphere of Venus was due to a runaway greenhouse effect. IMHO, Hansen has been trying to foist the same theory on the Earth’s atmosphere.

      Hansen has also fostered a generation of modelers who have tried to rewrite the theories of meteorology and atmospheric physics. I think Hansen is wrong and all of his disciples are wrong. He has been proved wrong. He made certain model-based predictions in 1988 and had to recant them in 1998.

      http://www.sepp.org/Archive/reality/michreviews.html

      Personally, I don’t like the way Hansen mixes science and politics. I regard him as an alarmist of the first order, and that GISS has corrupted the surface database.

      The following site is not authored by a scientist but the man has done his homework. He reveals the damage done to the surface station record by Hansen et al. It’s all there, but you have to dig a bit for all the details.

      http://chiefio.wordpress.com/gistemp/

      The GISS record is the only one showing a distinct warming trend over the past 10 years. IPCC lead author and AGW guru, Kevin Trenberth, admitted in the climategate emails that the warming has stopped. He explains it away as modern instrumentation not being adequate for separating the warming signal from background signal created by ENSO. That raises the question as to why ENSO warming is not being recognized as the true force behind the warming.

      I have not read your paper on radiation but I have read physicist/meteorologist Craig Bohren’s entire book on atmospheric radiation. In the book, he refers to GHGs trapping heat as a metaphor at best, and at worst, plain silly. He gives more credence to the back-radiation theory but points out that it is a simple model. Here is a link to comments by Bohren:

      http://www.usatoday.com/tech/columnist/aprilholladay/2006-08-07-global-warming-truth_x.htm?csp=34

      Bohren makes a comment pertinent to this thread:

      “My biases: The pronouncements of climate modelers, who don’t do experiments, don’t make observations, don’t even confect theories, but rather [in my opinion] play computer games using huge programs containing dozens of separate components the details of which they may be largely ignorant, don’t move me”.

      I think it’s important to recognize that many modelers are way in over their heads when it comes to physics, meteorology and the degreed discipline of climate science.

      I mean you no disrespect, and I have not read your work. However, I think some scientists are far too focused on radiation as a mechanism for heat movement. G&T claim in their article that a huge difference exists between experiments on CO2 using high densities of the same, and CO2 in the atmosphere where the anthropogenic component is exceedingly rare.

      They claim further that the properties attributed to CO2 in the atmosphere have never been demonstrated in the lab. Inferring that CO2 is responsible for more than 15% of atmospheric warming, given its rarity, goes far beyond what has been found experimentally. Satellite data has verified that CO2 is not causing the warming. Only modeling theory holds to that theory.

    • Subsequent comments have made the point that in mathematics and physics “boundary value problems” differ somewhat from what the typical GCM is trying to do. Some have suggested that they fall into the category of “initial value problems”. I’m not sure that, if they do, the GCMs are well posed.

      I think people might be talking past one another here.

      The reason mathematicians etc study these kinds of problems is because they are interested in the conditions under which the systems allow well defined solutions. The issues of interest are things like the conditions where unique solutions exist.

      However I think we can say with some confidence that the nature of GCMs is such that characterising them as either type of problem, or both, still leaves them outside the conditions where they can be solved uniquely. They (or at least the phenomena they are modeling at this level of granularity) are just too complex for well behaved solutions (notwithstanding their relatively simple boundary conditions and the seductive deterministic nature of the models used). So this issue is all a bit irrelevant.

      If I were going to forecast future climates I’d be looking at stochastic modeling. Now GCMs could help here, using Monte Carlo techniques, if only they more systematically incorporated uncertainty and we had unlimited computer resources; since they don’t and we don’t, that isn’t going to be a solution.

      However, on a more positive note, I think I saw a link on this thread to the recent seminar at the Isaac Newton Institute looking at how to incorporate stochastic models into GCMs to help deal with this issue.

      To round off, reading through this thread rather leads me to ask two questions:

      1. If we want to investigate the impact of CO2 on the climate why would we use a GCM? and
      2. Why incorporate all this level of detail about who absorbs what, and what they emit, in a model designed to tell us in inevitably crude terms what our climate might be in 2050?

      Two different horses for different courses in my view.

  78. I see the Dessler cloud feedback paper has had a bit of a kicking at Roy Spencer’s site:
    http://www.drroyspencer.com/2010/12/the-dessler-cloud-feedback-paper-in-science-a-step-backward-for-climate-research/

  79. Tomas Milanovic

    Climate is what physicists call a boundary value problem.

    No it is not and there is not the beginning of a proof that it could be.
    It is a dynamical problem highly likely dependent on initial conditions.
    Some climate scientists call it that, but certainly not physicists.
    I already explained why I don’t think that the climate can be a “boundary problem” in the thread Testimony follow up; here is the argument:

    This paradigm sees the climate as a non-dynamical problem, which is obviously wrong.
    Already from the mathematical point of view, as the climate states (described by the values of the dynamical fields such as velocities, temperatures, pressures, densities etc) are given by the solution of a non-linear PDE system, it is obvious that this solution can only be obtained when one specifies both the boundary AND the initial conditions.
    Saying that the initial conditions don’t matter, which is exactly what the statement that this is only a “boundary value” problem amounts to, condemns us to stay forever in ignorance about the dynamics of the system.

    On the other hand, non-linear dynamics is (almost) all about initial conditions and the dependence of the evolution of the states of the system on those initial conditions.
    As a great number of examples in fluid dynamics shows, this dependence is paramount, and even a very slight modification of the boundary and/or initial conditions may change dramatically the dynamics of the system.
    Especially, studies of spatio-temporal chaos show that such systems are only rarely, if ever, described by an invariant stochastic distribution of future states which would be independent of the initial conditions.
    Conversely, the assumption that such an invariant probability distribution exists is called the ergodic hypothesis.
    But the ergodic hypothesis is not a given, it must be proven and it is extremely unlikely that the hypothesis is valid for the complex climatic system.

    The Earth system is neither in equilibrium nor in a steady state.
    There is not a beginning of the proof that it could be ergodic for the simple reason that it probably is not.
    Btw, if somebody could come up with a proof of climate ergodicity, he could apply the same method to the $1M Clay Institute problem on the Navier–Stokes equations, which is “easier”.
    It is not happening to my knowledge.

  80. Alexander Harvey

    There may well be reasons as to why the IPCC CO2 doubling forcing is relatively easy to calculate with some accuracy.

    I haven’t, and possibly couldn’t have, thought this all through, but I can see that there may be a simplification that means that things like the H2O concentration with height may not make much of a difference, and the same may be the case for clouds and the weather in general.

    Doubling CO2 attenuates the spectrum primarily in the window left after all the other GHGs are taken account of, limiting the effect of other gases on the calculation.

    For much of the CO2 absorption spectrum the effective radiative height is already above the level of the weather. Where this is not so the lines are weaker and may not contribute much change to the forcing if the concentrations are doubled.

    It may just be that deltaF (the change in radiative flux) is much easier to calculate with some accuracy than say F (the magnitude of the radiative flux).

    As I say I haven’t thought this through, but in essence all that one needs to do is calculate the change in the effective radiative height for each line, which gives the change in effective radiative temperature for each line and hence the change in flux for each line. This change in flux may well be dominated by lines whose radiative height is already above the weather. One does need to know the lapse rate, but (for all I know) this may be easy for heights above the weather.
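
    A back-of-the-envelope version of this is easy to write down: if the effective emission level for a line (or for the CO2 band in aggregate) rises by some Δz, its emission temperature drops by roughly the lapse rate times Δz, and the flux change follows from the derivative of σT⁴. A toy sketch in Python – the 255 K emission temperature, 6.5 K/km lapse rate and 150 m rise are purely illustrative assumptions, not output from any line-by-line code:

        sigma = 5.67e-8        # Stefan-Boltzmann constant, W m-2 K-4
        T_emit = 255.0         # assumed effective emission temperature, K
        lapse = 6.5 / 1000.0   # assumed lapse rate, K per metre
        dz = 150.0             # assumed rise in effective emission height, m

        dT = -lapse * dz                       # change in emission temperature, ~-1 K
        dF = 4 * sigma * T_emit ** 3 * dT      # change in outgoing flux, W/m2
        print(round(dT, 2), round(dF, 2))      # ~-1 K and ~-3.7 W/m2 (i.e. ~3.7 W/m2 of forcing)

    The point, as above, is that this deltaF depends mainly on the change in emission height and on the lapse rate near that height, not on getting the absolute flux F exactly right.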

    Cloud top heights will make a difference, but again calculating the deltaF may still be relatively straightforward (even if calculating F isn’t), and the final result may not be very sensitive to the distribution of cloud types (a speculation).

    It could turn out that the IPCC CO2 forcing is one of the few things that can be calculated with such precision.

    Alex

  81. All this theorising down to the minutiae of the molecular scale is hardly going to convince anyone, let alone the growing army of sceptics.
    In order to do that, the experts (from both sides of the fence) have to concentrate on climate aspects that matter and that the majority of people can relate to: jet stream, AMO, PDO, Southern Oscillation, Arctic, Antarctica, etc.
    Omnia mutantur nihil interit (everything changes, nothing perishes).

  82. Tomas – You responded to Andy Lacis as follows: “Climate is what physicists call a boundary value problem” (Andy’s statement).

    “No it is not and there is not the beginning of a proof that it could be.
    It is a dynamical problem highly likely dependent on initial conditions.”

    It will be best for Andy Lacis or other climate modelers to respond. In the meantime, I see this as getting back to the “climate vs weather” issue, because the latter is clearly an initial value problem. To my mind, climate is less so. Where I live, in the Northeastern U.S., I can predict with confidence that the temperature here next July will be much warmer than it is today (December 10), and will probably average in the 80’s in degrees F. I can make that prediction with equal confidence regardless of whether today is unseasonably cold or unseasonably warm, or whether it is snowing, raining, or the sun is shining – the initial conditions make no difference. Just as in the real world, climate model ensembles can start with different initial conditions, incorporate various dynamical considerations and boundary conditions, and yield output that converges over time toward similar results. This doesn’t operate equally well in all areas of model performance, but for long term global responses of temperature to CO2 as an example, the simulations aren’t bad.

    • Alexander Harvey

      Fred,

      Is the temperature anomaly for next July independent of the temperature anomaly for last July?

      Probably not, but eventually such memory patterns fade. I think there is a fading memory of past dynamics, in the atmosphere perhaps for a week or so but also longer for some aspects. In the oceans it perhaps extends to years or decades, perhaps somewhat longer.

      At what timescale could the dynamics of a “changing” climate be said to be purely a boundary problem?

      Alex

  83. Tomas Milanovic

    Fred Moolten

    Where I live, in the Northeastern U.S., I can predict with confidence that the temperature here next July will be much warmer than it is today (December 10), and will probably average in the 80′s in degrees F. I can make that prediction with equal confidence regardless of whether today is unseasonably cold or unseasonably warm, or whether it is snowing, raining, or the sun is shining – the initial conditions make no difference.

    This is the classical but highly misleading argument that actually has little to do with the issue I have treated.
    Yes it is trivial, if you live at the South Pole, to estimate that the temperature in winter will be lower than the temperature in summer.
    But this example has nothing to do with the initial conditions vs boundary conditions problem.
    It has to do with 2 facts:
    – you use a local (not global) region
    – this region is chosen such that the local energy flow (the Sun’s radiation) is seasonally extremely variable and quasi-periodic between the 2 chosen time points.

    You could of course not use your example if you happened to live in Singapore, because the temperature there on a 6 month scale would depend on initial conditions – not only in Singapore but all over the globe.
    And even in NE US you could make no such prediction for anything other than one cherry-picked time-scale example (of the summer–winter kind).

    Most of these arguments I read regarding the “independence of initial conditions” over long time scales for the whole globe have always only been hand waving, impressions, convictions without any solid scientific foundation.

    • Alexander Harvey

      Tomas,

      I believe that to model a changing climate one is more or less forced to make the assumption that it must be treated as a boundary problem.

      One doesn’t know the real world initial conditions and even if one did, they cannot be imposed on the model without causing a transient, as the world and the model are two different dynamic systems, and the real world state may be a virtually impossible model state.

      I cannot see that we could make much progress without letting the model stabilise (run up) which in effect allows it to come up with its own initial conditions.

      If you are saying that we must theorise about the real world as being subject to initial conditions, I cannot see how you can be wrong. But what does that imply and where does it get us? I can see the point, but I cannot see how it helps beyond allowing us to say something about persistence and the realised variability (distance from equilibrium) in any particular epoch and its dependence on the prior epoch.

      Alex

      • Alexander Harvey

        Tomas,

        Is part of your point that my use of the term equilibrium is crazy, as no such thing can be defined?

        Alex

      • Alexander,
        It is probably true that most of the analysis of climate change is based on assuming that climate is a boundary problem. That is, however, a property of the models, not necessarily of the real earth with oceans. The assumption is perhaps likely to be true for the earth over time spans of several hundred years, if the boundary conditions remain stable over that period. Over a couple of years ENSO and other similar factors add a lot of dependence on initial conditions. AMO and other multidecadal influences may be important initial conditions for several decades. There is also evidence that even very much slower processes are influencing glacial cycles. How much confidence may we now have that initial values are not also very important on the time scale of a century or two?

      • Alexander Harvey

        Pekka,

        I think I can see both sides of the argument. I think that as long as we are talking about a particular realisation, (which in the real world is all there is) initial values are important to varying degrees according to the timescale.

        Such things would be true for systems that are linear and hence not chaotic, so I think Tomas must be saying something more than this.

        Alex

  84. The overly-simplistic mis-characterization offered by A Lacis is one that contains no useful information. There are several other examples of this approach within climate science. The mathematical problem considered by GCMs, and other models / codes, is always set as an initial-value boundary-value problem ( IVBVP ). If time is an independent variable a mathematical BV problem would require that information be specified at future time. Kinda hard to accept.

    Even when those famous steady-state energy budget diagrams are presented, so simple that all aspects could be discussed in detail, never a word is said about the range of the values that the few constants included in the analysis could attain: the Solar Constant, the albedo, the (always missing) emissivity for the Earth’s surface, and the Earth’s “greenhouse factor”. Small changes in the values assumed for these will make the perfectly balanced budget become unbalanced. The few parameters are simply assigned numerical values with the sense, based solely on authority, that they are well-defined and that the values presented are known to be correct with great accuracy and that they are forever unchanging.
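
    The point about small changes in those constants is easy to quantify with the simplest zero-dimensional balance, S0(1−α)/4 = σTe⁴. A sketch with commonly quoted placeholder values (this gives the effective emission temperature, not the surface temperature):

        sigma, S0 = 5.67e-8, 1366.0               # W m-2 K-4; solar constant in W/m2 (commonly quoted value)

        for albedo in (0.29, 0.30, 0.31):
            absorbed = S0 * (1.0 - albedo) / 4.0  # globally averaged absorbed solar flux, W/m2
            T_eff = (absorbed / sigma) ** 0.25    # effective emission temperature, K
            print(albedo, round(absorbed, 1), round(T_eff, 1))

        # Each 0.01 step in albedo shifts the absorbed flux by S0 * 0.01 / 4 ~ 3.4 W/m2,
        # i.e. comparable to the forcing usually quoted for a doubling of CO2.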

    Plus at this point CO2 is introduced while conveniently over-looking the plain, well-known fact that the phases of water in the Earth’s atmosphere are responsible for the greatest part of the albedo. Except when our well-established-from-first-principles radiative energy transport model needs a little tuning and we have to throw in some other radiatively-interacting stuff.

    • I said,

      If time is an independent variable a mathematical BV problem would require that information be specified at future time.

      To expand a bit. That means that the calculated response at any time, let’s say the present time, is a function of information at some future time.

  85. Alex – I believe that there are extensive records for July, probably going back to around 1850 in the Hadcrut series. The anomalies are not independent, nor are they hopelessly wide-ranging so as to confuse July with December. They reflect internal climate variations (ENSO and probably others) and also exhibit clear warming trends consistent with anthropogenic influences. The influence of ocean changes probably persists for millennia, but the rate dwindles to insignificance as the time is stretched out.

    The trends don’t conflict with the principle of boundary values as I understand it, because I don’t interpret “boundary values” to necessarily mean unchangeable values. For example, an analysis of the effects of CO2 over time might utilize CO2 concentrations at time T1 and T2 as boundary values. Perhaps the modelers will comment.

    • Is the “reply” function misbehaving or am I just being careless? My response was to Alex, but in my view, the empirical, “climate vs weather” type of evidence that long term global climate behavior is mainly a boundary value problem also applies to other comments above. How well the models can simulate climate is a separate issue and depends on what type of simulation is attempted.

  86. Tomas Milanovic

    Alexander Harvey

    At what timescale could the dynamics of a “changing” climate be said to be purely a boundary problem?

    At none, because this question is precisely a question about the dynamics, which can’t be answered (that is, by solving the relevant dynamical equations where time appears explicitly) without knowledge of the initial conditions.
    At short time scales you can neglect slow variations like orbital parameters, ocean currents or continental movements, but they kick in when you get to longer time scales.
    So there is always something whose initial condition matters.
    This is simply a property of differential equations.

    • Yes, that’s right, there is a common misunderstanding here (fed by climate scientists and the IPCC). Weather is chaotic because of short-term, medium-space-scale interactions between things like cyclones and fronts and clouds. Climate fluctuates in a probably chaotic way on longer times, involving larger structures such as ice sheets and ocean currents. But many climate scientists appear to live under the delusion that the climate would be stable and stationary were it not for “radiative forcings” (this relates to some of Tomas’s earlier points). It is absurd to try to “explain” every little wiggle in the temperature history as being the result of some (usually man-made) “forcing”.

      Here is a nice quote on this from Roger Pielke about the IPCC AR4:

      “Their claim that

      Projecting changes in climate due to changes in greenhouse gases 50 years from now is a very different and much more easily solved problem than forecasting weather patterns just weeks from now.

      is such an absurd, scientifically unsupported claim, that the media and any scientists who swallow this conclusion are either blind to the scientific understanding of the climate system, or have other motives to promote the IPCC viewpoint. The absurdity of the IPCC claim should be obvious to anyone with common sense.”

  87. Tomas Milanovic

    Dan Hughes

    The mathematical problem considered by GCMs, and other models / codes, is always set as an initial-value boundary-value problem ( IVBVP ). If time is an independent variable a mathematical BV problem would require that information be specified at future time.

    Yes. This is really standard PDE theory. There should actually be no discussion about such trivia. Unless, of course, one postulates that PDE theory doesn’t apply in climate science.

  88. Alexander Harvey

    Tomas and Fred,

    To begin with I have no definition of climate but I think I know what a standard climatology is:

    The monthly mean value of a measured variable over a thirty-year period.

    Now obviously, even in a world or model where all the forcings stay the same, the climatologies for successive 30 year periods are not going to be identical, and to that extent each could be seen as a particular realisation influenced by the initial conditions at the start of each period. Here strict determinism has been assumed, which can be enforced in the modelled case.

    (Interestingly, I believe strict determinism is not always enforced in models.)

    Now given that the system is chaotic, viewing each realised climatology as capable of being modelled as a stochastic variation about some expected value for the climatology is not strictly valid, but could be “effectively” valid, e.g. not making any material difference.

    Now I can accept that in a strict sense the initial values problem never goes away and in a completely deterministic world or model they make a real difference to particular realisations.

    What I wish to know is whether each of you thinks this makes a significant difference to the particular realisations of the climatology beyond that which could be explained as a stochastic variation on an underlying expected climatology.

    My guess is that you would disagree on this point, but I don’t know.
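
    One way to make the question concrete: generate a long synthetic monthly series with completely unchanging statistics (no forcing trend at all) and see how much successive 30-year climatologies differ just from internal noise. A minimal sketch; the AR(1) persistence and noise level are arbitrary placeholders, not fitted to any observations:

        import random

        random.seed(1)
        phi, sd = 0.9, 0.2                # assumed month-to-month persistence and noise amplitude
        months = 30 * 12                  # one 30-year climatological period, in months
        n_periods = 5

        x, series = 0.0, []
        for _ in range(n_periods * months):
            x = phi * x + random.gauss(0.0, sd)
            series.append(x)

        clims = [sum(series[i * months:(i + 1) * months]) / months for i in range(n_periods)]
        print([round(c, 3) for c in clims])   # successive 30-year means differ even with nothing changing

    How large that spread is relative to a forced signal is, I take it, exactly the question being asked here.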

    Alex

    • Alex – When it comes to initial value/boundary value issues in climate modeling, my understanding is too shallow for me to be dogmatic, and I would defer to the modelers who do this sort of thing for a living.

      Here is my take on it, however. Utilizing initial and boundary values, appropriate equations for momentum, energy, mass conservation and other physical relationships, and parametrizations, models are constructed so as to yield output that tends to oscillate around a climate state that is relatively stable over time but subject to the stochastic variations you mention. When the model varies from the behavior of real world climate, its parameters are adjusted to tune it so that it corresponds as well as possible.

      The “tuning” ends there, however. If one is evaluating response to a forcing such as changes in CO2, aerosols, or solar irradiance, the values of these changes are added as input to the model. The model, or an ensemble of models, is run under a variety of initial conditions and the output is recorded. If the output tends to reproduce observed values (in hindcasts or the occasional forecasts), so much the better. If it does not, the modeler cannot retune it to make it “come out right”. Under these circumstances, models have varied in performance, but simulate some long term global changes (e.g., in temperature) better than shorter term or more regional changes, with initial values increasingly uninfluential as intervals increase from a few years to multiple decades. Part of the disparity between modeled and observed behavior reflects the stochastic variations you mention, but this does not preclude useful results based on the elements of climate that are reasonably predictable from basic physics principles. In theory, the chaotic elements within climate could overwhelm the predictable ones, but empirically, the observed behavior of climate over timescales of interest to us (multiple decades or centuries) does not support such an interpretation.

      • Alexander Harvey

        Fred,

        Thanks for taking the time.

        I don’t disagree, but there is a bridge that needs constructing that shows why a non-linear chaotic system spontaneously gives rise to a climate that has near-linear stochastic aspects.

        The global temperature record and the global temperature data from one model I have looked at (HadGEM1) have certain properties, in particular spectra of a certain form. They do not differ much from what one might expect from filtered noise plus a warming signal. The level of the noise can be estimated and one can come to some conclusion about estimating the variance due to the stochastic component.

        It is as though all that marvelous chaotic behaviour results in something that, when averaged over monthly periods, is not readily distinguishable from a white noise flux, plus a warming flux, forcing a linear system (in fact an LTI system).

        Now there are reasons to suspect that cannot be quite true, but on the scale of just one century or so the data may not be able to show any non-linearity in the response.

        One of these is ENSO, widely thought to have a chaotic basis, but it is a well-damped phenomenon (the critical time of the damping being much shorter than the periods involved) and is tolerably modeled by a damped resonance driven by noise.

        Another is the PDO, which does seem to be a real effect and may be chaotic, but again its ability to explain variance in the global temperature record, although probably non-zero, is at a level I judge similar to the level of the noise, making attribution difficult.

        Now nature occasionally gives the system some hard knocks due to volcanism. I am not aware that any of these experiments have ever shown any significant non-linearity in the response-recovery profile. The effect on global temperatures seems to be what one might expect of a linear model, a decrease in temperatures followed by a relaxation.

        All in all we seem to have complex deterministic non-linear origins giving rise to effects well explained by simple stochastic linear models.

        It is that bridge that requires some characterisation.

        On the other hand, those that believe that no such bridge exists must surely need to show cases where the global response has differed markedly from that which one could model as the result of a forcing plus noise impinging on a linear system at the standard climatic timescales of months/years.

        Such a linear stochastic noise-plus-forcing model is still feature rich; it is capable of exhibiting natural variability due to external and internal forcings, including stochastic ones.
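
        For concreteness, the kind of linear stochastic noise-plus-forcing model I mean can be written as a one-box energy balance, C dT/dt = F(t) + noise − λT. A minimal discretised sketch; the heat capacity, feedback parameter, forcing ramp and noise level are placeholder values, not fits to anything:

            import random

            random.seed(0)
            C, lam = 8.0, 1.2        # assumed heat capacity (W yr m-2 K-1) and feedback parameter (W m-2 K-1)
            noise_sd = 0.5           # assumed stochastic flux, W m-2
            dt = 1.0 / 12.0          # monthly steps, in years

            T, record = 0.0, []
            for step in range(int(100 / dt)):                 # 100 years
                F = 0.04 * step * dt                          # assumed slow forcing ramp, W m-2
                dTdt = (F + random.gauss(0.0, noise_sd) - lam * T) / C
                T += dTdt * dt
                record.append(T)

            print(round(record[-1], 2))    # a noisy, lagged rise toward F / lam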

        Alex

      • Alex,
        The relationship between chaotic and stochastic behaviour is very interesting. Many climate models are deterministic models that lead to chaotic results, but one could modify these models by adding small stochastic perturbations at each time step. The resulting model would be stochastic but might lead to very similar results when model runs are repeated and the resulting range of future climates is presented.

        Actually I would consider the original deterministic model doubtful unless the results remain the same when small stochastic perturbations are added. By small I mean something of the order of uncertainties of the variables at each time step.
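
        A toy version of that test can be run on any chaotic system; the Lorenz-63 equations are the usual stand-in (this is only an illustration of the idea, obviously nothing like a climate model). Compare a long-time statistic of the deterministic system with the same statistic when a tiny random kick is added at every step:

            import random

            def mean_z(perturb=0.0, steps=200000, dt=0.001, seed=0):
                """Long-time mean of the Lorenz-63 z variable, with optional per-step noise."""
                random.seed(seed)
                x, y, z, total = 1.0, 1.0, 1.0, 0.0
                for _ in range(steps):
                    dx = 10.0 * (y - x)
                    dy = x * (28.0 - z) - y
                    dz = x * y - (8.0 / 3.0) * z
                    x += dx * dt + random.gauss(0.0, perturb)
                    y += dy * dt + random.gauss(0.0, perturb)
                    z += dz * dt + random.gauss(0.0, perturb)
                    total += z
                return total / steps

            print(mean_z(0.0))       # deterministic model
            print(mean_z(1e-4))      # same model with tiny stochastic perturbations

        If the two statistics agree, the deterministic results are at least statistically robust to small perturbations; if they differ wildly, that is the warning sign being described.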

      • Alexander Harvey

        Pekka

        Thanks, such things are, it seems, done when evaluating long-term weather forecasts.

        This may interest you:

        “Stochastic representation of model uncertainties in ECMWF’s forecasting system”

        http://sms.cam.ac.uk/media/952887

        The video is about 40 minutes long but the presenter seems to know his stuff and gives a competent performance.

        From the blurb:

        “The stochastic schemes used for the model error representation will be presented. These are the Spectral Stochastic Backscatter Scheme (SPBS) and the Stochastically Perturbed Parametrization Tendency Scheme (SPPT). … The two schemes address different aspects of model error. SPPT addresses uncertainty in existing parametrization schemes, as for example parameter settings, and therefore generalizes the output of existing parametrizations as probability distributions. SPBS on the other hand describes upscale energy transfer related to spurious numerical dissipation as well as the upscale energy transfer from unbalanced motions associated with convection and gravity waves, process missing in conventional parametrization schemes.”

        Alex

  89. Willis Eschenbach

    Dr. Lacis, thank you for your participation. As you know, I have commented on your work at WUWT. In addition, I have analysed the GISS ModelE results using a totally different metric here. It would be great if you could find something wrong with what I have written, and let me know what it is … science at its finest.

    It’s a sincere invitation.

    Here’s an encapsulated version of the problem. We have no estimates of how accurate the models are. We have no estimates of error propagation through the models. We have no V&V or SQA on the models. Forget the unknown errors, we have no idea of how even the known errors in the models affect the outcomes.

    For example, in “Present-Day Atmospheric Simulations Using GISS ModelE: Comparison to In Situ, Satellite, and Reanalysis Data” (warning: 2.2 Mb PDF), the GISS NASA modelers (James Hansen, Gavin Schmidt, et al. including you yourself) say:

    Occasionally, divergence along a particular direction might lead to temporarily negative gridbox masses. These exotic circumstances happen rather infrequently in the troposphere but are common in stratospheric polar regions experiencing strong accelerations from parameterized gravity waves and/or Rayleigh friction. Therefore, we limit the advection globally to prevent more than half the mass of any box being depleted in any one advection step.

    Let me be clear about this. It is “common” in your “physics-based” GISSE model to end up with “negative masses”.

    I must have missed that part of my physics class … and if I found that in one of my models, I would take it as a clue that something is very wrong.

    But that’s not what you guys did. Rather than actually fix the problem, you dodge it. You limit the advection globally to half the mass … and what error results from that procedure? Given that the parameterized gravity waves lead to those physically impossible results, what other errors are created by that obviously incorrect process? We don’t know. At a minimum it must screw up the conservation of energy … but since the GISS Model E doesn’t conserve energy, it’s not clear what violating conservation of energy does either.
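
    For readers wondering what such a limiter looks like in practice, here is a toy one-dimensional illustration of the general idea (my own sketch, not code from ModelE): upwind advection where the flux leaving any box is capped so that no more than half of the box’s mass can be removed in one step.

        def advect_step(mass, courant):
            """One upwind advection step on a periodic 1-D grid, with a 'half the box mass' cap."""
            n = len(mass)
            out = [min(courant * m, 0.5 * m) for m in mass]   # cap the outgoing flux from each box
            new = list(mass)
            for i in range(n):
                new[i] -= out[i]
                new[(i + 1) % n] += out[i]                    # everything flows one box to the right
            return new

        # With a Courant number above 1 the uncapped scheme would produce negative masses;
        # the cap prevents that, but by silently changing the transport.
        print(advect_step([1.0, 0.2, 1.0, 1.0], courant=1.3))

    Total mass is conserved, but the scheme avoids negative masses by altering the advection rather than by fixing whatever made the fluxes unphysical in the first place.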

    Actually, to be fair, your GISSE model does conserve energy. It does it with as much grace as it handled the parameterized gravity waves … with a kludge. And a clumsy kludge at that.

    At the end of every time cycle, the GISSE model simply takes all the global excess or shortage of energy and distributes it evenly around the globe, without even checking how much out of balance the calculation is … true word, confirmed to me by Gavin Schmidt. No routine printout of the error, and no error-trap in the code to see if the error might be big on one cycle or in a certain situation. No matter how far the model is off of the rails at the end of a cycle, the error is arbitrarily spread out evenly around the globe, with no checking, and everything is just fine.

    I mean, who’d want to check conservation of energy? If you checked that, you might find an error, and then you’d have to fix it …

    It is this constant ad-hoc patching of the errors with kludges and arbitrary parameters that gives me little confidence in your code. For example, the code had a problem handling melt-pools on the ice. It was getting melt-pools when it shouldn’t, during the winter.

    Your solution? At least you were consistent. You didn’t fix whatever the errors were in the underlying code that made the bogus melt-pools.

    Instead, GISSE now arbitrarily states that melt-pools can only form on the ice during the six months of the summer. Otherwise, no pools. I can show you the code. I can explain the code to you if you are not a programmer.

    That’s what your vaunted “physics-based” model is doing. When it encounters a problem that indicates there is an error in the physics, you just say something like “OK, no melt pools in March” or the like, and keep on going.

    And on a more important scale, the model is artificially enforcing an energy balance. If you guys were accountants and tried that, you’d be sued so fast your wallets would be spinning. Gavin Schmidt described the error as “small”, and said it was almost always under a couple of W/m2, and usually under 1 W/m2 … an error of 1 W/m2 in each step of an iteration is many things, but it is not “small”. In particular, when Hansen claims an accuracy of 0.85 ± 0.15 W/m2 in his “smoking gun” paper (discussed here), a one-watt error at each and every timestep looms very large …

    I’ve spent a lifetime programming computers. I know the limits of models from bitter and costly experience. And while I find your faith in your model encouraging, it would mean more if your models didn’t constantly fail the simplest of tests.

    But to be fair, your model is great at forecasting warming. Heck, your model shows warming even if there is no change in the forcings. The CMIP Model Intercomparison Project shows that in their control run. And how much did the GISSE model warm in the CMIP control runs, when the forcing didn’t change in the slightest?

    Oh, only about 0.7°/century … or about the same rate that the globe warmed last century.

    Which is kind of cool, when you think about it, because your model predicts the 20th century warming pretty well (correlation =0.59) with no inputs at all. And that’s a real achievement for a model even with calm seas and a following wind … but it is still an error.

    And with all of those errors and failed tests, you want us to believe your model is accurate, not just for the tiny effects of incremental variations as in the past, but to be accurate regarding pulling out all of the non-condensing GHGs?

    Me, I’ll pass. I’m old-fashioned, I like my models to pass real-world tests before I’ll trust them. Because at the end of the day, while your faith in your model is certainly impressive, I’m a follower of Mark Twain, who remarked:

    “Faith is believing something you know ain’t so.”

    w.

    • Hmmmm this is getting interesting.
      I’ve cancelled all family functions, I’ve stocked up on nutrients and refreshments and am comfortable in front of my laptop.

      Over to you Dr Lacis

    • Seeing no reply to this yet, I will give my perspective. GCMs can show why Mexico is warmer than the US, July is warmer than March, etc. You can choose to believe these aspects of them or just to look at the real world and use your own knowledge of how climate works. Similarly, at another level, they show what happens when you add CO2, and again you choose to believe them, but only based on the physical interpretation of what they suggest, not blindly. In the case of climate models, the physical interpretation is very straightforward, to the same degree as understanding seasons and latitude variations, and this makes them credible.
      Some people do actually prefer the heuristic arguments of Spencer and Lindzen, but for most scientists such hand-waving is not an explanation, and quantification is the key. Models ranging from GCMs to back-of-the-envelope one-dimensional estimates point in the same direction regarding CO2-doubling.

  90. Willis Eschenbach

    For Dr. Lacis, who thinks that forecasting the climate is a “boundary problem”, a question.

    Over at MathWorld, a “boundary problem” is defined as:

    Boundary Value Problem

    A boundary value problem is a problem, typically an ordinary differential equation or a partial differential equation, which has values assigned on the physical boundary of the domain in which the problem is specified.

    An “initial value problem”, on the other hand, is defined as:

    Initial Value Problem

    An initial value problem is a problem that has its conditions specified at some time t = t0. Usually, the problem is an ordinary differential equation or a partial differential equation.

    That seems quite clear. If we know the temperatures of the edges of a sheet of steel at various times, the rest is a differential equation.
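
    To make the steel-sheet analogy explicit: in a pure boundary value problem the steady interior state is fixed entirely by the values on the boundary, and any initial guess relaxes to the same answer. A minimal sketch for a 1-D rod with fixed end temperatures (simple Jacobi relaxation of the steady heat equation; all numbers are arbitrary):

        def steady_rod(t_left, t_right, n=20, guess=0.0, iters=5000):
            """Relax the interior of a rod toward steady conduction with fixed end temperatures."""
            T = [t_left] + [guess] * (n - 2) + [t_right]
            for _ in range(iters):
                T = [T[0]] + [0.5 * (T[i - 1] + T[i + 1]) for i in range(1, n - 1)] + [T[-1]]
            return T

        # Two very different initial guesses converge to the same linear profile,
        # because the steady answer is set by the boundary values alone.
        print([round(x, 1) for x in steady_rod(0.0, 100.0, guess=0.0)[:5]])
        print([round(x, 1) for x in steady_rod(0.0, 100.0, guess=500.0)[:5]])

    Whether the statistics of the climate system behave like that steady interior – forgetting their starting point and being set by the “boundary” forcings – is, as far as I can tell, precisely the point in dispute in this thread.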

    So Dr. Lacis, my questions about the climate boundary problem are:

    1. What is the “physical boundary of the domain in which the problem is specified”?

    2. What are the variables whose values are known at the physical boundary?

    3. What are the values of those variables?

    4. At what time are the values of the variables at the boundary specified (time of measurement)?

    5. What evidence do we have that climate is actually a boundary problem? I ask because no less of an authority than Benoit Mandelbrot said that there is no statistical difference between weather and climate:

    Among the classical dicta of the philosophy of science is Descartes’ prescription to “divide every difficulty into portions that are easier to tackle than the whole….” This advice has been extraordinarily useful in classical physics because the boundaries between distinct sub-fields of physics are not arbitrary. They are intrinsic in the sense that phenomena in different fields interfere little with each other and that each field can be studied alone before the description of the mutual interactions is attempted.

    Subdivision into fields is also practised outside classical physics. Consider for example, atmospheric science. Students of turbulence examine fluctuations with time scales of the order of seconds or minutes, meteorologists concentrate on days or weeks, specialists whom one might call macrometeorologists concentrate on periods of a few years, climatologists deal with centuries and finally paleoclimatologists are left to deal with all longer time scales. The science that supports hydrological engineering falls somewhere between macrometeorology and climatology.

    The question then arises whether or not this division of labour is intrinsic to the subject matter. In our opinion, it is not in the sense that it does not seem possible when studying a field in the above list, to neglect its interactions with others. We therefore fear that the division of the study of fluctuations into distinct fields is mainly a matter of convenient labelling and is hardly more meaningful than either the classification of bits of rock into sand, pebbles, stones and boulders or the classification of enclosed water-covered areas into puddles, ponds, lakes and seas.
    Take the examples of macrometeorology and climatology. They can be defined as the sciences of weather fluctuations on time scales respectively smaller and longer than one human lifetime. But more formal definitions need not be meaningful. That is, in order to be considered really distinct, macrometeorology and climatology should be shown by experiment to be ruled by clearly separated processes. In particular there should exist at least one time span on the order of one lifetime that is both long enough for micrometeorological fluctuations to be averaged out and short enough to avoid climate fluctuations…

    It is therefore useful to discuss a more intuitive example of the difficulty that is encountered when two fields gradually merge into each other. We shall summarize the discussion in M1967s of the concept of the length of a seacoast or riverbank. Measure a coast with increasing precision, starting with a very rough scale and dividing increasingly finer detail. For example, walk a pair of dividers along a map and count the number of equal sides of length G of an open polygon whose vertices lie on the coast. When G is very large the length is obviously underestimated. When G is very small and the map is extremely precise, the approximate length L(G) accounts for a wealth of high-frequency details that are surely outside the realm of geography. As G is made very small, L(G) becomes meaninglessly large. Now consider the sequence of approximate lengths that correspond to a sequence of decreasing values of G. It may happen that L(G) increases steadily as G decreases, but it may happen that the zones in which L(G) increases are separated by one or more “shelves” in which L(G) is essentially constant. To define clearly the realm of geography, we think that it is necessary that a shelf exists for values of G near λ, where features of interest to the geographer satisfy G >= λ and geographically irrelevant features satisfy G much less than λ. If a shelf exists, we call G(λ) a coast length.

    After this preliminary, let us return to the distinction between macrometeorology and climatology. It can be shown that to make these fields distinct, the spectral density of the fluctuations must have a clear-cut “dip” in the region of wavelengths near λ with large amounts of energy located on both sides. But in fact no clear-cut dip is ever observed.

    When one wishes to determine whether or not such distinct regimes are in fact observed, short hydrological records of 50 or 100 years are of little use. Much longer records are needed; thus we followed Hurst in looking for very long records among the fossil weather data exemplified by varve thickness and tree ring indices. However even when the R/S diagrams are so extended, they still do not exhibit the kinds of breaks that identify two distinct fields.

    In summary, the distinctions between macrometeorology and climatology or between climatology and paleoclimatology are unquestionably useful in ordinary discourse. But they are not intrinsic to the underlying phenomena.
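
    A minimal numerical sketch of the divider argument quoted above (my own illustration, with made-up numbers, not Mandelbrot's): the measured coast length is L(G) = N(G)·G, and if the step count scales roughly as N(G) ~ G^(-D), then a smooth curve (D = 1) shows a “shelf” where the length stops changing as G shrinks, while a rugged coast (D > 1) does not.

    def coast_length(G, D, c=1000.0):
        """Approximate length measured with a divider opening G (arbitrary units)."""
        n_steps = c * G ** (-D)   # number of divider steps of length G
        return n_steps * G        # L(G) = N(G) * G

    for G in [100, 10, 1, 0.1, 0.01]:
        smooth = coast_length(G, D=1.0)    # shelf: the same length at every scale
        rugged = coast_length(G, D=1.25)   # no shelf: length keeps growing as G shrinks
        print(f"G = {G:>6}:  smooth L = {smooth:8.1f}   rugged L = {rugged:8.1f}")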

    I get the impression that saying “it’s a boundary problem” is regarded as a shibboleth for the true faith, the mere utterance of which clears the way.

    But me, I’m not much into faith, so until I get answers to my five questions above, I will continue to be a heathen disbeliever …

    Now here, Judith, we have the opportunity for what you are discussing in another thread, some education. If Dr. Lacis comes back and answers questions, we have some hope for fighting my ignorance about the boundary problem.

    But I have little faith that Dr. Lacis will actually answer them … I’m happy to be pleasantly surprised, however.

  91. While we wait for Lacis to explain, I’ll give an idea of why the climate problem is a boundary problem. Typical boundary problems are steady-state solutions, such as the sensitivity of climate to doubling CO2. This is a fixed forcing, where forcing has a specific meaning, and includes the steady climate drivers, solar, and CO2. Climate models may generalize these forcings to be specified but time-dependent, such as CO2 scenarios, solar variations, or specified volcanoes and aerosols, but this is still a boundary problem because the climate evolution is determined by the specified forcings. The “boundary” is not a physical boundary in every sense, though the top boundary condition of solar radiation from space is a part of it, but it is a boundary in the sense of a constraint given by the specified forcings.

    • Willis Eschenbach

      Jim D, thanks for the response. You say:

      While we wait for Lacis to explain, I’ll give an idea of why the climate problem is a boundary problem. Typical boundary problems are steady-state solutions, such as sensitivity of climate to doubling CO2. This is a fixed forcing, where forcing has a specific meaning, and includes the steady climate drivers, solar, and CO2.

      But as I’ve shown above, the GISSE model gives incorrect answers even when all of the forcings are set to zero. If climate is truly a boundary problem, and the model is truly simulating the climate, and the forcings are set to zero, why is the temperature changing in the GISSE model? Your choices are:

      1. GISS model no workee.

      2. It’s not a simple boundary problem.

      3. Both.

      Also, saying it is a “boundary problem” if the “boundary” is the TOA implies that we understand all of the forcings. That seems doubtful.

      Thank you for answering. I didn’t really expect Dr. Lacis to answer me, once the going gets tough, the tough go answer an easier question … but like I said, I’m still happy and willing to be surprised.

      • I didn’t really expect Dr. Lacis to answer me, once the going gets tough

        You any relation to Bruce Willis? Maybe Andy just hasn’t seen the connection yet.

      • Willis Eschenbach

        I’m happy to be surprised, like I said.

      • You any relation to Bruce Willis? Maybe Andy just hasn’t seen the connection yet.

        No relation but there are some similarities in character.

        Both Willises cop a caning from the bad guys over and over again, but they don’t get deterred one iota, they just keep plugging away doing the right thing.
        And they’re both universally regarded as the good guy.

        By the way, any relation to Vince? He is a pretty bad comedian IMHO
        He fires off words like a Gatling gun and one gets tired of hearing his voice after a short while.

      • By the way, any relation to Vince? He is a pretty bad comedian IMHO. He fires off words like a Gatling gun and one gets tired of hearing his voice after a short while.

        Any relation to Bah Humbug?

  92. A Lacis

    “… The G&T paper is so far off the wall that it is simply beyond just being wrong. …”
    This is hardly an adequate reply to Gordon Robertson.

    So it should be easy to pick one major defect in the G&T paper and point it out.

    For instance, G&T say that a major error in most versions of Greenhouse Theory is in violations of the Second Law.
    A fairly frequently expressed climate science understanding of this law is:
    Heat can flow from a colder to a hotter object as long as more heat flows from the hotter to the colder.
    This is wrong.

    • I fail to understand what you want to say.

      There is no doubt that the second law makes a statement about the net flow. There is nothing wrong in describing radiative transfer of energy between two bodies at different temperatures as a stronger radiative transfer from the hotter to the colder and a weaker radiative transfer from the colder body to the hotter. After all, radiation definitely occurs in both directions.

      The second law tells only that the net transfer is from the hotter to the colder.

      When the situation can be described in alternative ways, physicists usually prefer the choice that makes quantitative calculations easiest or most straightforward. In the case of radiative heat transfer this leads often to calculating separately both directions of radiative heat transfer and then their difference. Any paper that claims that this is in contradiction with the second law of thermodynamics is seriously wrong and shows that its authors know nothing about thermodynamics.

    • I believe you’re confusing heat with energy.
      The net flow of energy is from the warmer to the colder body. Energy flows both ways between the two bodies, but there’s always more flow from the hotter to the colder than there is from the colder to the hotter – unless work is done.

    • So it should be easy to pick one major defect in the G&T paper and point it out.

      The hard part is having so many choices.

      • Vaughan Pratt

        Pick the first one you come to.
        Assuming you know how to count to one!

      • Pick the first one you come to.

        A truly major one is their claim that the 2nd law of thermodynamics disproves back radiation. As has been pointed out by far too many people to keep track of, it does no such thing.

        Assuming you know how to count to one!

        Hmm, let me try it. 3, 2, 1, buggeroff! (Pardon my French.)

      • If one is going to use a foreign language one must first understand it. Now can you tell us exactly where in the G&T paper they claim that

        2nd law of thermodynamics disproves back radiation.

        I am having a lot of difficulty finding it.

      • Now can you tell us exactly where in the G&T paper they claim that 2nd law of thermodynamics disproves back radiation. I am having a lot of difficulty finding it.

        If you’re unable to find the problems in a paper after a whole slew of people have pointed them out, this might help explain why you consider the paper to be free of problems. (David Hagen appeared to be having the same problem with Miskolczi’s paper back when he was laboring under the delusion that Viktor Toth had arrived at the same number (2) that FM was claiming when VT had in fact obtained 0.4.)

        Unfortunately the ability to count to one will not help here because the claim is on page two of the arxived journal version.

        “The atmospheric greenhouse effect, an idea that many authors trace back to the traditional works of Fourier (1824), Tyndall (1861), and Arrhenius (1896), and which is still supported in global climatology, essentially describes a fictitious mechanism, in which a planetary atmosphere acts as a heat pump driven by an environment that is radiatively interacting with but radiatively equilibrated to the atmospheric system. According to the second law of thermodynamics such a planetary machine can never exist.”

        If one is going to use a foreign language one must first understand it.

        The same goes for English. What they’re saying there is that the second law of thermodynamics proves there cannot be back radiation, by virtue of being inconsistent with it. It is a proof by contradiction, reductio ad absurdum.

      • Pratt quoting G&T:

        “The atmospheric greenhouse effect, an idea that many authors trace back to the traditional works of Fourier (1824), Tyndall (1861), and Arrhenius (1896), and which is still supported in global climatology, essentially describes a fictitious mechanism, in which a planetary atmosphere acts as a heat pump driven by an environment that is radiatively interacting with but radiatively equilibrated to the atmospheric system. According to the second law of thermodynamics such a planetary machine can never exist.”

        Nowhere in that statement does it even remotely claim

        that the 2nd law of thermodynamics disproves back radiation.

        It doesn’t even imply it. It’s very clever of you to be able to count to two but it seems you can’t go far beyond that. I would have thought that at the very least you would have gotten to section 3.9 before quoting text that did not prove your claim.

        Now to an earlier matter:

        However the “cannon ball” is really an N-atom molecule

        Naah the “cannon ball” is not a molecule of any sort but a parcel of air free to move vertically by convection. Advection (lateral movement) is irrelevant.

      • (Oops, sorry, only noticed your reply just now. I keep getting lost in these blogs.)

        Nowhere in that statement does it even remotely claim
        that the 2nd law of thermodynamics disproves back radiation. it doesn’t even imply it. I would have thought that at the very least you would have gotten to section 3.9

        We seem to have wildly different interpretations of what G&T meant by what I quoted. As I read that quote (from the abstract), they are saying that the 2nd law disproves the possibility of such a planetary machine. Now I would have thought, but you seem to disagree, that it is obvious from context that they’re referring here to “the idea that many authors trace back to the traditional works of Fourier (1824), Tyndall (1861), and Arrhenius (1896), and which is still supported in global climatology,” which they develop in more detail in Section 3.6.

        So what do you think they’re referring to there?

        I found Section 3.6 utterly unreadable because they kept quoting statements by other authors that I fully agreed with, and then sarcastically shot them down via logic I was utterly unable to follow. Their slogan could well have been “Everyone is wrong except Wood and us.” I just couldn’t follow it, it was like trying to hang on to a roller coaster.

        Somehow their observation that almost everyone but them in the past two hundred years of the subject is out of step doesn’t seem to bother them.

        That all comes well before Section 3.9, which is about a bizarre reinterpretation of “the traditional works” in which G&T make amazing statements like “The second law is a statement about heat, not about energy,” contradicting every physics textbook that says “Heat is a form of energy,” and “the Stefan-Boltzmann constant is not a universal constant” when it clearly is (unless you believe π or Boltzmann’s constant or Planck’s constant or the speed of light is different on Mars or Arcturus, which may be but not by much). Statements like that are what one might expect on a first year exam by a student who should not have enrolled in physics in the first place. How is one supposed to make sense of arguments based on nonsensical statements about physics?

        Did you say you work with Hitran data? How are you able to do that and yet find your thinking fully compatible with the G&T paper? You must be able to compartmentalize your thinking far better than I. Quite apart from the spiteful polemics lacing the paper, I found their arguments devoid of both logic and basic physics. Were I the handling editor for this paper I would not have dreamed of sending it out for refereeing, I’d lose face with my referees.

      • We seem to have wildly different interpretations of what G&T meant by what I quoted. As I read that quote (from the abstract), they are saying that the 2nd law disproves the possibility of such a planetary machine

        The “machine” is the atmospheric heat pump that heats up the surface from a cooler source. It seemed straightforward enough to me so why not to you? I think I can see why:

        “The second law is a statement about heat, not about energy,” contradicting every physics textbook that says “Heat is a form of energy,”

        Heat might be a form of energy but that is no reason to confuse the two. For example there may be energy in an isolated system but there will be NO HEAT if the entropy is maximised.

      • I don’t want to argue whether or not heat is energy any more, it’s a pointless discussion. It’s a waste of time attacking the G&T paper’s faults one by one, it’s like trying to exterminate an ant nest by squishing the ants one by one. The only other paper I’ve seen in a journal with anything like the same number of ridiculous statements is the Sokal paper. In Sokal’s case he admitted he did it deliberately.

        It is inconceivable that G&T could have Ph.D.s in physics and be able to write such utter nonsense. The only possible explanation is that, like Sokal, they did it deliberately, but unlike Sokal they won’t admit to having done so.

        Anyone who can’t see that has no business claiming to be knowledgeable about physics.

  93. Pekka Pirilä

    The statement that you seek to defend is;
    “Heat can flow from a colder to a hotter object as long as more heat flows from the hotter to the colder.”

    I’m sure you’ll agree that the contentious part of the sentence is
    “Heat can flow from a colder to a hotter object …”; we will assume the other part is a condition.
    If you look up the thermodynamics section of any reputable physics textbook it will explain that HEAT has the thermodynamic capacity to do WORK.
    So whatever the status of this radiation is, it is not HEAT.

    • I know perfectly well what I am talking about. I have been teaching thermodynamics and there is nothing controversial in what I said. Read my previous message again and try to understand what it says.

      • Pekka Pirilä
        Peter above is correct.
        Who do you teach thermodynamics to?

      • I’m sorry Bryan, Peter and Pekka. After reading your discourse, I just can’t help but post this.

        Who do you teach thermodynamics to?

        Answer: Climatologists

      • I have teached at the Technical University of Helsinki for students of technical physics (those who understand most about physics) and to students of energy engineering.

      • Pekka
        Thanks for the reply.
        Does your syllabus include the Carnot Analysis(Carnot Cycle) of the perfect heat engine?

      • Naturally, and other cycles as well. For energy engineering students the various thermodynamical cycles are the most important thing to learn.

      • Pekka

        You will understand that the Carnot Analysis is the usual introduction to the Second Law.
        Is that your understanding?
        It also gives the most efficient method of transferring heat from a lower to a higher temperature.

      • In engineering, the Carnot cycle is more a theoretical concept than a practical cycle, because other cycles are much more easily realized as practical engines.

        It is always the first cycle to be taught as it is used to relate the second law to the best possible efficiency, but after that the other cycles get most emphasis.

      • Yes the Carnot Analysis is for an “ideal” or perfect engine.
        So any transfer of heat from a lower to a higher temperature will be less efficient!

        What law will govern the effect of radiative energy from lower to higher bodies?
        Will it be the first law, such as heat gained = C × ΔT,
        Or the Second Law?

      • The answer to your question is above in my message of December 11, 2010 at 6:41 am.

        The radiative heat exchange between two bodies is not a complete thermodynamic cycle. The second law tells only that more energy flows from the hotter to the colder than from the colder to the hotter, or in other words the net flow is from the hotter to the colder.

      • Pekka

        I gather the way you see it is: let’s say 300 J of radiation arrive from the colder body, say at 200 K (with the usual spread of wavelengths).
        This radiation is fully absorbed by a body at 250 K.
        If we know the mass M and specific heat C of the warmer body we might try a first law solution such as
        Q = C·M·ΔT.
        However the alarm bells might ring in our heads and say:
        No this is the province of the Second Law!
        Where do you stand?

      • I’m mystified by these objections to Pekka’s statement about bidirectional heat flow. Take two square sheets of perfectly black metal, each of side 42 cm (about 1.9 sq.ft.), at respective temperatures T₁ hK (hectokelvin, i.e. 100T₁ K, so 3.17 hK = 317 K) and T₂ hK , and place them side by side 1 cm apart. Then by Wien’s displacement law, radiation at wavelengths peaking at 29/T₁ microns will flow from sheet 1 to sheet 2 while radiation at wavelengths peaking at 29/T₂ microns will flow from sheet 2 to sheet 1. By the Stefan-Boltzmann law, the power at the first wavelength received by sheet 2 is T₁⁴ watts (the sheets were sized to make the constant 1 to within .02%) while that received by sheet 1 is T₂⁴ watts. The net flux is then T₁⁴ − T₂⁴ watts, with the sign determining the direction of flow.

        This is completely consistent with everything Pekka has been saying. In particular it is clear that radiation is flowing in both directions, and we can even measure each direction separately when the temperatures, and hence wavelengths, are sufficiently far apart.

        Based on the comments I would estimate the fraction of the n contributors to this discussion who regularly teach thermodynamics to be 1/n.
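
        A small numerical sketch of the two-plate example above (my own, using the same numbers as the comment; geometry and view factors ignored, as there): each perfectly black 0.42 m × 0.42 m sheet radiates sigma·A·T^4 watts, and since sigma·A is about 1.0e-8 the emitted power is about T^4 watts with T in hectokelvin. Radiation flows in both directions; only the net transfer is constrained by the second law.

        SIGMA = 5.670e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
        AREA = 0.42 ** 2        # plate area, m^2 (0.1764 m^2, so SIGMA*AREA is ~1e-8)

        def plate_power(T_kelvin):
            """Total power radiated by one perfectly black plate at temperature T (watts)."""
            return SIGMA * AREA * T_kelvin ** 4

        T1, T2 = 317.0, 288.0   # example temperatures, i.e. 3.17 hK and 2.88 hK
        p1, p2 = plate_power(T1), plate_power(T2)
        print(f"sheet 1 emits {p1:.1f} W, sheet 2 emits {p2:.1f} W")
        print(f"net flow, hot to cold: {p1 - p2:.1f} W")   # roughly 32 W from sheet 1 to sheet 2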

      • I don’t think anyone disputes that radiation emanates from both objects. The issue is about the net flow. Heat is a flow. The transfer of heat between objects is the result of the net flow. The presence of the cooler object slows the rate of cooling of the hotter object, but to talk of the cooler object ‘heating’ the hotter object just confuses the issue, and is contrary to natural language usage. The natural usage of ‘heating’ is that if something is being heated, it will get hotter. The cooler object doesn’t do this to the hotter object. It radiates towards it yes, but heats it? No.

        I understand why proponents of the AGW hypothesis wish to torture language this way though, given the average atmospheric temperature is lower than the average oceanic temperature. They have to convince people the tail wags the dog somehow.

      • to talk of the cooler object ‘heating’ the hotter object just confuses the issue

        Was someone claiming that the cooler object heated the hotter one? I thought the phrase Bryan was objecting to was “Heat can flow from a colder to a hotter object as long as more heat flows from the hotter to the colder.”

        and is contrary to natural language usage.

        I looked up the dictionary definition of radiant heat and got

        radiant heat

        noun Thermodynamics
        heat energy transmitted by electromagnetic waves in contrast to heat transmitted by conduction or convection.

        That definition seems completely consistent with the wording of the phrase in question. The term “radiant heat” is in wide use in English, with the above dictionary meaning.

      • “The term “radiant heat” is in wide use in English, with the above dictionary meaning.”

        Certainly when sitting in front of a bar radiator which is hot and you by comparison are cold. Try not to confuse heat with energy (radiant or otherwise). Pekka who “teached (sic) physics” may have an excuse that you don’t.

      • Vaughan Pratt
        There is a difference between the vernacular use of the word HEAT and the Thermodynamic meaning of the word.
        I had assumed that climate science is indeed a science and hence will use the scientific meaning of the word.
        In fact Heat is more like a verb as it describes a PROCESS by which thermal energy is transferred from a higher to a lower temperature.
        If you are in any doubt about this look up a Physics textbook.
        Further anyone like Pekka who has been anywhere near a thermodynamics course knows I am correct.
        Clausius stated the Second Law in a most explicit way.
        If any climate science practitioners want to get on the wrong side of the law they must join the other crackpots for it is a massive obstacle and will not be faulsified.

      • There is a difference between the vernacular use of the word HEAT and the Thermodynamic meaning of the word.

        That’s irrelevant to my point, which if you look at my comment you will see was merely in response to the claim I was objecting to, “is contrary to natural language usage.” I take “natural language usage” to mean something like “vernacular use,” what do you take it to mean?

        If you are in any doubt about this look up a Physics textbook

        Tried a bunch, haven’t found one yet. Sounds like you’re the one unfamiliar with physics. How many units of physics did you take in college (assuming 3-5 unit courses)?

        Further anyone like Pekka who has been anywhere near a thermodynamics course knows I am correct.

        I’ll let Pekka be the judge of that. I thought he was contradicting you but if not then I am clearly at fault here.

        and will not be faulsified

        If you’re a native speaker of English you shouldn’t be complaining about Pekka’s “teached.”

      • Vaughan

        If you are in any doubt about this look up a Physics textbook

        ……….”Tried a bunch, haven’t found one yet. “…….
        Name one of the bunch.
        I dare you because I know you can’t

        You will also find in any Physics textbooks a scientific meaning for words like WORK , FORCE and POWER
        I repeat what kind of crackpot wants to challenge the Second Law?

      • ……….”Tried a bunch, haven’t found one yet. “…….
        Name one of the bunch.
        I dare you because I know you can’t

        Oh Bryan, when you lift a line straight out of “Trolling for Dummies” you really ought to try to obfuscate its source at least a little.

        The divine Richard S. Courtney advised us all not to feed the trolls (I would quote the thread but WordPress’s search mechanism won’t cooperate). I really should follow RSC’s advice but I’m just such a sucker for them with their pleading ways. Sigh, oh well, here goes.

        Resnick and Halliday, 6th printing, Oct. 1963, Part I, p. 466: “heat is a form of energy.”

        That contradicts what you wrote:

        In fact Heat is more like a verb as it describes a PROCESS by which thermal energy is transferred from a higher to a lower temperature.
        If you are in any doubt about this look up a Physics textbook.

        I didn’t need to look it up because I was in no doubt that physicists treat heat as a noun—I was trained as one (a physicist, that is, not a noun), in the early 1960s as you can tell from the print date of my copy of H&R.

        You may be thinking of your sustenance provider, who presumably thinks of “heat” as a verb when in the kitchen. (And so do physicists when they’re in their kitchens.)

        I was tempted to quote ten more introductory physics texts (20 feet of the shelf space in my home office is physics texts, a few percent of which are introductory) but that would be promoting troll obesity.

        I’ve told Al Tekhasski (aka Alexey Predtechensky of Austin, TX, trained in Novosibirsk under Victor L’vov who went to Israel rather than the US) and Arfur Bryant (no idea who he is) that, after much discussion, I have nothing further to say to them. You’re coming close to the end of the same line after much less discussion since you’re rather more transparent than those two.

      • Vaughan

        ……”Resnick and Halliday, 6th printing, Oct. 1963, Part I, p. 466: “heat is a form of energy.””…..

        Of course HEAT is a form of energy, but I think R&H went a bit further than that, did they not?
        You are adamant that you hold some kind of position in an American University.
        Therefore you will have access to library facilities which will allow you to verify these modern textbooks.
        University Physics by Harris Benson page 382
        Modern definition of Heat
        Heat is energy transferred between two bodies as a consequence of a difference in temperature between them.
        ……………………………………………
        University Physics Young and Freedman

        Energy transfer that takes place solely because of a temperature difference is called heat flow or heat transfer, and energy transferred in this way is called heat. (page 470)

        Heat always flows from a hot body to a cooler body, never the reverse. (pg 559)
        ……………………………………………..

        When Feynman finished the thermodynamics sections in the famous 3 volume lectures he recommended interested readers who wanted to take the matter further only one book.
        The book he recommended was Heat and Thermodynamics by Zemansky.
        If you read pages 57 and 58 (Fourth Edition)
        you find him defining heat as the transfer of energy from a higher temperature to a lower temperature.
        On page 147 he gives the Clausius/Kelvin statement on the Second Law.

      • Bryan,
        Classical thermodynamics (in contrast to statistical thermodynamics) is a surprisingly abstract mathematical construction, where defining the concepts gets difficult. Heat and entropy are two prime examples of this difficulty. From the formulation of the theory it follows that heat is defined through an equation that represents energy flows. Classical thermodynamics stops at this point. It cannot tell more about the nature of heat. The quotes that you gave are expressions of this fact.

        When one goes further to microscopic phenomena, one can derive classical thermodynamics from the laws of microphysics (best from quantum mechanics) and mathematical statistics. When this approach is taken, heat gets a better explanation. It is the internal energy of matter, or more precisely the kinetic and potential energy of the atoms, molecules and even electrons within atoms. Looking at it this way, it is also easier to understand the connection between radiative energy transfer and heat.

      • I add one point.

        The first law of thermodynamics tells us that energy is conserved. It is a postulated “external” law also in microphysics. The second law of thermodynamics is, however, derived in statistical thermodynamics. It is no longer a separate law that must be postulated.

      • Pekka

        I framed my posts in terms of Classical Thermodynamics.
        Statistical Thermodynamics (from Mechanics) takes a different road to say the same thing.
        As far as I know Statistical Mechanics does not contradict in any way the classical approach.
        So I will stick with Clausius, Feynman and Zemansky.
        I hope you agree that my posts here are consistent with the Classical Approach.

        Now surely with your background you can pick one detail of the G&T paper and explain how it is in error.
        If you do, as far as I know you will be the first to achieve this.
        The Halpern et al Comments paper was a failure, as it appears they did not read the G&T paper carefully enough.

      • Bryan,
        Statistical thermodynamics does not contradict classical thermodynamics, but it extends it. Sticking with classical thermodynamics you miss the opportunity to know more. If you knew statistical thermodynamics well, you would no longer have those problems that you have now. Then you would understand everything better.

      • Pekka.
        I don’t think I have any problems here.
        On the other hand you have made serious allegations .
        The G&T paper you say is full of mistakes.
        School level equations have been mixed up with advanced equations in a cut and paste mess without coherence.
        I think you must substantiate your position or else withdraw these statements.
        You do appear to have competence in thermodynamics.
        As far as I know you are the first person with any competence in this area to dispute the G&T paper.
        None of the Halpern et al group have any special knowledge of thermodynamics.
        You can use Classical Thermodynamics or Statistical Mechanics to explain your position, its up to you.
        The economy of the World is being dislocated because of speculated unfortunate radiative properties of CO2.
        The stakes could not be higher.
        People with knowledge in this area should not regard themselves as being on one side or the other.
        Clear objective science is the only way forward.

      • For those who know physics, I need not justify my claims, because they see the facts very easily themselves.

        For those who know so little physics that they do not see the weaknesses themselves, I cannot teach enough through this kind of discussion.

      • Pekka.
        You might as well not post such an admission of defeat.
        Why post anything, nobody twists your arm.

        However you come on and make wild accusations. When asked to explain, you refuse to substantiate them, hiding behind a ‘well I just know and I’m not telling you’ kind of remark.

        What would an independent observer think!
        You leave a very poor impression of yourself.

      • The Halpern et al Comments paper was a failure, as it appears they did not read the G&T paper carefully enough.

        The main service rendered by the Miskolczi and G&T papers is as a convenient test of whether to engage in conversation about climate science when so invited at a cocktail party. Just ask their opinion of those papers. If their opinion agrees with yours then you’ll be able to spend a pleasant evening together discussing the subject constructively. If it doesn’t then further conversation will be a pointless waste of time for both of you.

      • @Bryan (quoting Young and Freedman) Heat always flows from a hot body to a cooler body never the reverse. pg 559

        I’ll be happy to let you have the last word on this. Here’s mine.

        “Always” is just as dangerous a word to write in a physics text as “never.” In this case Young and Freedman must be restricting attention to conducted heat without making this clear, since this statement is demonstrably false for both radiant heat and convected heat.

        For radiant heat it fails whenever the cooler object is at some distance from the hotter one and is radiating a large amount of sufficiently narrow-band radiant heat focused in a sufficiently narrow beam at the hotter object, which is radiating diffusely as a black body. The temperature of the cooler radiator is defined by the Wien Displacement Law and can be considerably lower than that of the hotter body, yet the cooler one heats the hotter one to a greater degree than vice versa because more net power is being transferred from the cooler to the hotter one than vice versa.

        For convected heat, an ordinary desktop fan provides a counterexample. If it blows dry air at 10 °C over a wet cloth at 9.9 °C, one might expect the hotter dry air to heat the cooler cloth by convection. In fact it cools the cloth yet further by evaporative cooling (what we Aussies call a Coolgardie safe).

      • The only torturing of the language is by those skeptics who claim that the second law forbids the greenhouse effect.

      • What amazes me is the number of supposedly technically competent people who have been taken in by the G&T paper. How have they failed to see either the polemics or the illogic?

        I think what we’re seeing here is a bunch of information technologists who think that because they understand information technology very well they therefore understand technology very well, and therefore understand physics.

        Sadly it doesn’t follow, as a 30-minute physics exam would quickly reveal for any of them. I have not run across a single climate denier yet who could maintain a coherent discussion of physics for even two sentences. (Ok, Willis Eschenbach, I’ll make an exception in your case. Ten sentences.)

      • Willis Eschenbach

        Thanks for the vote of semi-confidence, Vaughn. FYI, I also think the G&T paper is nonsense …

      • My pleasure, Willis. But what does your WUWT fan base have to say about your position on G&T? Do you expect to lose closer to 10% or 90% of them?

        I ask this because Judith still has some G&T supporters on her blog. I can’t see any of them keeping you among their Facebook friends.

        But this also raises another question, whether the Curry-haters rank Judith above or below you in their rogues gallery of climate heretics.

        Either way I consider both of you worthy opponents in case there’s any general interest in debates about confidence in the IPCC’s judgments. For all I know they may prove I have less confidence in the IPCC than either of you. Theoretically no, but climate is too complex a matter to be analyzed other than empirically (Pratt’s Axiom in case it hasn’t already been named for someone else).

      • Willis
        ….”FYI, I also think the G&T paper is nonsense …”…..

        Perhaps instead of throwing out a pointless insult you could explain where some of the major errors in the G&T paper are!
        You could consult the Halpern et al paper if you are stuck for an answer.
        However it is generally considered that the Halpern et al paper was a dismal failure.

      • However it is generally considered that the Halpern et al paper was a dismal failure.

        To quote Ron Cram, “I call BS.” Name a single nondenier that has ever suggested anything remotely like that.

        I had been meaning to write my own critique of the G&T paper until I saw the Halpern et al paper, which made all the points I was going to make and more, so I didn’t need to.

        Stop kidding yourself. The G&T paper is utter trash, through and through, unless you’re looking for examples of how to incorporate extreme polemics into your next paper.

        I challenge you to name the last time in the past 700 years when a paper as polemical as that of G&T had the slightest influence on science textbooks of the following two centuries.

        You’re not a sceptic, you’re a denier. There’s a difference.

      • Vaughan Pratt

        I gather you couldn’t find any part of the G&T paper in error till you read the Halpern paper.
        You seem to be inspired by this deeply flawed comment on the G&T paper.

        I’ll make it really easy for you then!

        Pick one point from the Halpern paper that you think shows the G&T paper is “utter trash”
        Or continue with a blanket smear that fools no one.

      • Bryan,
        The reason that you get the same answer from all directions is in the nature of the G&T paper.

        Its main fault is not in individual details that are easy to pinpoint. Its main fault is that it contains nothing substantive and that it presents strong conclusions not supported by its main text. It picks bits and pieces from various fields of physics, but does not even try to justify the conclusions it presents. It is empty in content and strong only in unjustified claims.

        This is the whole real criticism that this paper is worthy of. If you disagree, you should tell where in the content you find justification for any single one of the strong conclusions. I claim that the paper does not contain one single example of that. What more can you expect from a critic without challenging this claim by a counterexample? If you think that you have a counterexample, we can proceed to study whether we can disprove it.

      • Pekka Pirilä

        I would have thought an easier approach for yourself would have been to develop one of the Halpern et al attempts at a criticism of the G&T paper.

        However here is a start.

        G&T use the familiar approach of physicists to a problem: they first look at the experimental reality.
        1. The famous experiment by R W Wood.
        This proved two things
        a. The reason a glasshouse is hot is not from trapping radiation.
        b. The radiative effects of atmospheric gases at typical temperatures are so small they can be ignored.

        2. They make use of the experimental work of A Schack who came to the same conclusions as Wood but added precise measurement of the effects of CO2.

        G&T then contrast this with the claims made for the Greenhouse Effect such as the claimed increase in average Earth Surface temperature of 33K and find these claims unproven and unphysical.
        Now I could go on but all I would be doing is rewriting their paper.
        Now its your turn to prove that their line of reasoning is in error.

      • Bryan,
        a. The reason a glasshouse is hot is not from trapping radiation.

        Not true as discussed by Vaughan Pratt, who studied it even experimentally. The glasshouse warms both through radiative effects and by limiting convection. Both factors are very important. Furthermore this is completely irrelevant for the main issue.

        b. The radiative effects of atmospheric gases at typical temperatures are so small they can be ignored.

        Empty claim, which is patently false and not justified in the paper.

        2. They make use of the experimental work of A Schack who came to the same conclusions as Wood but added precise measurement of the effects of CO2.

        The same ideas that Schack used have been repeated with much better data and they lead to the present estimate of radiative forcing. They cannot be used to counter these estimates.


        G&T then contrast this with the claims made for the Greenhouse Effect such as the claimed increase in average Earth Surface temperature of 33K and find these claims unproven and unphysical.

        What is their argument? I cannot find any. This is just the point that I made.

        They write into their paper many things about physics (most of them true, some less true), but these things have no value as arguments to support their conclusions as they do not even try to present the connection. They just jump to conclusions without any real supporting arguments.

      • Pekka Pirilä

        a. The reason a glasshouse is hot is not from trapping radiation.
        ………………………….
        Not true as discussed by Vaughan Pratt, who studied it even experimentally. The glasshouse warms both through radiative effects and by limiting convection. Both factors are very important. Furthermore this is completely irrelevant for the main issue.
        ……………………………………………..
        So Vaughan disproves the experiment by R W Wood.
        Where can I get hold of the evidence!
        ……………………………………………….
        b. The radiative effects of atmospheric gases at typical temperatures are so small they can be ignored.
        Empty claim, which is patently false and not justified in the paper.
        …………………………………………..
        Well R W Wood made such a claim and there is certainly plenty of evidence to back it up.
        ………………………………………..
        2. They make use of the experimental work of A Schack who came to the same conclusions as Wood but added precise measurement of the effects of CO2.

        The same ideas that Schack used have been repeated with much better data and they lead to the present estimate of radiative forcing. They cannot be used to counter these estimates.
        …………………………………………………………..
        So you are now disputing the experimental work of
        Schack!
        What evidence do you have that he was mistaken!
        …………………………………………………………….

        G&T then contrast this with the claims made for the Greenhouse Effect such as the claimed increase in average Earth Surface temperature of 33K and find these claims unproven and unphysical.

        They go into various models of the Earth to show that the average temperature of 15C has been arrived at without justification.
        They include a calculation showing that Sun/Earth illumination results in an average total radiation from Earth implying an average temperature of 279 K, without any speculation as to atmospheric effects.
        Richard Fitzpatrick arrives at a similar figure.

      • Bryan,
        Your additional comments were empty. They did not contain any indication contradicting my claim that G&T do not even try to justify their conclusions. They do not present any logical chain linking their other material to the conclusions.

        Why do you ask for evidence, when you know that you do not believe anything?

        The claims that I made are true, but there is no way I can prove them or any other claim through messages on the Internet, when you just disregard any arguments without any attempt to find out whether they are true or not.

        I have decided already a couple of times that it is useless to argue with you, but I cannot always keep my decisions. Now I try once more.

      • Pekka Pirilä

        I have supplied you with some of the evidence that G&T use to substantiate their paper.
        You dispute this but more or less say I should accept what you say without evidence!
        Take our first item for instance;

        a. The reason a glasshouse is hot is not from trapping radiation.
        ………………………….
        Not true as discussed by Vaughan Pratt, who studied it even experimentally. The glasshouse warms both through radiative effects and by limiting convection. Both factors are very important. Furthermore this is completely irrelevant for the main issue.
        ……………………………………………..
        So Vaughan disproves the experiment by R W Wood.
        Where can I get hold of the evidence!
        ………………………….
        As yet not supplied!
        ……………………………
        …” Furthermore this is completely irrelevant for the main issue.”……..
        Not true; the main issue is whether the atmospheric radiative gases produce a greenhouse effect that is so significant that it adds 33 K to the planet’s surface temperature.
        Wood found that the radiative effect was very small, almost negligible, at room temperature.

  94. Surely the above discussion (425 technical comments and counting) makes it clear that the science is far from settled on this topic. In fact the controversy seems quite far-reaching. This is important.

  95. I could use some help on my next post, which is on the topic of the no-feedback CO2 sensitivity. If you know of any references that provide a specific value (like 1C or whatever) I would appreciate your pointing them out. thx.

    • I am curious as to why you want to continue to focus on CO2 forcing issues when the greatest uncertainties lie elsewhere?

      • trying to nail down what we actually understand; currently tearing my hear out since I can’t even make sense of how people are coming up with 1C and how to interpret it. So everyone seems to believe this number, i’m trying to figure out why people have confidence in this (I’m having less and less).

      • That makes sense, since it is your area of expertise. I look forward to the debate.

      • Judith, you write “trying to nail down what we actually understand; currently tearing my hear(t) out since I can’t even make sense of how people are coming up with 1C and how to interpret it. ”

        You won’t find anything. The 1C rise in temperature for a doubling of CO2 without feedbacks is a purely hypothetical and completely meaningless number, which cannot be measured, and for which no error has been assigned. As an undergraduate at Cavendish Labs, had I presented the scientific garbage in Chapter 6 of the TAR, any professor would automatically have awarded me 0 out of 100. I would never have dared present it to my tutor and mentor; a gentleman who went on to be Prof. Sir Gordon Sutherland, Head of NPL. Had I done so, he would have given me a tongue lashing so severe, I don’t think that I would have ever recovered.

        Your words are the most encouraging that I have read so far on this blog.

      • I agree with Jim.
        There are so many assumptions and such unknown error bars that I don’t think there will be “proof” that any particular number is the correct one for a long time.

        Welcome to the sceptics world, some of us have lost hair scratching our heads so hard. I hope you keep yours.

      • James Hansen mentioned 1-1.2 deg C in his presentation to the Committee on Commerce, Science and Transportation of the United States Senate in May 2001.

        His figure 2 is the one where he adds all the forcings, natural and anthropogenic (1.7 W/m2), to come up with 1 W/m2 = 0.75 deg C, hence 1.7 W/m2 = 1.2-1.3 deg C.

      • The real atmosphere has feedbacks. Thus the “no feedback CO2 sensitivity” is an artificial construct whose precise definition can not be inferred from nature. Instead it must be defined using some theoretical framework – or model, where one can tell what is feedback and what is not.

        Such a number is an intermediate result in some approaches of calculating the full CO2 sensitivity with feedbacks. In other theoretical frameworks it is just an additional result that may help in comparing certain components of different models or theoretical frameworks.

        I would not go as far as Jim Cripwell and claim that the number is meaningless, but I agree with him that the number has no direct connection to the real atmosphere or earth.

      • This is from a book “The Human Impact Reader” by A. Lacis, et al. On page 232 there is a table of models. The first model is for doubling of CO2 with no feedbacks. The calculations do not appear to be shown, but the 1.2 C number is there.

      • “Thirteen years ago, Danny Braswell and I did our own calculations to explore the greenhouse effect with a built-from-scratch radiative transfer model, incorporating the IR radiative code developed by Ming Dah Chou at NASA Goddard. The Chou code has also been used in some global climate models.

        We calculated, as others have, a direct (no feedback) surface warming of about 1 deg. C as a result of doubling CO2 (“2XCO2”). ….” – Roy Spencer

      • There are a couple of steps to explain. First we have the commonly used 3.7 W/m2 from doubling CO2. Then we convert that to a delta T using the derivative of Stefan-Boltzmann (4 sigma T^3). If you use the top-of-atmosphere effective T (=255 K), you get very close to 1 C for doubling CO2 at the top of the atmosphere. Even Lindzen and Choi posit these arguments between their equations (1) and (2), so it is quite uncontroversial. The next step is how this relates to surface warming, and that is where lapse rates have to be considered, but it is generally going to be a similar magnitude, though this number would be very hypothetical, and less easily definable considering global variation, while the TOA temperature change is more firmly founded as an equivalent radiative temperature there.

      • Judith,

        Fred Moolten gave an explanation for that figure here
        However, as I subsequently pointed out further on, an increase of 3.8W/m2 energy flux does not necessarily lead to a 1.2C increase in temperature.

      • Pekka writes “I would not go as far as Jim Cripwell and claim that the number is meaningless.”

        I should have explained why I claim the number is meaningless. No-one has ever associated an error with this number; no +/-. Surely any student of physics knows that a number with no +/- is meaningless.

      • Thanks Peter, this is what I was looking for. I agree 100% with your last sentence, this is what I am trying to investigate. Hope to have this post ready later tonite.

      • Judy – For the temperature response to CO2 doubling (absent other feedbacks), 1 deg C is an approximation, and the models yield values of about 1.2 C. The difference reflects the inclusion in the models of heterogeneity involving latitude, seasonality, etc., regarding lapse rates and other variables.

        For one reference regarding the models, see Soden and Held, Journal of Climate 19:3354, 2006. The estimated parameters for the Planck response alone (i.e., the “no added feedback” scenario) average about -3.1 to -3.2 W/m^2 per deg K. For a forcing from CO2 doubling of 3.7 W/m^2 (from Myhre), a rise of about 1.2 C is therefore required.

        Regarding the 1 deg C approximation, it is implied although not directly calculated in the Hansen et al. 1981 Science paper. Confusingly, they end up with a higher value than 1 C, but that is because their CO2 doubling estimate exceeds the 3.7 W/m^2 value. Ignore their values, but instead consider the basis of their approximation, which is a linear lapse rate and a mean Earth radiating temperature of 255 K. The value of 1 then follows (although they don’t do the arithmetic). The Stefan-Boltzmann equation gives Flux = sigma (a constant) x T^4 for a black body (assumed for the Earth for the absorbed, non-reflected radiation). Differentiating this to get dF/dT, we find it to be 4 sigma x T^3, and inverting, we find that dT/dF = 1/(4 sigma T^3). This makes dT = dF/(4 sigma T^3), and substituting 3.7 for dF yields a temperature change of almost exactly 1 deg C.
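
        A minimal numerical check of the arithmetic above (my own sketch in Python, using only the figures quoted in the comment): the Planck-only response is dT = dF/(4 sigma T^3), evaluated at the 255 K mean radiating temperature with the 3.7 W/m^2 forcing for doubled CO2.

        SIGMA = 5.670e-8                           # Stefan-Boltzmann constant, W m^-2 K^-4

        def planck_response(dF, T):
            """Warming needed to radiate an extra dF (W/m^2) at temperature T (K)."""
            return dF / (4.0 * SIGMA * T ** 3)

        lambda_0 = 4.0 * SIGMA * 255.0 ** 3        # Planck parameter, ~3.76 W/m^2 per K at 255 K
        dT = planck_response(3.7, 255.0)           # ~0.98 K, i.e. roughly 1 deg C
        print(f"Planck parameter at 255 K: {lambda_0:.2f} W/m^2 per K")
        print(f"no-feedback warming for 2xCO2: {dT:.2f} K")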

      • Fred, these are the exact papers i’m reading. The soden and held paper seems to be the most significant one in this regard, but i can’t figure out how they actually did their calculation for the Planck response. I have the same concern that Peter 317 has. I hope to have my post ready later tonite so we can dig into this topic.

      • The 3.1-3.2 Planck response estimates come from the models. The approximation I described calculates out to a value of 3.7, hence the 1 deg C temperature rise.

        I’m not sure I understood Peter’s point. Energy escapes to space only via radiation, at an average temperature of 255 K. How it moves from the surface via radiation, convection, and a small amount of conduction is irrelevant. As long as the lapse rate is linear, the temperature change at the surface will be the same as at the radiating altitude. Forcing is defined as specifying an unchanged lapse rate, although feedbacks can subsequently alter lapse rates.

      • Peter’s point is about how this radiative forcing actually translates to increasing surface temperature. An extremely simple (and arguably unrealistic) model is being used to relate troposphere radiative forcing to surface temperature change. Look at the surface forcing, not the tropopause forcing, and you don’t have to make all those assumptions (linear lapse rate, etc.). Do the problem both ways, and see how different your answer is in terms of temperature change.

      • We have to think of the 1 degree warming at the TOA as a way to express the 3.7 W/m2 as an equivalent temperature change, so it is synonymous with the forcing. I don’t think it makes sense to translate this to the surface, because whatever you do needs assumptions beyond the forcing or even needs global models with the water vapor feedback disabled somehow, which is not going to be a very satisfactory result in terms of gaining agreement.

      • I wonder whether we’re talking about the same thing. As far as I know, forcing, except for some stratospheric adjustments, is defined as the instantaneous change in radiative balance at the tropopause (in vs out) due to, in this case, a CO2 doubling. By definition, therefore, it is what happens before there is any change in temperature, humidity, lapse rate, etc. These things may change, but those changes are responses to the forcing-mediated radiative imbalance. In other words, the absence of response is a matter of definition, not assumption. I suppose the only assumption is that at the instant of doubling, lapse rates are linear or very close to it in the relevant altitudes. To the best of my knowledge, this is not in serious dispute, at least as a global average. Is there evidence to the contrary? What happens to lapse rates in response to forcing is a question of feedback.

        As long as lapse rates are linear, a 1.2 C change at any altitude must mathematically translate into a 1.2 C change at the surface. Of course, the feedback issue involves the magnitude of the actual change.

      • To elaborate slightly on my above comment, the calculations we’re discussing show, via Stefan-Boltzmann, that a 3.7 W/m^2 reduction in radiative balance at the tropopause reduces the radiating temperature from 255 K to a level 1-1.2 deg C lower. To restore balance requires the same magnitude of temperature increase, and via lapse rate, the same change at the surface, regardless of the way surface energy is transmitted upwards.

      • Peter,
        Do you refer to the fact that a 1 K increase of temperature from 255 K to 256 K corresponds to 3.78 W/m^2, while the increase from 288 K to 289 K corresponds to 5.45 W/m^2, according to the Stefan-Boltzmann law?

        The difference must obviously be covered by back radiation, if the temperature increases are equal.
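
        A quick numerical check of the two figures above (a sketch only): the extra blackbody emission produced by a 1 K warming depends on the starting temperature through F = sigma T^4.

        SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

        def extra_flux(T):
            """Increase in blackbody emission (W/m^2) when T (in K) rises by 1 K."""
            return SIGMA * ((T + 1.0) ** 4 - T ** 4)

        print(f"255 K -> 256 K: {extra_flux(255.0):.2f} W/m^2")   # ~3.78
        print(f"288 K -> 289 K: {extra_flux(288.0):.2f} W/m^2")   # ~5.45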

      • Fred, I think we are agreeing that this is the forcing we are talking about. The assumption of constant lapse rate is an added one to translate it to a surface temperature from a TOA temperature. It is a question to me: if we uniformly increase the whole atmosphere (and surface) by this temperature do we exactly get back the 3.7 W/m2 we lost by doubling CO2? If so, I have no argument with translating the temperature as a surface change too.

      • Jim – If I understand your question correctly, my response is that the surface, at 288 K, will experience a larger W/m^2 change for a 1.2 C increase than the mean radiating altitude at 255 K, the difference being due to back radiation. It is only at the radiating altitude that a 3.7 W/m^2 restoration is the quantity needed to restore temperature to 255 K. As an aside, but relevant to other comments, the Planck (no feedback) response is the basis for the calculated temperature change. It is purely radiative by definition, since it excludes changes in evaporation, humidity, lapse rate, clouds, etc. These things change, but their effect is combined with the Planck response to estimate the true temperature response.

      • Fred, understood, but can we say that 1.2 C applies to the surface (and all other levels) too? That was the gist of my question, because it is not obvious the TOA outgoing longwave radiation would be 3.7 W/m2 more when you warm the whole atmosphere by this amount. It won’t be far off, so I think this is just quibbling on my part.
        I am making a distinction between the effective TOA radiative temperature (a two-dimensional field) and the atmospheric temperature (a three-dimensional field) that produces that TOA temperature, where there are clearly some added degrees of freedom.

      • Fred,

        I agree that the surface will experience a larger W/m2 change for a 1.2 C increase than the mean radiating altitude, in fact I assert that it must do, because, besides the 3.7W/m2 radiation loss, it will also be losing energy through convection and evaporation.
        Here, we can assume that either the increase in back-radiation is just enough to balance the equation and hold the surface change to 1.2C, or the actual surface increase is less (or more) than 1.2C

        Apologies if I’m a bit unclear – I’ve rewritten the above about five times now and it still doesn’t quite say what I want it to. It’s been a long day for me ;-)

      • I’m tackling the no-feedback CO2 sensitivity, which is defined as “a metric used to characterise the response of the global climate system to a given forcing. It is broadly defined as the equilibrium global mean surface temperature change following a doubling of atmospheric CO2 concentration.” So, how does the CO2 forcing change the surface temp, in the absence of feedbacks? No reason to expect lapse rates to be linear (they really and truly are not linear), and this also assumes zero heat capacity for the surface, which is not correct.

      • Besides the zero heat capacity, the other assumption is that the W/m2 figure calculated for the average temperature is the correct one. Because of the non-linear relationship between the two, this cannot be true unless the range of temperatures making up the average is tiny.

      • agree completely

      • MODTRAN (as in the U Chicago online code) can be used to answer this because it has a selection of profiles and outputs the TOA flux. It has a sensitivity of only 2.8 W/m2 to doubling CO2, but you can displace the temperature profile by 0.9 C to restore the TOA flux to the original, keeping H2O constant ( I used the US standard profile which is also not bad as a proxy for the global mean).

      • Looking at the earth from space, the 1 K increase in the effective radiative temperature of the earth will not be uniform over wavelengths.

        The radiation will increase most at wavelengths that pass through the atmosphere from the earth’s surface. On the other hand, the temperature of the top of the troposphere does not necessarily rise at all, as the change may influence the altitude of the tropopause.

        Looking in this way the change turns out to be quite complicated.

      • I was considering the surface temperature change which, assuming things like a linear lapse rate, should be equivalent to the TOA value. What I suppose I’m really questioning is whether the latter assumptions are indeed valid.

    • Re: curryja,
      May I suggest a presentation paper with back-of-the-envelope calculations using 6 different observational methods to measure sensitivity.

      Although 5 are WITH feedbacks and only ONE without, the comparison between them is quite interesting.

      • Below is an excerpt from Vincent Gray in Greenhouse Bulletin 121, 1999:

        “There is a considerable difference of opinion between various authorities making these calculations. Cess et al. (1993) found a range of calculated figures for the radiative forcing for a doubling of carbon dioxide concentration (considered on its own, without other greenhouse gases or “feedbacks”) of between 3.3 and 4.8Wm-2 for fifteen models, with a mean of 4Wm-2 . This is a variability of ±0.75Wm-2 or ±19%. Since each modellist will have chosen “Best Estimate” figures for his model the actual variability of possible forcing would be larger than ±19%.

        The Intergovernmental Panel on Climate Change (IPCC) in their first Report (Houghton et al 1990) gave the following formula for calculating the radiative forcing (Delta F) in Wm-2, of changes in atmospheric concentration of carbon dioxide:

        Delta F = 6.3 ln (C/Co), (1)

        where C is CO2 concentration in parts per million by volume and Co is the reference concentration. The formula is said to be valid below C = 1000 ppmv and there were no indications of the accuracy of the formula. The formula predicts a radiative forcing of 4.37 Wm-2 for a doubling of carbon dioxide concentration. This is 9% greater than the mean value assumed by the models (Cess et al. 1993).

        This formula is said to derive from a paper by Wigley (1987), but the formula in this paper is not quite the same. Wigley’s formula, derived from the model of Kiehl and Dickinson (1987), is

        Delta F = 6.333 ln (C/C0) (2)

        considered accurate over the range 250ppmv to 600ppmv; and “is probably accurate to about +10%”.

        Formula (1) has been used by the IPCC scientists for their calculations of radiative forcing “since pre-industrial times”, and for their calculations of future radiative forcing (and so, temperature change) for their futures scenarios.

        In the IPCC 1994 Report (Houghton et al 1994) the authors of Chapter 4 (K.P. Shine, Y. Fouquart, V. Ramaswamy, S. Solomon, J. Srinivasan) sought to counter the prevalent belief that infra red absorption of carbon dioxide is saturated by providing an example showing the additional absorption from 1980 to 1990. Their graph (Figure 4.1, page 175) integrates to give a forcing of 0.31Wm-2 (Courtney 1999). If the Mauna Loa figures for carbon dioxide concentration of 338.52 ppmv for 1980 and 354.04 ppmv for 1990 are substituted in formula (1) you get 0.28Wm-2, 9% lower than the IPCC illustration.

        A revised formula for calculation of radiative forcing from changing concentrations of carbon dioxide has recently been published (Myhre et al 1998).

        Delta F = 5.35 ln (C/Co) (3)

        The authors express the view that the IPCC estimates “have not necessarily been based on consistent model conditions”. They carry out calculations on the spectra of the main greenhouse gases by all three of the recognised radiative transfer schemes: line-by-line (LBL), narrow-band model (NBM) and broad-band model (BBM). They calculate the Global Mean Instantaneous Clear Sky Radiative Forcing for 1995, for atmospheric carbon dioxide, relative to an assumed “pre-industrial” level of 280ppmv, as 1.759Wm-2 for LBL, 1.790Wm-2 for NBM and 1.800Wm-2 for BBM; a mean of 1.776Wm-2 with BBM 2.3% greater than LBL.

        The new formula gives 3.71Wm-2 for doubling carbon dioxide; 15% less than the previous formula. It is also below the mean of 4.0Wm-2 of the models (Cess 1993).
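
        For concreteness, here is a minimal sketch evaluating the two logarithmic formulas quoted above, using nothing beyond the coefficients 6.3 and 5.35 given in the excerpt:

            import math

            def forcing(C, C0, alpha):
                """Radiative forcing in W/m^2 for CO2 rising from C0 to C ppmv."""
                return alpha * math.log(C / C0)

            print(round(forcing(560.0, 280.0, 6.3), 2))    # ~4.37 W/m^2, IPCC 1990 coefficient
            print(round(forcing(560.0, 280.0, 5.35), 2))   # ~3.71 W/m^2, Myhre et al. 1998
            print(round(forcing(354.04, 338.52, 6.3), 2))  # ~0.28 W/m^2, the 1980-1990 example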

    • David L. Hagen

      Judith
      Some other items per your request for CO2 references:
      Dr. Roy Spencer’s blogs on CO2 feedback
      http://www.drroyspencer.com/?s=co2+feedback

      e.g., On the Relative Contribution of Carbon Dioxide to the Earth’s Greenhouse Effect September 10th, 2010 by Roy W. Spencer, Ph. D.
      http://www.drroyspencer.com/2010/09/on-the-relative-contribution-of-carbon-dioxide-to-the-earth%E2%80%99s-greenhouse-effect/

      Why 33 deg. C for the Earth’s Greenhouse Effect is Misleading September 13th, 2010 by Roy W. Spencer, Ph. D.
      http://www.drroyspencer.com/2010/09/why-33-deg-c-for-the-earths-greenhouse-effect-is-misleading/

      Spencer cites Richard Lindzen 1990
      Some Coolness Concerning Global Warming.
      http://www-eaps.mit.edu/faculty/lindzen/cooglobwrm.pdf

      Note also: Richard Lindzen 1986
      CO2 Feedbacks and the 100 K climate cycle
      http://www-eaps.mit.edu/faculty/lindzen/127co2~1.pdf

      Further papers by Lindzen CO2
      http://scholar.google.com/scholar?q=lindzen+co2&hl=en&btnG=Search&as_sdt=800001

      Spencer cites:
      Manabe & Strickler 1964
      https://www.gfdl.noaa.gov/bibliography/related_files/sm6401.pdf
      Note 408 citations to Manabe & Strickler 1964
      http://scholar.google.com/scholar?cites=12138464137035298601&as_sdt=800005&sciodt=800001&hl=en

      Miskolczi 2010 quantitatively evaluates absorption of each of the greenhouse gases. He calculates separate and combined sensitivities for CO2, H2O and temperature based on the TIGR radiosonde record and NOAA data.
      THE STABLE STATIONARY VALUE OF THE EARTH’S GLOBAL AVERAGE ATMOSPHERIC PLANCK-WEIGHTED GREENHOUSE-GAS OPTICAL THICKNESS
      http://www.friendsofscience.org/assets/documents/E&E_21_4_2010_08-miskolczi.pdf

      For the NIPCC review, see:
      Ch 1: Global Climate Models and Their Limitations
      http://www.nipccreport.org/reports/2009/pdf/Chapter%201.pdf

      Chapter 2 Feedback Factors and Radiative Forcing
      Climate Change Reconsidered, 2009 NIPCC Report, pp 27-61.
      http://www.nipccreport.org/reports/2009/pdf/Chapter%202.pdf
      (For Non-CO2 feedbacks)

      Further reviews are provided at CO2 Science.
      http://www.co2science.org/

      See: Carbon Dioxide etc.
      http://www.co2science.org/subject/c/subject_c.php

      CO2 Temperature Correlations
      http://www.co2science.org/subject/c/co2climatehistory.php

      See feedback factors
      http://www.co2science.org/subject/f/subject_f.php

      (The NIPCC & CO2 Science primarily address the issues of feedbacks rather than the CO2 itself.)

    • Why is there such a big fat silence from our resident professional radiative physics ‘experts’ on this question? Maybe they are keeping their powder dry for the main post?

      • Actually, I think everybody is headed to the annual meeting of the American Geophysical Union in San Francisco. I will post something on that meeting tomorrow night.

  96. Pekka Pirilä

    Get an online graphing tool and use its facilities to produce blackbody radiation curves (spectral radiant exitance against wavelength) for the same black body radiating at:

    600K
    900K
    1200K

    We notice, as expected, that the higher temperatures produce far more radiation at short wavelengths than the lower temperatures do.
    Pick any particular wavelength (say 4um);
    We notice that there are many more photons at this wavelength for higher temperatures than for lower temperatures.
    Now imagine two of the temperatures examples radiating to one another.
    What sense does it make to say the colder one is heating the hotter one?
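
    A minimal sketch of that exercise (my own illustration of Planck’s law, not tied to any particular graphing tool) is to evaluate the spectral exitance at a chosen wavelength, say 4 um, for the three temperatures:

        import math

        H  = 6.626e-34   # Planck constant, J s
        C  = 2.998e8     # speed of light, m/s
        KB = 1.381e-23   # Boltzmann constant, J/K

        def spectral_exitance(wavelength_m, T):
            """Blackbody spectral exitance, W m^-2 per metre of wavelength (Planck's law)."""
            a = 2.0 * math.pi * H * C**2 / wavelength_m**5
            b = math.exp(H * C / (wavelength_m * KB * T)) - 1.0
            return a / b

        lam = 4e-6  # 4 micrometres
        for T in (600.0, 900.0, 1200.0):
            print(T, f"{spectral_exitance(lam, T):.3e}")
        # The exitance at 4 um rises steeply with temperature, but it is non-zero at
        # every temperature: both the hotter and the colder body radiate toward each other.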

    • Radiation is transferring energy in both directions. It is like having a two-way street running East to West. During one time interval there may be 200 cars driving East and 300 driving West. The net flow of cars is 100 going West, but that does not mean that no car is driving East.

      There is nothing more difficult about radiative heat transfer going both ways at the same time.
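
      As a minimal numerical sketch of the two-way street (using illustrative blackbody surfaces at 288 K and 255 K, not any particular atmospheric layer):

          SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

          T_hot, T_cold = 288.0, 255.0
          flux_hot_to_cold = SIGMA * T_hot**4   # ~390 W/m^2 emitted by the warmer surface
          flux_cold_to_hot = SIGMA * T_cold**4  # ~240 W/m^2 emitted by the cooler surface
          net = flux_hot_to_cold - flux_cold_to_hot  # ~150 W/m^2, always from hot to cold

          print(round(flux_hot_to_cold), round(flux_cold_to_hot), round(net))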

      • Pekka

        The net flow is what is called HEAT

      • The real point is not arguing about words. The real point here is that completely false claims have been presented and perpetuated by people who either do not know about the things they are talking about or are creating confusion against their better knowledge.

        All the claims that second law of thermodynamics proves that additional CO2 cannot heat the earth surface and lower atmosphere are absolute and full nonsense.

      • Words like HEAT and WORK have a special precise meaning in thermodynamics.
        We must use them carefully.
        Have you read the G&T paper?
        Most negative comments have come from people who have not read the paper.

      • It occurs to me that G&T are confusing conduction, where heat can only flow in one direction, with radiation where it can flow in both. Note that IR radiation is also called heat, and its flow is also measured in W/m2, the units of energy flux.

      • Jim D
        If you read their paper you will find that they fully understand the radiative properties of CO2 and H2O.
        Their point of view is that the effects do not amount in magnitude to what is claimed for the “greenhouse effect”.

      • Do they talk about the measurements of 200-300 W/m2 of IR from the sky to the ground, and how that affects surface temperature, because if not, they are missing the point.

      • Jim D
        Why don’t you read their paper and find out for yourself

      • When I read in the original paper that photons can’t flow from a colder to a hotter body, I stopped reading and dismissed G&T as cranks. Even a high school physics student could probably see that is wrong. And they weren’t talking about a dynamic equilibrium or net flow of photons either. The last time I googled for their paper, there were redactions, so I’m not sure what’s going on.

      • Jim or Jim D

        Perhaps you could quote the page number for your remarkable claim
        …..”When I read in the original paper that photons can’t flow from a colder to a hotter body”……..

      • I should have added that I don’t think you can.

      • There is this comment earlier by someone called Bryan
        “For instance G&T say that a major error in most versions of Greenhouse Theory is in violations of the Second Law.
        A fairly frequently expressed climate science understanding of this law is ;
        Heat can flow from a colder to a hotter object as long as more heat flows from the hotter to the colder.
        This is wrong.”
        Perhaps this misled people into thinking G&T said that. I am now looking at G&T and will see for myself if they did.

      • …and I don’t find they did, so it may be from the famous Dragon book that is all about this issue.

      • Jim D
        I hope nobody thinks that G&T said…….

        Heat can flow from a colder to a hotter object as long as more heat flows from the hotter to the colder….

        Their position is exactly opposed to that statement.

      • I included the “This is wrong.” sentence. This implies they don’t believe that statement. However, G&T say none of this, so it is irrelevant. My reading of G&T shows it to be more about complaining about the wording of science explanations than about the science itself. I find it interesting that they completely denounce Fig. 23, which is a typical energy balance diagram, and this would also shoot down any idea Miskolczi had as a side-effect. But I could not see why they denounce those diagrams, and I haven’t seen anyone explain what they mean by that.

      • That was how I interpreted page 78. I could be wrong.

      • Jim

        Yes I agree that this page could have been a bit clearer
        Their basic opinion in the rest of the paper is quite clear
        Hotter and colder bodies can radiate to one another.
        However HEAT only flows one way from hotter to colder body.

        This is expressed in their reply to the Halpern et al comment

        (www.skyfall.fr/wp-content/gerlich-reply-to-halpern.pdf)

      • Bryan,
        Heat is a form of energy, and their way of separating the two is without any basis and misleading or erroneous. Chapter 3.9 contains many correct statements but no reasonable conclusions. For me it is unclear whether I should say that there are no conclusions or that the conclusions are wrong, because the text is formulated so vaguely.

        What is certain is that this chapter does not give any real justification for the main conclusions presented in the abstract. It is also clear that the text presents erroneous critique of other publications. This critique shows only that they have not understood the real issues correctly.

      • Pekka
        They make a very helpful summary of 16 points.
        Arthur Smith produced a paper taking issue with summary point 2.
        Someone then replied to Arthur Smith’s objection with a paper contesting some points in his paper.
        This is all healthy; it’s the way science progresses.
        I think that this is the attitude that Judith is trying to establish.
        We are however well off topic here and on most other sites we would have been snipped.

      • Bryan,
        This paper is not science. It is not worth listing its weaknesses before there is any reason to think that the paper contains anything of any value.

      • There is actually not so much difference between conduction and radiative heat transfer. When one looks at the molecular level, the conductive processes also move energy in all directions. The difference is that in conduction this occurs only over very small distances, because energy is transferred in very small steps.

        In a large crystal one phonon can transfer energy from one edge of the crystal to the opposite edge, and this may happen from the colder edge to the hotter edge. Still, heat transfer through phonons is considered conduction.

        The second law of thermodynamics is based on a statistical rule that tells which processes are more common and which are less common. One clear example is that quanta of energy move more often from hot to cold than from cold to hot.

      • Pekka, yes, conduction can be considered the net result of phonon vibrations, but is calculated as just a one-direction flux proportional to local temperature gradient, while radiation calculations have to explicitly treat both streams because of their longer-range dependencies.

      • Jim D,
        I may be expanding on your message rather than answering it.

        Radiative heat transfer may in some cases also be calculated in a similar way. This would require that radiation is the dominant mechanism and that the path length of the quanta is always small compared to the dimensions of the full compartment considered.

        Neither of these conditions is valid for the atmosphere, because radiative heat transfer does not alone dominate the heat transfer (conduction is very important as well) and because some wavelengths have very long path lengths and the quanta may escape the whole atmosphere.

        This is the reason that one cannot simply use a diffusion equation for radiative heat transfer as one can for conduction. Further, this is the reason that it is useful to consider back radiation separately.
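
        To make that concrete, here is a rough grey-atmosphere sketch (my own illustrative layer temperatures and optical depths, not a real profile) of why the upward flux at the top is not a local quantity: every layer, and the surface itself, contributes directly, weighted by the transmittance of the air above it:

            import math

            SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

            # Hypothetical isothermal layers: surface first, then three grey layers above it.
            layer_T   = [288.0, 270.0, 250.0, 230.0]  # temperatures, K
            layer_tau = [None,  0.5,   0.3,   0.2]    # grey optical depth of each layer

            def upward_flux_at_top(layer_T, layer_tau):
                tau_above = 0.0
                flux = 0.0
                # Walk from the topmost layer downward, accumulating the optical depth above each emitter.
                for T, tau in zip(reversed(layer_T[1:]), reversed(layer_tau[1:])):
                    emissivity = 1.0 - math.exp(-tau)             # grey-layer emissivity
                    flux += emissivity * SIGMA * T**4 * math.exp(-tau_above)
                    tau_above += tau
                # Surface emission, attenuated by the whole column above it.
                flux += SIGMA * layer_T[0]**4 * math.exp(-tau_above)
                return flux

            print(round(upward_flux_at_top(layer_T, layer_tau), 1))  # W/m^2 escaping to space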

      • In my message I write “conduction is very important as well”. I meant convection.

      • Pekka, saying it is nonsense and claiming your opponent is either ignorant or evil is not a counter-argument. If you cannot explain (or teach) your opponent’s position then you simply do not understand it. That is what you should say if you cannot respond to their specific technical argument: that you do not understand them. It does not follow that they must be wrong.

      • It is not possible to teach anybody who absolutely refuses to learn.

      • Pekka, I am not talking about teaching anyone anything. Being able to teach something is a measure of understanding. You have not demonstrated that you understand the specific argument of G&T, which you must do to argue effectively that they are wrong. You merely say it is nonsense.

      • I have looked at G&T. It appears to be an incoherent collection of physical equations without any clear logic. The abstract claims:

        “The atmospheric greenhouse effect, [..] essentially describes a fictitious mechanism, in which a planetary atmosphere acts as a heat pump driven by an environment that is radiatively interacting with but radiatively equilibrated to the atmospheric system. According to the second law of thermodynamics such a planetary machine can never exist.”

        This is an extreme and obviously wrong claim, but I’m unable to find any place in the text where they would really try to justify this claim.

        Several of these quasi-scientific skeptical papers, including G&T, make very strong claims without specific justification. They are also so incoherent that it is impossible to tell what they are really trying to say. As one cannot find out what they are trying to say, it is also difficult to be specific in the criticism. One can only notice that the papers lack all real substance.

      • Pekka
        I notice that you have yet to reply to my post of
        December 11, 2010 at 11:16 am
        Could it be that you realised you had dug yourself into a hole and had better stop digging?
        Your comments about G&T show you are unable to engage in any substantive dialog on the Second Law.
        Their mistakes, you say, are many, but apparently you can’t be specific about any one of them!
        Continually saying it’s rubbish while refusing to substantiate that fools no one.

      • Pekka

        The point is NO HEAT goes from cold to hot.

        Let’s see what Professor Clausius says;

        Rudolf Clausius (1822-1888) Germany

        Heat cannot of itself pass from a colder to a hotter body.

        It is impossible to carry out a cyclic process using an engine connected to two heat reservoirs that will have as its only effect the transfer of a quantity of heat from the low-temperature reservoir to the high-temperature reservoir. (1854?)

        “Es existiert keine zyklisch arbeitende Maschine, deren einzige Wirkung Wärmetransport von einem kühleren zu einem wärmeren Reservoir ist.”;
        No cyclically operating machine exists whose only effect is heat transport from a cooler to a warmer reservoir.

        It is impossible for a self-acting machine, unaided by any external agency, to convey heat from one body to another at a higher temperature. [Kelvin’s translation]

        Do you get the point, Pekka?

        It’s not that heat mostly travels from hot to cold, as you seem to think.
        No HEAT moves from a colder object to a hotter one!

      • Can you be more precise in your question? Do you claim no photons can go from colder to hotter objects? I ask because heat can be transferred by photons.

      • Bryan, if heat is defined as the net flow of energy, then yes.
        Just as, if I give you $10 and you give me $5 change, although $5 has flowed from you to me, the net flow has been $5 from me to you.
        I think your argument with Pekka is 99% semantics – apologies if I’m wrong.

      • Peter317

        Read what Clausius says in post above.
        He couldn’t be clearer.

  97. Pekka.
    Semi-serious question. Would it be OK to describe model-based papers as quasi-scientific papers?

  98. Pekka
    On G&T
    Remember these are named individuals holding down real jobs and any criticism should be constructive if possible.
    When you taught at the Technical College you would have found it very unpleasant to be described as ‘stupid, incoherent, producing rubbish and so on’ all over the Web.
    In fact they are theoretical physicists with a strong background in experimental physics and a high competence in mathematics.
    Now that doesn’t automatically mean they are correct.
    Their aim in the paper was not to come up with an alternative climate theory.
    They restricted themselves to the radiative effects of CO2 in the atmosphere, and in their opinion claims that the “greenhouse effect” leads to a 33 C rise in the Earth’s surface temperature have not been proven.
    Look at their paper and find some point where they are in error – you are eminently qualified to do that.
    I look forward to you being a bit more specific.

    • The first problem with the paper is that it does not really say anything, except that it is more critical of many well-known scientists than I have ever been of the paper itself, and that it has an abstract and conclusions that are not justified by the paper’s content.

      With a paper of such qualities it is not really possible or necessary to go into the other details.

      If somebody thinks that the paper contains something valuable, he should say where in the paper it can be found. After that one can either accept that there is indeed something there or present more precise criticism on that point.

      • It is quite possible that the G&T paper is simply incoherent, meaning not clear enough for others to understand the argument. Confusion and coherence are my field of study and this sort of failure is far more common than most people realize. I often make someone angry by saying “I do not understand what you just said.” People should say this more often.

        I have a diagnostic system of 126 kinds of confusion. It sounds like G&T are skipping over the detailed reasoning needed for comprehension. (“Leaving it to the student to work out” as it were.) Perhaps they are assuming a view their readers do not have.

        The problem is that if the G&T paper is incoherent then we can’t say whether what they are trying to say is right or wrong. It may be quite deep but there is no way to know. Given the author’s credentials it would be wrong to simply dismiss it because it is incoherent. Nor should we accept it of course.

      • David

        The paper was addressed to physicists and published in the International Journal of Modern Physics B.
        Perhaps they could be persuaded to write a paper addressed to the general public.
        Unfortunately they do not participate in Blogland.
        Had they done so, quite a few misinterpretations could have been avoided.

      • Your point is certainly correct, but it is unlikely to be the main problem of G&T.

        Their paper consists of several disjoint chapters. Each of them is elementary physics (formulae copied from elementary textbooks), partly correct and partly erroneous or misleading. Most of the chapters appear to have almost nothing to do with the issue considered.

        The paper is almost void of any real content, but presents strong unjustified conclusions.

      • Given the credentials of the writers, I find your claims difficult to believe but I do not have time to do a coherence analysis. If they are working at an elementary level it is most likely that they are being deep.

      • What are the credentials?

        Sincerely, I cannot believe that anybody with real credentials could author this paper.

      • Pekka
        They both have PhD Physics.
        One is a Professor of Mathematical Physics.
        How do your credentials stack up against them?
        Because you don’t understand the paper doesn’t mean the problem is with them.

      • My credentials are sufficient for concluding that the paper is in very many ways well below the basic requirements for a scientific paper. It has no coherent structure or logic. It has chapters of no relevance to any of its conclusions. It severely misrepresents the application of the second law of thermodynamics. It presents strong conclusions not supported by the text.

        Is that sufficient?

      • Pekka
        Despite several times making assertions you have yet to prove that even one point that G&T list is in error.
        How can you comment on the Second Law when you are confused about the direction of heat flow?

      • Despite several times making assertions you have yet to prove that even one point that G&T list is in error.

        Repeating rubbish doesn’t make it true.

      • V. Pratt
        You are also confused about the word heat never mind the direction!

      • They both have PhD Physics.
        One is a Professor of Mathematical Physics.

        In that case surely G&T should have been able to get their paper published in a journal with an impact factor greater than 0.647.

        No reputable journal would dream of accepting this level of material.

        Because you don’t understand the paper doesn’t mean the problem is with them.

        Pekka didn’t say he didn’t understand it. Which leads me to ask whether you understand it. Do you? If so, would you be willing to sit a small examination on Judy’s blog concerning what it has to say? If not, why not? Not able?

        Where do you stand on tobacco? This is all so like it.

      • Yes anytime you like Pratt!

      • Sincerely, I cannot believe that anybody with real credentials could author this paper.

        That mystified me too, as well as the number of people that took it seriously. What has happened to physics education?

      • Yes I was wondering what kind of pratt would look up the thermodynamic meaning of heat in an English dictionary rather than a Physics textbook?

  99. A.Lacis wrote: “Climate is what physicists call a boundary value problem. And the important boundary value for climate is the solar-thermal energy balance at the top of the atmosphere, which is 100% by radiative means. Atmospheric dynamics is certainly important in climate, but dynamics is much more important in the case of weather forecasting applications.”

    This write-up clearly shows that Dr. Lacis has not the foggiest clue about mathematical physics, and about hydrodynamics in particular. Climate is a boundary-value problem only if one considers it to be in a stationary state. The whole thing becomes completely incoherent when climatology invokes various “forcings”. With external forcings (and internal alteration of boundary conditions and imposition of abrupt parametric changes), the problem becomes more like an “initial-value problem”. The “solar-thermal energy balance at the top of the atmosphere” becomes a partial _solution_ to the problem, not a boundary condition, ending up “in balance” just because the system has to. The only boundary condition at the top is “no flux of mass”, and the bottom condition is that the vertical velocity of air is zero. The rest (temperatures, fluxes) are “prognostic” (using the ..oological lingo) boundary _solutions_ to atmospheric dynamics.

    The current state of climate science is a mess beyond any hope. Professors of climatology have to understand that there are no averages of objects without the objects themselves, so there is no climate without weather. They need to be informed that there is no closed mathematical formulation of the averaged [hydro]dynamics of weather, so their faith that “predicting climate” is a much simpler problem than predicting weather is sheer ignorance.

    I apologize for the harsh tone of my post, but I have asked professors of climatology several questions about the details of radiation calculations, and they ignored them. I don’t like this. I assume now that they did not answer my questions because they don’t have an answer, and there is no right answer at all.

  100. As I think I had commented before, The G&T paper is a good example to test your proficiency and understanding of basic physics and radiative transfer principles.

    If, after reading the G&T paper, you feel that G&T might be saying something profound and reasonable, then you are deficient in your understanding of basic physics and radiative transfer, and have a lot to learn. The G&T paper is so far off the wall that I can only characterize it as beyond simply being wrong.

    Why people who should know better write stuff like that, I don’t really know. It is a question better addressed by psychology than by physics.

    • Andy, I hope you’ve had a chance to read our Halpern et al paper! (not that you’ll learn anything from it, but it’s the only formal reference out there to the IJMPB article by G&T). In any case, I know this blog has all been about ‘debate’ and ‘listening to all sides’ etc, but you’re going to end up wasting a lot of your time if you want to convince people that G&T or M have anything of meaning to say.

      • Andy, I hope you’ve had a chance to read our Halpern et al paper! (not that you’ll learn anything from it, but it’s the only formal reference out there to the IJMPB article by G&T).

        My only question is who was your intended audience? Anyone who knows the first thing about science can see the G&T paper is a total crock. Who else is going to look at your paper?

      • Gordon Robertson

        “…you’re going to end up wasting a lot of your time if you want to convince people that G&T or M have anything of meaning to say”.

        Here is G&T’s reply to your paper:

        http://arxiv.org/PS_cache/arxiv/pdf/1012/1012.0421v1.pdf

        Before I read the G&T reply, I came to the same conclusion as they did. None of you understand the 2nd law of thermodynamics. I have heard that Halpern uses the nym Eli Rabett. If so, I have debated him on Jennifer Marohasy’s site. I told him I thought he was wrong with his theories, and I was surprised to see him come out with that paper, if he is in fact Halpern.

        In your paper, you criticize G&T as making the claim that one body of two bodies radiating against each other has one body not radiating. I could not believe that you had read that into G&T’s paper. They said no such thing.

        You guys seem to think that a cooler body, warmed by a warmer body, can back-radiate to the warmer body thus heating it up beyond the temperature it was when it warmed the cooler body. That is simply not possible, and I have seen words from Clausius to that effect.

        It seems to me you guys are using the 1st law in a way it cannot be used. That’s why the 2nd law was developed. If you read the history surrounding the problem, you will see that Carnot initially thought there would be no losses in a heat engine. He was wrong, and he caught himself. Clausius introduced the 2nd law to plug loopholes in the 1st law that would allow a perpetual motion machine.

        By allowing a dependent heat source to warm the source upon which it is dependent, you have created a perpetual motion machine. You have also created a runaway thermal effect.

        Where you guys get the theory about a net balance of energies is beyond me. Rahmstorf used the same argument and G&T called it nonsense. Ralf Tscheuschner is a physicist who works in the field of thermodynamics. G&T were too polite to call you guys laymen in thermodynamics, but I am more inclined to call a spade a spade.
        You mock G&T, who are both physicists, yet your page indicates you are a student.

        http://www.aos.wisc.edu/~colose/

        On your page, you get hopelessly lost in a mathematical argument with Goddard and Motl (a physicist) and later you chastise them as not understanding ‘climate physics’. Pardon?? What is climate physics?

        This is the problem in a nutshell. You guys are re-inventing physics and doing it badly. You claimed that Motl and Goddard did not understand planetary science. Well, look at this article on Venus by Andrew Ingersoll, a planetary expert. He claims that Venus is not a twin of the Earth and that a greenhouse effect cannot account for the 400 degree surface temperatures.

        You, a student, and Gavin Schmidt, a mathematician, seem to be experts in astrophysics as well as thermodynamics. You challenge physicists and astronomers. How much does James Hansen have to do with this? He studied with Carl Sagan, hence his theories on Venus. Schmidt works with him, and according to Motl, you two have ganged up against Motl.

        One of your partners on the Halpern paper is a physicist, Arthur Jones, but he has worked as a librarian for years, not in a lab. According to G&T, his paper rebutting their initial paper, and which preceded your paper, has been rebutted by Kramm, Dlugi, and Zelger (2009), who found it inaccurate.

        As G&T have pointed out in their rebuttal to Halpern et al, none of you have defined the greenhouse effect. You have presumed it exists but you have not proved it. My own take on that is simple. Since all GHGs are in the vicinity of 1% of atmospheric gases, an atmospheric greenhouse would be akin to a greenhouse with only 1% of the glass in place.

        Satellite data from UAH has disproved the AGW theory. There has been no warming trend for a decade. Trenberth claims the warming has stopped and Jones of CRU claims no statistically significant warming since 1995.

        Finally, I live in Vancouver, Canada. We don’t rely on CO2 warming here, we get ours from the Japanese Current in the Pacific Ocean. The Canadian prairies at a similar latitude to Vancouver are up to 40 C colder in the winter. They get even less solar radiation since Vancouver is typically under cloud cover, yet they are 40 C cooler. How do you explain that using radiative warming?

        It is obvious that the prairies are cooled by convective currents from the Arctic. If you look more objectively, you will see, as Lindzen has pointed out, that other forces besides radiation determine warming/cooling in the atmosphere.

        The theoretical physicist, David Bohm, pointed out that science is plagued by mathematical arguments. He referred to formulae as rubbish if they could not be explained. That perfectly described climate models for me. IMHO, the Halpern paper is based on faulty science derived from models.

      • Gordon Robertson

        Sorry…two corrections:

        1)One of your partners on the Halpern paper is a physicist, Arthur Jones….

        Sorry Arthur, that should read:

        One of your partners on the Halpern paper is a physicist, Arthur Smith…

        2)They get even less solar radiation since Vancouver is typically under cloud … should read:

        They get even more solar radiation since Vancouver is typically under cloud….

    • A Lacis

      ……..”The G&T paper is so far off the wall that I can only characterize it being beyond simply just being wrong.”……..

      Yet you cannot give one example of an error in the paper .
      You fool no one but yourself.

      • Yet you cannot give one example of an error in the paper .
        You fool no one but yourself.

        Repeating rubbish doesn’t make it true.

      • Vaughan Pratt

        I know you are a Pratt but is your first name really Vaughan?

      • I know you are a Pratt but is your first name really Vaughan?

        I know you are a brat but is your first name really Bryan?

      • Ironically my (completely unintended) name change from Vaughan Pratt to vrpratt (Ronald being my middle name) is the result of my struggles with WordPress’s completely brain-dead attempt to make their software an Android app. I tried 50 different ways on my Droid X of trying to get WordPress to recognize me, all of which kept failing. However at the end of it all, WordPress decided of its own accord to change my name on Judy’s blog from Vaughan Pratt to vrpratt, without the decency to ask me if that’s what I had intended.

        Now I have no idea how to change my name back to what it was before.

        So I guess the answer to Bryan’s question is, yes, my first name used to be Vaughan, but thanks to the programmer morons at WordPress it is no more.

        WordPress works in mysterious ways. I assume this is because they hire programmers with IQ’s below 80.

        It occurs to me that God works in mysterious ways for the same reason.

      • Who’s this vrpratt sock puppet character? I can’t even figure out whose side he (or she: Veronica?) is on. Rest assured I will monitor this revolting development with an eagle—er, duck—eye.

        vrpratt’s cockamamie story about WordPress’s inability to write a Droid X app is a complete crock. WordPress programmers have IQ’s approaching 180. Obviously vrpratt is some confused denizen of sock-puppet country who can’t fight her way out of a wet paper bag let alone a Droid X app interface.

      • Bryan cottoned on faster than most to this “Vaughan Pratt” character when he asked whether “Vaughan” was his first name. FYI Vaughan R. Pratt lives in the Philippines with his Filipino wife and has been president of the International Marinelife Alliance for many years. You can find his contact details at the upper right of this brochure.

        It was obvious to Bryan that this intruder on Judith’s blog was someone hoping to leverage Vaughan R. Pratt’s sterling record as an environmentalist in order to foist his CO2-centric opinions on us unchallenged. Well, we’ll see about that, won’t we?

      • That’s ridiculous. I’m a Stanford professor living across the road from the Stanford Hopkins Marine Station on Ocean View Boulevard. I checked my GPS just now and it reads 36.6198 °N 121.9051 °W. The sky has been a cloudless blue all day with the exception of a little smog over Santa Cruz 25 miles to the north. Morning temperature was 56 °F rising to 60 by 2 pm. An average of 200 people an hour were walking along the walkway on the bay side of the boulevard during the day. There were around 50 harbor seals hauled out on the beach below our house near the Marine Station library earlier today (it’s dark now).

        What other proof can I offer that I’m me? I’ve never been accused of not being me before, up to now I was utterly convinced of it. How would you prove you are you?

        What a ridiculous position to be in.

      • I can confirm that Dr. Vaughan R. Pratt, appearing under that name, is the person he claims to be. I contacted him personally at his “stanford.edu” email regarding our exchange on this blog. We exchanged a number of emails elaborating further on the subject.

        Dr. Curry, it’s time for a moderator!

      • I could have written that myself, and I’m nowhere near Pacific Grove. Who’s going to bear out your story about the weather in Pacific Grove? That whole area tends to be foggy for much of the day, it’s ridiculous to think the sky was clear the whole day.

        It’s clear we have a major conspiracy on our hands here. It’s been going on for decades, but it wasn’t brought to light until the George C. Marshall Institute realized what was going on and drew the public’s attention to the deceptive practices of the scientists appointed to the IPCC in 1988, as part of a government plan to divert funds supporting fossil fuels to “alternative fuels” in the name of avoiding some vaguely described global economic catastrophe.

        It only became clear what the IPCC was really up to when the George C. Marshall Institute found evidence of scientific hanky panky in the disturbing corruption of the preparation of the 1996 IPCC report. It turned out that lead IPCC author Benjamin Santer had made unauthorized and politically motivated changes to the report he was in charge of, raising very serious questions about whether the IPCC had compromised if not even lost its scientific integrity.

        That of itself did not demonstrate any global conspiracy, merely a panel that had run amok. It was not until Congress(wo)man Dana Rohrabacher, outraged by this evident corruption of science by politics, demanded that the Department of Energy withdraw all funding from the laboratory that employed Santer, that the American people could see for the first time which side their bread was buttered on.

        Had the DoE been neutral in this affair it would have seen the wisdom of Rohrabacher’s demand and withdrawn the funding. That it did not was the first hint of the global conspiracy that is now threatening our jobs, our well-being, and the freedom to choose what vehicle to drive, freedoms we have had since the US obtained its independence over two centuries ago.

        We will not be able to rid ourselves of this conspiracy until we have seated Sarah Palin in the White House in January 2013.

      • You are a nuisance. I said:
        I can confirm that Dr. Vaughan Pratt, appearing under that name, is the person he claims to be. I contacted him personally at his “stanford.edu” email regarding our exchange on this blog. We exchanged a number of emails elaborating further on the subject.
        You may ask who am I; you can find me here with all my known ancestors:
        http://www.vukcevic.com/MAvukcevic.htm
        with the rest:
        http://www.vukcevic.co.uk/t0.htm
        Now kindly go away, to a blog that suits your kind of nonsense.

    • A Lacis – I have found your comments very interesting. If you have time, could you respond to Willis Eschenbach’s comments from Dec 11 at 12:04 am? I would be interested to read your detailed response to his criticisms of the GISS Model E. Thanks

      • One of the problems with this type of blog is that people can simply not answer questions. That is why many of us hope that CAGW will be tried in a court of law, where cross-examination is allowed. I don’t expect to see any sort of reply from Andrew Lacis. I hope I am wrong.

  101. Richard S Courtney

    Having read every post in this thread I am disappointed.

    In my conversation in this thread (which began at December 6, 2010 at 9:51 am) with the amiable poster who writes under the name of Randomengineer, I wrote this.

    “Please understand that I completely and unreservedly agree with you when you assert:

    “I’d said : “We also know that we put a lot of CO2 in the air, which according to basic equations ought to result in some warming. Whether this is a fraction of a degree or more isn’t relevant at this stage. “

    This isn’t a conclusion of rampant runaway warming, but a statement that merely adding GHG with all things being equal ought to raise the temp.”

    Yes, I agree with that. Indeed, I thought I had made this agreement clear, but it is now obvious that I was mistaken in that thought.

    And I was not trying to change the subject when I then explained why I think this matter we agree is of no consequence. The point of our departure is your statement – that I stress I agree – saying,
    “merely adding GHG with all things being equal ought to raise the temp.”

    But, importantly, I do not think “all things being equal” is a valid consideration. When the temperature rises nothing remains “equal” because the temperature rise induces everything else to change. And it is the net result of all those changes that matters.

    As I said, a ~30% increase to radiative forcing from the Sun (over the 2.5 billion years since the Earth has had an oxygen-rich atmosphere) has had no discernible effect. The Earth has had liquid water on its surface throughout that time, but if radiative forcing had a direct effect on temperature the oceans would have boiled to steam long ago.

    So, it is an empirical fact that “merely adding GHG with all things being equal ought to raise the temp” is meaningless because we know nothing will be equal: the climate system is seen to have adjusted to maintain global temperature within two narrow bands of temperature while radiative forcing increased by ~30%.

    Doubling atmospheric CO2 concentration will increase radiative forcing by ~0.4%. Knowing that 30% increase has had no discernible effect, I fail to understand why ~0.4% increase will have a discernible effect.

    I hope the above clarifies my view. But in attempt to show I am genuinely trying to have a dialogue of the hearing, I will provide a specific answer to your concluding question that was;

    “So I’m a bit confused regarding the position. Let me ask this then: should there be a temp rise with adding CO2 that doesn’t happen for [some reasons] or is adding CO2 something that never results in a temp rise?”

    Anything that increases radiative forcing (including additional atmospheric CO2 concentration) will induce a global temperature rise. But the empirical evidence indicates that the climate system responds to negate that rise. However, we have no method to determine the response time. Observation of temperature changes following a volcanic cooling event suggests that the response time is likely to be less than two years. If the response time is that short then we will never obtain a discernible (n.b. discernible) temperature rise from elevated atmospheric CO2. But if the response time is much longer than that then we would see a temperature rise before the climate system reacts to negate the temperature rise. And this is why I keep saying we need to determine the alterations to clouds, the hydrological cycle, lapse rates, etc. in response to increased radiative forcing. We need to know how the system responds and at what rate.”

    Nothing I have read in this thread changes my opinion one jot.

    Richard

    • As I said, a ~30% increase to radiative forcing from the Sun (over the 2.5 billion years since the Earth has had an oxygen-rich atmosphere) has had no discernible effect. The Earth has had liquid water on its surface throughout that time, but if radiative forcing had a direct effect on temperature the oceans would have boiled to steam long ago.

      Why is Judy’s blog getting this sort of scientific balderdash? It’s complete gibberish.

      • You gutless coward Willard.

        Have you any responses to the content of Courtney’s comments?
        Have you engaged in debate with him?

        You gutless coward you, shame on you, you low life, you should be blogging at a gossip magazine blog.

      • David L. Hagen

        Baa Humbug
        Please desist from ad hominem trash talk and rise to professional scientific discourse – or else go find Marley.

      • David, you are of course absolutely correct, and I have no defence.

        I will point out Willard’s cloak-and-dagger attack on the man, as opposed to my open and direct attack on the man.

      • The accusation of ad hominem is not always that easy to substantiate. Showing it is a fallacy needs a little more than saying it is so. Vaughan Pratt asked:

        > Why is Judy’s blog getting this sort of scientific balderdash?

        Reading the Sourcewatch page can help answer this kind of why-question, the same kind of why-question leads to pedigree, indoctrination, arrogance, ideologues, heresy and dogma.

        Vaughan Pratt will explain all by himself why he believes Courtney’s post is complete gibberish. He certainly does not need the Sourcewatch page for that.

      • I tend to agree with David.
        It’s a waste of time getting irritated to the point of departing from the science.
        I must admit that I responded myself to the provocation of
        Vaughan Pratt but now realise that it’s pretty pointless.
        I have no wish to see the site as a bear pit where the best insult wins.
        Judith is to be respected for her attempts to get everyone focused on the science.

      • I must admit that I responded myself to the provocation of Vaughan Pratt but now realise that its pretty pointless.

        But entirely forgivable given how provocative it was. I’m sure Bah Humbug will be just fine after he counts to ten. Which we know Vaughan Pratt can’t do if he starts his counting with a single-digit number in order to count to one. No doubt this explains his tirades.

        Judith is to be respected for her attempts to get everyone focused on the science.

        Whoa, aren’t we forgetting the politics here? If we don’t keep our eye on the ball we’ll all end up down the rabbit-hole of science, and we’ve seen where that leads. Focus, people, please.

      • It was inevitable, the man has finally started talking to himself.
        But that’s OK, most of us mumble to ourselves at some stage. But you know what they say, worry when you start answering yourself.

      • Thank you for sharing your perspective on this.

        Do you happen to know if there is incorrect information on that Sourcewatch page?

      • You gutless coward Willard.

        You said earlier you were helping out JoNova, BH. I was wondering whether her rather extreme attitude was entirely her own. Maybe not.

      • You gutless coward Willard.

        Attaboy, disembowel that moron.

        Wait, how do you disembowel someone who’s gutless?

        Ok, cut off his left pinkie. And his right if he persists.

    • randomengineer

      Nothing I have read in this thread changes my opinion one jot.

      Why should it?

      This thread has been an exploration of what is known vs what we think we know vs what is certainly unknown.

      Some posters assert certain knowledge where the evidence is underwhelming, e.g. the paper by Hofmann that Vaughan Pratt references upthread. This paper (a posteriori modeling) seems absurd even on the face of it; the presumption that nature contributes a fixed 280 ppmv of CO2 and that all of the CO2 rise since 1790 is man-made is just that, a presumption. Surely the planet must see some natural warming as recovery from the LIA? Hofmann’s paper begs the question of how the MWP and LIA even happened at all.

      Obviously there is a great deal that isn’t known. Could be that Dr Hofmann is correct. But asserting that this is knowledge is simply nonsense.

      I see Willard has “outed” you. BIG ENERGY SHILL! Bah. Who cares? You either have something to contribute or you don’t. [Willard, if anything, what you have shown is that Richard here is consistent in his criticism and that you are willing to discount what he says based on your politics. If anyone seems silly here, it’s you.]

      My own take on reading this thread is that the subject of climate change is dominated by the radiative physics crowd and everyone else is along for the ride. It’s interesting that Drs Lacis and Hansen have papers regarding venus starting in the 1970’s; it’s apparent they’re taking what they learned from there and applying it to earth.

      My own sense of things is that they’re not taking the biosphere into account very well. In fact the recent NOAA/NASA paper showing a 1.64 C warming due to the biosphere was a surprising and welcome addition. Not because of the reduced number, just that they took the biosphere into account, and IMHO this is a step in the right direction. In the thread “no feedback sensitivity” it seems just as clear that a number of things felt to be known are not; they’re assertions based on calculations.

      Do these things make you right? Not necessarily. I reckon you’re wrong in a number of areas, and to make a case you need to concentrate on specifics. Your assertions re sensitivity boil down to […and then a miracle occurs], which frankly is worth even less than the paper from Dr Hofmann I referenced. The recent NOAA/NASA paper drills down into some specifics. I’d suggest that you find the papers etc. that support specifics such as these and champion them: the fewer the miracles, the stronger the case… e.g. I don’t agree with Prof Pratt, but he’s at least referencing data rather than what he feels about it.

      In summary, of course you didn’t change your mind. It wasn’t expected.

      • But asserting that this is knowledge is simply nonsense.

        This would be quite correct if Hofmann had simply pulled this formula out of some part of his anatomy.

        However there can be no science without observation, which is the whole point of the Keeling CO2 observation laboratory at Mauna Loa. Hofmann obtained his formula from two observations: the 280 ppmv dating back to 1800 and before, and the 1958-now Mauna Loa observations. The Hofmann formula is an exact match to both. Hence to deny Hofmann’s formula is to deny the evidence of your senses.

        If your senses tell you that you’re standing on the edge of a cliff and you walk forwards, Darwin’s theory of evolution will absorb your kind into the history of evolutionary biology.
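
        For readers who want the shape of such a fit, here is a minimal sketch of a baseline-plus-exponential curve of the kind being described; the 280 ppmv baseline comes from the comment above, but the amplitude and e-folding time are my own illustrative values, chosen only so the curve passes near ~315 ppmv in 1958 and ~385 ppmv in 2008 (they are not Hofmann’s published coefficients):

            import math

            def co2_ppmv(year, baseline=280.0, amp=35.0, t0=1958.0, efold=45.0):
                """Baseline-plus-exponential CO2 curve; parameters are illustrative only."""
                return baseline + amp * math.exp((year - t0) / efold)

            for yr in (1800, 1958, 2008):
                print(yr, round(co2_ppmv(yr), 1))  # ~281, ~315, ~386 ppmv respectively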

      • Gordon Robertson

        “Hofmann obtained his formula from two observations: the 280 ppmv dating back to 1800 and before, ….”

        What am I missing here? He made an observation of 280 ppmv. How did he do that when that density was derived by cherry-picking CO2 guesses from ice cores?

      • The “observation” of 280 for CO2 in the 18th century would be more appropriately called the expert consensus estimate of that number. 270 is also pretty consistent with what we know, though 260 tends to strain the credulity of some experts.

        But the more recent numbers for atmospheric CO2 for the past half century are measured, with unprecedented care, at Mauna Loa (and I used “unprecedented” advisedly here). They are an excellent match to the exponentially growing population’s increasing use of fuel per capita: we can calculate quite precisely just how much CO2 we’re putting into the atmosphere, and the Mauna Loa observations show that roughly half of it is retained, the other half apparently being absorbed by nature.

        This precision of measurement fortuitously coincides with the period where CO2 has been having by far its biggest impact on the atmosphere. A century ago its miniscule contribution was lost amidst the “noise” of volcanoes, solar cycles, El Nino events and episodes, and the Atlantic Multidecadal Oscillation. Today we can correlate increasing temperature with increasing CO2 in a way that was impossible even in the first half of this century let alone earlier centuries. Not because CO2 was swinging wildly back then but because the temperature was.

        One might think CO2 from volcanoes would cause a noticeable swing but the Keeling curve shows Pinatubo actually pushing down CO2 very slightly. The primary impact of volcanoes on global temperatures seems to be cooling due to aerosols, which in turn drives down CO2 presumably via the feedbacks.

      • Gordon Robertson

        “Today we can correlate increasing temperature with increasing CO2”

        Several scientists disagree. IPCC poobah and AGW guru Kevin Trenberth recently lamented that the warming signal has disappeared. In a recent study, Keenlyside et al. acknowledged that the warming has stopped and should resume in 2016. A. A. Tsonis found a natural explanation for the warming and wondered why we were getting hung up on CO2 warming when such a promising natural explanation was there.

        The satellite data from UAH presents another side to the argument. John Christy of UAH has claimed for years that the satellites were showing below average anomalies for the 1979 – 1995 period. That is apparent here:

        http://www.drroyspencer.com/latest-global-temperatures/

        In this article, it is pointed out that the decadal trend from 1998 – 2007 was flat (positive 0.04ºC/decade):

        http://www.worldclimatereport.com/index.php/2008/01/08/musings-on-satellite-temperatures/

        Even Phil Jones of CRU claims there has been no statistically significant warming since 1995. Lindzen concurs.

        It would seem that the CO2/temperature correlation is coming only from GISS. With Hansen being such an alarmist, I have little confidence in their record. Satellites are far more accurate and have a 95% coverage of the lower atmosphere. The data has been verified against radiosonde data.

        BTW…this is all on-topic. Climate models and their advocates, like Hansen, are the ones showing a warming trend.

    • I don’t agree with Prof Pratt, but he’s at least referencing data rather than what he feels about it.

        Thank you, randomengineer. That and Baa Humbug’s tentative acceptance of my second snowball-earth theory are the nicest things anyone’s said to me so far on this blog. Even nicer than the adulation from my (two?) fans, to which I must apply the obvious discount (sorry fans, greatly appreciated but I hope you understand under the circumstances). (In retrospect I feel badly about thanking BH by picking on one of his friends, very unappreciative of me.)

        Basically I hate climate modeling, not in principle but because of its claims of predictive prowess. I really wish climate modelers would calibrate themselves better on the limitations of their profession.

        Climate modeling is a worthy undertaking, but it doesn’t make climate modelers arbiters of the future.

      • David L. Hagen

        Basically I hate climate modeling, not in principle but because of its claims of predictive prowess. I really wish climate modelers would calibrate themselves better on the limitations of their profession.

        Let’s hear three cheers for Vaughan’s affirmation of the scientific method.

        Might there be hope for full objective apolitical evaluation comparing against a null BAU hypothesis and using the principles of scientific forecasting for public policy?

      • Might there be hope for full objective apolitical evaluation comparing against a null BAU hypothesis and using the principles of scientific forecasting for public policy?

        Not sure I’ve parsed this correctly, but do you mean getting back to the way climate science was done back when the public was paying much less attention to it?

        In those days if you asked someone at a cocktail party what they did and they said climate science, your next question would be about their family, their hobbies, anything but climate science.

        Today everyone’s a climate expert.

      • I really wish climate modelers would calibrate themselves better on the limitations of their profession.

        Or at least do a better job of communicating their understanding of their limitations, which the bald statement “the science is settled” fails to do because it simply says “the science” when what is really meant is “the settled science.”

        Unfortunately “the settled science is settled” is a tautology. A better statement would be “climate scientists are in broad agreement on the elementary principles of their subject.”

        As with any subject the constants involved are known only up to some precision, which improves with time. Lord Kelvin famously said that this was all there was left to work on in physics: for him the science was completely settled. He did not anticipate either relativity or quantum mechanics. Neither did his colleagues, but some of them could see room for improvement in more than just the constants.

        Climate sensitivity is such a constant. Its biggest problem today however is not in either measuring it or calculating it, but in defining it. If you define it as the instantaneous relationship between CO2 and surface temperature you get something like 1.8 °C. If you define it as transient climate response using a 20-year response time you get larger figures when calculated, but the calculations are sensitive to the assumptions and produce values that vary with the assumptions. If instead of calculating it you measure it assuming a 20-year response you get 2.6 °C per doubling, with 27 years you get 3.0 °C, with 32.5 years (which there are two independent reasons to find interesting) you get 3.3 °C.

        These numbers, while tentative and subject to all sorts of disclaimers (checking the relevant math, statistics, and science; peer review; general acceptance; etc.), do make the point very clearly that variations in the definition of climate sensitivity can make a huge difference in the associated value.

        If you calculate climate sensitivity using one definition and then plug the resulting value into a calculation or model based on a very different definition, you run the risk of making wildly incorrect projections.

      • Vaughan Pratt, to which climate modeler are you attributing the proclamation “the science is settled”?

      • to which climate modeler are you attributing the proclamation “the science is settled”?

        I wasn’t attributing it to anyone, it’s the generic statement skeptics think they heard in response to their questioning of the science. That’s what I meant by doing a better job of communicating their understanding: scientists need to find better ways of preventing skeptics from hearing this science-is-settled message.

        The problem comes when scientists and policy makers state clearly that climate scientists are in broad agreement on the elementary principles of their subject, and it gets heard as “the science is settled.” A better statement would be, “Climate scientists are in broad agreement on the elementary principles of their subject, which nonetheless is far from settled.” Ben Santer for example makes a point of putting it that way, if not in those exact words.

        Even that’s no protection against those skeptics whose modus operandi is to lift statements out of context, in this case simplifying the above to “the science is far from settled” which sounds very different indeed. The most charitable interpretation of this sort of misquoting is that skeptics who do so may have some sort of attention deficit.

        Policy makers like Bill Clinton and Al Gore have expressed the science-is-settled sentiment one way or another in the past—surely you saw “An Inconvenient Truth” which left its audience with the impression that the science was settled. I knew nothing whatsoever about the role of CO2 in global warming when I first saw it, and formed the strong impression by the end that the science was irrefutable.

        Had you asked me at the time whether “irrefutable” and “settled” were synonyms in that context I’d have wondered what fine semantic point you were getting at. Now I know. But does the public?

        Physics had its Lord Kelvin who famously said his field was settled but for the constants, but I don’t know of any counterpart to Kelvin among still-active environmental scientists, despite the impression of some that the climate field must be crawling with them.

        Assistant professors who say their subject is settled are asserting their redundancy. It’s the retired ones who are more likely to imagine this, patting themselves on the back for a job well done, like Kelvin.

      • David L. Hagen

        “The science is settled, Gore told the lawmakers” was NPR’s summary of Vice President Al Gore’s absolute statement:

        First of all, there is no longer any serious debate over the basic points that make up the consensus on global warming.

        Testimony of the Honorable Al Gore before the
        U.S. House of Representatives, Energy & Commerce Committee, Subcommittee on Energy & Air Quality, and the Science & Technology Committee, Subcommittee on Energy & Environment, March 21, 2007

  102. Baa Humbug

    I think it is reasonable to speculate as to the reasons why some people on the so-called ‘consensus’ IPCC side of the debate would like to see Judith’s attempt at open dialog fail.
    She herself was often cast as the ‘defector to the dark side’.
    And that’s when they were being polite.

    • Unfortunately I fell for the trap. Judith’s blog deserves better and so I apologise unreservedly.

      • Unfortunately I fell for the trap. Judith’s blog deserves better and so I apologise unreservedly.

        Hey, no fair, I was much meaner than Willard when I wrote about the “divine Mr. Courtney”. What does one have to do to get a bit of recognition on this blog for mean remarks, eh?

      • David L. Hagen

        A gentleman and a scholar.
        May all participants rise to the level of professional scientific discourse.

  103. Pratt: Sporting matches are generally played according to a set of rules in which the officials are somewhat competent and unbiased, and scores are kept during the game rather than announced only after closed-door discussion between the officials and the home team, with the press accepting only the home-team’s commentary and results,
    save for some dismissive slagging of the opposition. If I were a fan treated to that kind of charade, well, I’d become a skeptic.

    • Right, “dismissive” describes this “Vaughan Pratt” character to a T. Like you, if I hadn’t been a skeptic for the past quarter century I’d become one on the spot after listening to him. He obviously figures out his positions after closed-door discussions with the IPCC, given the close match between his views and those of the global conspiracy.

      Until we get Sarah Palin into the White House we’re never going to rid ourselves of this conspiracy. With $47 trillion as their carrot, what stick can beat them into submission?

  104. Gentlemen, mind your manners!
    Offenders should be banned!

    • Gentlemen, mind your manners!
      Offenders should be banned!

      Wasn’t Sergeant Pepper’s Lonely Hearts Club banned? All that accomplished was to make the club even more famous. Try something else.

  105. The first confrontation between Hooke and Newton came in 1672. Newton had written a paper on his demonstration of white light being a composite of other colours. It was presented to the Royal Society just prior to Newton’s reception as a Fellow of the Society. Newton thought a great deal of his demonstration, referring to it as “the oddest if not the most considerable detection which hath hitherto been made in the operations of Nature.” But Newton was met with a strong rebuff by Hooke. Hooke had his own wave theory of light; he had gone into some detail about it in the Micrographia, and he still believed in it strongly. He claimed Newton had not proven his idea clearly and needed more detail.
    Newton had the equivalent of a temper tantrum. The situation was made worse for Newton because Hooke was not the only one attacking Newton’s theory; he had been joined by Christian Huygens, Ignace Pardies and the Jesuits of Liege. Newton had, since childhood, reacted strongly to criticism. He constantly challenged authority, and to rebuff him was to become an enemy. Newton demonstrated this over and over during his lifetime; his response was often either complete withdrawal or open battle. On this occasion, Newton chose withdrawal (though usually for Newton withdrawal was some form of manipulation in battle plans). In March 1673, Newton wrote to Henry Oldenburg, the then secretary of the Royal Society, requesting to withdraw from the Society. It took much gushing of admiration, respect, etc. on Oldenburg’s part, as well as an offer to waive his dues to the Society, to get Newton to change his mind. Oldenburg also offered an apology for the behavior of an “unnamed member.” The stage was set. Newton had successfully established his place in the Society, and had scored a victory, of sorts, over Hooke.

    I see no present day Newton or Hooke on this blog.

    • Gordon Robertson

      “I see no present day Newton or Hooke on this blog”.

      We live in more complex times. I admire Newton and what he did. How he managed to remain devoutly religious and still study science is amazing to me.

      I am disturbed by our current emphasis on consensus, as if enough people agreeing on an opinion makes it a scientific fact. That’s what I am seeing in this blog and in others. There are also people trained in one discipline offering expert opinion in other disciplines, which is plainly wrong. Yet they defend their pseudo-science with great emotion.

      Gavin Schmidt, of realclimate and GISS, showed disrespect to Richard Lindzen by suggesting Lindzen’s science is old school while Schmidt’s is ready for text books. Schmidt is a mathematician with little or no experience in atmospheric physics. While Lindzen’s background is in math as well, he has worked in non-modeled atmospheric physics for over 40 years and has published over 200 papers. He was smart enough to become a professor at MIT.

      No…there are no Newtons or Hookes in this blog, nor is there a Clausius. There are people who think they can represent the atmosphere with equations, when physical data suggests otherwise. There is certainly a lot of arrogance, including my own.

      Someone with my limited science background should be careful not to expound on theories that are beyond them. However, I don’t see many people willing to stand up to the status quo, and I feel my voice is better than nothing.

      I do have an extensive background in electronics, computers, and electrical work. I have been forced to realize the meaning behind the theory. In electronics, one must be able to translate the theory of electronics to the real, physical components.

      I have gained a practical understanding of positive feedback from that and it is nothing like what is being passed off for the same in climate models. People programming such models must understand clearly what the equations mean and how they are implemented. There is a big difference between the virtual world of a model and the real world. IMHO, modelers are allowing their arrogance and egos to get in the way of their discipline.

      Newton may have been egotistical and arrogant, but he did not let it get in the way of his science.

      • Christopher Game

        Dear Gordon Robertson,
        I am glad to see your explicit statement: “I have gained a practical understanding of positive feedback from that and it is nothing like what is being passed off for the same in climate models.”
        As I noted elsewhere in this blog, I have the same opinion. Yet they still cite Bode.
        Yours sincerely, Christopher Game

      • I am disturbed by our current emphasis on consensus,

        Marx was disturbed by society’s emphasis at the time on capitalism.

        And I’m disturbed by democracy in government. I just haven’t seen a better system. If you have a better system for science it would be fascinating to understand why it is better.

        Don’t gloss over the details, your system has to actually work.

      • I do have an extensive background in electronics, computers, and electrical work.

        Well, that explains a lot.

        The similarity between technology and science resides in one of the market places in which their battles are fought, namely that of ideas.

        The first difference lies in the other two market places, those of products, and of our understanding of nature. Whereas technologists compete for market share of their products, scientists compete for market share of their understanding of nature.

        The second difference is in the allocation of voting rights. Technologists have a CEO, backed up by a board, who insists that the customer is always right. The customer’s vote is final.

        Science has no CEO, and the scientists are always right. That is, until another scientist proves them wrong, for the scientist’s vote is always tentative, never final. On the other hand scientists set a high standard for voting rights: you must know whereof you speak.

        This is not to say that science today is a theocracy. Perhaps it was once, long ago, but today it is run as a meritocracy, to the extent political interests will let scientists do so. Anyone who believes science is being permitted to develop freely without political interference is living in a private fantasy land. Science is under attack all the time from a wide variety of vested interests.

        I can appreciate how terribly incestuous and self-serving all this must seem to a technologist. If you have a better system you have my full attention.

  106. Richard S Courtney

    If those who do not like the evidence and/or arguments I present could find flaws in what I wrote, then they would state those flaws.

    My view is confirmed when my statement of irrefutable empirical evidence and argument from that evidence is dismissed as “gibberish” and is bolstered by links to lies and insults about me from those who do not like the evidence and argument I present.

    That those who so responded demonstrate their inability to find any flaw in what I wrote is very reassuring.

    Richard

    • > My view is confirmed when my statement of irrefutable empirical evidence and argument from that evidence is dismissed as “gibberish” and is bolstered by links to lies and insults about me from those who do not like the evidence and argument I present.

      This paragraph merits due diligence:

      /1. The entailment is interesting: when Wallace states that the Moon is made of cheese and Gromit has a facepalm moment, Wallace is thus confirmed in his belief.

      /2. The relationship that the verb “bolster” is supposed to convey is not ours, as we specifically stated that

      > Vaughan Pratt will explain all by himself why he believes Courtney’s post is complete gibberish. He certainly does not need the Sourcewatch page for that.

      Source: http://judithcurry.com/2010/12/05/confidence-in-radiative-transfer-models/#comment-21303

      /3. It would be interesting to know what kind of statement is both “empirical” and “irrefutable”, what is an “irrefutable empirical evidence”, not to mention what evidence Courtney is talking about. In any case, during his discussion with gryposaurus, Courtney seems to know what empiricism is. Here:

      > It seems that nobody has explained to you what empiricism is.

      And there:

      > [T]he answer from gryposaurus seem to confirm that he does not understand the basic principles of empiricism.

      Reading the whole exchange between gryposaurus and Courtney makes us surmise that for him empiricism amounts to asking for evidence, denying the evidence offered, armwaving his own pieces of evidence, attacking his interlocutor and then going silent.

      In the quote above, notice the word “confirms”, yet again: sometimes, Courtney does not need much to be confirmed in his views. Perhaps these “confirmations” are what lead Courtney to be quite “consistent” in his criticisms over the years: denying that any “convincing evidence” for AGW has been discovered; denying that recent global climate behavior is consistent with AGW model predictions. (We don’t know if he still holds that temperature affects carbon dioxide, but certainly not the other way around.)

      ***

      We could surmise that these disbeliefs make Courtney’s skeptikism (I prefer it with two k’s) akin to Vaughan’s Type 2 skeptik:

      http://judithcurry.com/2010/12/05/confidence-in-radiative-transfer-models/#comment-20415

      Here is the relevant paragraph:

      > But students and others who willfully feign ignorance, or insist on contradicting you with data you know from years of experience is completely wrong, count for me as sceptics. They are obstructing science, and then scientists start to act more like the police when dealing with someone who’s obstructing justice. The more egregious the offence, the meaner a scientist can get in defending what that scientist perceives to be the truth. Not all scientists perhaps, but certainly me, I can get quite upset about that sort of behaviour.

      Contrary to what arandomengineer might think, I believe that it would be very dangerous to discount Courtney’s claims and disbeliefs based on political points of view.

      • Richard S Courtney

        So, Willard can find no fault in what I said but spouts irrelevance concerning e.g. Wallace and Grommit and makes points about “political points of view”.

        No surprise there then.

        Richard

      • Courtney asserts:

        > So, Willard can find no fault in what I said but spouts irrelevance concerning e.g. Wallace and Grommit and makes points about “political points of view”.

        1. With respect, I think Courtney misses the point of the Wallace & Gromit example. This example is relevant to show that Courtney is committing a fallacy when he says something that implies that he’s right unless someone takes the time to show he’s wrong. Replying to him has nothing to do with him being right or wrong. At a party, a person might think that opinions like the ones expressed by G&T can simply be ignored:

        > Just ask their opinion of those papers. If their opinion agrees with yours then you’ll be able to spend a pleasant evening together discussing the subject constructively. If it doesn’t then further conversation will be a pointless waste of time for both of you.

        http://judithcurry.com/2010/12/05/confidence-in-radiative-transfer-models/#comment-22016

        If Vaughan Pratt ever stops to talk to Richard S. Courtney, I surmise it does not imply that Richard S. Courtney is right.

        2. With respect, Courtney challenges me to answer for Vaughan Pratt, in response to my accurate statement that said:

        > Vaughan Pratt will explain all by himself why he believes Courtney’s post is complete gibberish. He certainly does not need the Sourcewatch page for that.

        This challenge does seem to presuppose that I need to take it up, i.e. that what Vaughan Pratt considers balderdash is somehow related to his Sourcewatch page, where the “consistency” of his viewpoints is well established, whereas my only contribution in this thread was to answer a question Vaughan Pratt was asking, viz.:

        > Why is Judy’s blog getting this sort of scientific balderdash? It’s complete gibberish.

        Source: http://judithcurry.com/2010/12/05/confidence-in-radiative-transfer-models/#comment-21228

        3. Finally, and again with all respect, Courtney should recall what evidence and arguments he’s supposed to have bolstered for all the conclusions he has proffered so far in the thread. Or, to make it sound a bit more like Courtney himself:

        > As I said, a ~30% increase to radiative forcing from the Sun (over the 2.5 billion years since the Earth has had an oxygen-rich atmosphere) has had no discernible effect. The Earth has had liquid water on its surface throughout that time, but if radiative forcing had a direct effect on temperature the oceans would have boiled to steam long ago.

        What evidence? What arguments?

        w

  107. Vaughan Pratt 12/14/10 11:45pm

    Re: “‘the science is settled'”

    >>[I]t’s the generic statement skeptics think they heard in response to their questioning of the science.

    Believers classify nonbelievers as skeptics and deniers. Yet skepticism remains a virtue among scientists.

    On 12/7/09, Dr. Rajendra Pachauri, IPCC Chairman, said

    >>the Fourth Assessment Report (AR4) of the IPCC. This report was completed a few weeks before COP 13 held in Bali in December, 2007, and undoubtedly had a profound impact on the deliberations there. Since then the global community has had adequate opportunity to further study, debate and discuss the findings of the AR4 and determine actions that are required to be taken globally. This conference must, therefore, now lead to actions for implementation by “all parties, taking into account their common but differentiated responsibilities”.

    >>One of the most significant findings of the AR4 was conveyed by two simple but profound statements: “Warming of the climate system is unequivocal as is now evident from observations of increases in global average air and ocean temperatures, widespread melting of snow and ice and rising global sea level”; and “most of the observed increase in temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic GHG concentrations”.

    Not just scientists, but the global community has finished their reading of the Final Report, and AGW remains unequivocal (“having only one possible meaning or interpretation”, “absolute; unqualified; not subject to conditions or exceptions”). And this is based on the “observed increase in anthropogenic GHG concentrations”.

    For openers, that observation is a failed conjecture, one based on false, but settled, science!

    Pachauri appears not to have used the word “settled”, but his remarks are final. By the way, in IPCC parlance, “very likely” translates to a probability greater than 90%. That could be interpreted as a quantification from the settled science, or instead as science that is still about 10% unsettled. We’ll never know, because probability as IPCC uses it isn’t defined; it is comfort in the minds of their investigators.

    Since then, on 2/23/10, U.S. EPA Administrator Lisa Jackson testified before Congress that

    >>The science behind climate change is settled, and human activity is responsible for global warming. … That conclusion is not a partisan one.

    http://www.nytimes.com/gwire/2010/02/23/23greenwire-epa-chief-goes-toe-to-toe-with-senate-gop-over-72892.html

    She is now regulating CO2 as a pollutant, as ordered by the US Supreme Court in Massachusetts v. EPA. So non-scientists, non-skeptics are hearing the same thing.

    Settled is not defined, but the science is unequivocal: anthropogenic CO2, though not natural CO2, accumulates in the atmosphere to pose a danger to the World.

  108. Bryan 12/12/10 4:30 am

    Challenge to A Lacis:

    >>Yet you cannot give one example of an error in the paper. You fool no one but yourself.

    On p. 11, G&T, v. 4, 1/6/09, say,

    >>In all past IPCC reports and other such scientific summaries the following point evocated in Ref. [24], p. 5, is central to the discussion:

    >>>>One of the most important factors is the greenhouse effect; a simplified explanation of which is as follows: …

    G&T’s reference 24 is J.T. Houghton et al., Scientific Assessment of Climate Change – The Policymakers’ Summary of the Report of Working Group I of the Intergovernmental Panel of Climate Change (WHO, IPCC, UNEP, 1990)

    That is 17 years out of date relative to “all past IPCC reports” in 2009.

    In the latest Assessment Report, IPCC says,

    >>The reason the Earth’s surface is this warm is the presence of greenhouse gases, which act as a partial blanket for the longwave radiation coming from the surface. This blanketing is known as the natural greenhouse effect. The most important greenhouse gases are water vapour and carbon dioxide. The two most abundant constituents of the atmosphere – nitrogen and oxygen – have no such effect. 4AR, FAQ 1.1, p. 97.

    I would model a blanket as a fixed thermal impedance to heat, and that impedance would cover conduction, convection, and radiation, as appropriate. IPCC prefers to dice the blanket into a number of layers that absorb radiation to warm and re-radiate up and down, presumably at a much lower temperature and intensity, but those considerations aren’t clear. Writing in 2009 and arguing against all that IPCC has written, G&T fail to discuss the greenhouse effect as a blanket.

    Discussions on this thread have argued for one model vs. another: boundary value (Lacis) vs. initial conditions (Milanovic), Beer’s Law vs. logarithmic dependence, radiative forcing vs. radiative transfer vs. heat flow. It might have included simulation vs. emulation vs. parameterization, and Henry’s Law vs. chemical equilibrium, and more. Science has no preference for any of these. There is no right or wrong answer here.

    Science leaves a modeler free to model the real world in any manner whatsoever, violating laws, axioms, and scientific principles, even the conventional wisdom, or not. His model may be invalid, of course, when shown to be inconsistent, or when it fails a critical validation. Regardless, the ultimate test for any model is whether it makes non-trivial predictions (hypotheses), validated by measurements (theories).

    • Jeff

      I agree with you that the atmosphere acts to insulate the Earth’s surface.
      All three methods of heat transfer are involved plus phase change for water.
      I agree it would have been better to have used the most up to date IPCC report.
      However there is not a big shift in thinking between the reports.
      The latest IPCC report still exaggerates the radiative effect and minimises
      the bulk of the atmosphere.
      …”The two most abundant constituents of the atmosphere – nitrogen and oxygen – have no such effect. 4AR, FAQ 1.1, p. 97.”……
      I would think the thermal capacity of the atmosphere, and the convective and phase-change consequences, completely outweigh the radiative effects of CO2.
      They use the experiment of R.W. Wood and values from physical tables to show that the radiative effects of CO2 are almost negligible at room temperatures.
      G&T’s aim in the paper seems to me to be a simple one.
      That is to falsify specifically the CO2 greenhouse effect.
      They do not offer any climate model of their own.

  109. All three methods of heat transfer are involved plus phase change for water

    How is conduction involved? Double glazing is strongly recommended for windows on account of the excellent insulating quality of air, which has a thermal conductivity of 0.024 W/mK. This means that a window with an air gap of one meter will experience a flux density of 0.024 W/m² per degree of temperature differential across the gap. Typical air gaps are around 6 mm, so 4 W/m²K. Hence a temperature drop of 10 °C would induce a flow of power density 40 W/m² across the air gap. For an air gap of 6 km that becomes 40 μW/m², a millionth of that, or 0.4 mW/m² when the temperature drop is 100 °C. Evaporative convection at 80 W/m² is some 200,000 times this.
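
    A minimal sketch of that arithmetic (nothing more than Fourier’s law with the conductivity quoted above); it reproduces the 40 W/m², 40 μW/m², 0.4 mW/m² and ~200,000× figures:

        # Conductive flux density q = k * dT / gap (Fourier's law, steady state).
        k = 0.024                          # W/(m K), thermal conductivity of air as quoted above
        for gap_m, dT in [(0.006, 10), (6000.0, 10), (6000.0, 100)]:
            q = k * dT / gap_m             # W/m^2
            print(gap_m, dT, q)            # 40 W/m^2, 4e-5 W/m^2 (40 uW/m^2), 4e-4 W/m^2 (0.4 mW/m^2)
        print(80 / (0.024 * 100 / 6000))   # evaporative convection / conduction over 6 km ~ 200,000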

    • How is conduction involved?

      At the interface between the atmosphere and the surface, via what are referred to as thermals, which account for 17 W/m^2 in the cartoon here.

      Please do not count this as an endorsement of this work.

    • David L. Hagen

      Vaughan
      Re: “How is conduction involved?”
      Judith stated:

      I know there are other things involved, too. Such as conduction (apparently not a big factor?), convection, reflection and refraction, and so on.

      She noted “not a big factor?”

      While it is not “a big factor”, if you are going to challenge Judith’s statement, you have to show conduction through the atmosphere is NOT involved, not that it is very small.

      Note that the official “lapse rate” from the International Civil Aviation Organization (ICAO) definition of the international standard atmosphere (ISA) gives a temperature lapse rate of 6.49 K(°C)/1,000 m. With the dry air heat conductivity of 0.0257 W/m K (20 deg C), doesn’t the air lapse rate nominally give a conductive heat flux of 0.000169 W/m2 through the atmosphere?
      Not large, yes, but not zero.
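
      A short sketch of that estimate, treating the quoted conductivity and the ICAO lapse rate as the only inputs:

          # Fourier's law with the ISA lapse rate as the vertical temperature gradient.
          k = 0.0257          # W/(m K), dry air at 20 C (value quoted above)
          lapse = 6.49e-3     # K/m, ICAO standard-atmosphere lapse rate
          q = k * lapse       # conductive flux density, W/m^2
          print(q)            # ~1.67e-4 W/m^2: small, but not zero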

      • While it is not “a big factor”, if you are going to challenge Judith’s statement, you have to show conduction through the atmosphere is NOT involved, not that it is very small.

        If I’d required you to show something this might be a fair request. I’m not asking you to show anything, so don’t ask me to.

        With Miskolczi it’s a different story: if he asks me to show something I will. He’s earned it, and then some, by heroically calculating the kinetic energy of the atmosphere from 2000 layers of it, which obligates me to carry out the two multiplications that show his calculations are off by a factor of 5.

        I view G&T as closer to you than Miskolczi in that regard.

      • With the dry air heat conductivity of 0.0257 W/m K (20 deg C), doesn’t the air lapse rate nominally give a conductivity 0.000169 W/m2 through the atmosphere? Not large, yes, but not zero.

        My apologies, I replied to your first half before I got to your second half. Yes, your calculations look good there. In the post you’re replying to I’d given 0.0004 W/m2 as the flux density through the first six km, which seems quite consistent with your flux density of 0.000169 W/m2 for the whole atmosphere.

        Certainly not zero, but as a fraction of the 80 W/m2 that convection removes from the Earth’s surface, it is 2 parts per million. The uncertainty in the larger contributors so completely dwarfs this as to make it zero for all practical purposes.

        I can tell you’re a theorist at heart. ;)

    • Vaughan

      Ok, I’ll bite. If conduction isn’t involved and they are not greenhouse gases, how is heat being transferred to the nitrogen and oxygen in the atmosphere?

      • Can’t be conduction or there’d be an associated thermal conductivity k.

        People used to think there were just three states of matter, solid, liquid, and gas. But then they decided that plasmas weren’t really gases and declared them a fourth state of matter.

        Maybe this should be considered a fourth kind of heat transport? (I’m just an amateur and don’t get to vote on these sorts of things.)

      • Vaughan:

        I’m just an amateur

        It shows, believe it or not.

        “Always” is just as dangerous a word to write in a physics text as “never.”

        If your little thought experiment could hold true then another similar situation would also hold true. The burn temperature of a propane/air torch, ~1300C, is not hot enough to melt platinum (MP 1769C); do you think focusing two torches with the same burn temperature will be able to melt it?

        The answer to that is no.

        Perhaps you are thinking of another similar case, e.g. starting a fire with a magnifying glass. There again the effective radiation temperature of the sun at earth’s orbit is ~393C and there will be losses as the light passes through the atmosphere. The temperature is so low because the heat has dissipated isotropically from the sun. The magnifying glass does not amplify the light, it concentrates it, IOW reduces the entropy and increases the thermal potential, in short makes the effective radiation temperature hotter. In fact hot enough to ignite paper. By confining all the heat in a narrow beam you are in fact making it hotter and the heat flow is still from hot to cold. Your thought experiment fails.

        Many people smarter than you or I have been working many years to find a loophole in the second law without success. I don’t think that climate scientists are going to manage it any time soon either.

        When it comes to direction of heat flow always is the appropriate word.

        Let’s let Sir Arthur Eddington have the last word:

        The law that entropy always increases holds, I think, the supreme position among the laws of Nature. If someone points out to you that your pet theory of the universe is in disagreement with Maxwell’s equations — then so much the worse for Maxwell’s equations. If it is found to be contradicted by observation — well, these experimentalists do bungle things sometimes. But if your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation.

      • The burn temperature of propane/air torch of ~1300C is not hot enough to melt platinum (MP 1769C) do you think focusing two torches with the same burn temperature will be able to melt it? The answer to that is no.

        That doesn’t refute my scenario because no matter how many propane torches you assemble you will not be able to generate either narrow-band radiant heat or a beam sufficiently focused as to be able to traverse a wide gap without losing radiation energy.

        Your magnifying glass example suffers from the same limitation.

        You have however proved one thing: that God is in the details when constructing counterexamples to whether a cold object can heat a hot one. You’re missing those details in your attempted refutations.

        Let’s let Sir Arthur Eddington have the last word:
        The law that entropy always increases holds,

        That’s like proving the impossibility of relativity and quantum mechanics by quoting 19th century physicists.

        Those playing the derivatives market make millions of dollars exploiting loopholes in “the law that entropy always increases.” See for example Michael Stutzer’s paper Simple Entropic Derivation of a Generalized Black-Scholes Option Pricing Model, published in the April 2000 issue of the journal Entropy, whose abstract reads,

        A straightforward derivation of the celebrated Black-Scholes Option Pricing model is obtained by solution of a simple constrained minimization of relative entropy. The derivation leads to a natural generalization of it, which is consistent with some evidence from stock index option markets.

        Notice the key word “minimization”?

        The bigger picture is that the emergence of life on Earth is itself a violation of the law that entropy always increases. The possibility for such violations arises on account of thermal fluctuations: local buildup of heat can create the conditions for a violation, which fundamentally depends on the available energy.

        You guys are still living in the dark ages of science. Get with the times. This paper is more than a decade old already.

        But I’m glad it’s clear I’m just an amateur. I wouldn’t want to leave people with any misimpressions on that score. ;)

      • What is conduction of heat?

        It is the spreading of thermal energy through molecular motion and collisions in gases and liquids, and through molecular or atomic vibrations in solids and liquids, as well as by electrons in (electric) conductors. Conduction leads to transfer of energy in accordance with the diffusion equation.

        Conduction as defined above is present everywhere in the atmosphere where there are temperature gradients. In the free atmosphere convection by collective motion of molecules is usually much more important. Radiation is also a more efficient way of transferring heat in the real atmosphere.

        Conduction is, however, important in a thin boundary layer very close to solid earth and water and even more important on the solid or liquid side of the boundary.

      • Conduction is, however, important in a thin boundary layer very close to solid earth and water and even more important on the solid or liquid side of the boundary.

        That’s a fair point that I can’t argue with. The latter part is obvious, but the former is how the atmosphere can be heated conductively by the ground. My understanding (which as usual may be wrong) is that this is the primary mechanism by which the temperature of the bottom kilometer of the troposphere fluctuates so widely between day and night, via a combination of conduction and convection. Higher up the troposphere is remarkably stable during any 24-hour period.

      • Radiation is also a more efficient way of transferring heat in real atmosphere.

        My point all along. To put my recent post concerning diurnal variation of the bottom km of the atmosphere in perspective, that variation, while highly relevant to those on the ground, has negligible impact on global warming because the rest of the atmosphere averages it out.

        Likewise millisecond smoothing of atmospheric fluctuations is irrelevant to global warming.

        Bottom line: conduction is irrelevant to global warming.

      • Vaughan

        … no matter how many propane torches you assemble you will not be able to generate either narrow-band radiant heat or a beam sufficiently focused as to be able to traverse a wide gap without losing radiation energy.

        Yes, that is partly right; the temperature does not get hotter by adding torches.

        Your magnifying glass example suffers from the same limitation.

        What? You’ve never started a fire using a magnifying glass? Fact is, concentrating light does make the beam hotter, so flow is still from hot to cold and work has been done on it. You have effectively described a heat pump, the work being done at quantum levels.

        You’re missing those details in your attempted refutations.

        No details missed; the only way a cold object can heat a hot one is by use of a heat pump: a detail that you left out, not I.

        What on earth makes you think the financial market is a thermodynamic system? Another red herring.

        Local violations of the rule that entropy always increases in systems left to themselves, like the heat pump, have a cost of increased entropy elsewhere in the system. For example the power station where the electricity is generated that powers your air conditioner, and the heat dissipated in the transmission lines.

        Sorry

      • What? You’ve never started a fire using a magnifying glass? Fact is, concentrating light does make the beam hotter, so flow is still from hot to cold and work has been done on it. You have effectively described a heat pump, the work being done at quantum levels.

        The reason we’re not agreeing here is because we’re using different definitions of temperature. You view sunlight as decreasing in temperature as the inverse square of distance from the Sun, whereas I view the temperature of sunlight as being independent of distance.

        Your definition is based on the temperature to which the Sun heats a small distant black object, which we can regard as a thermometer, subject to certain rules such as no magnifying glass and how well insulated the side facing the sun is from the back side (giving rise to the √2 difference Pekka mentioned between a perfect insulator and a perfect conductor, which I failed to spot for the reason Pekka guessed at: I’d only previously seen the value for the conducting case).

        To see that size of thermometer matters, enlarge the disk until it becomes a spherical shell completely enclosing the Sun, and assume the case of a thermal insulator. At that size the inside of the shell will reach the temperature of the Sun itself, without the aid of a magnifying glass, even though every point of it is 1 AU from the Sun. (Exercise: determine the temperature attained by a hemispherical shell, mutatis mutandis.)

        These considerations, insulation and size, along with albedo (a black thermometer of your design registers hotter than a white one), show that your definition of temperature is not robust, as it depends on unreasonably many details of your thermometer.

        My definition is based on the wavelengths involved, which don’t change with distance. The Sun radiates with a peak wavelength at 502 nm, which if narrow-band would make the Sun look green, but because the spectral distribution of solar radiation is a fair approximation to Planck’s law, 90% of solar energy spans two octaves of the EM spectrum, from 380 nm to 1623 nm, making the Sun look white at TOA and yellow at the surface, some of the blue having been removed by Rayleigh scattering. This is a more useful definition because it tells you the temperature to which sunlight can heat an object, namely close to 5700 K in principle. For sufficiently large objects like the shell above this happens directly, for smaller objects like Earth or Neptune some focusing helps.

        The definition also allows assigning a temperature to any radiation, whether black body or narrow band (though some would argue that temperature is undefined for the latter), by defining it as 2898/λ K where λ is the peak wavelength of the radiation in micrometers (microns for those of us raised on Elvis and the Beatles). By this definition an AM station broadcasting at 540 kHz radiates at a frigid 5.37 μK (microKelvin) while a Link-16 transponder broadcasting in the NATO L band at say 50 GHz is beaming a somewhat warmer 0.5 K. Earth’s atmosphere is completely opaque to radiation below 100 μK and above 10,000 K, as well as everything from 1 K to 100 K, and partially opaque in the 0.1-1.0 K and 400-2000 K bands. Convenient octave-wide holes in Earth’s atmosphere centered on 5700 K and 300 K make the formula for effective temperature of a planet a moderately good approximation for Earth’s surface temperature since it offers only mild resistance to inbound insolation and OLR at the respective temperatures of Sun and Earth. Venus lacks the hole at 300 K, which is how its temperature got to 740 K, more than double its effective temperature.
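
        For what it’s worth, a small sketch of that wavelength-based temperature (Wien’s displacement law, T ≈ 2898/λ with λ in micrometres); small differences from the figures quoted above are just rounding:

            # Wien's displacement law: T [K] = 2898 / lambda [micrometres].
            c = 2.998e8                       # m/s, speed of light
            def radiation_T(freq_hz):
                lam_um = c / freq_hz * 1e6    # wavelength in micrometres
                return 2898.0 / lam_um
            print(radiation_T(c / 502e-9))    # solar peak at 502 nm -> ~5770 K
            print(radiation_T(50e9))          # 50 GHz               -> ~0.48 K
            print(radiation_T(540e3))         # 540 kHz AM station   -> ~5.2e-6 K (microkelvins)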

        With strategically located parasols and reflectors, every planet in the Solar System can in principle be brought to the temperature of Earth, as a step towards colonizing the whole Solar System. Even Earth’s temperature can be restored to some earlier preferred value, that of the Medieval Warming Period for example, if we could just stop fighting over its value for a minute.

        A heat pump uses work to bring heat from a cool object to a hot one. A magnifying glass is arguably a heat pump by your definition (though there is nothing quantum-mechanical about its operation, which involves only classical optics), but it is not a heat pump by my definition because the source of the heat is radiation at 5780 K, whence the magnifying glass is bringing heat from a hot object to a cool one.

        What on earth makes you think the financial market is a thermodynamic system?

        Yes, it’s certainly a strange idea. Nonetheless, except for the replacement of time t by −t in the equations, Black-Scholes obeys the same statistical laws as a thermodynamic system. It is essentially a time-reversed diffusion equation, see the paper I cited before, or for a more compressed account see Hannes Tilgner’s short overview of the sense in which Black-Scholes is time-reversed physics. Extremely interesting stuff.

      • To see that size of thermometer matters, enlarge the disk until it becomes a spherical shell completely enclosing the Sun, and assume the case of a thermal insulator. At that size the inside of the shell will reach the temperature of the Sun itself, without the aid of a magnifying glass, even though every point of it is 1 AU from the Sun.

        In the case of a perfectly insulated spherical shell, the internal surface temperature of the shell and the surface temperature of the sun increase without limit, or to the limit of the material of construction of the shell. There’s nowhere for all the solar fusion energy to go. Since the sun has a large heat capacity, it would be a slow process.

      • Excellent point. The thermometer should not be left in place but should be removed after taking a reading.

      • Vaughan: I’m not sure how I managed to miss this earlier, but:

        Your definition is based on the temperature to which the Sun heats a small distant black object, which we can regard as a thermometer

        Precisely; the thermometer gives us the temperature of the sun at the level it is measuring, when it reaches equilibrium of course. The point of thermometry (as opposed to optical pyrometry) is that we measure a temperature physically at the location of its sensor. Pyrometry measures temperature remote from the sensor, for example the temperature of the photosphere, which you talk about here:

        My definition is based on the wavelengths involved, which don’t change with distance. … This is a more useful definition because it tells you the temperature to which sunlight can heat an object, namely close to 5700 K in principle.

        Thankfully not at this distance or our satellites would not survive insertion into orbit.

      • The point of thermometry (as opposed to optical pyrometry) is that we measure a temperature physically at the location of its sensor.

        Yes, but it’s a strange sort of thermometer when it gets wildly different readings depending on how the incoming radiation is collected. When you try to collect more of it for a better reading with say a dish, the reading goes way up. You can certainly specify details such as the precise manner in which the radiation is collected, but I still think the wavelength-based definition is more robust.

        Me: it tells you the temperature to which sunlight can heat an object, namely close to 5700 K in principle.

        You: Thankfully not at this distance or our satellites would not survive insertion into orbit.

        Indeed, but notice the operative words “to” and “can”. Assuming and using perfect optics one can heat an object to something close to 5700 K.

        Even a black satellite does not reach a well-defined temperature without specifying whether it is spinning or one side faces the Sun at all times. And it would want to be in a polar orbit whose axis was aimed at the Sun, to avoid being shaded by Earth. This illustrates my point about the arbitrariness of choices made in defining your standard thermometer: its readout isn’t terribly meaningful for objects in orbit because they may vary in those details, giving a wide range of temperatures.

        Anyway we got off on to this tangent because you objected to my use of low temperature radiation to heat an object to a higher temperature than the radiator. I would focus a 100 kW beam radiating at 1 GHz on a 10 cm black plate 10 m away that completely absorbs 1 GHz. I would keep the whole transmitter and reflecting dish air-cooled to within 10 °C of the ambient temperature, and I am sure I could easily heat the object to 20 °C above ambient with this setup.

        If you objected to referring to narrow band radiation as heat then I would even spread the spectrum out to make it black body radiation. Then no one could deny it was radiant heat, in fact it would have a well-defined temperature of a few milliKelvins. (Good thing Lubos Motl’s not around, his ears would prick up at that.) The transmitter and dish themselves would be thousands of times hotter than the actual radiation, being at room temperature + 10, but the irradiated target would be hotter still.

        This is an example of radiant heat from a cool body (of ambient+10 °C) heating a hot body to a temperature of ambient+20 °C.

      • I would focus a 100 kW beam radiating at 1 GHz on a 10 cm black plate 10 m away that completely absorbs 1 GHz. I would keep the whole transmitter and reflecting dish air-cooled to within 10 °C of the ambient temperature, and I am sure I could easily heat the object to 20 °C above ambient with this setup.

        But the radiation, even if generated by a non-thermal method, would have an effective temperature calculated by solving the Planck equation for temperature using the frequency and the power divided by the bandwidth of the radiation.

        If you objected to referring to narrow band radiation as heat then I would even spread the spectrum out to make it black body radiation.

        It would still have an effective or brightness temperature and that temperature would be a lot higher than the equivalent black body temperature for a peak frequency of 1 GHz of ~0.01 K. If you could generate it and focus it on a 0.01 m2 object, the effective temperature would be 3644 K.
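
        A guess at the arithmetic behind that 3644 K figure (my reading, not necessarily how it was computed): treat 100 kW spread over 0.01 m² as a black-body flux and invert the Stefan–Boltzmann law:

            # Invert flux = sigma * T^4 for the focused beam treated as a black-body flux.
            sigma = 5.67e-8            # W/(m^2 K^4), Stefan-Boltzmann constant
            flux = 100e3 / 0.01        # 1e7 W/m^2: 100 kW on 0.01 m^2
            T = (flux / sigma) ** 0.25
            print(T)                   # ~3644 K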

      • If you could generate it and focus it on a 0.01 m2 object, the effective temperature would be 3644 K.

        Thanks, DeWitt. What I like about this thread is that there are some reliable people to straighten out us amateurs when we run amok.

        Would it be reasonable to say that the concept of brightness temperature was invented in part to avoid the sorts of violations of the 2nd law of thermodynamics that would arise with a definition of temperature based solely on wavelength while neglecting intensity? If so that would make sense to me.

        A point about Planck’s law that is sometimes overlooked is that for every wavenumber, intensity at that wavenumber increases monotonically with temperature. One is tempted to picture the Planck law distribution curve as sliding towards higher wavenumbers (shorter wavelengths), so that once the peak has increased beyond a given wavenumber ν̃ the intensity at ν̃ decreases from then on. But this only looks at the shape and neglects the fact that the total intensity increases as T⁴. When the latter is taken into account, at any fixed wavenumber the intensity always increases with increasing T no matter how high T gets.

        The other point is that intensity is absolute, not relative. Hence intensity of black body radiation from a source cannot be more than what Planck’s law says it should be, at any wavenumber.

        Putting these two together, if one sees intense radiation at a low frequency, one can deduce it to be the tail end of black body radiation from a very hot source. In such a case we’re at the classical or non-quantum end of the Planck curve dominated by the Rayleigh-Jeans law where intensity is linear in temperature.
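
        A quick numerical check of the monotonicity claim above, evaluating Planck’s law at one fixed wavenumber (1000 cm⁻¹, chosen arbitrarily here) for increasing temperatures:

            # Spectral radiance B_nu(T); at fixed frequency it rises monotonically with T.
            import math
            h, c, k = 6.626e-34, 2.998e8, 1.381e-23   # SI values
            def planck(nu_hz, T):
                return 2 * h * nu_hz**3 / c**2 / math.expm1(h * nu_hz / (k * T))
            nu = c * 1000 * 100                        # 1000 cm^-1 expressed as a frequency in Hz
            for T in (250, 300, 1000, 6000):
                print(T, planck(nu, T))                # values increase with T throughout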

        In my case I’m producing an intensity of 100 kW focused on .01 m² for a power density of 10⁷ W/m². However to deduce the temperature of the radiator whose tail we’re seeing, wouldn’t we also need to know the spectral width of the radiation we’re observing? For a given temperature of radiator, the narrower the band the less power we should expect to see within that band, in proportion to its width, so conversely for a given power we should expect temperature to vary inversely with bandwidth.

        How were you able to infer a temperature of 3644 K without knowing the distribution over the spectrum of my 10⁷ W/m² power density? It would seem to me that the more spiky the distribution, the higher the temperature should be. To get your 3644 K result using the formula for brightness, I got a spectral width of dν = 8.9 × 10²⁴ which makes no sense at all. If I take dν = 100 MHz or 10% of the 1 GHz frequency I get a brightness temperature of T = .32 × 10²¹ K, which I could well believe in light of how brightness temperature is customarily explained.

        Clearly I’m not getting this brightness temperature stuff. Help, please!

      • Local violations of the rule that entropy always increases in systems left to themselves, like the heat pump, have a cost of increased entropy elsewhere in the system. For example the power station where the electricity is generated that powers your air conditioner, and the heat dissipated in the transmission lines.

        Quite right, that’s what I was referring to when I wrote “The possibility for such violations arises on account of thermal fluctuations: local buildup of heat can create the conditions for a violation, which fundamentally depends on the available energy.” Note the word “local.”

        Considering the universe as a whole, the proposition that its entropy is increasing is not objectionable. But that tells us nothing about entropy in Earth’s atmosphere, or the stock market, etc., which are intrinsically local. For the entropy of the whole universe to be decreasing there must be local regions where it is decreasing, presumably a majority of them. However one cannot argue from that to the conclusion that entropy is decreasing in all regions of the universe. The emergence of life on Earth is a counterexample to that.

      • Oops, replace the three occurrences of “decreasing” in the immediately preceding paragraph by “increasing.” (I think of entropy as neginformation and forget the sign occasionally.)

      • ” But that tells us nothing about entropy in Earth’s atmosphere”

        No but on average that hasn’t changed much the past million years or so. In fact when I did the calculations a few years ago I concluded that as far as the entropy in this region of space is concerned the earth may as well not be here. Apart from the relatively tiny contribution due to waste heat from our machines using fossil or nuclear fuels. Couldn’t measure that in any case.

      • No but on average that hasn’t changed much the past million years or so.

        Sounds plausible if you don’t include the past century. I don’t have even a ballpark estimate of the change up or down for this century.

        Would you say the entropy of the atmosphere was the same at the (thermal) top and bottom of a Quaternary glacial cycle?

        Would you say the products of civilization exhibited an increase or a decrease in the entropy of the raw materials used in their construction as extracted from the ground?

      • Sounds plausible if you don’t include the past century.

        What is special about the past century?

        Was it hotter than the Holocene optimum, for instance, or are you referring to the debris that we have blasted into space?

      • There again the effective radiation temperature of the sun at earth’s orbit is ~393C

        That’s really interesting, I had no idea. I know what the effective temperature of the Sun means, but I have no idea what “at earth’s orbit” means. Do you have a reference for that concept?

        I’ve never encountered the temperature 393 °C in that context. Nor 393 K, nor 393 °F. Do you have a source for that?

        The m.p. of zinc is 419 °C. Are you claiming I can’t melt zinc with a magnifying glass focusing the sun’s rays? That would be fascinating.

      • The temperature of a black surface facing sun in space at earth orbit is 393K if the back of the surface is perfectly isolated.

        When you take into account the fact that the surface of a ball is 4 times that of a one-sided disk of the same diameter and the albedo of the earth you get the more familiar 255K:

        (4/0.7)^(1/4) * 393K = 254K.

      • The temperature of a black surface facing sun in space at earth orbit is 393K if the back of the surface is perfectly isolated.

        So can a magnifying glass increase that? And if so up to what temperature? Over 5000 K?

      • With a magnifying glass or a parabolic mirror you can indeed create the situation where the surface “sees” only the sun. It receives radiation only from the direction of the sun and radiates back only to the direction of sun. In free space this would give the surface the same temperature as the effective temperature of sun’s surface.

      • In free space this would give the surface the same temperature as the effective temperature of sun’s surface.

        Indeed. The only question is whether you can go above it. I offered Jan Pompe a proof that you can’t.

      • Edmund Scientific used to carry a 3′ × 3′ Fresnel lens. See this link for using it to melt aluminum. Unfortunately, ES doesn’t make them anymore. Seems you could build a heck of a solar heater or use it to drive an ammonia refrigeration unit.

        http://www-personal.umich.edu/~bclee/lens.html

      • Pekka

        When you take into account the fact that the surface of a ball is 4 times that of a one-sided disk of the same diameter and the albedo of the earth you get the more familiar 255K:

        (4/0.7)^(1/4) * 393K = 254K.

        I know you are not an amateur but posts like this really do make you look like one.

        (4/0.7)^(1/4) * 393K = 608K

        With a solar constant of 1376 W/m^2 the *equilibrium* temperature of a spherical object in that field is given by
        Te = (1367/(4*5.67E-8))^.25 = 279K and since it is the equilibrium temperature (i.e. the temperature the object will attain in a field of that strength once it stops changing) absorptivity and emissivity must be equal so it makes no difference what colour it is. In fact if we apply it to that familiar temperature where it is thought the earth has an absorptivity of .7 and an emissivity of 1 we get:

        T = 255 /.7^.25=279K

        Interesting, isn’t it, that the temperature it would be without GHGs is the same as the effective BB temperature?
        http://nssdc.gsfc.nasa.gov/planetary/factsheet/earthfact.html
        Note that they use a slightly lower solar constant and a slightly higher albedo
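        The arithmetic, for anyone who wants to check it (a sketch using the same rounded constants as above, so the results only match to that rounding):

```python
# Checking the numbers above with the same rounded constants used in the comment.
sigma = 5.67e-8                       # Stefan-Boltzmann constant, W m^-2 K^-4
print((4 / 0.7) ** 0.25 * 393)        # ~608 K, the value of the expression as originally written
print((1367 / (4 * sigma)) ** 0.25)   # ~279 K, equilibrium temperature of a sphere
print(255 / 0.7 ** 0.25)              # ~279 K, 255 K adjusted for emissivity 0.7
```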

      • Printing error. It should have been

        (4/0.7)^(-1/4) * 393K = 254K
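        A one-line numerical check of the corrected expression, with the same rounded inputs:

```python
# One-line check of the corrected expression (same rounded inputs as above).
print((4 / 0.7) ** (-1 / 4) * 393)    # ~254 K
```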

      • Vaughan:

        There again the effective radiation temperature of the sun at earth’s orbit is ~393C

        That’s really interesting, I had no idea.

        That should read ~393K, which is not quite accurate as I didn’t bother to calculate it but was going by memory anyway, for a solar constant (Q) of ~1350, but an average of 1376 W/m^2 is probably fairer. The effective radiation temperature means an equivalent temperature for a blackbody to radiate at the same rate as that solar constant. You don’t need a reference; just calculate it.

        T = (Q/s)^.25 = (1376/5.67E-8)^.25 = 394.7K

        Yes, an ordinary flat collector can boil water in the tropics at the right time of day/year.

        You can melt platinum with a decent concentrator.

        I have seen claims (that I’m unsure about) of temperatures up to 38,000K for that one.

      • this landed in spam for some reason

      • Thanks Judith

      • I have seen claims (that I’m unsure about) of temperatures up to 38,000K for that one.

        With only insolation and perfect optics this is provably impossible. The question for 38,000 K or anything like it would be: which of those two conditions is being dropped?

        Or maybe both. But provably not neither.

      • Vaughan

        With only insolation and perfect optics this is provably impossible.

        I agree it’s highly unlikely but feel free to prove it.

      • I agree it’s highly unlikely but feel free to prove it.

        No problemo.

        Perfect optics means that when the irradiated object looks in any direction, the sum of all the radiation sources from that direction must be weighted by coefficients summing to at most one. (This includes sources that have been combined by half-reflecting mirrors. If you have a different definition of “perfect optics”, out with it.)

        Now if all the radiant heat collected from the hottest object in our neighbourhood, namely the Sun, is focused on one point, then from the perspective of that point and with the perfect-optics assumption, from every direction it can see only a surface as hot as the Sun.

        So the hottest any point can get, when it is irradiated from all directions by clones of the Sun, is the temperature of the Sun itself. In effect the whole 4π steradians of the sky is filled with the Sun.

        But this can only heat that point to the temperature of the Sun. QED

        Were you persuaded? After all, in the area of oral and written communication such as conversation, dialog, rhetoric, etc., a proof is a persuasive perlocutionary speech act, which demonstrates the truth of a proposition. (FWIW I wrote more than 90% of that Wikipedia article on proof.)

      • Persuaded? No you contradict:

        For radiant heat it fails whenever the cooler object is at some distance from the hotter one and is radiating a large amount of sufficiently narrow-band radiant heat focused in a sufficiently narrow beam at the hotter object, which is radiating diffusely as a black body.

        Your definition of perfect optics is rather strange because most would go for no distortion etc.

      • No you contradict:
        “For radiant heat it fails whenever”

        How is that a contradiction? The proposition I claimed to prove specifically restricted itself to insolation. My counterexample that you’re referring to used radiation thousands of times cooler. I don’t see how to adapt my counterexample to the case of focused sunlight used to heat objects to much more than 5700 K. I tried, failed, analyzed the failure, and came up with the above proof that I was not going to be able to adapt my counterexample to insolation.

        Your definition of perfect optics is rather strange because most would go for no distortion etc.

        If that’s what DeWitt meant by “perfect optics” then call what I defined something else, it doesn’t matter what. I was jumping to the conclusion, perhaps erroneously, that he was referring to what I had in mind, that it is hard to see how to superimpose two parallel beams because the source of one will block the other. Half-mirrors don’t help there as they dilute both beams.

        Now if you could superimpose arbitrarily many parallel sunbeams so that all of them illuminated every point of the target, the target would see something brighter than the Sun when looking back along that composite beam. But I don’t see how to do that, so I made it an assumption that any such amplifying superposition was impossible.

        Further problems with the proof?

      • What I meant by perfect optics is no loss, i.e. either perfectly transparent or perfectly reflective with no aberration either geometric or chromatic. I agree that 38,000 K could only be achieved by non-thermal means. One way would be to generate electricity from sunlight with photovoltaic cells and use the electricity to power a plasma generator or laser or something, maybe a tokamak. You could potentially get to millions of degrees with that.

      • The magnifying glass does not amplify the light, it concentrates it, IOW reduces the entropy…

        I don’t think so. For perfect optics, entropy wouldn’t change because optics are reversible. But optics aren’t perfect and entropy will increase.

      • Using the magnifying glass reduces the entropy increase that occurs when the radiation gets absorbed. Thus it reduces entropy compared to the alternative, where the entropy increase is larger.

      • Your interpretation may be correct, but that is a somewhat tortured reading of the original statement. I’ve seen Jan Pompe take others to task for less.

      • DeWitt:

        I don’t think so. For perfect optics, entropy wouldn’t change because optics are reversible.

        So are perfect reverse-cycle air conditioners and Stirling-cycle engines, but such perfect devices do not exist. It’s a local undoing of the ever-increasing entropy associated with the radiation field expanding over distance. I agree, however, that total entropy does not decrease with perfect optics.

        But optics aren’t perfect and entropy will increase.

        Absolutely but we still will have that local reversal.

        I’ve seen Jan Pompe take others to task for less.

        I’ve mellowed a little ;)

        Tortured expressions of the 2nd law are to be expected; it can be a difficult concept to try to get across.

      • Tortured expressions of the 2nd law are to be expected; it can be a difficult concept to try to get across.

        So was every failure of its communication a victim of torture?

      • Jan Pompe:
        You write: “Tortured expressions of the 2nd law are to be expected; it can be a difficult concept to try to get across.”

        You are correct. It is really common that people make unjustified – or tortured – claims about the consequences of the second law.

        The second law and the related Kirchhoff’s law guarantee that the solar radiation cannot heat a body to a higher temperature than that of the sun’s surface, but there are many different ways of using solar radiation to heat bodies to a higher temperature than, say, 393K. It can be achieved by a magnifying glass, but it can also be achieved if emissivity is lower for LW IR than it is for solar SW radiation. This is perhaps not true for simple materials, but it is true for such constructs as typical flat plate collectors, which take advantage of a form of greenhouse effect to reduce the LW emissivity of the construct.

        A colder body cannot heat a hotter one, but it can help the hotter one stay warmer when its role is to reduce heat losses from the hot body. The sun is providing the heat to the earth’s surface, but the colder atmosphere affects the net heat losses and in this way influences the actual temperature of the earth’s surface. At the microscopic level the net heat losses are always a difference between flows of energy outwards and towards the body. This is true for radiation, but it is true also for molecular kinetic energy.

      • Pekka

        if emissivity is lower for LW IR than it is for solar SW radiation.

        So energy can accumulate, increasing temperature and creating a low-entropy state, until when?

        There are people who think the world absorbs as a grey body and emits as a black body, and they see the world as having this effective temperature: “Black-body temperature (K): 254.3”.

        If however we adjust that “black body” temperature for an emissivity equal to its absorptivity, which is 1 - albedo = 1 - 0.306 = 0.694 (see Bond albedo in above link), we get

        254.3/.694^.25 = 278.62K

        for the equilibrium temperature, then,
        (1367.6/(4*5.67E-8))^.25 = 278.66K

        a more precise value for the Stefan-Boltzmann constant would bring the two values closer.

        Do you think it’s a coincidence that the temperature adjusted for emissivity is the same as the equilibrium temperature?

        Then check the other planets.

      • It is not a coincidence. The two values are equal by construction. The temperature 254.3K is actually calculated from these relations. It is nothing else than

        0.69^0.25 * (1367.6/(4*5.67E-8))^0.25

        This temperature is the effective radiation temperature that makes outgoing LW radiation equal in energy to the incoming radiation that is not reflected.

        The temperature 254.3K has no other special significance. The surface is mostly warmer, as is the lower atmosphere. The top of the troposphere is colder. That particular temperature is an effective average as seen from space. Weighting must be done taking into account the Planck spectra and the observed intensities in order to get the correct result.
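        A quick numerical sketch with the fact-sheet values quoted above (S = 1367.6 W/m², Bond albedo 0.306) shows that the 254.3K figure is indeed just this construction:

```python
# Sketch using the NASA fact-sheet values quoted upthread: S = 1367.6 W/m^2,
# Bond albedo 0.306, sigma = 5.6704e-8 W m^-2 K^-4.
sigma = 5.6704e-8
S, albedo = 1367.6, 0.306
T_eq = (S / (4 * sigma)) ** 0.25          # ~278.7 K, equilibrium temperature
T_eff = (1 - albedo) ** 0.25 * T_eq       # ~254.3 K, effective radiating temperature
print(round(T_eq, 2), round(T_eff, 2))
```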

      • The two values are equal by construction

        Of course it can’t be otherwise. 254.2K is what the temperature is assumed to be by someone measuring from space who is unaware that he’s not measuring the temperature of a black body, or more precisely a black cavity. Once such a person is aware that it is really a grey body’s (coloured, actually) temperature that he is measuring, what must he do to correct that?

        Weighting must be done taking into account the Planck spectra and the observed intensities in order to get the correct result.

        Yes, we convolve the computed spectra (in my case using HARTCODE on radiosonde ascent data with HITRAN 2008 data) with Planck’s Law for black bodies and integrate over the spectrum, and your point is?

      • It is known that earth does not appear from space as an uniformly grey body but has a more complicated spectrum. Some wavelengths have the full intensity corresponding to the temperature of earth surface (something like 288K), others the intensity that corresponds to the top of troposphere or about 240K.

        Practically nothing corresponds to 254.3K except the total LW radiation energy.

      • 254.2K is what the temperature is assumed to be by someone measuring from space who is unaware that he’s not measuring the temperature of a black body, or more precisely a black cavity.

        Not to be picky, but wouldn’t it be slightly less on account of the side facing the Sun being hotter than the dark side? If Earth were spinning fast enough to make that difference less than 0.1 °C I would expect the planet to fly apart.

      • Yes, we convolve the computed spectra (in my case using HARTCODE on radiosonde ascent data with HITRAN 2008 data) with Planck’s Law for black bodies and integrate over the spectrum

        Why are you convolving when you could be multiplying pointwise in the frequency domain? Much faster.

      • Vaughan:

        > People used to think there were just three states of matter, solid, liquid, and gas. But then they decided that plasmas weren’t really gases and declared them a fourth state of matter. >

        So are you proposing that the term greenhouse “gases” is a bit of a misnomer in this context and we should be talking about greenhouse “plasmas”?

        > Maybe this should be considered a fourth kind of heat transport? >

        You’re going to need a verb here. Is the heat “plasmatized” into the nitrogen and oxygen?

      • So are you proposing that the term greenhouse “gases” is a bit of a misnomer in this context and we should be talking about greenhouse “plasmas”?

        But they’re clearly gases, not plasmas.

        Is the heat “plasmatized” into the nitrogen and oxygen?

        Wouldn’t that assume we’d found some way of matching solid, liquid, and gas to radiation, conduction, and convection?

        But you’re right about naming, it’s a good question what to call it if and when someone decides to take the idea seriously.

      • Incidentally it’s worth pointing out that almost all of the energy acquired by a GHG molecule when it absorbs a photon goes into its vibration. This is because the energy/momentum ratio for photons is many thousands of times greater than for molecules, and both translation and rotation require momentum. As long as the molecule continues along a free path this energy may repartition itself between vibrational modes but can give nothing to translation (and little to rotation though I hesitate to say nothing given that moment of inertia can vary, though the effect may be so small as to be forbidden quantum mechanically).

        A corollary is that until a GHG molecule collides with another molecule, its manner of accumulating thermal energy by irradiation is as for a solid. This is because atoms in solids have no translational or rotational energy, relative to a frame fixed in the solid. Thus both solids and uncollided GHG molecules have in common that they can only warm vibrationally.

        (An unrelated corollary is that the ideal gas laws can break down very badly when the ratio of irradiation to pressure gets high enough, due to potentially unlimited imbalance between the degrees of freedom, but that’s a digression here.)

        The mean free time between collisions of air molecules of any kind at sea level is around 70 nm/450 m s⁻¹ ≈ 0.16 ns. It then becomes an interesting question how many photons might be exchanged between GHG molecules (of the same species since energy resonances between different species are weaker than between same species) between consecutive collisions.
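        Spelling that arithmetic out (a sketch; the 70 nm mean free path and ~450 m/s mean speed are round sea-level figures, not measurements of mine):

```python
# Spelled-out mean-free-time arithmetic; 70 nm and 450 m/s are round sea-level figures.
mfp = 70e-9           # mean free path of an air molecule at sea level, m
v_mean = 450.0        # mean molecular speed, m/s
tau = mfp / v_mean    # mean free time between collisions
print(tau)            # ~1.6e-10 s, i.e. about 0.16 ns
print(1 / tau)        # ~6.4e9 collisions per second per molecule
```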

        If much less than one on average, then this commonality with solids is inconsequential. If much more than one then it becomes very interesting, since influence on vibration in GHG molecules would then dominate influence on translation and rotation.

        I would guess that, at least for CO2 molecules, this would be principally via photon exchange between molecules a few centimeters apart. This distance is what I computed as the mean free path of photons emitted by CO2 molecules in air at sea level assuming a mixing ratio of 0.4‰. Photons emitted from Earth’s surface can have mean free paths (averaged over only those not escaping altogether from Earth otherwise the average will be infinite) on the order of kilometers depending on their wavelength.

        It seems to me that it should be possible to estimate the power density of exchange of photons between ¹²C¹⁶O₂ molecules (the principal species) in air at STP to within less than an order of magnitude, based simply on the HITRAN line strengths, the mixing ratio of CO2, and the CO2 forcing in W/m² (to estimate the total inbound flux at all wavelengths before it is reradiated at CO2 emission wavelengths).

        Knowing that each molecule collides roughly 6 to 7 times a nanosecond, and inferring the power density for the resulting phonon exchanges, one can then estimate the ratio of photon to phonon power in atmospheric thermodynamics.

        A similar question can be asked for solids. The conventional wisdom there seems to be that phonons totally dominate photons (by both number and power density) but I’ve never been able to find any discussion of why this should be or what the ratio is.

        I guess this would be the right thread on Climate Etc. to ask what’s been done to date on this and related questions. Pekka, your thoughts?

    • I appreciate the many responses to my question How is conduction involved? (For the record, the bit about “conduction (apparently not a big factor?)” was not Judith’s as reported by David Hagen but Michael Larkin’s.)

      Having mulled over the various responses, I’m now willing to grant a little more relevance to conduction. But only a little.

      The two main roles I see conduction playing in the atmosphere are

      1. It collaborates with convection to make the bottom kilometer or so of the atmosphere fluctuate diurnally.

      2. It filters out submillisecond transients in the temperature of the atmosphere.

      The basic mechanism for the first is that of the ground warming the bottom of the atmosphere, on the order of a few millimeters if that. (Bear in mind that a molecule heated via collision with the insolated ground will travel only a tiny fraction of a millimeter above the ground before hitting its first air molecule.) This warmed air is quickly scooped up by convection, i.e. breeze, and mixed with the layers above. This continues throughout the day, with the bottom kilometer or two of the atmosphere warming up.

      At sunset the process reverses. The ground cools, a millimeter or so of air is cooled, is scooped up by the breeze, and is mixed with the layers above.

      Although radiation plays the dominant role in global warming, its day-to-day role is negligible. Even though the atmosphere’s thermal capacity is tiny compared to that of land and sea, the absorption of thermal (300 K) radiation by the atmosphere’s greenhouse gases is even smaller. Fluctuations of 30 °C or more in the surface temperature barely have time to register in the bulk of the troposphere between sunrise and sunset, above the bottom kilometer or so. We can see this in the radiosonde data and the satellite MSU data, but we can also calculate it. However we do it, it is always negligible, a tiny fraction of a degree. From day to day the bulk of the atmosphere simply does not fluctuate significantly in temperature.

      The second role of conduction is to smooth out rapid transitions in temperature throughout the atmosphere. If say a cc of air heats up, its molecules disperse quickly enough to spread that heat. This happens by conduction.

      The transfer of heat from GHG molecules that will be brought up is included in this. What makes the GHG molecules special is that they’re contributing just their radiation-induced vibrational energy, since their translational and rotational energy is already in equilibrium with the non-GHGs.

      So from the point of view of the atmosphere as an ideal gas, they’re just taxing the GHGs the same as any other molecule. But from the GHG molecules’ point of view, they’re giving up their vibrational energy, which had been what set them apart from the other molecules. The other molecules don’t draw that distinction but the GHG molecules do.

      There is no coefficient of conduction for giving up vibrational energy, because it is an open question (until someone comes along and answers it) as to just how much heat your typical GHG molecule is packing in the form of vibrational energy when it collides with another molecule, GHG or not. If this turns out to be huge (and I’ll be happy to start a pool here) then whatever coefficient should be associated to the transfer of energy from a GHG molecule in a collision may be wildly different from the usual numbers we see for thermal conductivity of gases.

      In that case it would be unreasonable to call this kind of energy transfer “conduction” since it would be entirely unlike conduction quantitatively. It would need its own coefficient of whatever.

  110. Vaughan
    We have moved some way from the discussion on heat transfer in the atmosphere.
    I did not realise that some people believed that the Black-Scholes pricing theory found a way of avoiding the Second Law of Thermodynamics.
    However this is a rather unfortunate example as many Banks and Hedge Funds were wiped out because of the use of the Black-Scholes Equation.
    So it looks like the laws of Clausius should be required reading for Economists as well as Climate Scientists.
    Several sources point to this unfortunate dead end; this is just one.
    http://www.applet-magic.com/ltcm.htm

  111. However this is a rather unfortunate example as many Banks and Hedge Funds were wiped out because of the use of the Black-Scholes Equation.
    So it looks like the laws of Clausius should be required reading for Economists as well as Climate Scientists.

    With this line of reasoning one could argue that because Einstein’s theory of relativity had undermined Newton’s laws of motion we need to revert to Ptolemy’s epicycles.

  112. Posted with apologies for the length. Sometimes the real world is too much for Twitter.

    Part I of III. Re G&T

    Bryan 12/11/10 6:24 am to A. Lacis: >>So it should be easy to pick one major defect in the G&T paper … .

    Bryan 12/13/10 7:42 am to Vaughan Pratt: >>Pick the first one you come to.

    Logically but not chronologically first, G&T write,

    >> But then three key questions will remain, even if the effect is claimed to serve only as a genuine trigger of a network of complex reactions:

    >>1. Is there a fundamental CO2 greenhouse effect in physics?

    >>2. If so, what is the fundamental physical principle behind this CO2 greenhouse effect?

    >>3. Is it physically correct to consider radiative heat transfer as the fundamental mechanism controlling the weather setting thermal conductivity and friction to zero?

    >> The aim of this paper is to give an affirmative negative answer to all of these questions rendering them rhetoric.

    Each of these three questions is, as Pekka Pirilä and David Wojick suggest, incoherent, and specifically that is for lack of definitions. Over two dozen times, G&T qualify nouns with the word “fundamental” – fundamental work, fundamental theory, fundamental observations, mathematical and physical fundamentals, fundamental CO2 greenhouse effect, etc. Each such naked qualification brings a critical read of the paper to a halt. And it appears in all three “fundamental questions”.

    As I noted earlier, IPCC asserts that the greenhouse effect, including CO2 as a GHG, acts as a blanket. IPCC continues that

    >> Without the natural greenhouse effect, the average temperature at Earth’s surface would be below the freezing point of water.

    G&T pretend to have covered “all past IPCC reports”, yet do not refute the blanket effect, or that CO2 is included, or that the greenhouse effect is what liquefies water on the surface of Earth. G&T have not answered Question 1.

    G&T answered Question 2 as if the “if so” clause wasn’t there. Since their “no” to Question 1 is in dispute, Question 2 is operative. Accordingly, here is the physical principle behind the greenhouse effect. Radiation passes from the surface through the atmosphere and on into deep space. The atmosphere attenuates the intensity, and supports a temperature drop from the bottom, called the surface node, to the top, called the Top of the Atmosphere node. This is the thermodynamic analog of Ohm’s Law. The atmosphere is an impedance to the energy flow. The temperatures of the nodes depend on the value of the impedance because the nodes are linked by other impedances to sources and sinks, i.e., the Sun, the deep ocean, and deep space. G&T recognize an impedance in waveguides, but do not mention it for longwaves. Question 2 has an affirmative answer.

    G&T’s question 3, regardless of its particular restrictions, has no answer because science does not restrict the choice of parameters or scales in scientific models. The real world has no parameters, no coordinate systems, no rates or ratios, no values, no scales, no units, no equations, and no graphs, no conjectures or hypotheses or theories or laws, all the stuff of models. Models are manmade from the partial projection of the real world that reaches our senses and instruments. Modelers build à priori models from analytical representations of the real world, à posteriori models from measurements, and hybrids of the two.

    This thread provides a keen example of competing modelers’ choices, blending measurements and physical representations. Beer’s Law models radiation by its total intensity, filtered by an absorbing gas, and based on statistical and physical considerations. Radiation Transfer models the spectrum of that same radiation, relying on measurements and estimates from them of various parameters relating to the atmosphere and the nature of absorption and emission. Which model is correct depends on which makes predictions validated by experiment. If, as some investigators suggest, the sum of the spectral components from radiation transfer theory does not satisfy Beer’s Law, then confidence in radiation transfer theory, the question du jour, is severely challenged.

  113. Part II of III. Re climate modeling

    Climate is easy to model. Model 0: the global average surface temperature anomaly is a constant at -4.54ºC and the CO2 concentration is constant at 233.4 ppm with one standard deviation errors of 2.89ºC and 28.6 ppm, based on Vostok ice core reductions. This would be an excellent model of Earth for a scientist on a planet orbiting some distant star. It points out that the quality and validity of a model depends on the tolerances established for its predictions.

    Model 1: Alternatively, a model comprising a pair of synchronized sawtooth waves, with separately optimized rise and fall rates but a common pitch matched to the observed, linearly shifting peaks at Vostok (about 24.9 + 95.3*n kyrs), produces a temperature varying between -10 and 3ºC, sigma = 2.79ºC, and CO2 between 185 and 285 ppm, sigma = 21.0 ppm. This model suggests that the present day temperature has about 1ºC to 3ºC to go to match the maximum warm state measured over the last half million years. That is an on-going process that IPCC zeros to initialize its GCMs, and a natural trend which IPCC attributes to man. Sample by sample analysis of the two records confirms that CO2 lags temperature by 1073 years (IPCC estimated 600 ± 400 years, TAR, p. 137), but further that CO2 follows the complement of the solubility curve. See Figure 7, (damaged images under repair), The Acquittal of Carbon Dioxide, http://www.rocketscientistsjournal.com .

    Vaughan Pratt has posted repeatedly about a pre-industrial CO2 level of 280 ppm, and about a Hofmann doubling time to the present 390 ppm. Indeed, that is how IPCC graphed CO2. AR4, SPM.1, p. 3; AR4, Figure 6.4, p. 448. The pre-industrial values are determined from ice cores. AR4, SPM, Introduction, p. 2. However, the ice core data are not directly or properly comparable to modern records. The MLO record is made from air samples made in one minute (manual mode) or less (automatic mode). So the CO2 measurement is the average in one minute or less. Ice core data goes through firn closure, requiring several decades to about 1 to 1.5 millennia, depending on pressure and snow fall accumulation. Ice core records are heavily smoothed by a mechanical low pass filter. An event like the MLO pulse of 100 ppm in 50 years first would be attenuated by a factor of about 3000 to 4000, and second the chances that it would be sampled are less than 50 in 1464, the average interval between Vostok ice core samples. In summary, the MLO record would be comparable to that at Vostok if the MLO record were low pass filtered somewhere between 30 years and 1500 years, perhaps IPCC’s 600 year figure, and then sampled every 1463 years. Comparing these records directly is not competent. IPCC needed to convey its conclusion that man was causing the warming, so it simply glued modern records onto proxy reductions to create a family of hockey sticks. Its theory is proof by the fanciful axiom of the unprecedented.

    Also note that IPCC gave Figures SPM.1 and 6.4 two ordinates. For CO2, the left hand ordinate is CO2 concentration in ppm, and the right hand ordinate is CO2 radiative forcing in Wm^-2. For SPM.1, the right hand ordinate is linearly related to CO2 concentration. (It measures RF = -4.605 Wm^-2 + 0.01653 CO2 Wm^-2/ppm). On 8/24/06 in the Second Order Draft review for the SPM, Vincent Gray said (pettily and sarcastically, eh, tallbloke?) “I had always thought that radiative forcing was a function of the logarithm of concentration”.

    In fact, IPCC Figure 6.4 makes RF = 5.347 ln (C/280.02 ppm). The editor said, “The reviewer is correct for CO2”, but SPM.1 was not repaired. Instead, IPCC inserted “Note that for CO2, RF increases logarithmically with mixing ratio” in AR4, §2.3.1, p. 140. This logarithmic conjecture appears nowhere else in AR4, was not submitted for review, and appears in the Report with no references. Similarly, the claim was made just once in the TAR, adding that the logarithmic effect was due to “the band’s wings”. TAR, ¶1.2.3 Extreme Events, p. 93. Again none of this has citations. Adding band wings, a conjecture, does not validate logarithmic dependence, a conjecture. It expands on and narrows the logarithmic model but leaves it at most a conjecture.

    Arrhenius may have been the accidental originator of the logarithmic conjecture. In 1896 he wrote,

    >> Thus if the quantity of carbonic acid increases in geometric progression, the augmentation of the temperature will increase nearly in arithmetic progression. This rule – which naturally holds good only in the part investigated – will be useful for the following summary estimations. Arrhenius, S., “On the influence of Carbonic Acid in the Air upon the Temperature of the Ground,” Phil. Mag. & J. of Science, April, 1896, p. 267.

    If he had known that CO2 lags temperature, he wouldn’t have erred on causality.

    Still, Arrhenius was talking about the relation between CO2 and temperature, not between CO2 and radiative forcing, IPCC’s novel modeling paradigm. Beer’s Law, which predates Arrhenius’s work, says that RF should be a decaying exponential (i.e., saturating) in the concentration of a GHG. That RF(CO2) is logarithmic remains a conjecture that fails under Beer’s Law. That it is logarithmic because of hypothetical band wings is still a failed conjecture, but it is at least more testable. That it is logarithmic based on radiative transfer modeling appears to say something about modeling assumptions but nothing about physics. Models don’t create data. See post re Myhre et al. converting models to data, above.

    IPCC created a bridge between T(ln(CO2)), seen to be the Arrhenius model, and its RF(ln(CO2)) conjecture, by declaring that T2x = F2x/α, where T2x is the equilibrium climate sensitivity response to the forcing. TAR, §9.2.1, pp. 532-3. The parameter F2x is the “radiative forcing for doubled atmospheric CO2 concentration”, and being a forcing is the independent variable. The problems with this model include the following: (1) the climate is never in equilibrium, (2) α is not a constant, (3) α depends on feedbacks IPCC ignores, including the most powerful feedback in climate, cloud albedo, and (4) IPCC models vary widely, making 0.883 ≤ α ≤ 2.11 (calculated from TAR, Table 9.A1, p. 579). So even if radiative transfer theory could provide F2x accurately and with great precision, it is irrelevant to the desired, first order climate effect of Global Average Surface Temperature, GAST.
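    For a sense of scale, here is what that spread in α implies numerically; taking F2x from the logarithmic form as 5.35*ln(2) ≈ 3.7 Wm^-2 is my assumption, while the α range is the one calculated above:

```python
import math
# Numerical reading of T2x = F2x/alpha. Taking F2x = 5.35*ln(2) ~ 3.7 W/m^2 from the
# logarithmic form is my assumption; the alpha range is the 0.883-2.11 quoted above.
F2x = 5.35 * math.log(2)
for alpha in (0.883, 2.11):
    print(alpha, round(F2x / alpha, 2))   # spans roughly 1.8 to 4.2 degrees C per doubling
```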

    • “Vincent Gray said (pettily and sarcastically, eh, tallbloke?) “I had always thought that radiative forcing was a function of the logarithm of concentration”.”

      Heh, I’ll listen to the opinion of anyone who spits coal dust in the eyes of Wang and Jones, sarcasm and all.

      http://pajamasmedia.com/blog/vincent-gray-on-climategate-there-was-proof-of-fraud-all-along-pjm-exclusive/

    • If [Arrhenius] had known that CO2 lags temperature, he wouldn’t have erred on causality.

      Wouldn’t it depend on which was driving which?

      If deglaciations are triggered by Milankovitch events, in particular the Earth approaching the Sun, then CO2 can indeed be expected to lag temperature, because in that situation temperature is obviously the cause.

      Given the unprecedented quantities of anthropogenic CO2 now very suddenly being emitted (suddenly in comparison not only to the 800-year variation in temporal separation of the ice core CO2 and temperature records but even to the 70-year resolution of that data!), and the absence of any other good reason for the temperature to suddenly have shot up 0.6 °C over the last half century, the ice cores cannot be relied on as conclusive proof that CO2 must always lag warming.

      With the exception of Lindzen and Michaels (and for all I know our host, whose position I can’t quite figure out), the scientists on the Congressional panel the other week, along with most actively publishing climate scientists, believe that the current correlation between rising temperature and rising CO2 should be attributed to the causal influence of the latter, not the former.

      In which case one might expect temperature to lag CO2.

      But by how much?

      Not that my opinion should be taken seriously, since I’m not in this field, but I’ve calculated by two independent methods that, for the current warming episode, nature is lagging CO2 by roughly three decades. For one of those methods “nature” means the cleaning up nature seems to be doing after us, for the other it means how hot and bothered nature is getting over having to do so, in °C.

      Now the IPCC has arbitrarily defined “transient climate response” to be the impact of CO2 rising at 1% per annum in 20 years time.

      Given that the Keeling curve gives us a good idea of its present rate of rise per year, namely about 0.5%, as well as suggesting it will hit 1% about mid-century and thereafter rise above it, there is no need to have an arbitrary 1% in that definition, we can just assume the observed or predicted rate of rise at any given time.

      Which leaves the question, why is 20 years interesting, or even reasonable? Let me suggest a couple of answers.

      The temporal resolution of methods applicable to modern times is less than a month (the data tends to be distributed monthly). That applicable to the ice core data is at best 70 years.

      Hence we are in a unique position today to measure the transient climate response time for the prevailing rate of rise of CO2, as a function of future thermal impact of current CO2 rising. We ought to be able to do this to far greater accuracy than anything we could hope to extract from ice core data.

      Using QR-factorization implemented via Gram-Schmidt to tease apart the respective contributions of CO2 and the Atlantic Multidecadal Oscillation to the global HADCRUT3 monthly temperature data since 1850, I’ve estimated the transient climate response for any delay between 0 and 50 years. (More than that becomes unreliable because there was so little anthropogenic CO2 more than half a century ago.) Here are some representative values. The first column is delay in years, the second is °C per doubling of CO2; a toy sketch of the fitting procedure follows the table.

      0 … 1.80
      10 … 2.18
      24 … 2.85
      27 … 3.02
      30 … 3.21
      33 … 3.40
      40 … 3.91
      45 … 4.31
      50 … 4.76
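      Here, for concreteness, is a toy sketch of the kind of fit I mean; the arrays are placeholders of my own construction, not the actual HADCRUT3, Keeling or AMO series:

```python
import numpy as np
# Toy sketch of the decomposition described above: regress a temperature series on a
# log-CO2 regressor and an AMO-like regressor via QR (numpy uses Householder rather
# than Gram-Schmidt, but the least-squares answer is the same). The arrays are
# placeholders, not the HADCRUT3, Keeling or AMO data.
n = 600                                   # e.g. 50 years of monthly samples
t = np.arange(n) / 12.0                   # time in years
co2 = np.log2(1.005) * t                  # doublings of CO2 at an assumed 0.5%/yr growth
amo = np.sin(2 * np.pi * t / 65.0)        # stand-in ~65-year oscillation
temp = 2.8 * co2 + 0.1 * amo + 0.05       # synthetic "temperature" series
A = np.column_stack([co2, amo, np.ones(n)])
Q, R = np.linalg.qr(A)
coef = np.linalg.solve(R, Q.T @ temp)     # least-squares coefficients
print(coef)                               # recovers ~[2.8, 0.1, 0.05] for this toy series
```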

      The figures for delays in the range 24-33 years are of particular interest for at least three reasons. First, the Arrhenius logarithmic law for dependence of temperature on CO2, when composed with the Hofmann raised-exponential law for dependence of CO2 on time, turns out to yield the best fit to the observed temperature rise when the delay is taken to be 32.5 years. Computers give that sort of accuracy gratuitously; in this case I’m disinclined to take anything like that accuracy seriously and am happy to say “a delay of around three decades.”

      Second, nature seems to be sopping up some 50-60% of our CO2 contribution. If 50%, then we can infer immediately, and without a calculator, from the 32.5-year doubling time for anthropogenic CO2 in Hofmann’s formula that nature is 32.5 years behind us in her cleanup efforts. If 60% then nature is 32.5*lb(1/.6) = 24 years behind us (lb is log base 2). Take your pick. This btw is using the composite Arrhenius-Hofmann log-of-raised-exponential law alone without reference to observed temperature; whether temperature influences the cleanup rate is an interesting question.

      Third, this result is in the right ballpark for what climate science generally feels “ought to be” the climate sensitivity (as if there were a unique or even well-defined such thing). I didn’t plan things that way but was happy to see it turn out that way. (Being disruptively inclined I might have been even happier to have it turn out differently.)

      So we have two time constants, one observed from the CO2 record alone, the other in combination with the temperature record, both giving a time on the order of 30 years for nature to lag CO2, for respectively clean-up and temperature.

      As I said, the ice core data is only good for a 70-year resolution. We are in a unique position today to use modern technology, applied directly to the atmosphere instead of indirectly to ancient ice cores, to hypothesize lag times between CO2 and temperature and test those hypotheses against competing hypotheses.

      A major part of science resides in the comparison of hypotheses, and the eventual selection of those hypotheses that will some day wind up in textbooks.
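      For anyone who wants to check the arithmetic behind “nature is 32.5 (or 24) years behind us”, a minimal sketch:

```python
import math
# Sketch of the "how far behind is nature" arithmetic. The 32.5-year doubling time for
# anthropogenic CO2 and the 50-60% uptake fraction are the figures quoted in this comment.
doubling_time = 32.5                              # years
for uptake in (0.5, 0.6):
    lag = doubling_time * math.log2(1 / uptake)   # years by which nature lags our emissions
    print(f"uptake {uptake:.0%}: lag ~ {lag:.1f} years")   # 32.5 and ~24 years respectively
```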

      • Vaughan Pratt | December 16, 2010 at 8:26 pm
        If [Arrhenius] had known that CO2 lags temperature, he wouldn’t have erred on causality.

        Wouldn’t it depend on which was driving which?

        If deglaciations are triggered by Milankovitch events, in particular the Earth approaching the Sun, then CO2 can indeed be expected to lag temperature, because in that situation temperature is obviously the cause.

        Temperature is the cause of temperature? What sort of tautological nonsense is this Vaughan?

      • Temperature is the cause of temperature? What sort of tautological nonsense is this Vaughan?

        Please don’t put words in my mouth, tallbloke. When have I ever said anything remotely like that?

      • In the quote immediately above:
        You said:
        “If deglaciations are triggered by Milankovitch events, in particular the Earth approaching the Sun, then CO2 can indeed be expected to lag temperature, because in that situation temperature is obviously the cause.”

        Temperature is the measure of something, not the cause of anything.

      • In the quote immediately above:
        You said:
        “If deglaciations are triggered by Milankovitch events, in particular the Earth approaching the Sun, then CO2 can indeed be expected to lag temperature, because in that situation temperature is obviously the cause.”

        I indeed said (and meant) that. I most certainly did not say “temperature causes temperature.” Nor does it follow from what I said.

        Temperature is the measure of something, not the cause of anything.

        If there are two giant cauldrons of water, one at 20 °C and one at 99 °C, and I put a live lobster in the first, that should not cause its death. But if I put it in the second that would certainly cause its death.

        So what would you say was the cause of its death if not the temperature?

        I sense one of these little semantics debates about to start up, like the one used earlier to justify denying that heat is energy.

        Chemical reactions are governed in part by temperature. That is a causal influence. Your saying it isn’t doesn’t change that fact.

      • I sense one of these little semantics debates about to start up, like the one used earlier to justify denying that heat is energy.

        one day you might understand the difference between heat and energy but I suspect you might be getting a little old for that.

    • Jeff Glassman 12/16/10 at 6:12 pm,

      P.S.

      When reporting that the right hand ordinate, RF, was linearly related to C, the CO2 concentration, I neglected to say that the linear fit had a fine rms error of 0.027 Wm^-2. That is actually a good approximation to the far better fit to the log(C/C0), base arbitrary, and C0 = 280 ppm. The better equation is RF = 5.35*ln(C/C0), which has a 1 σ error = 4.6E-4 Wm^-2.

      That is the same relationship IPCC used for Figure 6.4, and is the first equation given in TAR, Table 6.2, p. 358. No repair of Figure SPM.1 was appropriate, nor would one have been resolvable under either scaling. Vincent Gray had no complaint, and the editor’s response was misleading: “The reviewer is correct for CO2. For CH4 and N2O the RF is also non-linear in concentration which is why the right hand scales are nonlinear”. He might have responded, “And so it is in Figure SPM.1(a) for CO2.”
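      For concreteness, a sketch of that logarithmic expression (C0 = 280 ppm; the 390 and 560 ppm inputs are merely illustrative):

```python
import math
# Sketch of the logarithmic expression above: RF = 5.35*ln(C/C0) with C0 = 280 ppm.
# The 390 and 560 ppm inputs are merely illustrative.
def rf_co2(C, C0=280.0):
    return 5.35 * math.log(C / C0)        # W/m^2
print(round(rf_co2(390), 2))              # ~1.77 W/m^2 near the present concentration
print(round(rf_co2(560), 2))              # ~3.71 W/m^2 for doubled CO2
```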

  114. P.S.

    I used non-breaking hyphens for the minus signs at -4.54ºC, -10ºC, and -4.605 Wm^-2. As a convenience, the web host just left them out.

  115. Part III of III. Re feedback

    The impotence of radiative transfer is not inherent to the theory, but is due to climate feedback, a question posed to me by Gordon Robertson on 12/9/10, above.

    Andy Lacis, writing with James Hansen in 1984, in a paper often cited by IPCC, introduced “the net feedback factor, f” by setting ΔTeq = f*ΔT0 (Eq. (4)),

    >>where ΔTeq is the equilibrium change of global mean surface air temperature and ΔT0 is the change of surface temperature that would be required to restore radiative equilibrium if no feedbacks occurred.

    >>We use procedures and terminology of feedback studies in electronics (Bode, 1945) to help analyze the contributions of different feedback processes. Hansen, J, A. Lacis, et al., Climate Sensitivity: Analysis of Feedback Mechanisms, Geophysical Monograph 19, AGU, pp. 130-163, 1984.

    So historically, IPCC adopted the term and concept of feedback from control system theory to create a hallmark of its climate analysis. Climatologists were free under the principles of science to define feedback in any manner they chose, but they chose the method from control system theory. However, IPCC lost both the meaning and the significance.

    Feedback in systems science is a sample of a displacement, energy, material, or information in response to an input parameter, returned physically from within a system in loop fashion to augment or alter the system’s input parameters. IPCC instead modeled feedback loops as hypothetical relations between correlated but disconnected parameters, in flow models with neither an input nor an output, and with missing physical signals. See TAR, Figures 7.4 (p. 439), 7.6 (p. 445), 7.7 (p. 448), and 7.8 (p. 454).

    Hansen, with Lacis, conclude:

    >>The temperature increase believed to have occurred in the past 130 years (approximately 0.5 °C) is also found to imply a climate sensitivity of 2.5-5 °C for doubled CO2 (f = 2-4), if (1) the temperature increase is due to the added greenhouse gases, (2) the 1850 CO2 abundance was 270±10 ppm, …

    >>These analyses indicate that f is substantially greater than unity on all time scales. Our best estimate for the current climate due to processes operating on the 10-100 year time scale is f = 2-4, corresponding to a climate sensitivity of 2.5-5ºC for doubled CO2. The physical process contributing the greatest uncertainty to f on this time scale appears to be the cloud feedback.

    Thus Hansen and Lacis computed the ratio of temperature to CO2 and inferred a cause and effect. They assumed away the Sun, assumed that greenhouse gases could cause the temperature increase observed in the closed-loop real world, and mixed data from ice core proxies with actual measurements. Still, they identified the major shortcoming: cloud feedback.

    IPCC agrees:

    >>The response of cloud cover to increasing greenhouse gases currently represents the largest uncertainty in model predictions of climate sensitivity. 4AR, ¶3.4.3 Clouds, p. 275.

    Note that IPCC here focuses on cloud cover as the significant parameter. It discusses cloud effects extensively in its reports, especially specific cloud albedo (albedo per unit area), but nowhere does IPCC multiply specific cloud albedo by cloudiness, a climate variable. And IPCC concludes,

    >> Therefore, cloud feedbacks remain the largest source of uncertainty in climate sensitivity estimates. AR4, ¶8.6.3.2 Clouds, p. 636.

    The value of f could be as low as 0.1 and be lost in the noise of the state-of-the-art in cloud albedo estimating (roughly 0.30 ± 0.03), and according to Lindzen on ERBE data, above, it is likely 0.14.

    IPCC needs the logarithmic dependence for two reasons: it does not saturate, and it amplifies the greenhouse effect. Both results are crucial to IPCC’s entire AGW theorizing. And this critical logarithmic sub-model is unsupported.

    Finally, Model SGW: A model for Earth’s surface temperature computed from IPCC’s preferred solar model matches IPCC’s entire estimated thermometer record with an accuracy of 0.11ºC, about equivalent to IPCC’s optimally smoothed estimate. See SGW, http://www.rocketscientistsjournal.com.

    Consequently, GAST can be estimated accurately from a sanctified, independent source, and with no reliance on radiative transfer. Modeling with radiative transfer fails to take into account that GAST does not determine a unique atmospheric structure, even under the severe assumptions included in thermodynamic equilibrium, i.e., chemical equilibrium, mechanical equilibrium, and thermal equilibrium, including the easily falsified assumption that atmospheric CO2 is long-lived and hence well-mixed, and the assumptions of particular lapse rates for temperature, humidity, and CO2 concentration and of cloud cover. Radiative transfer is microparameter noise in a mesoparameter or macroparameter (thermodynamic) model.

    Feedback for Judith Curry. Dr. Curry introduced this topic, “Confidence in radiative transfer models”, saying “The calculation of atmospheric radiative fluxes is central to any argument related to the atmospheric greenhouse/Tyndall gas effect.” Radiative flux is central to IPCC’s greenhouse effect, process 3, next:

    >>There are three fundamental ways to change the radiation balance of the Earth: 1) by changing the incoming solar radiation (e.g., by changes in Earth’s orbit or in the Sun itself); 2) by changing the fraction of solar radiation that is reflected (called ‘albedo’; e.g., by changes in cloud cover, atmospheric particles or vegetation); and 3) by altering the longwave radiation from Earth back towards space (e.g., by changing greenhouse gas concentrations). Climate, in turn, responds directly to such changes, as well as indirectly, through a variety of feedback mechanisms. IPCC, FAQ 1.1, p. 96.

    However, radiative transfer modeling is not unique to the outgoing radiative flux. Secondly, and most importantly, the feedbacks IPCC cites as an afterthought and fails to mechanize operate to change albedo (process 2), aggravating insolation at the surface (process 1) and mitigating the greenhouse effect (process 3). Study of the greenhouse effect on climate will always be inconclusive with the most important branch of the hydrological cycle open loop, as in the GCMs.

    Radiative transfer theory is important to understanding the relative importance of the GHGs, and is worthy of continuing research to improve confidence among academics. But beyond that, it is a curiosity because it is mitigated and irrelevant to the struggle to emulate climate in the first order.

    • IPCC needs the logarithmic dependence for two reasons: it does not saturate,

      And a linear dependence would?

      I believe you have that backwards. A logarithmic dependence takes a lot more CO2 than a linear one.

      it amplifies the greenhouse effect.

      Again, backwards. A linear dependence would amplify it even more.

      And this critical logarithmic sub-model is unsupported.

      Unsupported by skeptics, supported by theory, confirmed by observation.
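      To put numbers on the comparison (a sketch; the linear form is my own construction, matched to the logarithmic one at 280 and 560 ppm):

```python
import math
# Logarithmic forcing (the 5.35*ln(C/280) form cited upthread) versus a linear fit of my
# own construction, matched to it at 280 and 560 ppm.
def rf_log(C):
    return 5.35 * math.log(C / 280.0)
def rf_lin(C):
    return 5.35 * math.log(2) * (C - 280.0) / 280.0
for C in (280, 560, 1120, 2240):
    print(C, round(rf_log(C), 2), round(rf_lin(C), 2))
# The linear form runs away at high concentrations; the logarithmic form needs ever
# more CO2 for each additional W/m^2, i.e. the opposite of amplification.
```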

    • Climatologists were free under the principles of science to define feedback in any manner they chose, but they chose the method from control system theory.

      And this is somehow a problem? In my discipline, computer science, we take the attitude that if it’s not broken, don’t fix it, and if it is, we can fix it ourselves, thank you very much.

      Almost all of control theory we are happy to take as is (though as one of the two control theorists working on the winning team in the DARPA Grand Challenge race in 2005, see under Software > Control, we had no compunctions about throwing out any part of control theory we felt was broken).

      So is there a problem? And if so what is it, and how do you propose to fix it?

      However, IPCC lost both the meaning and the significance.

      You’re implying it was broken in the case of climate, which you argued as follows.

      IPCC instead modeled feedback loops as hypothetical relations between correlated but disconnected parameters, in flow models with neither an input nor an output, and with missing physical signals. See TAR, Figures 7.4 (p. 439), 7.6 (p. 445), 7.7 (p. 448), and 7.8 (p. 454).

      Before we get into anything technical, already I’m having more basic problems. Figure 7.4 in my copy of the current IPCC report is on p. 516, yours is on p. 439. WUWT?

      If one of us is working with superseded materials then we’d be talking at cross purposes. Let’s straighten this out before taking this any further.

  116. Vaughan Pratt 12/16/10 6:12 pm,

    1. Causality is philosophically, or in some meta-science sense, always temporal. An effect can never precede its cause. Preposterous is an ending that comes before its beginning.

    2. You seem to hypothesize that sometimes CO2 changes come before temperature changes, and sometimes afterwards. This seems to indicate that they are uncorrelated, and independent.

    3. IPCC’s current model is that CO2 may not initiate warming, but instead it amplifies it. But what are the mechanisms? If a Milankovitch cycle initiates warming by increasing insolation (as it should), then what releases the CO2? Was it dammed up and the dam broke? How much pent up CO2 is available? If the release of CO2 is by temperature, then why doesn’t a temperature increase from any cause release more CO2? (It does by Henry’s Law.) But does increased CO2 cause warming? Likely not, unless the closed loop gain just happens to be less than one. If the closed loop gain were greater than one, then the climate would self-destruct. The idea that the numbers would just happen to fall between 0 and 1 seems improbable, but worthy of academic (not funded by the government) study.

    4. Science admits à posteriori modeling. These models can be validated by showing that a large number of degrees of freedom required to represent the effect are handled by a small number of parameters extracted from the cause. They can be validated by postulating an analytical cause and effect that satisfies the laws of physics and has predictive power, then showing the predictions satisfied from an independent data set. The model that relies on temperature increasing and CO2 increasing in some analytic fashion, like a power law, exponential or logarithmic, or some orthonormal series of functions, is merely correlation. An axiom, I suppose, of science is that correlation is not cause and effect; the opposite is a logical fallacy. Your model needs more, including the temporal relation between CO2 and temperature. The analytic functional relationship is unlikely to have that power to resolve which came first.

    5. I don’t think the ice core data are good for a 70-year resolution. The average spacing at Vostok, for example, is 1463 years between samples. The time for the firn to close can be as large as 1500 years, causing the data to be low pass filtered with an extreme time constant.

    6. Science involves models of the real world that progress from conjectures to hypotheses to theories and to laws. Conjectures are incompatible with none of the data in the model’s domain, and violate no Law or principle of science. Hypotheses fit all the data, and predict some novel, nontrivial outcome. Theories are hypotheses in which at least one nontrivial outcome has been validated by experiment or discovery. Theories compete for elegance (Occam’s Razor, the Law of Parsimony). Laws are theories in which all possible outcomes of the model have been validated. At least that is part of my tested metamodel for what I have observed to be science.
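
    On the closed-loop-gain worry in point 3: the standard scalar feedback algebra gives bounded amplification for any loop gain strictly below one, and runaway only at or above one, so a gain anywhere between 0 and 1 is not a fine-tuned coincidence so much as the only regime consistent with a climate that has not self-destructed. A minimal sketch of that arithmetic (pure illustration; the gain values below are arbitrary, not estimates for CO2):

```python
import math

# Minimal sketch of scalar positive-feedback algebra (illustration only,
# not a model of the actual CO2-temperature loop).
def closed_loop_amplification(g):
    """Total amplification of an initial perturbation when a fraction g of the
    response is fed back each pass: 1 + g + g^2 + ... = 1/(1 - g) for g < 1."""
    if g >= 1.0:
        return math.inf  # the geometric series diverges: runaway
    return 1.0 / (1.0 - g)

for g in (0.0, 0.3, 0.5, 0.9, 0.99, 1.0):
    print(f"loop gain g = {g:4.2f} -> amplification = {closed_loop_amplification(g):8.2f}")
```

    Whether the real loop gain sits near 0.3 or 0.9 is of course the contested question; the sketch only shows why a gain below one yields finite amplification rather than catastrophe.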

    • 1. Causality is philosophically, or in some meta-science sense, always temporal. An effect can never precede its cause

      Quite so. One can best lead a horse to water with something you and the horse have in common.

      2. You seem to hypothesize that sometimes CO2 changes come before temperature changes, and sometimes afterwards. This seems to indicate that they are uncorrelated, and independent.

      With that line of reasoning one can prove that because men chase women, and sometimes the other way round, they must therefore be uncorrelated, and independent.

      CO2 and temperature both drive each other, that’s what a positive feedback means.

      Without positive feedback in the mating game, 1 + 1 = 2. That equation is at the heart of independence. In geodesic theory 1 + 1 = ℤ is the geodesic space of all affine integers. In the logician’s category Set, 1 + 1 = 2, reflecting that the elements of a set are independent. In most categories used in real mathematics however, 1 + 1 is something else, reflecting that elements are in general not independent. Poincaré rightly observed that we would eventually recover from set theory. (More precisely, “Point set topology is a disease from which the human race will soon recover.”) (Judy, I hope you haven’t tuned out of this relatively arcane thread, with your ripening foundational interests this should be your cup of tea, coffee, whatever.)

      This horse is not drinking yet. Not even tea.

      3. IPCC’s current model is that CO2 may not initiate warming, but instead it amplifies it

      I’d love to see the page number for that. It would be like Obama agreeing not to tax the rich. I hadn’t realized the IPCC had knuckled under the same way.

      Still not drinking.

      4. Science admits à posteriori modeling.

      Monsieur, I drink to zat. Santé!

      That said, Arrhenius’s foresight in his a priori modeling is something to marvel at. He was half a century ahead of his time!

      5. I don’t think the ice core data are good for a 70-year resolution. The average spacing at Vostok, for example, is 1463 years between samples.

      I didn’t want to go out on a limb and claim they were that bad. I’m more than happy to have you undermine your source of data backing up your theory of what caused what. Even 800 years completely removes your basis for claiming CO2 lags temperature.

      But you somehow seem to have quenched my thirst. WUWT?

      6. Science involves models of the real world that progress from conjectures to hypotheses to theories and to laws.

      Let’s all drink to that. Cheers!

      • Crikey Vaughan I think you have jumped the shark with this one

        With that line of reasoning one can prove that because men chase women, and sometimes the other way round, they must therefore be uncorrelated, and independent.

        Whether one or the other chases says nothing about dependence on each other, just that who chases whom is not correlated with gender.

      • Whether one or the other chases says nothing about dependence on each other

        That’s a sufficiently novel theory that you ought to be able to write a book about it. Assuming 20% of the planet’s population are prudes you’d have a billion dollar audience right there, putting you ahead of J.K. Rowling.

      • That either chases the other shows dependence, but that both do shows that who chases whom is not correlated with or dependent on gender. If you want to claim correlation you really need to come up with the appropriate statistics.

      • That either chases the other shows dependence, but that both do shows that who chases whom is not correlated with or dependent on gender.

        The 21st century welcomes you with open arms. God bless whoever led you out of the black forest of 20th century thought.

      • The point being that there is no dependency on the choice of CO2 or temperature when saying each causes the other. They are mutually reinforcing. The only difference is in who makes the first move.

        With Milankovitch-triggered deglaciations, temperature made the first move. With AGW CO2 made the first move.

        Who would you say made the first move in each of these famous love story books/movies?

        Gone with the Wind (1936/39)

        Meet me in St Louis (1944)

        The Thorn Birds (1977/83)

      • With Milankovitch-triggered deglaciations, temperature made the first move. With AGW CO2 made the first move.

        Unfortunately the uniqueness theorem for Milankovitch-triggered glaciations (the inverse problem) is, um, not unique.

        The temporal coincidence of glacial epochs on the Earth and Mars during the Quaternary and latest Amazonian would suggest a coupled system linking both [Sagan, C., Young, A.T., 1973. Nature 243, 459].

      • Interesting, thanks Mak.

      • The temporal coincidence of glacial epochs on the Earth and Mars during the Quaternary and latest Amazonian would suggest a coupled system linking both

        Is there a one-paragraph summary explaining how this works?

        If not it’s probably wrong. Two paragraphs is enough room to hide a fallacy, one is harder.

      • The abstract is here

        http://www.nature.com/nature/journal/v243/n5408/abs/243459a0.html

        There is a sting in the “tale”, i.e. the obvious corollary to Zipf’s law.

      • Does that still hold up now that the most likely explanation for low solar neutrino emission is that they have mass and oscillate?

      • Does that still hold up now that the most likely explanation for low solar neutrino emission is that they have mass and oscillate?

        That bears little relationship to the problem; all the information is there on that page.

      • The point being that there is no dependency on the choice of CO2 or temperature when saying each causes the other. They are mutually reinforcing. The only difference is in who makes the first move.

        Then you will be able to quantify the effect of the positive feedback once the CO2 starts rising or falling; that should be obvious in, say, the Vostok ice core data.

        A couple of graphs
        http://www.palisad.com/co2/slides/img5.gif
        http://www.palisad.com/co2/slides/img7.gif

        The second one gives finer resolution but a shorter time span; a comparison with some stomatal CO2 data might be helpful. Unfortunately this one compares with Dome C rather than Vostok.

      • Then you will be able to quantify the effect of the positive feedback once the CO2 starts rising or falling; that should be obvious in, say, the Vostok ice core data.

        Thanks for that, very interesting data.

        So your second graph seems to show CO2 pulling temperature down, or am I reading it wrong?

      • So your second graph seems to show CO2 pulling temperature down, or am I reading it wrong?

        If correlation were causation it could be right, but since it isn’t it would be a brave interpretation. If you look at the first graph you will see that temperature generally starts declining while CO2 rises. I don’t think that this is anything different, just that the eccentricity is increasing more slowly etc.

        If you want more detail I believe an astronomer by the name of Jan Hollan has done some work on it.

      • BTW I don’t as a rule learn my science, logic or ethics from tinsel town. In fact I’m not all that keen on them for entertainment and I have not seen any of those movies.

      • Ballet? Opera? Concerts? Musicals? Asperger’s?

      • Opera but prefer to be in the orchestra (I used to play flute).

      • This thread is getting really long, do you want a new thread on this topic? I haven’t been able to keep up; let me know if you have a specific suggestion.

      • Gone with the Earth, Wind and Fire?

      • This thread is getting really long,

        Any idea why?

        And are there two major themes that could be teased apart into two threads?

        My take on it is that, at least for us geeks posting here, the whole question of whether CO2 is making a difference turns on the sort of issues that are being raised here. In which case it might be hard to separate.

        Sort of like the Swiss trying to divide World War II into two parts for easier management of the bank accounts.

      • To put my point more plainly: when you have the circumstances for a positive feedback, whether CO2 and temperature or males and females, you don’t have to show dominance of one over the other in order to show a dependence.

      • Vaughan: The male and female are dependent on each other for reproduction (I do believe some women are trying to fix this). In terms of causality one is not going to be dominant (try to anthropomorphize this?); one is going to be dependent while the other is independent. A claim that it is one, then the other, breaks any possibility of correlation in a way that a continuous theory of feedback does not.

        (Judy, other blogs let you preview your comment before posting it. Why does WordPress not offer this feature? It puts a terrible demand on proof reading. Anyone else running into this problem with this blog?)

        You betcha!

      • The male and female are dependent on each other for reproduction. In terms of causality one is not going to be dominant; one is going to be dependent while the other is independent. A claim that it is one, then the other, breaks any possibility of correlation in a way that a continuous theory of feedback does not.

        Sorry, not following. Are you saying there’s no positive feedback in reproduction? Biology thrives on feedbacks, certainly for individuals, but even for species, see Bernard Crespi’s very nice article on feedbacks in evolution at http://www.sfu.ca/biology/faculty/crespi/pdfs/80-CrespiTREE2004.pdf .

      • Editing your long comments in a real text editor (Textmate, Scrivener, vim, etc.), really, really helps.

      • I haven’t found much benefit from editing other than in place (though I do save manually from time to time). Vim doesn’t handle Unicode gracefully enough, does Textmate or Scrivener do any better?

        What I have found very helpful is to copy the text to where I can view it with a browser before posting. I have a vim macro that inserts paragraph tags between two consecutive linefeeds and that’s about all that’s needed. (People may have noticed that I make heavy use of Unicode for irrationals like √2 and π, and the ¹²C¹&#8310O₂ species of CO2.)

        I don’t claim to have the optimal setup and am always happy to be pointed in better directions. The ability to preview would be a godsend.

      • I unintentionally proved my point with the
        ¹²C¹⁶O₂ example. I left out the semicolon after the superscript 6 and Chrome figured out what I meant and put it in while WordPress didn’t. So when I tested it with Chrome it worked, but it failed when I posted it.

        This would not have happened with a preview capability.

  117. Tallbloke 12/16/10 7:39 pm

    If you’re (cough, cough) looking for an argument, you better pick a (cough) different subject.

    Especially with respect to Phil Jones. But I wondered why whack Wang?

    Then I noticed from your link that Vincent Gray was talking about a different Wang than Y-M Wang, who seems to be a standout in climate science. He has made a monumental contribution to climate science by his modeling of the Sun, and published without the compulsory obsequious concessions to AGW and its grand Pooh-Bahs.

    • He has made a monumental contribution to climate science by his modeling of the Sun, and published without the compulsory obsequious concessions to AGW and its grand Pooh-Bahs.

      Only skeptics fantasize about connections between the Sun and Earth’s climate. If Wang didn’t then he’s clearly not a skeptic.

      • Once scientists get away from their fixation with the Sunspot Number (SSN) and the Total Solar Irradiance (TSI), then the Sun-Earth link, climate and otherwise (very important, the otherwise bit), will all be crystal clear.
        Compare details on top map (Hudson Bay – Siberia) here
        http://data.giss.nasa.gov/gistemp/2010november/fig1.gif
        and polar map here
        http://www.vukcevic.talktalk.net/NFC1.htm
        I am sure you got the idea.

      • Milivoje, I get the idea for the second map, which is extremely interesting (“exciting” as they say here in the US), though I saw it last week and had the same positive reaction. However would you mind explaining the connection with the first map, which I’m not understanding at all.

      • Now that I have all the information required (with some more relevant data from NASA) I will update the above web page in the next few days. Will email you a copy in advance.

      • Oh, and one other thing. All this is good for the AMO, but why do you think the Sun is irrelevant? The HADCRUT3 global record shows significant swings correlated with both the AMO (65.5 year cycle) and the Sun (around a decade, i.e. much faster). Do you have a non-Sun theory of 10-year oscillations?

      • Not exactly, I meant TSI and SSN have a small identifiable effect. There is also a magnetic connection (solar storms) gradually destabilising the Arctic GMF but barely perceptible in Antarctica.

      • Vaughan Pratt | December 16, 2010 at 11:52 pm

        Only skeptics fantasize about connections between the Sun and Earth’s climate.

        Only AGW fanatics think there aren’t any.

    • No, not looking for an argument. I’m happy to read and learn on this subject. Vince Gray opines about a large variety of subjects, and I’m happy to read his stuff too.

  118. Vaughan Pratt 12/16/10 9:35 pm

    VP asked, >>And a linear dependence would? I believe you have that backwards. A logarithmic dependence takes a log more CO2 than a linear one.

    Why do you think I have that backwards, and not IPCC? IPCC wanted to model climate by a climate sensitivity parameter, a constant that occurs for any constant multiple of a GHG concentration. As I explained earlier, that is the assumption of a dependence on the logarithm of the concentration. In urging that radiative forcing produces that logarithmic result, it said,

    >>It is because of these [band wings] effects of partial saturation that the radiative forcing is NOT PROPORTIONAL TO THE INCREASE IN THE CARBON DIOXIDE CONCENTRATION but shows a logarithmic dependence. Every further doubling adds an additional 4 Wm-2 to the radiative forcing. Caps added, TAR ¶1.2.3 Extreme Events, p. 93.

    IPCC here is setting up a straw man, an “extreme event” of a different sort. Who claimed the dependence was linear? That leads to more preposterous results, such as the transmissivity of a gas increasing with increasing concentration! The only answer appears to be the IPCC author responsible for the linear dependence of RF on CO2 concentration shown in Figure SPM.1, the figure that prompted Vincent Gray’s query, as discussed above. IPCC’s defense here was to make itself look conservative (not politically, of course, VP) and hence scientific.

    For people a little familiar with EM absorption in a gas, the expected RF absorption would have followed Beer’s Law, in which saturation occurs in the total intensity, i.e., summing over all spectral components. I.e., t(n1 + n2) = t(n1)*t(n2), where t is the transmissivity and n1 and n2 are numbers of absorbers. This is another functional equation, and in this case the solution is an exponential dependence.
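
    For what it’s worth, that functional equation can be checked numerically: any transmissivity satisfying t(n1 + n2) = t(n1)·t(n2) with t(0) = 1 is an exponential t(n) = exp(−kn), so the transmitted fraction decays and the absorbed fraction 1 − t saturates toward 1. A minimal sketch (the coefficient K below is an arbitrary illustrative number, not a CO2 value):

```python
import math

K = 0.01  # illustrative absorption cross-section per absorber (not a CO2 value)

def t(n):
    """Transmissivity after n absorbers; exp(-K*n) is the continuous solution of
    the functional equation t(n1 + n2) = t(n1) * t(n2) with t(0) = 1."""
    return math.exp(-K * n)

n1, n2 = 120.0, 250.0
assert abs(t(n1 + n2) - t(n1) * t(n2)) < 1e-12  # Beer's-Law composition property

for n in (0, 100, 200, 400, 800):
    print(f"n = {n:3d}  transmissivity = {t(n):.3f}  absorbed fraction = {1 - t(n):.3f}")
```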

    Those people armed with Beer’s Law are still wondering about the results claimed with radiative transfer and the hypothetical band wings you seem to be claiming are “confirmed by observation”. Now it’s your turn to point to the citation and page where band wings are confirmed by observation. Pierrehumbert, a sometimes credible IPCC author, could use that information for his exposition on the subject in which he says the wings are “synthesized” and “supplemented by theoretical calculations” (by which he means real calculations using lines hypothesized to exist to make the radiative transfer model turn out logarithmic.) Pierrehumbert, RT, online text, Figures 4.7 and 4.12, pp. 105 and 110, respectively.

    • Pierrehumbert, a sometimes credible IPCC author, could use that information for his exposition on the subject in which he says the wings are “synthesized” and “supplemented by theoretical calculations” (by which he means real calculations using lines hypothesized to exist to make the radiative transfer model turn out logarithmic.)

      Ok, I’m really not following here. Are you saying that the radiative transfer model is not logarithmic?

      So if not logarithmic, then what is it? And how do you reconcile this with the fact that the data shows it to be logarithmic?

      One sure-fire way to distinguish climate scientists from climate deniers is to see whether they’re proposing a theory or shooting one down.

      What climate deniers have failed to grasp is that science makes progress by competing one hypothesis against another. Climate deniers hardly ever advance a competitive hypothesis, they put all their effort into demolishing the established science the way G&T do throughout their paper.

      Climate deniers are nihilists, through and through.

      • Richard Holle says:
        December 17, 2010 at 10:28 pm

        Just an update; On the Null Hypothesis by comparison?
        Below are some of my latest thoughts on what is driving the weather and climate.

        All of the universe affects the rest of it, it all sits in a bowl of gravitational and magnetically driven mass of ions and regular atoms, that respond to the basic physics detailing the “normal rules or laws”. To think that there are voltages or ions that move without magnetic fields attached violates first principles. The magnetically permeable inductive components of planetary bodies are susceptible to Ohm’s law, and power equations apply to the full spectrum from DC to the most energetic particle seen.

        So we should be able to figure forces at work when planets have synod conjunctions, by determining the shifts of flux of the magnetic fields, with the shifting density and speed of the solar wind. When the Ulysses satellite was on polar orbit of the sun “they were amazed that the patterns usually seen in the solar wind were still there, but also much stronger than they expected by several orders of magnitude.” To me this means that the main crux of magnetic connections between the planets is in the normal distribution of concentrations at the poles/apexes of lab magnets and the large sweeping fields are weakest along the circumference, neutral current sheet, or equatorial regions, and also not only flowing with the neutral sheet of the solar wind but focus concentrations down onto the poles of the planets, as evidenced by the polar Auroral displays from the much larger loops further off of the ecliptic plane.

        The galactic magnet fields are also influenced by basic rules of action as well, which leads me to the conclusion that the interactions of the composite system from the rotation of the Galaxy, and the declinational movement of the solar system in that larger frame of reference, as well as the density waves that propagate around driving the spiral arm flux variances give rise to the longer cyclic term climatology of the Earth. Some have been found, other underlying cycles that as yet we do not have their specific drivers identified. (back to this point later)

        The heliopause seems to have auroral knotted bands (recently spotted ribbons of ion activity) on its leading side as it progresses through the interstellar gases and dust clouds, the solar system passes through in its travels. I think that this is due to the conductance of the galactic fields into or through the heliopause, coupling through the polar regions of the sun and planets, at near equilibrium, or the balance felt as steering currents in the slow transition of the orbital slowing and swaying of the solar system as it winds its way through the gravitational and radiation gauntlet, shoved around ever so slowly by the rest of the individual stars.

        So then as a result the makeup of the planetary interaction periods have become some what stable, and have formed harmonic coupled interactions between themselves, and the non-random long term slower periods. Not much is said about the tilt of the magnetic poles, of most of the planets and the sun from their spin axes. I think even this has something to add about long term climate effects. In the common hospital use of MRI scanners, the magnetic induction pulses are used to flip atomic spin axes in line with the dense fields momentarily formed with pulse current on, and watching the return to ambient spin axes when current goes off. (back to this point later) If people have learned to control the effects would not they also occur in nature if they are so predictable? If you apply the calculations with the right power increase needed to satisfy the balance of the equation, the same effects should occur with reference to stars and planets.

        If all of the planets and the sun are running along, in near balance with changes in outlying fluxes upon the solar system, disruptions in the periodic patterns should be minimal, with much greater stability being found in the harmonic patterns in the interactions between the planets of the solar system, as a result milder climate with less wild extremes would dominate at times of stability.

        Currently the magnetic poles of the sun are running ~12 degrees off of its vertical axes of rotation, with a period of rotation of 27.32 days, as a result the Earth and Moon themselves move above and below the ecliptic plane alternately, while the system barycenter scribes a smooth ellipse responding to the gravitational and tidal tugs of the outer planets as we pass them almost every 12 months plus a few days. The resultant periodic 27.32 day flux of the polarity of the solar wind as it passes the Earth creates and drives the declinational swings North and South in the two bodies, as a giant pulsed oscillator circuit, dampened by the tidal drag of the fluidity of the various parts of the Earth, small solid core, outer liquid core, fluid mantel, and fragmented floating crust, that is itself creeping along tectonically in response to the dance of the combination of the additions of the other planetary tidal, gravitational, and electromagnetic induction fluxes that keep the inner fluids warm.

        The further off of vertical, and/or the stronger the total magnetic flux of the sun’s magnetic poles, the more energy available to be driven into the lunar declinational cycle balanced by the tidal dampening into the Earth, hence the greater the solar magnetic impulse input the greater the resultant tectonic turmoil, the more extreme the weather and climate. The weaker the magnetic fields of the sun relative to the near DC fields of the galactic background levels, and the more vertical the magnetic fields of the sun the less energy gets driven into the lunar declinational movement and resultant tidal dampening energy into the Earth.

        As the spin axes and magnetic axes of the sun approach straight on alignment, the whole declinational drive component of the Moon orbital dynamic decreases, to maybe as little as a degrees either side of the ecliptic plane, changing to a more synergistic combination of the solar and lunar tidal effects at an angle of 23.5 +/_.5 referenced to the equator, keeping the atmospheric global circulation in the kind of high turbulence blocking pattern, sort of weather we have been having the past two years and the next two as well. When continued past the normal length of time (about 3 years on the down and up side) in the 18.6 year variation of the mechanism of transport of equatorial heat towards the poles, stalled in the most active section of atmospheric lunar tidal effects, coupled in sync to the solar tides as well, the long term trend then becomes a constant la nina, and an ice age sets in.

        Just as in MRI scanning the initial pulsed spin flip is nearly instantaneous, and does not seem to affect the covalent bonds the atoms are part of, so maybe the solar magnetic orientation to polar axes of rotation, flip is hardly noticeable over 100 years or less, just as the wandering of the Earth’s magnetic field pole positions are hardly noticed by the public. The ongoing dampening of the tidal movement of the lunar declinational extent at culmination would regulate the dropping rate due to actual amount of tidal dampening load transferred to the Earth. As the declination off of the ecliptic plane drive energy lessens and becomes slowly coupled out by tidal inter action, and the Lunar orbital diameter expanded to compensate slightly. This would explain the rapid onset of ice ages, and then the re-flip to off axes solar magnetic polar alignment, renew the declinational driver system again and cause the pulsation type exit usually seen from ice ages.

        The short term inter ice age, realistic application of these ideas is in the much more recent history (due to short instrument records) of the past three to five maybe (Ulric Lyons says 10 cycles works best because it = the 178.8 year Landschmidt(sp) cycle period.) Can be assembled in composite maps that use the 6558 day period of 240 declinational periods that shows analog synchronization of the inner planet harmonic effects on the weather, from just the past three cycles as seen on the daily maps here.

        http://www.aerology.com/national.aspx

        The problem left is that the outer planets have a set of harmonics of their own that induce the 178.8 year envelope on the 18.6 year mn cycle pattern that have in turn a finer 27.32 day oscillation imposed, so the complete long period of compounded modulation is, as Ulric Lyons suggests, 178.8 years long, as Landschimdt (sp) was on about, with the effects of the outer planetary returns driving the solar sunspot cycles due to SS Barycenter displacement due to Uranus Neptune synod conjunctions. The available data base gets extremely thin 178.8 years ago. Due to data limitations, I have so far stayed with just the last three cycles of 6558 days or ~18.3 years.

        On April 20th of 1993 we had the most recent synod conjunction of Neptune and Uranus, which the Earth passed on July 12th of 1993, presenting as an epic precipitation surge globally with heavy rains through the summer and massive flooding of several river system around the world. It is my contention that the increase in magnetic couplings through the polar magnetic field connections induces a homopolar generator charge increase at these times and a quick global discharge just after synod conjunction. The results of these increases in pole to equator charge increases drives positive ions off of the sea surface along the ITCZ, where by mutual static repulsion of the condensation nuclei inhibits cloud formation and precipitation, and at the same time allows more SW radiation to reach the tropical sea and land surfaces promoting rapid warming driving ENSO extremes, with the rapid precipitation that results on the global discharge side, post synod conjunction, also leaving clearer skies for additional warming after the flooding subsides.

        The lunar declination phase of the 18.6 year mn cycle was in an increasing through 23.5 degree culmination angle at the same time, being in phase with the temperature increases. By early 2005 the declinational angle at culminations was at its peak extreme, and the distance between Uranus and Neptune was separating again to about 29 days apart August 8th of 2005 for synod of Earth and Neptune and September 1st of 2005 for synod conjunction of Earth and Uranus. The Southeast gulf coast was ravaged by Katrina and Rita as a direct result of these influences. Combining with the 27.32 day period lunar declinational tides culminations they rode in on, to produce the storm intensity that resulted.

        As the outer planets Neptune and Uranus continued to separate and the declinational angle shifted past peak angle at culmination the resultant peak warming period shifted further into the late Summer and now is in the Fall in 2010. The reason I think the last season 2010 was so active but not as powerful in ACE production as 2005 was due to the addition of Jupiter in Synod conjunction on April 3rd in 2005 kicking things off, and on the 21st of September 2010 with Uranus on the same day, creating a late fast finish in 2010. But having a half hearted start of a season in 2010 as a result of the difference.

        Over all the whole period of the close Neptune and Uranus synods in the mid to late summer allowed the extra clearing of clouds and resultant heating the last 15 years of the SST and ENSO intensity periods, CO2 just was in the air along for the ride. This is all part of the 60 year patterns in the weather cycles, and can be explained as such. Now that the outer planet synod conjunctions of the Earth with Neptune and Uranus are moving into the fall and early winter, we can expect them to produce the increased snowfall events and cold polar blasts being seen in both hemispheres.

        With the investigation of these methods of predicting the extreme effects of the weather patterns they produce, long range forecasts for both weather and climate will become possible. I am betting my life savings and the rest of the creative efforts of my lifetime on it.

        ^ This new stuff I have been keeping to myself mostly, the rest of the inner planet and lunar interactions is posted to my research blog side of the http://www.aerology.com site.

        Just a coffee induced ramble here pick at it some just a compressed un referenced set of thoughts I’ve had lately.

        Richard Holle, still expanding and organizing better……

  119. Jan Pompe 12/17/10 1:03 am

    What Vaughan Pratt is trying to say is that atmospheric CO2 and temperature are in a volatile relationship, where each is trying to satisfy its own needs and desires. Sometimes they are in sync, sometimes out.

    Modeling by analogy is tricky. It rarely works.

    • Thank you Jeff,

      I think that he is trying to shoot himself in the foot.

      I just got an email from Ferenc in which he commented on this thread; he said he couldn’t understand how people can solve the boundary value problem without understanding the physical laws that govern the fluxes. Which leads me to your post above.

      Some time ago I came to the opinion that the Arrhenius et al. claim that there is a logarithmic relation between radiative forcing (erk) and CO2 concentration was due to a bit of dodgy curve fitting. The logarithm curve doesn’t look much different from an asymptotic exponential curve (e.g. 1 − e^-t) but it doesn’t have an upper bound. IOW it can lead to impossibly high values for radiative forcing theoretically, and the only bound there is, if one remembers, is that the concentration can’t be more than 100%. This does not allow for the fact that the maximum amount that can be absorbed, and which must be compensated by higher temperatures and more convection to maintain a balance of fluxes, is the available incoming flux. Any equation purporting to relate CO2 to radiative forcing really ought to have a term so that with infinite optical depth all available flux, and not more, is absorbed.
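
      That near-indistinguishability over a narrow range is easy to demonstrate: pin a logarithm to a saturating exponential at two points and the two curves agree closely in between, then part company badly once you extrapolate, with the logarithm sailing past the saturation ceiling. A minimal sketch (all constants are arbitrary illustrative numbers, not fitted to any forcing data):

```python
import math

A, S = 10.0, 400.0  # saturation ceiling and scale; arbitrary illustrative numbers

def saturating(c):
    """Beer's-Law-style response: approaches the ceiling A as c grows."""
    return A * (1.0 - math.exp(-c / S))

# Pin a logarithm exactly to the saturating curve at c = 280 and c = 560:
c1, c2 = 280.0, 560.0
b = (saturating(c2) - saturating(c1)) / math.log(c2 / c1)
d = saturating(c1) - b * math.log(c1)

def log_fit(c):
    return b * math.log(c) + d

for c in (280, 350, 420, 560, 2000, 10000):
    print(f"c = {c:6d}  saturating = {saturating(c):6.2f}  log fit = {log_fit(c):6.2f}")
```

      Inside the 280–560 interval the two curves differ by a few hundredths; by c = 10000 the logarithm exceeds the ceiling of 10 by nearly a factor of two, which is the “no upper bound” point.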

      I think the Schwarzschild radiation transfer equations, which build on Beer’s law, fill the bill quite nicely.
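
      For concreteness, the no-scattering Schwarzschild equation along a path is dI/dτ = B − I, which reduces to plain Beer’s-Law attenuation when the source term B is zero; through an isothermal layer it has the closed form I = B + (I0 − B)e^(−τ). A minimal numerical sketch, with arbitrary numbers, just to show the integration and the closed form agree:

```python
import math

def schwarzschild_isothermal(I0, B, tau, steps=100000):
    """Integrate dI/dtau = B - I through an isothermal layer with Euler steps
    and compare against the closed form B + (I0 - B) * exp(-tau)."""
    I, dtau = I0, tau / steps
    for _ in range(steps):
        I += (B - I) * dtau
    return I, B + (I0 - B) * math.exp(-tau)

# Arbitrary illustrative values: incoming intensity 100, layer Planck emission 150,
# optical depth 2 (nothing here is tuned to the real atmosphere).
numeric, exact = schwarzschild_isothermal(I0=100.0, B=150.0, tau=2.0)
print(f"numerical: {numeric:.3f}   closed form: {exact:.3f}")
# Setting B = 0 recovers pure Beer's-Law attenuation: I = I0 * exp(-tau).
```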

      • I just got an email from Ferenc

        You mean he’s given up trying to defend himself on his own without benefit of counsel? That would be very sensible had he picked a good mouthpiece. Some of us aren’t so sure about that.

        Ferenc was half right in that he computed the PE of the atmosphere on the nose. It was only its KE that he was off by a factor of 5 on. So far you haven’t convinced me that you’re right even half the time.

        You could do worse than have Ferenc as your mouthpiece.

        If I seem unkind it is only in kind.

      • You mean he’s given up trying to defend himself on his own without benefit of counsel?

        Naaaah. He’s never bothered much with blogs etc; he pokes his head in from time to time if prodded by one or more of his friends but otherwise has other things to get on with.

        It was only its KE that he was off by a factor of 5 on.

        There is only one degree of freedom of interest here and that is the vertical direction. Rotational and lateral KE are of no consequence.

        BTW you have not convinced me that you are right at all except for the occasional fluke. Maybe if you stuck to computer science.

      • There is only one degree of freedom of interest here and that is the vertical direction. Rotational and lateral KE are of no consequence.

        So if four other people were withdrawing from your bank account each at the same rate as you, you wouldn’t mind?

      • So if four other people were withdrawing from your bank account each at the same rate as you, you wouldn’t mind?

        Relevance please? It’s the vertical component that places pressure on the surface (how much greater than the weight is it?); it is also the component that provides the convection, which relates to a higher kinetic energy than that due to temperature alone. Convection does not give any additional lateral KE; advection, which is irrelevant here, does that, and similar goes for rotational KE.

      • Relevance please?

        Molecular collisions. They withdraw energy from the vertical DOF and redistribute it evenly to all 5 DOFs. Redistributions are performed every 115 nanoseconds or so. If you ignore this process you get the wrong result for calculations such as KE of the atmosphere, as FM did recently. Talking about the vertical component of KE makes no sense because unlike velocity KE is a scalar, not a vector.

        But I’m surprised you had to ask.

        it is also the component that provides the convection, which relates to a higher kinetic energy than that due to temperature alone. Convection does not give any additional lateral KE; advection, which is irrelevant here, does that, and similar goes for rotational KE.

        Convection does not give any discernible additional KE, not even vertically. Air molecules at STP move at an RMS velocity of 517 m/s. Convection at say 1 m/s adds (1/517)² = .0004% to that.
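
        Those two numbers are easy to reproduce from kinetic theory, v_rms = sqrt(3RT/M); the quoted 517 m/s corresponds roughly to nitrogen near room temperature. A minimal sketch (the temperature and molar-mass choices below are mine, for illustration only):

```python
import math

R = 8.314      # J/(mol K), gas constant
M = 0.028      # kg/mol, molar mass of N2 (air is mostly nitrogen); illustrative choice
T = 300.0      # K, roughly room temperature; illustrative choice

v_rms = math.sqrt(3 * R * T / M)   # kinetic-theory RMS molecular speed
ke_fraction = (1.0 / v_rms) ** 2   # extra KE from a 1 m/s bulk (convective) drift, relative to thermal KE

print(f"RMS molecular speed ~ {v_rms:.0f} m/s")                                # ~517 m/s
print(f"1 m/s convection adds ~ {100 * ke_fraction:.4f} % to the thermal KE")  # ~0.0004 %
```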

        We seem to have remarkably different understandings of atmospheric mechanics.

      • They withdraw energy from the vertical DOF and redistribute it evenly to all 5 DOFs.

        Oh? Is that why a parcel of air moving upward eventually slows to a stop and starts accelerating downward? Oh wait, doesn’t that damp downward motion too?

        I gather you have limited experience with vectors. If you did you would realize that lateral components of particle impacts have no net effect in the vertical direction, or for that matter in orthogonal lateral directions either.

      • BTW you have not convinced me that you are right at all except for the occasional fluke. Maybe if you stuck to computer science.

        So far I’ve been unable to figure out what you should stick to. Given your conviction that G&T are headed for a Nobel prize, I’m mystified why anyone would pay you to play around with Hitran data.

      • Do you even know what the units that HITRAN line strengths are given in mean? That confuses the heck out of most people, so if you have that under your belt I’m impressed.

      • Just noticed this sitting here and not submitted; must have been sleepy.

        Do you even know what the units that HITRAN line strengths are given in mean? That confuses the heck out of most people

        Not surprising; I’d expect most lay people would be wondering why it doesn’t contain a unit for energy absorbed over distance for concentration instead of distance^-1 molecule^-1, or more specifically cm^-1/(molecule * cm^-2).

        Physicists generally don’t have a problem because Dimensional Analysis is I believe an essential component of any physics course.
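
        For readers still puzzled by those units, the dimensional bookkeeping is short: a line intensity S in cm^-1/(molecule·cm^-2) times an absorber column density in molecule·cm^-2 gives an optical depth integrated over wavenumber, in cm^-1. A minimal sketch with order-of-magnitude illustrative numbers (not actual HITRAN entries):

```python
# Dimensional check of HITRAN-style line intensities (order-of-magnitude
# illustrative numbers only, not actual HITRAN entries).
S = 3.0e-19         # line intensity, cm^-1 / (molecule cm^-2)
N_column = 8.0e21   # absorber column density, molecule cm^-2

integrated_tau = S * N_column   # optical depth integrated over wavenumber, cm^-1
print(f"wavenumber-integrated optical depth ~ {integrated_tau:.0f} cm^-1")

# Spread over a pressure-broadened half-width of order 0.1 cm^-1, a value this large
# means the line centre is heavily saturated while the far wings are not.
```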

      • BTW you have not convinced me that you are right at all except for the occasional fluke.

        My apologies, I mistook you for someone who used the same system of logic as I do. As you’ve pointed out yourself, that was clearly a mistake on my part. I won’t make it again.

      • “My apologies, I mistook you for someone who used the same system of logic as I do. As you’ve pointed out yourself, that was clearly a mistake on my part. I won’t make it again.”

        Well I for one will not used (if I can help it) half baked analogies whereas you seem to favour them. For another I don’t think “truth” is pluralistic, tolerant or democratic nor can we expect it to be.

  120. Vaughan Pratt 12/16/10 10:53 pm

    >>>>IPCC’s current model is that CO2 may not initiate warming, but instead it [CO2] amplifies it [warming].

    >>I’d love to see the page number for that. It would [be] like Obama agreeing not to tax the rich. I hadn’t realized the IPCC had knuckled under the same way.

    IPCC said,

    >>It is very likely that glacial-interglacial CO2 variations have strongly amplified climate variations, but it is unlikely that CO2 variations have triggered the end of glacial periods. Antarctic temperature started to rise several centuries before atmospheric CO2 during past glacial terminations. AR4, Ch. 6, Executive Summary, p. 435.

    Note that IPCC makes the temperature at Vostok peculiar to the Antarctic but CO2 remains unqualified, and presumably global. The reverse is likely.

    >> Although it is not their primary cause, atmospheric carbon dioxide (CO2) also plays an important role in the ice ages. Antarctic ice core data show that CO2 concentration is low in the cold glacial times (~190 ppm), and high in the warm interglacials (~280 ppm); atmospheric CO2 follows temperature changes in Antarctica with a lag of some hundreds of years. Because the climate changes at the beginning and end of ice ages take several thousand years, most of these changes are affected by a positive CO2 feedback; that is, a small initial cooling due to the Milankovitch cycles is subsequently amplified as the CO2 concentration falls. Model simulations of ice age climate (see discussion in Section 6.4.1) yield realistic results only if the role of CO2 is accounted for. AR4, FAQ 6.1, p. 449.

    Note “temperature changes in the Antarctic” again! Further note that IPCC is explicit about CO2 being a positive feedback, but gives as an example the Milankovitch cooling, not warming. Yet just above this citation, IPCC says,

    >> Starting with the ice ages that have come and GONE in regular cycles for the past nearly three million years, there is strong evidence that these are linked to regular variations in the Earth’s orbit around the Sun, the so-called Milankovitch cycles (Figure 1). … There is still some discussion about how exactly this starts and ENDS ICE AGES, … . Caps added, Id.

    So Milankovitch cycles can warm, and CO2 is a positive feedback.

    Or, is this like your man-woman model for CO2 and temperature? CO2 is a positive feedback to cooling but not to warming? Fickle, eh?

    • Boy, you’re just like G&T. You quote other people’s stuff that I fully agree with and then proceed to shoot it down with logic I can’t follow.

      Let’s shoot all the logicians and bring on the clowns, why don’t we?

      You quoted the IPCC as saying, “It is very likely that glacial-interglacial CO2 variations have strongly amplified climate variations, but it is unlikely that CO2 variations have triggered the end of glacial periods. Antarctic temperature started to rise several centuries before atmospheric CO2 during past glacial terminations.”

      I would have been thrilled to have written that myself, it’s completely consistent with my understanding of what happened back then. You seem to think it’s insane. I think you’re insane.

      • I fully agree with and then proceed to shoot it down with logic I can’t follow.

        You keep reminding me that there is a good reason not to bother with wikipedia on logic.

      • You keep reminding me that there is a good reason not to bother with wikipedia on logic.

        You keep reminding me why it’s a waste of time for the Republican and Democratic parties to try to engage. Not that they do, they gave up on that several years ago.

  121. Vaughan Pratt 12/17/10 12:36 am

    >> (Judy, other blogs let you preview your comment before posting it. Why does WordPress not offer this feature? It puts a terrible demand on proof reading. Anyone else running into this problem with this blog?)

    Indeed! Also a spell checker would help, and the posts should appear in chronological order.

    Vaughan Pratt 12/16/10 11:52 pm

    >>Only skeptics fantasize about connections between the Sun and Earth’s climate. If Wang didn’t then he’s clearly not a skeptic.

    Your first sentence is true provided that by skeptics you include scientists. In the second sentence, though, you use skeptic in the sense of a heretic, a non-believer in the AGW dogma. Science is not about belief systems or the feelings of scientists.

    Wang et al. (2005) substantially reduced TSI compared to previous models. IPCC was delighted! It had urged that the radiation forcing from solar variations was too weak to be significant. Now it had even stronger evidence for that conjecture. Instead, Wang’s new model accounts for Earth’s global average surface temperature with one standard deviation accuracy of 0.11ºC, a variation about the same as IPCC’s smoothed estimator. That model is SGW, http://www.rocketscientistsjournal.com. The model requires only a handful of parameters to model a 150 year complex temperature structure, but requires an amplifier in the climate. Cloud albedo provides just that amplifier.

    If Wang had developed and included the SGW model, his paper would not have been accepted for publication in any IPCC-accepted, peer-reviewed journals.

    Vaughan Pratt 12/16/10 11:24 pm

    >>Before we get into anything technical, already I’m having more basic problems. Figure 7.4 in my copy of the current IPCC report is on p. 516, yours is on p. 439. WUWT?

    >>If one of us is working with superseded materials then we’d be talking at cross purposes. Let’s straighten this out before taking this any further.

    The reference was specific to the TAR, not “the current IPCC report”. Figure 7.4 of AR4, which is quite irrelevant, is indeed on p. 516. But Figure 7.4 of the TAR is on p. 439, as promised.

  122. Jan Pompe 12/17/10 8:59 am

    One little correction: Arrhenius’ claim is to a dependence between TEMPERATURE, not radiative forcing, and the log of CO2 concentration. Linking the two, temperature and radiative forcing, requires IPCC’s α coefficient, as discussed above. That coefficient is not constant, is not uniformly applied in IPCC models, and must be reckoned closed loop, especially with cloud albedo. This is the core of the problem with radiative transfer and the topic here: getting radiative forcing right, the presumption with radiative transfer, doesn’t solve the problem because RF has to be linked back to climate, i.e., global average surface temperature.

    You’re right about fitting the logarithm to the Beer’s Law result. Both are smooth and concave, so the log will fit nicely enough in a narrow region. When you say the logarithm has no upper bound, that is the same as saying it does not saturate. The logarithm is an approximation in a narrow region, but it quickly leads to impossible results not found with Beer’s Law. IPCC uses the approximation without ever investigating its validity or limits of applicability, and without citations.

    • One little correction: Arrhenius’ claim is to a dependence between TEMPERATURE, not radiative forcing, and the log of CO2 concentration

      Good to know Jeff and I don’t disagree on everything. Arrhenius was right and the IPCC is wrong. (More precisely, someone in the IPCC, since the whole IPCC does not exactly speak with one voice, however hard it might try.)

      Jeff’s emphasis on Beer’s Law is spot on. I’ve been working on this lately; it will be interesting to see how close Jeff’s results and mine are.

  123. Pekka:

    It is known that earth does not appear from space as a uniformly grey body but has a more complicated spectrum.

    Yes indeed, I couldn’t agree more; for example here is one I did from data from a radiosonde sent up in the Antarctic using HARTCODE and Octave. I really am quite familiar with all this. It is a one-line command to integrate the spectral radiance and obtain from that an equivalent grey body emissivity, and it isn’t going to be 1 like the black body curve with which it was convolved.

    Practically nothing corresponds to 254.3K except the total LW radiation energy.

    Yes, if the earth were black; but as you so kindly pointed out it isn’t, so in fact it needs to be hotter to radiate the required energy to balance the incoming.

    • I should note that the image linked is upward looking; it’s the only one I uploaded to Photobucket and I don’t have access to my desktop computer where the others reside.

  124. Jeff Glassman,

    Still, Arrhenius was talking about the relation between CO2 and temperature, not between CO2 and radiative forcing, IPCC’s novel modeling paradigm. Beer’s Law, which predates Arrhenius’s work, says that RF should be a decaying exponential (i.e., saturating) in the concentration of a GHG. That RF(CO2) is logarithmic remains a conjecture that fails under Beer’s Law. That it is logarithmic because of hypothetical band wings is still a failed conjecture, but it is at least more testable.

    Hypothetical band wings??? See this graph for calculated spectra (spectralcalc.com) for the 667 cm-1 band at different path lengths. There’s nothing hypothetical about the existence of band wings or the lack of saturation in them. You might also try reading A First Course in Atmospheric Radiation (Second Edition) by Grant W. Petty, particularly Chapter 10 Broadband Fluxes and Heating Rates in the Cloud-Free Atmosphere. You can access most of the relevant material by using the look inside the book feature here. Start on page 289.
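
    The near-logarithmic behaviour usually attributed to those wings can be reproduced with a toy band whose absorption coefficient falls off exponentially away from the band centre: once the centre saturates, each doubling of the absorber amount pushes the saturated region outward by a roughly constant width. A minimal sketch (arbitrary parameters and a crude numerical integration, nothing tuned to the real 667 cm-1 band):

```python
import math

K0, W = 100.0, 10.0   # peak absorption coefficient and wing decay scale; arbitrary values

def band_absorption(u, half_range=400.0, n=40001):
    """Wavenumber integral of the absorbed fraction 1 - exp(-u * k(v)) for a toy band
    whose absorption coefficient decays exponentially away from the band centre."""
    dv = 2 * half_range / (n - 1)
    total = 0.0
    for i in range(n):
        v = -half_range + i * dv
        k = K0 * math.exp(-abs(v) / W)
        total += (1.0 - math.exp(-u * k)) * dv
    return total

prev = None
for u in (1, 2, 4, 8, 16, 32):
    a = band_absorption(u)
    note = "" if prev is None else f"  (+{a - prev:.1f} per doubling)"
    print(f"absorber amount u = {u:2d}  absorbed width = {a:6.1f} cm^-1{note}")
    prev = a
```

    The increment per doubling settles near 2·W·ln 2, which is the sense in which the unsaturated wings, not the saturated core, produce the quasi-logarithmic dependence.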

    Beer’s Law does not apply because the incident radiation has a much broader bandwidth than the absorbing transition. This is really elementary stuff, btw.

  125. Jeff Glassman,

    However, the ice core data are not directly or properly comparable to modern records.

    That may be true for the Vostok and other central Antarctic cores (Dome C, e.g.) where the snow accumulation rate is low. It’s not true for cores from sites with much, much faster accumulation, for example Law Dome near the coast of Antarctica. The DE08 and DE08-2 cores overlap MLO and are in good agreement with MLO ( graph here MLO yellow line) when gas age is used. The DSS core from Law Dome goes back to 1006 A.D. and clearly shows a pre-industrial (1006-1796 A.D. gas age) nearly constant level of CO2 of 280 ppmv with an s.d. of 2.7 ppm.

  126. DeWitt Payne 12/17/10 4:02

    What is truly basic, even before the science, is reading and comprehending the written words.

    I did not say that “the existence of band wings” was hypothetical. Nor did I say that saturation was lacking in any part of the CO2 absorption spectrum.

    I quoted IPCC for its claim about band wings, partial saturation, and logarithmic dependence, for which it had no supporting references. I cited Pierrehumbert, an IPCC author, for what he might contribute, including his qualifications as to the origin of the wings that cause the (approximate) logarithmic effect.

    The problem is far more complex than a first year, introductory text in atmospheric radiation. Pierrehumbert brings the issues to the front, right up to IPCC, the source now of AGW. Previously I referred to Pierrehumbert’s (RTP’s) on-line text on Planetary Climate, referencing the derivation of the logarithm dependence, not in Petty, and shown in RTP’s Figure 4.12, p. 110, and discussed there. He developed that figure from his Figure 4.7, p. 105, where he qualifies the validity of the band wings. His Figure 4.7 appears to be taken from the same source as Petty’s Figure 9.10(b), p. 271. Thus my citations to RTP are the link between your first year text and the climate problem at hand.

    My complaint is clear. The logarithm does not saturate. CO2 absorption should saturate.

    You said,

    >>Beer’s Law does not apply because” blah, blah, blah.

    I searched Petty as you suggested for Beer’s Law, getting 15 hits. Petty also refers to Beer’s Law as the extinction law, and refers to extinction 72 times. I scanned through all of these citations that Amazon allows, finding repeated citations where Beer’s Law applies immediately or in the limit. I found no citation where Beer’s Law does not apply.

    I searched Petty for logarithmic, getting 5 hits. These mostly dealt with graphical scales. None referred to absorption being proportional, even approximately, to the logarithm. Your reference does not support your claim.

    You provided a link to a chart with six insets, showing the progressive decrease of transmittance of a narrow band of absorption lines as the path length increases, source spectralcalc.com. This is a demonstration of Lambert’s Law. Beer’s Law relates instead to dependence on the concentration of the gas. The fact that this narrow band has wings does not show the origin of those wings, and especially not that they are modeled based on measurements. This is true of Spectral Calc as it is of HITRAN and MODTRAN. These are simulations, and the results are the assumptions of the programmers.

    I discovered in Petty the now classic Keeling Curve. Petty, Fig. 7.5, p. 177. Petty calls this “Measurement of atmospheric carbon dioxide concentration at Mauna Loa Observatory, Hawaii.” I suspect that this curve does not comprise measurements. It appears to consist instead of a reconstituted secular concentration plus a reconstituted seasonal effect, probably well-based on measurements. The important point is understanding what is a measurement and what is not. If Petty says somewhere that the band wings are measured, that, too, would have to be challenged.

    If you have a technical point to make you need to state it completely, including references to the tough spots, but without requiring the reader to do your research or complete your thoughts. That, by the way, is basic stuff.

    IPCC reports that a climate crisis exists, predicting an impending catastrophe. IPCC bases that on climate modeling in which the absorption of CO2 does not saturate. IPCC’s entire model rests on the existence of a climate sensitivity parameter, which is equivalent to the logarithmic assumption. Any doubling of CO2 produces the same radiative forcing increment. IPCC further ties radiative forcing from CO2 absorption to a temperature increase via the “constant” α. As discussed at length on this thread, α is not constant, not consistent in the GCMs, and is dependent on the closed loop gain, which IPCC does not take into account.

    IPCC says,

    >>The RF from CO2 and that from the other LLGHGs have a high level of scientific understanding (Section 2.9, Table 2.11). Note that the uncertainty in RF is almost entirely due to radiative transfer assumptions and not mixing ratio estimates… . AR4, ¶2.3.1, p. 140.

    In the final analysis, IPCC’s problem is to link an increment in temperature to an increment in radiative forcing, predicted as much as possible by radiative transfer, using a hypothetical model for CO2 absorption. The story breaks down long before reaching a first year text on atmospheric radiation. CO2 is a short lived, lagging indicator of global temperature, which is driven by surface insolation, regulated by cloud albedo in the warm state, and locked by surface albedo in the cold state.

    • Beer’s Law applies only to monochromatic incident radiation.

      It [Beer’s Law] states that the intensity of a beam of monochromatic incident radiation falls off exponentially as it traverses a uniform medium.

      It’s also valid in the limiting case where the absorptivity isn’t a function of wavelength over the bandwidth of the incident radiation. The medium isn’t uniform, the incident radiation isn’t monochromatic or uniform either and the absorptivity varies strongly with wavelength over the bandwidth of the incident radiation. Beer’s Law is not valid for broad band atmospheric radiation.
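
      That failure is easy to see with a toy two-component “band”: each wavenumber obeys Beer’s Law individually, but the band-averaged transmission does not, i.e. T(2L) ≠ T(L)², because the strongly absorbed component dies out first and the weakly absorbed one dominates at long paths. A minimal sketch (the two absorption coefficients are arbitrary):

```python
import math

K_STRONG, K_WEAK = 5.0, 0.05   # absorption coefficients for the two components; arbitrary

def band_transmission(L):
    """Equal-weight average of two monochromatic Beer's-Law transmissions over path L."""
    return 0.5 * (math.exp(-K_STRONG * L) + math.exp(-K_WEAK * L))

for L in (1.0, 2.0, 4.0):
    print(f"L = {L:3.1f}  band T(L) = {band_transmission(L):.4f}")

T1, T2 = band_transmission(1.0), band_transmission(2.0)
print(f"T(1)^2 = {T1 ** 2:.4f}  vs  T(2) = {T2:.4f}  (equal only for a single exponential)")
```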

      You have to use line-by-line or band models derived from line-by-line calculations to calculate total absorption. Line by line programs using the HITRAN database and band model programs like MODTRAN have been validated against observed atmospheric spectra. See for example here vs here and here. MODTRAN isn’t some academic exercise either. It was developed by the Air Force for use in remote imaging and weapons development (heat seeking missiles and stealth aircraft for example). The ARM project using instruments like the AERI FT-IR spectrophotometer measures high resolution atmospheric emission spectra every day at multiple sites around the world. I’m pretty sure someone would have noticed if there were a large discrepancy between calculated and observed spectra. Many if not most of the line parameters in the HITRAN database have been calculated ab initio. The molecules in question are small and the physics of the various rotational and vibrational states and their transitions are well understood. The level of understanding of clear sky atmospheric radiation is on the same order as for NMR and x-ray crystallography.

      The physics of IR radiative transfer are, btw, no different than the physics used for estimating temperature profiles from atmospheric microwave emission measured by satellite as done by UAH and RSS. Do you throw those out too?

      • Just out of curiosity, DeWitt, do you know whether the exponential decline in CO2 absorption coefficients observed as one moves from the center of the 15 um absorption band into the wings is an accident of the distribution of quantum transitions in CO2, or is the exponential character something that can be deduced simply from the nature of the dipole that is generated in conjunction with photon absorption and the allowable vibrational and rotational transitions that can be excited?

        Could one simply look at the CO2 molecule and predict that result or must it be determined purely empirically by the spectroscopists?

  127. DeWitt Payne 12/17/10 11:30 am

    Thanks for the comment on the comparability of ice core data and modern records. The Law Dome ice cores do reduce the significance of two causes of that incompatibility. The low pass filter effect is much less because the closure time is about 30 to 40 years, instead of 30 years to 1500 years or so. Second, the sample period is much shorter, on the order of 10 years, instead of 1463 years. Nevertheless, problems and inconsistencies remain.
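
    The practical consequence of those closure times can be sketched as a simple moving-average (boxcar) filter: a short decadal-scale CO2 excursion survives a 30–40 year smoothing intact but is heavily diluted by a millennial one. A toy illustration (an artificial 50-year, 30 ppm rectangular pulse; nothing here is tied to real firn chemistry):

```python
def surviving_peak(pulse_years, amplitude_ppm, window_years):
    """Peak of a rectangular CO2 pulse after a boxcar average of width window_years."""
    if window_years <= pulse_years:
        return amplitude_ppm                           # window fits inside the pulse
    return amplitude_ppm * pulse_years / window_years  # pulse diluted across the window

for window in (30, 40, 600, 1500):
    peak = surviving_peak(pulse_years=50, amplitude_ppm=30.0, window_years=window)
    print(f"smoothing window {window:5d} yr -> surviving peak ~ {peak:4.1f} ppm of 30 ppm")
```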

    When IPCC says,

    >>The concentration of CO2 is now known accurately for the past 650,000 years from Antarctic ice cores. During this time, CO2 concentration varied between a low of 180 ppm during cold glacial times and a high of 300 ppm during warm interglacials. Over the past century, it rapidly increased well out of this range, and is now 379 ppm (see Chapter 2). AR4, FAQ 6.2, p. 465.

    and

    >> The present CO2 concentration has not been exceeded during the past 420,000 years and likely not during the past 20 million years. TAR, Summary for Policymakers, p. 7.

    it is referencing Vostok. The two spans of 420,000 and 650,000 years are the periods of the first two major Vostok analyses.

    When IPCC says,

    >> An expanding set of palaeo-atmospheric data from air trapped in ice over hundreds of millennia provide a context for the increase in CO2 concentrations during the Industrial Era. TAR, Technical Summary, p. 39.

    it is again referencing Vostok, this time to say it is the basis for its initializing CO2 concentration in its models.

    The ice core data should differ from the MLO data because of the low pass filter effect, and because of the sample interval. It should also be different because ice core data are collected from within the oceanic sinks for CO2, while MLO sits inside the plume of outgassing from the Equatorial Pacific.

    So based on closure time and sample interval, the Vostok record should not blend smoothly into the modern record as IPCC portrays in its graphics and text. That Law Dome might match MLO well is understandable, but it should not be taken as confirming that the Vostok proxies indeed match the data from instruments.

    When confirmation appears to violate physics, the time has come for skepticism. It’s time to check the graphics, the smoothing, the calibrations, the reductions, and any assumptions – and, of course, the physics.

  128. DeWitt Payne 12/17/10 8:44 pm

    >>Beer’s Law applies only to monochromatic incident radiation.

    >>>>It [Beer’s Law] states that the intensity of a beam of monochromatic incident radiation falls off exponentially as it traverses a uniform medium.

    You don’t tell us this citation is from Petty’s introduction to atmospheric radiation (p. 78), and that Petty appears to be both an AGW supporter and a radiative transfer advocate. In that IPCC environment, the investigators NEED Beer’s Law not to apply, so they ASSERT that it is a law about monochromatic radiation. You need an independent citation.

    The derivation of Beer’s Law is simple and straightforward. I provided it on 12/15/10 at 7:36 am. It is an observation about the decline in radiation intensity, transmissivity, as it passes through a medium. It should work for a white source. It makes no assumption about wavelength or bandwidth. They are different matters arising from spectral analysis.

    Where do you think the restriction to monochrome radiation arises?

    As I said previously, the fact that models and simulations produce a certain result, even a consistent result, says something about the programmers, not nature.

    I am throwing nothing out except the claim that radiation is attenuated in proportion to the logarithm of gas concentration, other than as an approximation over a narrow range of concentrations. I rely on Beer’s Law for the fact that the absorption should exhibit saturation. You throw out Beer’s Law to preserve an unsupported conjecture about a logarithmic dependence.

    But why don’t we just agree to disagree in light of the facts (1) that radiative transfer can’t predict CO2 radiative forcing, and (2) that CO2 radiative forcing isn’t predictive of climate anyway?

    Come to think of it, maybe radiative transfer fails, according to IPCC, because, unknown to IPCC, it fails to get Beer’s Law right!

    • This is why Plato suggested democracy fails…

    • Beer’s law is valid only when the absorption is equally strong for all radiation considered. Thus your derivation must contain an absorption coefficient that is constant for all radiation.

      The strength of absorption does, however, depend on wavelength. Therefore Beer’s law is not, in general, valid for other than monochromatic radiation.

      It is not correct to replace a varying absorption coefficient by its average. Doing so gives wrong results.

      • It is not correct to replace a varying absorption coefficient by its average. Doing so gives wrong results.

        Indeed. The functions f_a(x) = exp(ax) are all linearly independent whence it makes no sense to average them. Their linear independence is the basis for the Laplace transform (in both relevant senses of “basis”).

        You have no alternative but to analyze each frequency separately, and combine the forcings afterwards.
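
        A minimal numerical sketch of that point, with two invented absorption coefficients standing in for two spectral intervals:

          import numpy as np

          # Band transmission is the average of two exponentials, which is not itself
          # an exponential: no single "effective" k reproduces it at every path length.
          k1, k2 = 0.1, 10.0                    # invented absorption coefficients
          paths = np.array([0.1, 1.0, 10.0])    # absorber amounts

          true_T  = 0.5 * (np.exp(-k1 * paths) + np.exp(-k2 * paths))  # per-frequency, then average
          avg_k_T = np.exp(-0.5 * (k1 + k2) * paths)                   # Beer's law with averaged k

          for x, t, a in zip(paths, true_T, avg_k_T):
              print(f"path {x:5.1f}:  frequency-resolved T = {t:.4f}   averaged-k T = {a:.2e}")
          # At long paths the averaged-k column is absurdly opaque, because in the correct
          # calculation the weakly absorbed half of the radiation keeps getting through.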

        You also need to analyze each pressure separately, otherwise you will grossly overestimate absorption at low pressures, as others have pointed out. Use the formula, Jeff. (Use the force, Luke.)

        Also note that lapse rates vary by a factor of 2 or more as a function of all three of latitude, altitude, and time of year. Anything not taking that into account can only get to within a factor of 1.5-2 of what you’d get taking them into consideration.

        Also don’t ignore “trace” gases, some of which can have unexpectedly strong lines.

  129. Vaughan Pratt on December 17, 2010 at 4:17 pm writes:

    How is that a contradiction?

    Easy

    For radiant heat it fails whenever the cooler object is at some distance from the hotter one and is radiating a large amount of sufficiently narrow-band radiant heat focused in a sufficiently narrow beam at the hotter object

    Perhaps you are confusing colour temperature with the colour a black object would be at that temperature. Well, sorry, Wien’s displacement law does not work for monochromatic or narrow-band sources. The temperature can be determined from the flux density, so you suggest that focusing the energy in a narrow beam can cause the object to warm a hotter object radiatively. Then later you state:

    But this can only heat that point to the temperature of the Sun. QED

    Both can’t be true: either you can make the radiation hotter than the source by concentration or you can’t.

  130. Whatever side of the argument you are on, you must admire the gallantry of Vaughan Pratt.
    Time after time he attacks the G&T paper.
    Picking up the occasional phrase or word, he tries to use it to invalidate their conclusions.
    It’s not for want of trying that he has so far failed.
    Contrast this with Pekka Pirilä, chris colose and A Lacis.
    All three share the same point of view about the G&T paper.
    They are generally not shy in posting, yet they refuse to go into any detail on the G&T paper.
    They seem to be hiding behind the ‘Phil Jones’ defence:
    I’m not going to tell you as you would just try to prove me wrong!

  131. Vaughan Pratt 12/18/10 1:50 am,

    You are operating from the basis of “stuff that I fully agree with” and “my understanding of what happened back then”. That response is an error that can be traced to your misconception expressed in your denial that “Science is not about belief systems or the feelings of scientists.”

    I gave you a reference to a report by page number, and you looked up the page number in a different report. As you say, “Before we get into anything technical, already I’m having more basic problems.” 12/16/10 11:24 am. Indeed! When told you’re looking in the wrong report (12/17/10 8:52 am), you say “I defer to your expertise”. 12/18/10 2:03 am. Looking in the right reference is scarcely a matter of expertise.

    On 12/17/10 9:30 am, I gave you three citations from IPCC, annotated to indicate the points I thought significant to answer your question about where you, too, could read about IPCC’s conjecture that CO2 amplifies warming without causing it. You picked out one of the three, ignored its notes, and leapt to the conclusion that fit your preconceived notion.

    Readers here might be surprised to learn that Vaughan Pratt’s Master’s Thesis is “Translation of English into Logical Expressions”. Seriously!

    You asked, >>So why should one prefer the TAR to AR4? And is this just your opinion or is it more widely held?

    First, recognize that IPCC owns the AGW scare. It didn’t initiate the scam, but amplified it. It has written a series of reports, complete with supplementary reports. You can see a full list on line at http://www.ipcc.ch/publications_and_data/publications_and_data_reports.shtml . IPCC replaced its First Assessment Report 1990 (FAR) and its Second 1995 (SAR) in their entirety with the Third 2001 (TAR). The FAR and SAR are of historical interest, and have just recently been made available on line. The Fourth Assessment Report 2007 (AR4) augments, updates, and supplements the TAR, but does not replace it. This is not just an opinion. You can read the scope of AR4 chapter by chapter. More generally, the AR4 panel is distinguished from the earlier reports in that “This report should not repeat matters already assessed in previous IPCC assessment and Special Reports.” http://docs.google.com/viewer?a=v&q=cache:ciKx8XCIeAcJ:www.aiaccproject.org/meetings/Buenos_Aires_04/DAY1/Aug_24Canziani.ppt+%22Fourth+Assessment+Report%22+introduction+%22third+assessment+report%22+OR+TAR&hl=en&gl=us&pid=bl&srcid=ADGEESjOIAGEZbuRaGbcfvSxbBbbnDHkBSX8L9ZRsRFLwIvxH6SSH-sjsSQdViE9iAfynkXuZdQgAUl4ctGKmdiFld8g4CM7ByRKUPoGmakf6GKbrOGPYuPpQCLHdLiwusOd_OuJESBR&sig=AHIEtbTyw4ufrL4QMb4jJJrI7E-UbGzPfw .

    Vaughan Pratt is a member of the American Mathematical Society and the Mathematical Association of America. That reminds me of Plato’s observation: “I have hardly ever known a mathematician who was capable of reasoning.” Plato was on to something. Mathematics is not science, but a language of science. A bachelor’s in math is a poor start for someone who wants to practice science. Read about Gavin Schmidt, for example. Gavin Schmidt on the Acquittal of CO2, http://www.rocketscientistsjournal.com .

    Vaughan Pratt considers himself a computer scientist, having received his bachelor’s and master’s in Australia. http://boole.stanford.edu/pratt.html . He is a member of the Stanford Computer Science department, and as a courtesy, a member of its Electrical Engineering Department. Now I don’t know about the Aussie computer science curriculum, but the US has two distinctly different versions of CS. One is a department in a School of Engineering, and the other is a standalone department or school. The former emphasizes broad training in physics, where one might learn what science is by osmosis. (The philosophy of science is taught in philosophy departments, which have no idea what science is.) The independent CS schools replace the intensive physics courses with training in compilers, computer languages, operating systems, and such. As a result a student in the latter gets short shrift in science. Could that have been the case with Dr. Pratt?

    But Dr. Pratt’s problem is not just with science, nor even with logic. He is having difficulty following a discussion. He is not assembling everything said before jumping to conclusions. He needs to write less and read more.

    So, when pinned down, Dr. Pratt resorts to insults: “I think you’re insane”. Whether an insult or an ad hominem, and whether excusable or not considering his background, it remains a breach of etiquette and decorum here.

    • So, when pinned down, Dr. Pratt resorts to insults: “I think you’re insane”. Whether an insult or an ad hominem, and whether excusable or not considering his background, it remains a breach of etiquette and decorum here.

      I would have said putting words in my mouth was a breach of etiquette. Whom did I accuse of insanity?

      The deniers on this forum
      are the ones who lack decorum.

      When pinned down I identify the precise point in your argument where its logic breaks down. I would appreciate it if deniers would do the same thing, but instead they just wave their hands vaguely and accuse me of whatever comes into their mind at the moment while dismissing my reasoning equally vaguely. They see no difference between insults and pointing out logical flaws. It’s really pathetic.

      Deniers might not be insane but they sure can’t reason their way out of a paper bag.

      • Vaughan Pratt 1/11/11 at 4:37 am, on Confidence … thread

        You asked, >>Whom did I accuse of insanity?

        Me. 12/18/10, 1:50 am, above.

        You write, >>When pinned down I identify the precise point in your argument where its logic breaks down.

        Previously, you wrote, >>I don’t see Jeff Glassman as a spoiler, what I don’t know is where he sits relative to the threshold of manageability. On the one hand he makes elementary mistakes, on the other these may be sufficiently common mistakes as to be worth addressing. 12/21/10, 10:44 pm.

        I responded, >>You can apologize by doing me, and the readers here, the courtesy of listing categorically those elementary mistakes of mine that you perceive, providing explicit references to each. Leave nothing out. Let’s be finished with this despicable tactic. 11:49 pm.

        I’m still waiting for your apology and list of specifics. A list of one would be a welcome start.

        The conclusion I draw is you will be pinned down only when you say you are pinned down.

      • Jeff, the precise point in your argument where its logic breaks down is when you infer from the statement (not just made by me, btw) “you make elementary errors” to the statement “you are insane.” This may follow by your reasoning, but to the best of my knowledge it is not considered a sound inference by the mental health profession.

        Though I must say that in the informal nonmedical sense of “insane” (as in “that’s insane”), a lot of the arguing on this list is just as insanely illogical.

        I should not have to say this, but in view of the circumstances I will: it does not follow that someone who utters an insane statement is himself insane. One’s dealings with one’s tax adviser might be perfectly sane even while one is speculating insanely to the adviser about global warming.

        It also does not follow that if one person affirms global warming and the other denies, then one of them must be uttering an insanity. If one took say 1000 ppmv CO2 as a minimal requirement for global warming then it would be perfectly consistent for an Earthling to deny global warming and a Martian to affirm it. It would not however be consistent for a Yank and a Finn to have opposite opinions on the question, since “global warming” is by definition global.

        But this indicates a new and as far as I know largely untapped source of new logic for deniers. One could admit almost all of the known physics of global warming, including that temperature is going through the roof, but deny that it is global.

        If Finns assert warming and Yanks deny it, why it’s clear that the problem is in Finland. The high standard of living and close quarters of Finland is evidently creating the circumstances the Finns are complaining about. The clean wide open spaces of America present no problem here, warming is not an American problem, it’s only a problem in those countries where it is observed. Come to America, look up into our crystal clear skies (on a nice sunny day) and see how refreshingly free America is from CO2.

        It would be about as logical as any of the many other arguments denying global warming. Maybe we could have a competition for the most logical, the most plausible, the most convincing argument against warming of any kind on this planet.

      • Jeff, the precise point in your argument where its logic breaks down is when you infer from the statement (not just made by me, btw) “you make elementary errors” to the statement “you are insane.”

        Vaughan Pratt on December 18, 2010 at 1:50 am wrote in response to Jeff Glassman:

        I would have been thrilled to have written that myself, it’s completely consistent with my understanding of what happened back then. You seem to think it’s insane. I think you’re insane. [added emphasis]

        It’s explicit; no need to infer anything.

      • Touché, Jan. :) Good memory.

        Not that there’s much new here. My friends have been telling me lately that I’m insane. “What are you doing posting on Climate Etc?” they say. “You must be insane.”

        All the world is mad save thee and me, Jan, and even thee’s a little strange sometimes.

      • Jan Pompe 1/11/11 at 9:01 pm, on Confidence … thread

        You respond to Vaughan Pratt quite correctly, >>It’s explicit; no need to infer anything.

        Parse VP’s statement: “the precise point in your argument where its logic breaks down is when you infer from the statement (not just made by me, btw) ‘you make elementary errors’ to the statement ‘you are insane.'” Note he did not say infer anything.

        One can’t be sure he even meant the word “infer”. Imply is often confused with infer, but that word doesn’t work here, either.

      • One can’t be sure he even meant the word “infer”. Imply is often confused with infer

        Jeff, I acknowledge your expertise in logic and bow to it. Your second sentence is absolutely correct. As a fellow logician I have been making the same point myself for four decades now and am delighted to see it getting some recognition in subjects as far removed from logic as climate science.

      • Vaughan Pratt 1/11/11 at 2:13 pm, on Confidence … thread

        Time out. Let’s review. You posted two topics that have no earthly connection: (1) you feigned not to know whom you called insane, and (2) how great you argue when pinned down. 4:37 am.

        I responded categorically, keeping the topics separate, each under its own heading. 1:44 pm.

        Now you write, >>Jeff, the precise point in your argument where its logic breaks down is when you infer from the statement (not just made by me, btw) “you make elementary errors” to the statement “you are insane.”

        First, I made no argument to have suffered its breakdown.

        Second, you compound your discourtesy, refusing to provide even one example of my “elementary errors”, by saying other people agree with you. How many people agree with you is irrelevant. I’d bet that Ted Kaczynski would agree with Jared Loughner, too.

        Third, I did not confound your imaginary elementary errors observation with your insane insult.

        Fourth, you did not use the word “infer” correctly. It requires a conclusion; it’s not “from … to …”, but “from … that… “.

        Fifth, with that start, the rest of your post is incomprehensible.

        You’re lucky that playing solitaire does not require a full deck. Statler Bros., “Flowers on the Wall”, 1966.

      • Second, you compound your discourtesy, refusing to provide even one example of my “elementary errors”, by saying other people agree with you.

        Unfortunately for your argument, as often happens you have this backwards. They were not agreeing with me, they were complaining about your errors with no reference to me whatsoever. I only came later, and I only pointed out that people had been pointing out elementary errors to you. Those people even said themselves that your errors were elementary, without any prompting from me. If anything I was agreeing with them. I don’t see why you’re singling out such a secondary critic as me when you have all those primary critics you ought to be answering to.

        Third, I did not confound your imaginary elementary errors observation with your insane insult.

        Good thing you have Jan Pompe around to remind you how that went. I still have this vague sense that I can’t quite put my finger on, that there’s something insane about this entire discussion, but I’ll let that pass.

        Fourth, you did not use the word “infer” correctly. It requires a conclusion; it’s not “from … to …”, but “from … that… “.

        No disagreement there.

        Fifth, with that start, the rest of your post is incomprehensible.

        Why am I not surprised?

      • Vaughan Pratt

        What’s with this “denier” crap.

        I could easily call you a denier.

        You deny the Clausius statement that heat only flows from a higher to a lower temperature.
        You deny the work of the great experimental physicist Wood who proved that the radiative effects of CO2 and H2O at atmospheric temperatures are so small that they can be ignored for most purposes.

        Where does name calling get us?

        The obnoxious connotations of that word cannot have escaped you!
        Judith Curry only asks that we discuss the science in a frank but respectful way.

      • I deny that Pluto is a planet. I am a planet-Pluto denier.

        As far as Wood’s experiment goes, you seem to be picturing it as something like the Michelson-Morley experiment, carried out with the usual care of a major physics experiment. All Wood did was hastily measure “the temperature inside the box” and report it was 55 °C with no numerical data whatsoever about the experimental apparatus. He also had to fudge it by putting a sheet of glass over the salt without investigating whether or not that would make the salt work like glass. Bryan, it wasn’t even a real experiment, Wood even said as much in his tiny 1.5 page note describing what he did, saying “I do not pretend to have gone deeply into the matter.” He could not have made this more obvious in the casual manner in which he documented it. In the November issue of the same journal Wood did have a much more seriously worked out paper, 11 pages long, which too turned out to be wrong (noted optics physicist Sir Arthur Schuster pointed out that Wood had failed to distinguish between coherent and incoherent radiation).

        If Wood had investigated further he would have realized that the concept of “temperature inside the box” is very badly defined, just like climate sensitivity. It is heavily dependent on the precise position of the thermometer, which Wood did not notice because he did not spend any significant time on making that measurement. He should have smelled a rat with his first reading of 65 °C, which he attributed to salt passing more insolation than glass (which is false) instead of the more likely reason that he inadvertently put the thermometer in a hotter part of the box without realizing temperature varies greatly inside a hotbox. Putting glass over the salt only made his experiment less meaningful, not more, because that gave the salt box a glass window. Astronomer Charles Abbot, then director of the Smithsonian Astronomical Observatory, later Secretary of the Smithsonian, published an article critical of Wood’s conclusions in the same journal five months later and the world proceeded to forget the whole thing for over half a century before someone stumbled over Wood’s paper in the 1980s and said “Hey, lookit this neat experiment.”

        Wood had many excellent results, both before and after 1909, he just didn’t bowl 300 every time.

        Regarding the term “climate denier,” if you’re not one then I wasn’t referring to you.

        But in that case you can’t speak for those who do deny climate change. As far as appropriateness of the term is concerned, may I refer you to minute 55:00 of Lindzen’s presentation at the Cooler Heads event. Lindzen says “To the extent possible I am a climate denier.”

        Since Professor Lindzen has been a noted authority on climate for decades, I consider his judgment on the matter sound, and obnoxious to no one that agrees with Lindzen on this question. If you’re not in Lindzen’s camp then I don’t see what you have to complain about.

      • Vaughan Pratt

        …..”As far as Wood’s experiment goes, you seem to be picturing it as something like the Michelson-Morley experiment, carried out with the usual care of a major physics experiment.”…..

        Wood did all that was required to prove that:
        1. A glasshouse heats up by stopping convection rather than confining radiation.
        2. The radiative effects of CO2 and H2O are so small at atmospheric temperatures that they can be ignored for most purposes.

        I’m sure that if R W Wood had known that the “greenhouse theory” was going to confuse people into thinking that CO2 is a pollutant he would have gone into the matter in more detail.

      • Wood did all that was required

        One cannot say this of an experiment that commits elementary errors in experimental method. Wood did this in spades.

        I’m sure that if R W Wood had known that the “greenhouse theory” was going to confuse people into thinking that CO2 is a pollutant he would have gone into the matter in more detail.

        My position exactly. At least we agree on one thing.

      • One cannot say this of an experiment that commits elementary errors in experimental method. Wood did this in spades.

        Has anyone repeated his experiment without the errors?
        If not, one has to wonder why – especially as it’s quite easy and inexpensive.
        And I have to agree that he did show that a glasshouse heats up by stopping convection rather than confining radiation. Anyone who has ever opened a window or vent at the top of a greenhouse knows that.

      • It is certain that both factors contribute. It is not likely that any scientific articles have been published just to prove the importance of both mechanisms, but there are certainly related studies, as many solar heat collectors are based on the same effects.

        The importance of stopping the outgoing LW radiation is mentioned in this description of flat plate collectors first, while stopping convection is mentioned only as an additional benefit.

        I am sure that many articles have been written on finding optimal materials, which are maximally transparent for the incoming radiation and maximally efficient in stopping the outgoing radiation, but I leave the search of such articles to those more interested in the details.

      • Yes, both factors contribute. However, what Wood set out to disprove was the prevailing view at the time which held that the blocking of radiation was the principal player in the greenhouse effect.
        While he didn’t quantify the effects of radiative blocking, he did show it to be no more than a very minor factor – which is what he set out to do.

      • The quantitative importance of blocking radiation by the glass windows depends on details of the situation. The influence is proportional to the temperature difference of the window and the interior surfaces that are radiating.

        In some situations internal convection makes this temperature difference small. Then the radiative effects are small.

        In other situations the internal convection is not as efficient and the temperature difference grows. This effect is likely to be important, e.g., during the night under clear sky.

        Greenhouses are often made of plastics. Some plastics are rather good absorbers of IR, but ordinary polyethylene is not. Special polyethylene is manufactured for greenhouses by adding suitable additives. Stopping IR is the only reason for this and it increases costs, but still it is often worthwhile. This is one commercial proof for the fact that stopping IR is also significant.

      • Pekka Pirilä

        …….”Special polyethylene is manufactured for greenhouses by adding suitable additives. Stopping IR is the only reason for this and it increases costs, but still it is often worthwhile. This is one commercial proof for the fact that stopping IR is also significant.”….

        The point of the experiment below was to find out whether IR blockers improved the “greenhouse effect”; sadly for believers in AGW, it did not!

        http://www.hort.cornell.edu/hightunnel/about/research/general/penn_state_plastic_study.pdf

      • Bryan,
        When I wrote that message, I had not read the article, but I have now commented on it.

        You are right that in the conditions of that study they did not see any effect. That tells us that in many cases the effect is insignificant, but that is easy to understand afterwards. I have, however, commented on the importance of internal convection several times earlier. Thus my explanation is not fully after-the-fact.

        Whether the effect on IR radiation is important under different conditions is another issue. My belief is that it is important in many better insulated greenhouses here in Finland. They often have double-layer bubble plastic (EVA) as the covering material and may even have real double glass or polycarbonate windows.

      • Peter317

        G&T tested out part of R W Wood’s experiment for themselves in their paper.
        Of more interest is the experiment below, which reaffirms Wood’s conclusions although the authors perhaps had never heard of R W Wood.

        The aim of their paper was to improve a modern greenhouse.

        Polyethylene is transparent to IR, a bit like the rock salt that Wood used.
        It would seem reasonable, then, that if there were such a thing as a “greenhouse” effect this material would be less suitable, as no greenhouse effect would be possible.
        Hence the addition of IR blocking additives in an attempt to improve the performance of polyethylene.
        This effectively turned the polyethylene into “glass”, that is, transparent to solar radiation but opaque to the IR part of this radiation.
        The results of the test show that the use of IR blocking makes no difference to the polyethylene.
        Thus it seems reasonable to conclude that the “greenhouse effect” is so small as to be ignored for most practical purposes.
        Another odd result shown up by the experiment is that on some cold winter nights the temperature inside the polytunnel dropped below the ambient temperature outside the polytunnel.
        I interpret this as showing that the circulating air helps us retain a higher near surface temperature more so than radiation from the cold night sky.
        http://www.hort.cornell.edu/hightunnel/about/research/general/penn_state_plastic_study.pdf

      • Vaughan Pratt
        Incidentally Pekka Pirilä seemed to think that you had made a major step forward in defense of the traditional greenhouse theory by conducting an experiment that refuted Wood.
        However he did not have a source to back up his claim.
        Perhaps you could point me to your article and I will give it a critical review.
        In the meantime you might like to look at the experiment posted below.
        These folk just wanted to grow bigger peppers but incidentally supported Wood’s conclusions about the radiative effects of atmospheric gases.

        http://www.hort.cornell.edu/hightunnel/about/research/general/penn_state_plastic_study.pdf

      • This week I’m working on an article for a robotics conference. There are also a couple of other papers ahead of my write-up of my attempted duplication of the Wood experiment.

        If in the meantime you’re interested in duplicating it yourself I can point you at sources for rock salt windows and suitable digital temperature sensors and microcomputers for collecting data from them. To date I have many thousands of temperature readings from a range of positions in the glass and salt boxes, but I need to work on improving the measurement reliability, which Wood paid no attention to whatsoever, not to mention working with far more primitive equipment (this was 1909, recall).

        If you would prefer to have others do this work then you will need to be patient. The world is not at your beck and call if you aren’t at theirs.

      • Vaughan Pratt

        I’m sorry to hear that your previous experiment to test the conclusions of R W Wood was a failure.
        Pekka Pirilä must also be sorry as it formed part of his belief in exaggerated radiative properties of CO2.

        As you know, polyethylene is transparent to IR, a bit like the rock salt that Wood used.

        It would seem reasonable, then, that if there were such a thing as a “greenhouse” effect this material would be less suitable, as no greenhouse effect would be possible.
        Hence the addition of IR blocking additives in an attempt to improve the performance of polyethylene.

        The results of the test show that the use of IR blocking makes no difference to the polyethylene.

        Thus it seems reasonable to conclude that the “greenhouse effect” is so small as to be ignored for most practical purposes.

      • Thus it seems reasonable to conclude that the “greenhouse effect” is so small as to be ignored for most practical purposes.

        Maybe not ignored, but manageable. Consider ABL, which uses a near-infrared laser to put energy on target after passing through significant lengths of this dreaded photon gobbler known as the atmosphere.

      • Using the term “manageable” seems completely correct.

      • Just a brief note about polyethylene greenhouses and “the greenhouse effect”. Greenhouses conserve heat by impeding convection, and so the IR absorbing properties of the cover are largely irrelevant. One might ask the question, “Wouldn’t IR opacity also contribute appreciably to heat retention if the atmospheric greenhouse effect is significant?” The answer is “no”. The reason is that the downwelling IR from immediately above the greenhouse is almost identical to the upwelling IR reaching the greenhouse ceiling – it is only over the height of the entire atmosphere that the atmospheric greenhouse effect exerts substantial warming.

        Regarding nighttime cooling, it is correct that advection of warm air from outside a region via wind can ameliorate the cooling due to loss of heat from the surface. If air circulation is impeded, the cooling will be greater.

      • As an aside to my above comment, if a greenhouse were built at a very high altitude, where atmospheric optical thickness is small, and if the air inside the greenhouse were pressurized to surface levels, within-greenhouse IR would become very relevant. IR escape from within the warmed greenhouse would be significant, because the surrounding air would lack the capacity to back radiate almost all the escaping IR. At the Earth’s surface, an increase in photons traveling upward will be balanced by an almost commensurate increase in those traveling downward, with only a small differential.

        This principle is related in part to why high clouds tend to exert a warming influence, whereas low clouds predominantly cool based on their light scattering properties. At low altitudes, the greenhouse effects of cloud water add little to the strong greenhouse effect of the surrounding cloud-free air. At high altitudes, cloud greenhouse effects greatly exceed that of the air they replace. They trap and release heat at correspondingly greater rates.

      • Fred,
        The effects that you describe are important even at low altitudes, for example here in Finland during nights of clear sky and low humidity. Under those conditions the temperature of open surfaces may drop many degrees below its value a few tens of meters above the surface. There is then an inversion in the lowest atmosphere, and the temperature of the surfaces is even lower than that of the air close to the surface.

      • That’s a good point, Pekka. Also relevant to these discussions – I would add that if reducing polyethylene IR opacity is going to make any meaningful difference, it would most likely be at night, when outside cooling is the dominant phenomenon. During sun-warmed daytime temperatures, the outside air and the nearby surfaces will be radiating IR into the greenhouse at higher levels.

      • Fred and others who still follow this old chain,

        This discussion on the properties of actual greenhouses may be marginally significant to the discussion of how good an analog the real greenhouse is to the “greenhouse effect” caused by greenhouse gases. Otherwise all these issues are essentially irrelevant for the further understanding of the greenhouse effect.

        Similar effects are present also in the atmosphere, where convection maintains the actual lapse rate, but this is included in all serious considerations. Therefore the detailed understanding of real greenhouses is just a curiosity. It just reminds us that convection is indeed important.

      • Fred Moolten

        …….” I would add that if reducing polyethylene IR opacity is going to make any meaningful difference, it would most likely be at night, when outside cooling is the dominant phenomenon.”…..

        In fact they did not find any “greenhouse effect” at night inside the polytunnel.

        http://www.hort.cornell.edu/hightunnel/about/research/general/penn_state_plastic_study.pdf

      • Further addendum (a few hours after other comments in this exchange) –

        I would say that the relative unimportance of IR shielding applies mainly to greenhouses not supplemented with auxiliary internal heating devices to raise their temperature well beyond what sunlight alone would do. If the internal/external temperature differential is made high enough, IR escape would be likely to increase heat loss significantly. In the Penn State study Bryan refers to, such devices were apparently not employed. However even there, some reduced nighttime heat loss was probably achieved with shielding, although the effect was minor.

      • Fred Moolten
        …..” The reason is that the downwelling IR from immediately above the greenhouse is almost identical to the upwelling IR reaching the greenhouse ceiling – it is only over the height of the entire atmosphere that the atmospheric greenhouse effect exerts substantial warming.”…….

        I agree with you; however, most believers in AGW have a simplified picture of the greenhouse gases causing significant thermalisation near the Earth’s surface, leading directly to significantly increased air temperatures.
        You seem to have the same view of the significance of Woods experiment as myself.
        Regarding nighttime cooling, there again we have a similar interpretation of the situation.

      • I’m sorry to hear that your previous experiment to test the conclusions of R W Wood was a failure.

        I would be too if it were. I have no idea what I said that made you think either that it was “previous” (it is ongoing) or a “failure” (where did I say that?).

        All measurements I’ve made to date show far more heat trapped by glass than salt, even when the measurements vary slightly from box to box as a result of hard-to-control variations in construction. Wood only had one box per window and therefore had no way to attach error bars to his experiments. I’m using multiple boxes for each type of window in order to estimate the variance in the readings. This is tedious and the experiment is competing for my time with other projects.

        Posting here has been one project but I’m afraid I’m going to have to give this up in order to free up more time to get some useful work done. Affirmers and deniers simply talk past each other without hearing each other, as you’ve just very nicely demonstrated with your “sorry to hear it was a failure,” whence posting here cannot reasonably be considered useful work.

      • Ok, I read the Rasmussen-Orzolek study. I see where they didn’t see much advantage to attempts to increase the IR blocking of the films. I infer from this that the films were already blocking as much IR as they could without these additives, which therefore couldn’t enhance warming much further. I would have predicted this myself.

        Or are you saying that they had films that blocked no IR at all without the additives? If so I’d love to get my hands on such films, they’d be a lot cheaper than salt windows for duplicating Wood’s experiment!

        You may not realize it but films that don’t block IR are the exception, not the rule.

      • Ordinary polyethylene is almost transparent to IR and it is certainly common.

        It is obvious that the IR transmittance of the material is of little significance if the window material has the same temperature as the surface whose radiation is being blocked. This is certainly a natural situation in many greenhouses due to effective convection induced by temperature differences related to variation in solar radiation caused by various obstacles. The influence may grow only when the internal convection is weak.

        In the Rasmussen – Orzolek study, the greenhouse could not prevent cooling of the tube to essentially the ambient temperature. In a better insulated greenhouse, the effect of IR during the clear nights is expected to be larger.

      • Certainly 15 micron thick plastic wrap (what Americans call Saran wrap and Australians Glad wrap) is readily available (as close as your kitchen) and largely transparent to IR. I took advantage of this in some earlier Wood-style experiments I did back in 2009 using Saran wrap Premium (the polyethylene variety of Saran wrap) as a poor-man’s salt window, described at

        http://thue.stanford.edu/WoodExpt

        This is far more primitive than my current setup, requiring manual recording of temperatures. On the other hand it has three times as many thermometers per box as Wood’s experiment, allowing a determination of how much temperature can vary within the box, a point Wood failed to consider.

        However Saran wrap is unsuitable for fabricating greenhouses, which is what G&T claimed to be investigating. If you need sheets twenty times as thick as Saran wrap to withstand strong winds, and if the transmissivity of Saran wrap at a given frequency excluding reflection is 0.9, which is fine for a Woods-type experiment, its transmissivity at this greater thickness will be 0.12, which is quite different.
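
        The thickness scaling quoted there follows directly from Beer’s law at a single frequency; a two-line check (the 0.9 figure is the one assumed above):

          # a film transmitting 0.9 at unit thickness (reflection ignored)
          # transmits 0.9**20 at twenty times the thickness
          print(0.9 ** 20)   # about 0.12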

      • As far as direction of heat goes, what is true is the net flow of heat is from the higher to the lower temperature. It is false to say that no heat flows from the lower to the higher temperature: with radiation the flow is bidirectional. If you fail to recognize this, you cannot correctly compute the net flow of heat between two objects at different temperatures, because you must take into account the heat flowing from the cold to the hot one. There is no other correct way to do it.
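
        A minimal sketch of that bookkeeping for two black surfaces; the temperatures are purely illustrative:

          # Net radiative exchange between black surfaces at Th and Tc (Th > Tc):
          # the net flow is the difference of two one-way flows.
          SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
          Th, Tc = 288.0, 255.0     # illustrative temperatures, K

          hot_to_cold = SIGMA * Th**4
          cold_to_hot = SIGMA * Tc**4
          net = hot_to_cold - cold_to_hot   # ~150 W/m^2 here; always from hot to cold

          print(f"hot->cold {hot_to_cold:.1f}, cold->hot {cold_to_hot:.1f}, net {net:.1f} W/m^2")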

      • Vaughan Pratt

        ….”As far as direction of heat goes, what is true is the net flow of heat is from the higher to the lower temperature.”……

        There is NO NETT FLOW OF HEAT.
        Heat has the thermodynamic capacity to do WORK.
        No heat flows from a colder to a hotter surface.
        Get a thermodynamics textbook and have a look at the Carnot Cycle.
        You really are a very slow learner!

      • You really are a very slow learner!

        Back in 1910 Planck considered Einstein a slow learner. I’m considerably slower than Einstein so you may have a point there.

        Would you consider yourself more in Planck’s league or Einstein’s? Seriously, let me know!

        Get a thermodynamics textbook and have a look at the Carnot Cycle.

        Bryan, allow me to introduce you to the concept of radiation, which may well be new to you in view of your comments. Radiation is energy in the electromagnetic spectrum. You may not be familiar with most of it because you can only see a single octave of it, between 400 and 700 nanometers. You may have heard of xrays, ultraviolet, infrared, and microwaves, but those are all invisible to the human eye. Since seeing is believing you may therefore well not believe in more than that single octave of the EM spectrum that we can all see.

        Given your state of understanding of physics I’m not sure where to begin in trying to persuade you of the existence of the rest of the EM spectrum. Any insights from you in that regard about what you know and don’t know would therefore be very helpful.

        Another possibility is that you do believe in the EM spectrum, but you are firmly convinced that the Carnot Cycle has something to do with that spectrum. In that case I would be fascinated to hear more about that connection.

        A third possibility is that you think radiation has nothing to do with heat. If so I would love to know how you arrived at that conclusion.

        There are yet more possibilities, such as dementia, or denial, or schizophrenia, or voices, etc. The list is seemingly endless as to ways in which one can fail a physics exam. It may well be the case that every one of them is represented on Judith’s blog. It sure feels that way.

      • Vaughan Pratt

        …….”Bryan, allow me to introduce you to the concept of radiation, which may well be new to you in view of your comments. Radiation is energy in the electromagnetic spectrum.”……..

        You really will need to brush up on your history of science.
        You seem to think that Clausius was unaware of radiation in
        1865 when he proposed the second law.

        Yet this had been investigated much earlier;

        Prevost’s “theory of exchanges” (1791) was the explanation of the earlier Pictet experiment.
        adsabs.harvard.edu/abs/1985AmJPh..53..737E

      • You really will need to brush up on your history of science.
        You seem to think that Clausius was unaware of radiation in
        1865 when he proposed the second law.

        I didn’t say that. De Saussure was aware of radiation in 1767. I only said Clausius was unaware of Stefan’s law. Before Stefan, what you say now was quite right then. After Stefan it became wrong, because Stefan showed that you had to subtract σT_c⁴ from σT_h⁴. Apparently you’re not aware of this.

      • Judith, would you PLEASE encourage your admins to get a previewer installed on Climate Etc?

        Here’s how the 2nd last sentence of the preceding comment should have read.

        After Stefan it became wrong, because Stefan showed that you had to subtract σT_c⁴ (the flow from cold to hot) from σT_h⁴ (the flow from hot to cold).

      • I have been trying; it is apparently not simple on wordpress.com. Apparently, using the plug-in requires the expensive version of wordpress (something like $15K/yr). Apart from the cost (and it apparently only works on certain browsers), the expensive version doesn’t allow some other desirable features. I have asked my support person to check into this, but with the holidays and the university being closed, not much progress has been made. The alternative is switching to wordpress.org, which is undesirable in other contexts. Will provide an update, hopefully today.

      • It is false to say that no heat flows from the lower to the higher temperature: with radiation the flow is bidirectional

        Had you said, “It is false to say that no energy flows from the lower to the higher temperature”, you might have a lot more people agreeing with you.

      • So are you saying that heat is not a form of energy? Note the “with radiation” in that sentence; I’m only claiming this for radiant heat, not for other forms of heat.

        Or don’t you believe there’s such a thing as “radiant heat?”

        What’s your position on the G&T paper? Do you support it like Bryan or object to it like Willis Eschenbach?

      • No, I’m just saying that fewer people would disagree with you if you presented your arguments in less confusing terms.
        Most people interpret ‘heat’ as a manifestation of energy, or something like it. It’s a potentially ambiguous term, whereas energy flow is comparatively well understood. For example, most people realise that the sun being brighter doesn’t stop the light from your flashlight flowing towards the sun – chances are that one or two photons from your flashlight might actually hit the sun.

        I don’t see what my opinion of the G&T paper has to do with anything but, as you asked, I’ve not read more than a few pages of it before deciding that I didn’t want to spend any more time trying to wade through molasses. So no, I neither agree nor disagree with what I haven’t read – however, experience tells me that any paper which apparently sets out to be confusing is more likely to be wrong than right.

      • Incidentally Clausius formulated the 2nd law in 1850. The Stefan law showing how to compute the net flow in the radiative case by subtracting the flow from cold to hot from the flow from hot to cold was not discovered until 1879. Clausius had no way of knowing that both directions had to be taken into account in the case of radiative exchange.

      • Vaughan Pratt

        …….”Incidentally Clausius formulated the 2nd law in 1850. The Stefan law showing how to compute the net flow in the radiative case by subtracting the flow from cold to hot from the flow from hot to cold was not discovered until 1879.”……
        You seem to think that Clausius was not aware of radiation.
        In fact he tested transmission via mirrors and lenses before he stated confidently that heat only flows from a higher to a lower temperature.
        You seem to be the only one on the planet who thinks that Stefan’s Law contradicts the Second Law of Thermodynamics.
        Do you have any support for this strange idea?

      • You seem to be the only one on the planet who thinks that Stefan’s Law contradicts the Second Law of Thermodynamics.

        If you checked all 6.7 billion people then I’m very impressed, congratulations.

        In the event that you lack Santa-Claus powers, then of the ones you did check, did any of them have any background whatsoever in thermodynamics? If so how much?

        What is your background in thermodynamics? Do you have a degree in it?

      • …crap. … Where does name calling get us?

        I was wondering the same thing.

    • A bachelor’s in math is a poor start for someone who wants to practice science.

      My feeling exactly, which is why I did an honours degree in physics (from the University of Sydney). That was a five year program in which the fourth year was an honours degree in pure mathematics, which I only did because I felt instinctively that physicists needed more mathematics than could be taught in a four-year program.

      I also was president of an MIT-based company in the 1970s, Research & Consulting Inc., that consulted for IBM, the Navy, etc. I helped cofound a computer company in 1982 (Sun Microsystems), and ran another for the past decade, Tiqit Computers Inc., which built several computers, most recently for USN SOF if you happen to know that acronym (at some point I’ll write up what we did for them but SOF appreciates a little delay there). I would not have called any of these companies particularly math-centric.

      What’s your degree in, Jeff?

      • Vaughan Pratt 1/12/11 at 12:53 am,

        PhD, UCLA, Systems Science, Communication & Information Theory, Applied Mathematics, Engineering Computer Science. Dissertation: Efficient Processing of Electroencephalographic Data.

        MS, UCLA, Engineering Computer Science, Applied Mathematics. Thesis: Effects of Digital Computer Parallelism in Solving for the Roots of a Polynomial.

        BS, UCLA, Engineering. Specialties: Engineering Physics, Electronics, Applied Math.

      • Excellent! Your undergraduate engineering degree looks like it should have given you some competence in physics.

      • Vaughan,
        While you are clearly bright, do you think that if we look at your CV and see this:
        M.Sc. Thesis: Translation of English into Logical Expressions, Sydney University, May 1970.
        Ph.D. Thesis: Shellsort and Sorting Networks, Stanford University, January 1972.
        Would we conclude you have a lot of physics tucked away in all of that computer science and linguistic analysis? By the way, I do like your weight loss advice. I am going to read it further.

      • My bad.
        Your CV does not do you justice.
        But thanks for the weight/exercise advice.

      • While you are clearly bright, do you think that if we look at your CV and see this:

        Hunter, just because you can’t talk and chew gum at the same time doesn’t mean no one else can either.

  132. Drat! Some html symbols got dropped or misinterpreted in my last.

    I’m looking forward to the preview function.

  133. Pekka Pirilä 12/18/10 5:06 am

    You say,

    >>Beer’s law is valid only, when the absorption is equally strong for all radiation considered. Thus your derivation must contain an absorption coefficient that is constant for all radiation.

    >>The strength of absorption does, however, depend on wavelength. Therefore Beer’s law is not, in general, valid for other than monochromatic radiation.

    >>It is not correct to replace a varying absorption coefficient by its average. Doing so gives wrong results.

    That does appear to be the conventional wisdom. It implies that the formulation of Beer’s Law,

    t(n1+n2) = t(n1)*t(n2)

    discussed above is not valid. It means that the transmissivity through two sequential gas filters is not the product of the separate transmissivities under certain wavelength criteria or for certain gases, perhaps. Can you cite experimental evidence of that outcome, especially for CO2?
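
    As a purely numerical aside (not experimental evidence), the multiplicative property can be checked in a toy calculation with invented coefficients: it holds exactly for monochromatic radiation, but fails for a beam spanning two wavelengths absorbed at different rates.

      import numpy as np

      k = np.array([0.2, 8.0])      # invented absorption coefficients at two wavelengths
      n1, n2 = 1.0, 2.0             # two absorber amounts traversed in sequence

      def band_t(n):
          """Band-average transmissivity of an equal-weight two-wavelength beam."""
          return np.exp(-k * n).mean()

      # Monochromatic: t(n1+n2) = t(n1)*t(n2) exactly.
      print(np.exp(-k[0] * (n1 + n2)), np.exp(-k[0] * n1) * np.exp(-k[0] * n2))
      # Broadband: the product over-counts absorption, because the second filter sees
      # radiation whose spectrum was already reshaped by the first.
      print("t(n1+n2)    =", band_t(n1 + n2))
      print("t(n1)*t(n2) =", band_t(n1) * band_t(n2))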

    The AGW scare has created tremendous pressure on investigators to validate its underlying presumption that absorption depends on the logarithm of gas concentration. Consequently, we must view all conclusions to that effect not based on experiment with skepticism reaching the level of doubt.

    I suspect that when you say using Beer’s Law for broadband EM produces the “wrong results” you mean with respect to demonstrating the logarithmic presumption. What is needed is evidence for the right result. In particular, show from radiative transfer that the sum of the intensity is logarithmic in gas concentration over a broad range of concentrations, complete with its accuracy. Does that exist?

    • There are two completely separate ways in which the absorption coefficient may vary. I was not careful enough to say that. You refer to the case where the radiation passes in sequence through different materials. In that case the law is valid.

      The second type of difference corresponds to the variability of the absorption coefficient for different wavelengths, and my statement was about this case, which is the reason for the law’s failure for non-monochromatic radiation.

      Actually you seem to know so much that it is very difficult for me to believe that you didn’t know this all the time and only pretend that you don’t understand why Beer’s law fails.

      Earlier in this chain (I think in this chain, but maybe in another) there is discussion about what is required for the approximately logarithmic relationship between concentration and absorption. That relationship is not based on any simple law, but is rather an empirical observation that can be understood based on the statistical distribution of absorption lines of various strengths.
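
      A crude numerical sketch of that statistical argument, using grey sub-intervals with invented strengths in place of real line shapes:

        import numpy as np

        # With line strengths spread over many orders of magnitude, each doubling of the
        # absorber amount saturates roughly one more "tier" of lines, so band absorption
        # grows approximately linearly in log(amount) over a wide range, even though each
        # sub-interval individually obeys Beer's law exactly.  All numbers are invented.
        rng = np.random.default_rng(0)
        strengths = 10.0 ** rng.uniform(-4, 2, 2000)   # grey sub-interval strengths, 6 decades

        def band_absorption(u):
            return 1.0 - np.exp(-strengths * u).mean()

        for u in [1, 2, 4, 8, 16, 32, 64]:
            print(f"amount {u:3d}: band absorption {band_absorption(u):.3f}")
        # The increments per doubling come out roughly constant, i.e. near-logarithmic.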

  134. Vaughan Pratt 12/18/10 5:21 am,

    Dr. Pratt says, >> One sure-fire way to distinguish climate scientists from climate deniers is to see whether they’re proposing a theory or shooting one down.

    Not so. Climate scientists worthy of the name exhibit the virtue of skepticism. Climate deniers express heresy against a belief system. The latter come from a branch of religion, not science.

    He continues, >>What climate deniers have failed to grasp is that science makes progress by competing one hypothesis against another.

    Believers and deniers compete one dogma, or the negation of a dogma, against another.

    What Dr. Pratt did not grasp is that science progresses by validating the predictions of hypotheses by experiment. When a non-trivial prediction of a hypothesis is validated, the hypothesis rises to the level of a theory. (If a model lacks a prediction, it is at best but a conjecture.) Theories compete, never hypotheses. Radiative transfer is a theory. When a theory has been validated in all its possible consequences, it rises to the level of a law, e.g., Beer’s Law.

    Sometimes conditions are found under which a law is not valid. The result is that the domain of the law is restricted. So it is with Newton’s Laws under relativity and quantum theories. Theories, by definition mesoscale, will have exceptions on their microscale or macroscale. So it will be, by restriction, for Beer’s Law under the logarithmic dependence presumption for CO2 absorption, once that logarithmic dependence is validated experimentally. The lack of experimental validation for the logarithmic model is sufficient all by itself (there are many others) to render the AGW model less than a hypothesis.

  135. Vaughan Pratt 12/18/10 5:30 am,

    Dr. Pratt says, >>Arrhenius was right.

    Arrhenius may have been, and presumably was, right in fitting a logarithm to the data available in his time. However, he had no experimental basis for his temporal conclusion that a rise in temperature FOLLOWS a rise in carbonic acid. He did have the emerging greenhouse conjecture, but that limits his results to at best a conjecture.

    The results at Vostok make Arrhenius’s conclusion invalid, but do not render him wrong.

    He was wrong not to qualify his conclusion. As a result his work has wrongly been taken as confirmation of the greenhouse conjecture.

    • As a result [Arrhenius’s] work has wrongly been taken as confirmation of the greenhouse conjecture.

      What? By whom?

      • Vaughan Pratt 1/11/11 at 4:42 am, on Confidence … thread

        Quoting me (12/18/10, 5:30 am), you wrote >>>> As a result [Arrhenius’s] work has wrongly been taken as confirmation of the greenhouse conjecture.

        >>What? By whom?

        Your “what” is difficult to parse (e.g., was it an exclamation, denoting incredulity?), but one explicit answer is in my post of 12/18/10, 11:08 am.

        As to your “By whom?”, try this:

        >> In 1895, Svante Arrhenius (1896) followed with a climate prediction based on greenhouse gases, suggesting that a 40% increase or decrease in the atmospheric abundance of the trace gas CO2 might trigger the glacial advances and retreats. One hundred years later, it would be found that CO2 did indeed vary by this amount between glacial and interglacial periods. AR4, ¶1.4.1 The Earth’s Greenhouse Effect, p. 105.

        (That paraphrasing of Arrhenius is not accurate. What he claimed was not the trigger of glacial epochs IPCC describes, but rather an arithmetic increase or decrease in temperature from a geometric increase in CO2.) For a full citation from Arrhenius (1896), set in a larger context, see my response to Pekka Pirilä, Radiative transfer discussion thread, 1/11/11, 10:56 am.

      • As AR4 says, Arrhenius made a prediction in 1896. The confirmation came one hundred years later, according to AR4. Arrhenius didn’t confirm anything about global warming and no one I know of claims he did.

      • Late development – proof of the hoax, a recently discovered graph, thought to be in the hand of Arrhenius, hidden in a wall of his laboratory.

        The Accidental Futurist

        Green – the chilling temperature reality of his life until 1895.
        Blue – temperature recovery from the so-called LIA
        Aqua – the so-called MIA (Micro Ice Age)
        Purple – temperature recovery from so-called MIA
        Rust – so-called Na-nana-na, nana, nana, nana Woooo-eee! Optimum (final goodbye to Cco2GW)

      • in the hand of Arrhenius, hidden in a wall of his laboratory.

        Funny place to hide his hand. Or did it hide itself? Creepy either way.

        Well, whatever. Svante Arrhenius shall be known henceforth as the Meteorological Nostradamus.

        Artists, give up your canvases and palettes. A new art form awaits you: WoodForTrees.org! Baffle your friends with your trend lines of many colours. Win arguments and stump the experts. Act now and save.

      • (One advantage of Christianity over Islam is that the comedians have greater life expectancies. But that’s only because we no longer have the Spanish Inquisition, whose killing power I’d bet on against Islam any day.)

      • I will take that bet.

      • Thank you for the link.

      • They should build a bot ;-)

      • Vaughan Pratt 1/11/11 at 8:40 pm, on Confidence … thread

        You posted this:>>As AR4 says, Arrhenius made a prediction in 1896. The confirmation came one hundred years later, according to AR4. Arrhenius didn’t confirm anything about global warming and no one I know of claims he did.

        The first sentence is accurate, attributed, as you did, to AR4. However, what IPCC said about Arrhenius’s work in 1896 was wrong. As I explained, he relied on measurements, which he detailed, to conclude that a geometric increase in CO2 produced an arithmetic increase in global Temperature. He wrote,

        >>If the quantity of carbonic acid decreases from 1 to 0.67, the fall of temperature is nearly the same as the increase if this quantity augments to 1.5. And to get a new increase of this order of magnitude (3º.4 [sic]), it will be necessary to alter the quantity of carbonic acid till it reaches a value nearly midway between 2 and 2.5. Arrhenius, 1896, p. 265. That is a range of -33% to + 50%, close enough to IPCC’s “40% increase or decrease”. Later when Arrhenius tried to account for ice ages, he estimated “that the temperature at that time must have been 4º – 5º C. lower than at present.” Arrhenius wrote,

        >>How much must the carbonic acid vary according to our figures, in order that the temperature should attain the same values as in the Tertiary and Ice ages respectively? A simple calculation shows that the temperature would rise about 8º to 9º C., if the carbonic acid increased 2.5 to 3 times its present value. In order to get the temperature of the ice age between the 40th and 50th parallels, the carbonic acid in the air should sink to 0.62 – 0.55 of its present value (lowering of temperature 4º – 5º C.). … But in both these cases I incline to think that the secondary action (see p. 257) due to the regress or the progress of the snow-covering would play the most important role.” Id., pp. 268-269.

        His distinction between the Tertiary and Ice ages is not clear, and IPCC has little to say about the former. Arrhenius’s increase of 8º to 9º C (about 8.5º C) corresponds to a factor, k, of about 2.75, but is he implying that that is necessary for an ice age recovery? He says that to get an increase of 3º – 4º C (presumably, with an average of 3.5º C) in mid-latitudes, the CO2 would have to rise by a factor of about 2.25. Further, he claims that a reduction of about 4.5º C would require a factor of about 0.585. The range of 8ºC – 9ºC for k between 2.5 and 3 produces a sensitivity of 8.55 ± 1.27. Similarly for T in (-4, -5) with k in (0.62, 0.55), the central sensitivity is 8.365 ± 0.002, well within the range of the other.

        However, the third range, interpreted to be 3º – 4ºC for k in (2, 2.5), produces a sensitivity of 4.3 (3.27, 5.77), which is not compatible with the two other ranges. Perhaps the parenthetical note of (3º.4) is not a typographical error of 3º – 4º C as presumed, or perhaps Arrhenius applied some other correction for the mid-latitudes.
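
        (The arithmetic above can be checked directly. A minimal sketch follows, on the assumption, which the quoted figures appear to match, that “sensitivity” here means the temperature change per e-folding of CO2, i.e., ΔT/ln k; the ranges are the ones quoted from Arrhenius above.)

```python
import math

def sens(dT, k):
    """Temperature change per e-folding of CO2: dT / ln(k)."""
    return dT / math.log(k)

# 8-9 degC for k between 2.5 and 3; extremes pair the small dT with the large k and vice versa
lo, hi = sens(8, 3), sens(9, 2.5)
print((lo + hi) / 2, (hi - lo) / 2)                 # ~8.55 +/- 1.27

# -4 to -5 degC for k between 0.62 and 0.55
a, b = sens(-4, 0.62), sens(-5, 0.55)
print((a + b) / 2, abs(a - b) / 2)                  # ~8.365 +/- 0.002

# 3-4 degC for k between 2 and 2.5
print(sens(3.5, 2.25), sens(3, 2.5), sens(4, 2))    # ~4.3 (3.27, 5.77)
```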

        Whatever Arrhenius’s intent, he correctly qualified his calculation, saying that it was overpowered by albedo, specifically, snow albedo. He wasn’t aware that snow albedo is only fractionally important, being eclipsed as it is by cloud albedo. Arrhenius discriminated nicely between a glacial era and a genial era, a term that has not survived. Snow albedo dominates in the glacial era, cloud albedo in the genial era. IPCC has yet to recognize either, and especially for the present genial era, not cloud feedback (in the scientific sense, not in its radiative forcing paradigm).

        But back to Vaughan Pratt’s comment, his second sentence is false. His reference to “The confirmation” is ambiguous. Does it refer to what Arrhenius observed, that temperature depends on the logarithm of CO2 concentration? That has never been observed. Does “The confirmation” refer to what IPCC observed, that the CO2 concentration increased by about 40% “between glacial and interglacial periods”? That cannot confirm because that is not what Arrhenius said, or even what can be inferred from his paper. Or does “The confirmation” refer to the greenhouse effect? That was IPCC’s claim, as discussed presently.

        Vaughan Pratt’s third sentence is nonsense. No one said that Arrhenius confirmed anything. Furthermore, confirming applies to predictions, inferred by climatologists from Arrhenius’s conclusions, but not to his original analysis of his data. Vaughan Pratt’s syntax seems to connect Arrhenius to a confirmation a century after his work. Of course, Arrhenius was by then dead.

        I referred Vaughan Pratt to my post on the Radiative transfer discussion thread, 1/11/11, 10:56 am. Apparently, he never read it. It included IPCC’s confirmation statement, here abbreviated,

        >>We know that the greenhouse effect works in practice for several reasons. … Thirdly, measurements from ice cores going back 160,000 years show that the Earth’s temperature closely paralleled the amount of carbon dioxide and methane in the atmosphere (see Figure 2). Although we do not know the details of cause and effect, calculations indicate that changes in these greenhouse gases were part, but not all, of the reason for the large (5-7°C) global temperature swings between ice ages and interglacial periods. FAR, Climate Change, 1990, p. xiv.

        That is an incompetent statement of evidence in confirmation of what Arrhenius actually said. He claimed CO2 was the cause and Temperature rise the effect, which has become the classic greenhouse effect. IPCC claimed that much of Arrhenius was confirmed. That, however, turned out not to be true.

        The incompetent part is that IPCC graphed Temperature at 6.11 ºC/inch (as captured on my graphics app) and CO2 at 59.87 ppmv/inch, arbitrary scale factors selected to make the graphs appear “parallel”. This seems intended to convince readers of a cause and effect relationship. It doesn’t, as convincing as the waveforms might be. It doesn’t substitute for correlation, which itself is a bad substitute for the correlation function. Only the latter, in one form or another, can confirm a distinction between cause and effect. I would have preferred to think of IPCC’s position as scientific ineptitude rather than outright fraud, but the evidence today heavily favors the latter.

      • Let’s not get bogged down in language questions. It sounds like we agree that no one has ever claimed that the work of Arrhenius, consisting of predictions obtained by his conjectural method of extrapolating from observations, constituted confirmation of the effect of CO2 on temperature, whether by observation of modern warming or by consideration of ice core data.

        It doesn’t substitute for correlation, which itself is a bad substitute for the correlation function. Only the latter, in one form or another, can confirm a distinction between cause and effect.

        Then you should correct the relevant Wikipedia article, whose first sentence says “A correlation function is the correlation between random variables at two different points in space or time.” According to you a correlation function is not a correlation, therefore the first sentence of the article is wrong.

        But if your real question is whether the logarithmic aspect of the dependence of surface temperature on CO2 has ever been statistically confirmed by observation of Earth’s surface temperature in conjunction with CO2, whether modern or paleoclimatological, then I don’t know. Can anyone here answer that question with any publication demonstrating that the observed dependence is statistically more likely to be logarithmic than some other function such as linear or square root or cube root?

        Vaughan Pratt’s third sentence is nonsense.

        I nonconcur. In nineteenth century drawing rooms, when it became clear that your conversation partner was uttering arrant nonsense, instead of blurting out the obvious it was the custom to say something like “You may have misunderstood me,” or even more politely, “I may be misunderstanding you,” or if you don’t like yielding that much ground then “we may be misunderstanding each other.”

        Were Judith’s denizens (and I don’t mean to exclude myself) to bottle up their real feelings and conduct themselves in this quaintly archaic manner, it might bring a note of calm sobriety to this capable society and a measure of decorum to this scientific forum.

        In that spirit I hereby apologize for and withdraw any accusations of insanity I may have made in the past, whether in jest or in earnest, by removing the letter “s.” And in case that’s still not going far enough, I will be happy to also remove the “in,” as in “Are you sure you’re entirely ane?” (That was only rhetorical, Jeff.)

      • Or more grammatically and more politely, “Are you sure that’s entirely ane?”

  137. Jeff,
    geological records indicate to researchers such as Scotese that co2 levels may have been around 20 times higher about 550m years ago when surface temperature was about 8C higher than now.

    Can we make use of that research to make a best guess of where on the curve towards ‘saturation’ of the GHE we may be? Or would our uncertainty regarding the effect on surface temperature of a doubling of pre-industrial levels of co2 preclude that possibility?

    ~8000ppm would be around 5 doublings of pre-industrial levels, so if it were an even effect so far as that level, that would suggest the per doubling addition to surface temp would be around 1.6C.

    Is this a reasonable ball park position to take in your view, or is it completely worthless conjecture?
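
    (For what it’s worth, the doubling arithmetic behind those figures is easy to check; a minimal sketch, taking pre-industrial CO2 as roughly 280 ppm, which is an assumption here.)

```python
import math

preindustrial = 280.0    # ppm, assumed pre-industrial level
ancient = 8000.0         # ppm, the quoted estimate for ~550 Myr ago
warming = 8.0            # degC warmer than now

doublings = math.log2(ancient / preindustrial)
print(doublings)             # ~4.8, i.e. "around 5 doublings"
print(warming / doublings)   # ~1.65 degC per doubling, i.e. "around 1.6C"
```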

  138. richardholle 12/18/10 5:44 am,

    The progress of science would never have occurred but for the ability, or by dint of foreshortened horizons, to limit the scale of models. In modeling climate, IPCC gets lost in the microscale of the structure of the atmosphere, of aerosols and condensation, of ocean chemistry, and of radiation absorption. At the same time, being distracted, it misrepresents mesoparameters like cloudiness, thermodynamic equilibrium, and the laws of solubility, and it gets the hydrological and carbon cycles quite wrong.

    Your model would open the door in climate modeling to the macroscale. That might be fruitful in one regard, discovering why the Milankovitch cycle conjecture fails. Conceivably, it might also be important for modeling Earth in its cold state, locked down by surface albedo, and dependent on its internal thermal energy, on minute solar absorption, and on tiny orbital forces. At present, however, when IPCC has been unable to model climate in the first order, it is a wild distraction.

    Science succeeds and rewards parsimony, known colloquially as Occam’s Razor. The trick is to get rid of extraneous parameters, not to add more.

    • Science succeeds and rewards parsimony, known colloquially as Occam’s Razor. The trick is to get rid of extraneous parameters, not to add more.

      Agreed. Do you have a figure for the optimal number of parameters for models of global warming? This might be the most interesting parameter of all!

      • Vaughan Pratt 1/12/11 at 5:01 am,

        You asked, >>Do you have a figure for the optimal number of parameters for models of global warming? This might be the most interesting parameter of all!

        I’m not sure of the word optimal, but the answer depends on the time interval modeled and the accuracy demands. Here’s a minimal example valid for the last half million years:

        Earth’s climate varies in a pattern between -9ºC ± 1ºC and +3ºC ± 1ºC, with a period of roughly 100 kyears, and currently is near its maximum.

        For a complex example, see SGW, rocketscientistsjournal.com. There you’ll find an estimate of Earth’s Global Average Surface Temperature over the last 150 years, the extent of the instrument record, based on the best solar model available, with an accuracy of 0.11ºC, 1 σ, compared to the HadCRUT3 data using 5 parameters, or within 0.13ºC using 3 parameters. As shown there, this model produces a temperature history comparable to IPCC’s smoothed history. It is predictive to about 50 years, using reasonable bands for extreme Sun changes.

        The SGW model does not rely on the greenhouse effect, and should dispel reliance on it for first order forecasting of climate. It shifts the climate problem from Earthly parameters to predicting the Sun, from AGW to SGW. The model requires amplification of solar intensity by a rapid positive feedback, and for that it draws on cloud albedo. It predicts that cloud cover responds rapidly to solar intensity, an effect familiar in the daily burn-off. The power in the model arises from how few parameters are needed to model Earth’s temperature history accurately, a history which otherwise should entail scores of degrees of freedom for its representation, as in its Fourier expansion and filtering.

  139. Jeff Glassman,

    I found one of my old college textbooks: Quantitative Chemistry (Preliminary Edition), Jürg Waser, W.A. Benjamin, Inc. New York, 1961.

    Chapter 11 Colorimetric Analysis, p155
    The Laws of Bouguer, Lambert and Beer
    Consider a beam of monochromatic radiation passing through a homogeneous solution…

    Try to poison that well.

  140. tallbloke 12/18/10 11:23 am

    Nice question. Let me speculate with you.

    Atmospheric CO2 is today in saturation. It is not saturated, and never can be, not even in a narrow band. It saturates exponentially (negative, of course) everywhere, so never achieves its limit anywhere. We see in the band around 14.7 µm that absorption appears to be 100% (it’s not, but that will have to do until we have ordinate data on a logarithmic scale). As a result, CO2 is well into saturation, and has a puny role as a GHG. Its concentration in the atmosphere is a consequence today of solubility in the ocean reservoir.

    So with concentrations 20 times as great, it should be slam dunk, GHG-negligible. More negligible than negligible.

    This changes your question to a matter of whether solubility at a temperature anomaly of 8ºC could account for 20 times higher CO2 concentration. I don’t think the matter is that simple, using today’s parameter values. Half a gigayear ago, the ocean would have been 20 times as laden with CO2 in the surface layer compared to today. In today’s model, or at least mine, for the thermohaline circulation, the ocean absorbs CO2 everywhere (gyres are noise) as sea water migrates from the Equator to the poles, cooling along the way. Then it returns CO2 to atmosphere at the Equator from deep waters perhaps a millennium old. This is the solubility pump driven by the Ekman pump. We might assume that the rich CO2 was a transient load from volcanoes, which was then slowly sequestered in the ocean. David Archer gives the slowest sequestration a time constant of 35,000 years. That’s a lot of THC cycles. It might be much slower if we assume that the THC retains its integrity at depth, and so add a diffusion lag out of the THC into the surrounding deep waters.

    We don’t know that the THC today doesn’t switch from one flow pattern to another to create major climate events. Continental drift complicates any conjecture about the pattern 500 million years ago. Nevertheless, the solar and rotational forces, coupled with azimuthal cooling and CO2 absorption, that create the major currents today, were likely to have created something analogous at any time.

    You ask about the five-fold doubling effect to 8,000 ppm. My suggestion for a model is that the doubling, which is application of the logarithm dependence, has no effect because the atmosphere would be deep in saturation. My guess is that the temperature was 8ºC warmer because the Sun was hotter.

    • Thanks Jeff, lots of things to think about in your reply. Ferdinand Englebeen, who I regard as one of the more carefully researching lukewarmers, says that solubility can’t account for atmospheric level changes. I can’t remember his figure or how he derived it, something like 35ppm/degree C. I suspect though, that this doesn’t take into account how much warmer the deeper parts of the oceans were aeons ago.

      The sun was cooler a couple of billion years ago, and this is the ‘faint sun paradox’: warmer but stable Earth, cooler sun. Albedo would make a difference, and I’m not convinced warmer means cloudier necessarily. The speed of the hydrological cycle may have all sorts of effects on raining out cloud condensation nuclei.

      Although a lot of people have a prejudice about James Lovelock’s book ‘Gaia: A New Look at Life on Earth’, it has some important insights into the way the makeup of the atmosphere is controlled by the biosphere. Not enough attention is paid to this I think.

  141. DeWitt Payne 12/18/10 12:06 pm

    Treasure those old text books! Store them under glass. They are pre-pollution from the AGW dogma.

    A monochrome source is essential for colorimetric analysis, the measurement of radiation through a chemical. It is used expressly to look for wavelength dependence for the Beer-Lambert Law. Excellent. So Waser, et al. would “consider a beam of monochromatic radiation”. That sounds like a thought experiment. What did they conclude from their consideration?

    Whatever their conclusions, were they supported empirically?

    Or, are we still unter Wasser on that?

    • This law [exponential decay of intensity with mass path] holds not only for electromagnetic radiation but other kinds as well, such as sound radiation and radiations consisting of particles like neutrons or electrons, as long as the wavelength range of the radiation is small enough so that the absorbancy index a is a constant. It holds for homogeneous liquids, gases and solids.

      I’m now going to follow Fred Moolten’s example and give up. Too bad killfile, and CA Assistant for that matter, don’t work here.

  142. DeWitt – I stopped trying to convince Jeff Glassman a while back about the fallacies in his analysis, and so I wouldn’t ordinarily comment further. However, I thought it would be interesting to analyze why one can’t expect two different wavelengths with significantly different absorption coefficients to yield, in combination, an exponential extinction curve when traveling through the same medium.

    I’m using here the notation that IN/INo = e^-kL, where IN is intensity remaining after path-length L is traveled. For convenience, I’ll also take relationships of a constant nature and combine them into a single constant with a different name, just to simplify. For an exponential decay, all components of the declining function (in this case intensity at different wavelengths) must decline at the same fractional rate, so that their ratio remains constant. Otherwise, changes that occur over a distance delta L at one point will not be matched by the same fractional decline when delta L occurs at a different point.

    Consider a particular value of L traveled by two wavelengths, and for convenience, we can have them start with the same intensity before any absorption occurs – INo. From the above equation, we find that lnIN = INo – kL.

    Now let’s separate the intensities of the two components into IN-1 and IN-2, characterized by constants k1 and k2. We get the following ratio of their natural logarithms:
    lnIN-1/lnIN-2 = (lnINo – k1L)/(lnINo – k2L). For exponential decay, we require this to be constant, to which we will assign the value C, so that C equals the value of the ratio on the right.

    Rearranging, we find that lnINo(C -1) = L(Ck2 – k1). The left hand side is a constant, which we will rename B, and the term in parentheses on the right is also a constant we will call D. We find then that B = LD. This demonstrates that the equation can be satisfied only by a single value of L, and that the ratio of the two beams changes over the course of a path, invalidating an exponential decay function.

    The above is abstract, but can be tested with an illustration. Consider two beams, with one characterized by a half-extinction path-length twice the other. For example, one can decay to half strength over 80 cm and the other over 40 cm. When both have traveled 80 cm, the first is at 50 percent intensity. At that same distance, the second has traveled two half-distances and is at 25 percent intensity. The combined intensities, which started at 200 percent of either alone, are now at 75 percent, a 62 percent attenuation. Now let the two beams travel another identical distance of 80 cm. The combined intensities are now 25 + 6.2 percent for a total of 31.2 percent. This decline, from the 75 percent level, is only a 58 percent attenuation. Further travel will further reduce the contribution of the fast-decaying component again demonstrating the inconstancy of their ratios.
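
    (A short numerical check of the 80 cm / 40 cm illustration above; just a sketch reproducing the stated percentages.)

```python
import math

# Two beams of equal initial intensity with half-extinction path lengths of 80 cm and 40 cm.
k1 = math.log(2) / 80.0   # per cm
k2 = math.log(2) / 40.0   # per cm

def combined(L):
    """Total intensity of the two beams (each starting at 1) after path length L in cm."""
    return math.exp(-k1 * L) + math.exp(-k2 * L)

print(combined(80) / combined(0))     # 0.375  -> ~62% attenuation over the first 80 cm
print(combined(160) / combined(80))   # ~0.417 -> ~58% attenuation over the second 80 cm
```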

    • I remember discussions at Climate Audit with someone (screen name cba) who was trying to write his own line-by-line program. He was calculating emission at high altitude that was too high because the line width became too narrow for his wavelength resolution. The method for calculating absorptivity/emissivity that worked well at low altitude with broad lines didn’t work for narrow lines.

      I don’t expect to convince Jeff. Mostly it’s for the lurkers. But there’s a limit to my patience. I went into industrial R&D rather than academic for good reason.

    • For clarity, in my third paragraph, I forgot to put the ln notation in front of INo. However, I used the correct notation subsequently, and the results aren’t changed.

  143. Fred Moolten 12/18/10 3:09 pm,

    Isn’t it a lot easier to prevail in an argument by putting words in your opponent’s mouth? And by not addressing him directly?

    I did not say that “two different wavelengths with significantly different absorption coefficients [will] yield, in combination, an exponential extinction curve”, or anything so silly. What I said was that Beer’s Law predicts an exponential extinction, and that it is derived without regard to any frequency decomposition.

    What is silly in your straw man is that you beg the question by restricting your remarks to significant differences. I presume a significant difference exists when your little analysis disproving Beer’s Law works, and not significant when it doesn’t.

    Also, what I said was that Beer’s Law predicts saturation. I don’t much care whether all the spectral components are in sync or not, the CO2 effect will exhibit saturation. In fact, we can see that it is underway at the present concentration. Of course, the sum of a bunch of decaying exponentials will not yield a decaying exponential, but in the end one will dominate, and the process will approach an asymptote.

    The logarithmic dependence model doesn’t do that. It never saturates. Whatever the radiative forcing you wish to use, the logarithmic model provides a corresponding CO2 concentration. It can even provide more than a million parts per million. It can predict an RF greater than the ratio of the sum of all the absorption bands of CO2 to any encompassing bandwidth. It can compute a concentration for a radiative forcing greater than the total OLR. Something is wrong with the logarithmic dependence model, to say nothing of the lack of experimental validation, that needs fixing. For lack of saturation, the logarithmic model is silly.

    A core problem, one that goes to the heart of scientific literacy, is the failure of proponents to put accuracy bounds on their logarithmic model.

    • Jeff, you seem to be confusing the atmosphere with an isothermal gas. The log law is not only a consequence of saturation, but also of the atmospheric temperature profile. As you add CO2, saturation occurs at higher (colder) levels, causing outgoing longwave to decrease. It turns out this decrease approximates a logarithm of CO2 content over a wide range of relevant amounts especially between 100 and 1000 ppm.

    • Beer’s Law predicts saturation.

      Jeff, you wrote this back on Dec. 18 (5:13 pm), which I’m realizing belatedly was an intellectually active period on Climate Etc. not unlike Greece between 500 and 200 BC. I may have been a bit out of it back then due to the onset of Christmas or some deadline or something.

      Let me see if I can convince you that Beer’s Law does not predict saturation.

      First let me clear up a linguistic confusion about the concept of line spectra. A gas such as CO2 or methane has an associated line spectrum showing the points in the spectrum corresponding to peaks (not delta functions) in the absorption of radiation at a given wavelength.

      Earth’s atmosphere consists of a score or more of such radiation-absorbing gases, each with its own line spectrum. Collectively these are the line spectra that govern Earth’s greenhouse effect.

      With that straightened out, let’s move on to whether Beer’s law predicts saturation in the case of the single line spectrum for CO2.

      Fix some pressure, whether that at sea level, the troposphere, or higher. At that pressure, for any strength of absorption as given by the HITRAN table for ¹²C¹⁶O₂ (the dominant species of CO2), ranging over many orders of magnitude, you can find hundreds of thousands of points in that single line spectrum that exactly match that strength.

      Beer’s Law associates to any given wavelength an optical thickness (aka depth but thickness conveys the concept better) of the entire atmosphere defined as the negation of the natural logarithm of the proportion of photons of that wavelength escaping from the Earth’s surface to outer space. (The choice of base e here is arbitrary: base 10 as used in the definition of optical density, or 2 as used in the doubling concept, would lead to the same conclusion.)

      This proportion can be seen to depend on the level of CO2. As the proportion increases the fraction of escaping photons decreases, and the optical thickness of the total atmosphere increases accordingly.

      An optical thickness of 0 means all photons of that wavelength are escaping to outer space. And thickness ∞ means none are escaping.

      Optical thickness 1 at a given wavelength is when the Earth’s surface temperature (assumed logarithmically dependent on thickness) is changing fastest with changing optical thickness. This therefore constitutes the knee of the temperature curve for that wavelength. Although it is changing continuously there, it does not make a big difference to consider that thickness simply switches from 0 to ∞ at the precise point where it is 1.

      We can therefore say, for a given CO2 level, which wavelengths transmit (thickness 0), and which block (thickness ∞). This “squaring up” of the dependence on wavelength of a given level of CO2 suddenly makes things much simpler.

      We can now ask what is the impact of doubling CO2 on the fraction of wavelengths that are blocked.

      This fraction continues to increase as CO2 is doubled. Because there are hundreds of thousands of lines in the CO2 line spectrum, varying over a wide range of strengths, and with each line itself varying with distance from its center (an effect that also depends on pressure on account of pressure broadening), this fraction has no limit to its increase.

      Now you might reasonably point out that the fraction 1 has to be the limit. This is almost the situation on Venus (0.97 to be precise). However this limit is never quite achieved (even on Venus). Instead enough of the spectrum is blocked with each doubling of CO2 to add a fixed amount to the surface temperature. If the blocked fraction ever reached 1 then the temperature would rise to that of the Sun, 5700 K. That would be so many doublings that CO2 would be 100% of the atmosphere. At 97% (the situation on Venus) the surface temperature would be something like that of Venus, not of the Sun. (Although sulfur dioxide kicks in on Venus to raise the temperature there even further.)

      So the fractions of blocked wavelengths encountered with increasing CO2 must always remain well below 1 for any climate we’re likely to encounter that does not kill off all life on Earth as we know it. And therefore saturation cannot possibly be achieved until we are well beyond even a Venus-like situation.

      Which not even the Democrats are suggesting is in our crystal ball.
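
      (A toy illustration of the point about the blocked fraction, not a radiative transfer calculation: if line strengths are spread over many orders of magnitude, as they are in a HITRAN line list, the fraction of wavelengths whose optical thickness exceeds 1 grows by a roughly fixed increment per doubling. The synthetic strengths and scale factors below are assumptions for illustration only.)

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy "line list": absorption strengths spread log-uniformly over 8 orders of magnitude.
strengths = 10.0 ** rng.uniform(-8.0, 0.0, size=200_000)

def blocked_fraction(scale):
    """Fraction of wavelengths whose optical thickness (strength * scale) exceeds 1."""
    return float(np.mean(strengths * scale > 1.0))

for doublings in range(7):
    print(doublings, round(blocked_fraction(100.0 * 2 ** doublings), 3))
# The blocked fraction rises by ~log10(2)/8 ~ 0.04 with each doubling: logarithmic, not saturating.
```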

      • Thinko (typo?) in the above: “As the proportion increases” –> “As the level increases”

      • Oops, ignore what I said about 0.97 on Venus. I was confusing CO2 level with the atmospheric window: even with 100% CO2 a significant fraction of the spectrum can remain unblocked. At 100% one asks not about the level of CO2 but about the mass of the atmosphere. Mars also has a largely CO2 atmosphere, but far thinner than Venus’s and hence trapping far less thermal radiation from the surface of Mars. CO2 level is far from the only indicator.

        I don’t bowl 300 every time. Who do you know who does? Newton and Einstein didn’t, so why should we mere minions be expected to?

      • Vaughan Pratt 1/12/11 at 2:27 am,

        1. Preliminarily, let me settle the unsubstantiated claims by yourself, and others, e.g., Pekka Pirilä, DeWitt Payne, Andy Lacis, Miklos Zagoni, Fred Moolten, that Beer’s Law has been invalidated by Radiation Transfer theory, or that Beer’s Law applies only to monochromatic radiation.

        First, Beer’s Law is a physical consequence of experimentation with gas filters with any spectrum of light. It is a necessary consequence for stacked gas filters to be independent of the order of stacking.

        Second, the derivation of Beer’s Law requires no reference to frequency. One cannot insert frequency or wavenumber subscripts ex post facto into expressions of any law and retain its legitimacy.

        Third, no one has been able to cite a reference for the origin of the notion that Beer’s Law is for monochrome light. It matters not how often a fallacious belief appears in print. After all, AGW is supported by tens of thousands of refereed papers, and it is false and a fraud. Think again about Ted Kaczynski supporting Jared

        Fourth, no one has been able to cite a reference for the origin of the notion that Beer’s Law has been invalidated.

        Fifth, the news that Beer’s Law has been invalidated has not reached Laurence Rothman, director of the HITRAN program at Harvard-Smithsonian Center for Astrophysics and Editor-in-Chief of the Journal of Quantitative Spectroscopy and Radiative Transfer, for he wrote,

        >>HITRAN has traditionally supplied the necessary input for the molecular absorption part of the total attenuation in Lambert-Beer’s law calculations. Rothman, L. S., JavaHAWKS Manual, November, 2004, pp. 3-4

        unless he has a peculiar fetish for claiming credit for nonsensical calculations.

        2. I can assure you that the MODTRAN output for the total attenuation, for example, can be easily decomposed into a set of as few as four Beer’s Law responses. I have hopes of publishing this result on my blog before too long. Two of these components seem unlikely to have been measured, and certainly not measured in situ.

        3. Contrary to your statement, Beer’s Law does indeed predict saturation. The question is how much saturation would exist in the atmosphere.

        4. Your clarification of line spectra vs. line spectrum is meaningless. Spectra is merely the plural of spectrum. A spectrum is the frequency distribution of electromagnetic radiation. A spectrum is non-decreasing with increasing frequency (decreasing wavelength), and can contain no lines. At its maximum, it is the total absorption or radiation. A spectrum in theory can contain step increases, at which point a line exists in its differential, the spectral density. In everyday jargon, the names are used interchangeably and inaccurately, and that is reflected in the posts on these threads. However, good analyses can be sorted from careless ones by looking at the spectral density ordinate labels. Good ones will read power, or equivalently power density (e.g., Wm^-2), per wavelength or per wavenumber.

        Mathematics contains a theorem that an a posteriori estimate of a probability distribution will converge as the number of samples increases. No such theorem exists, and I speculate can’t exist, for the probability density. I also speculate that this mathematical phenomenon, which arises in real analysis, applies beyond probability theory to any kind of distribution and its density, including EM. I will assume so, until proved wrong. A consequence is that direct measurement of densities is dangerous.

        No method exists to measure a line. Any attempt to measure a line directly results in the convolution of the Fourier transform of the aperture. The effect cannot be reversed. As a result, I am skeptical about whether total intensity estimates, the maxima of the spectra, are accurate when based on empirical spectral density estimates. A problem also arises in attempting to measure a step rise directly from the spectrum, a rise from which a line might be estimated.

        5. You say, >>So the fractions of blocked wavelengths encountered with increasing CO2 must always remain well below 1 for any climate … .

        Good. What is that percentage? That’s what Beer’s Law predicts.

        7. You say, >>This fraction continues to increase as CO2 is doubled.

        This is the logarithmic assumption, and begs the question of its validity, and the physics of its validity. Why doesn’t the logarithmic assumption gracefully approach your fractions limit? Actually, that is Beer’s Law. The absorption never actually saturates. Saturation is an asymptote.

        8. You say, >>If the blocked fraction ever reached 1 then the temperature would rise to that of the Sun, 5700K.

        What’s your reference for that remarkable conclusion? I’ve read estimates ranging around 318 K, a 30º C rise for an opaque atmosphere, the greenhouse effect in saturation. That’s my recollection, but I’ve always had an issue with the different supporting models. Maybe it’s the logarithmic assumption that gets you to 5,700 K (it would) and not Beer’s Law, which provides for saturation at physically realizable levels.

      • P.S.

        That was Jared Loughner whose last name was magically obliterated in ¶5.

      • 4. Your clarification of line spectra vs. line spectrum is meaningless.

        Before I flatly contradict someone that a term is meaningless I always google that term on the web just in case perhaps it is meaningful. Doing that for “line spectrum” in quotes, Google returns 322,000 results. At this page, under the heading “Continuous and Line Spectra,” we read,

        “A gas under low pressure does not produce a continuous spectrum but instead produces a line spectrum, i.e., one composed of individual lines at specific frequencies characteristic of the gas, rather than a continuous band of all frequencies.”

        When I speak of a line spectrum I am referring to the same thing that this article and all the other 322,000 hits are talking about. When I have more than one line spectrum in mind, I refer to them collectively as line spectra, as does the heading of the article.

        There is nothing meaningless about this: everyone except apparently you uses “line spectrum” and “line spectra” in exactly the same way. You will find hundreds of thousands of examples of this usage. Repeatedly denying that they exist or insisting that they are meaningless doesn’t make it any more true.

        In everyday jargon, the names are used mixed and inaccurately, and that is reflected in the posts on these threads.

        Only in your own mind. Everyone involved in spectral analysis understands the difference between a continuous spectrum and a line spectrum, which is explained very clearly at the above site. Note in particular the subdivision of line spectra into two kinds, bright-line spectra and dark-line spectra; we’ve been discussing the latter although the former is relevant to reradiation. The terms are used consistently and always with the same meaning, namely the meaning explained at that site. That is reflected in all posts on these threads except yours, where for reasons known only to you, you seem to have decided to define terms differently from the rest of the world.

        Whereas “line spectrum” has only two meanings, respectively bright-line or emission spectra and dark-line or absorption spectra, “spectrum” has a great many standardly accepted meanings. You can choose one of them as the official Jeff Glassman meaning and declare all the other meanings to be “mixed” or “inaccurate,” but bear in mind that other people will then have great difficulty understanding what you’re saying.

      • At what point do you get tired of showing you are more informed on the science? You obviously are….quit

      • Rob Starkey

        What a thoughtful way of praising someone for habits of brilliance, perseverance and pursuit of greater and more accurate knowledge over a lifetime, without obliging them to blush.

      • Rob, ordinarily attribution of a motive serves to undermine an argument as a way of implying that the speaker is motivated to misrepresent something. For example one might say that scientists who deny tobacco is harmful are motivated by the support their research program receives from tobacco companies.

        However when the motive is mere self-aggrandizement with no other agenda, usually the opposite is the case: the speaker is motivated to the greatest possible accuracy, since the least possible accuracy is clearly the wrong way to go about showing off how much you know.

        So I confess I’m a little mystified as to what you’re trying to prove by pointing out that I’m showing off. In this case I was trying to persuade Jeff of something, but if instead I was trying to show off, what difference does that make as long as I succeed in my goal of persuading Jeff?

        The only thing I was able to infer from your remark was that you frown on overly careful argument. This seems more common in the denier community than the scientific community, which I’ve always assumed was because arguments against AGW don’t hold up well under close examination.

      • Vaughan Pratt 1/12/11 at 10:17 pm,

        You wrote, >> For example one might say that scientists who deny tobacco is harmful are motivated by the support their research program receives from tobacco companies.

        Did some scientists actually say that? Do you have a reference?

        I recall some saying that tobacco was not addictive. And when I looked into the matter, they were quite correct. Then a bunch of government officials and other kooks changed the definition of addictive to mean what everyone knows it means, nothing more, nothing less. Government 1, Scientists 0.

      • Vaughan- I have learned in reading your posts. I have not studied the science to the same degree that you obviously have. I do personally find that your comments tend to be longer than necessary to convey the required points (and you probably lose some readers), but they are still thoughtful and appreciated.
        Personally, I accept the AGW concept. Increasing GHGs will increase the world’s temperature. I do not think science fully understands the rate that warming will take, or the impact it will make (positively or negatively) on specific regions.
        The science is interesting, but IMO, not nearly as important as the policy decisions that will be made by individual nations regarding their interpretation of the science.

      • Vaughan Pratt 1/12/11 at 8:24 pm,

        1. You’ve done it again. You’ve misconstrued what I said to make a faux point. That fallacy is as worthy of a name as an ad hominem. Let’s call it a Pratt Fall.

        I did not say that “a term is meaningless”. I said “your clarification … is meaningless”.

        2. “Line spectrum”, a term comprising spectrum (e.g., Wm^-2) but meaning a spectral density (e.g., Wm^-2/wavenumber), and not lines but lumps — things examined closely that have width, things that look like the Fourier transform of a slit. But everyone knows what we’re talking about, and we know that because people here have spoken to everyone and read every paper. That’s how science works, eh?

        3. A chap discovered that EM transmissivity through a gas multiplies with the number of absorbers. His name was August Beer, so we call his discovery Beer’s Law. Now September Stout, a university climate researcher and environmentalist working for IPCC, discovered that Beer’s Law doesn’t hold except for monochromatic light. Let’s erase Beer’s Law and call her discovery Stout’s Law. Now where did we put her paper? It’s important because it shows that CO2 can produce any amount of radiative forcing we wish to claim for it.

        4. You Googled for “line spectrum” and got 322,000 hits. Not bad. Good technique, actually. I Googled for “spectral density”, the term you won’t or can’t use, and got 1,130,000 hits. I win.

        5. How many hits can you find where probability distribution is used to mean probability density? How many times can you find a density histogram fitted to a nice bell shaped curve when the histogram may not converge to the density?

        6. The problem is not what we don’t know, but that what we do know isn’t true.

        7. Example: human activity changing the climate (AGW).

        8. But the models say it’s true, Pekka protests and Lacis laments.

        9. Of course.

      • 1. Preliminarily, let me settle the unsubstantiated claims by yourself, and others, e.g., Pekka Pirilä, DeWitt Payne, Andy Lacis, Miklos Zagoni, Fred Moolten, that Beer’s Law has been invalidated by Radiation Transfer theory, or that Beer’s Law applies only to monochromatic radiation. (Emphasis added)

        Given your statement “It matters not how often a fallacious belief appears in print. After all, AGW is supported by tens of thousands of refereed papers, and it is false and a fraud,” it is clear that Pekka, Fred, DeWitt and I have been going about the substantiation you are asking for the wrong way, since even if we were to point you at ten thousand refereed papers all unanimously claiming we were right you would flatly deny them all as “false and a fraud.”

        I can think of only one other way to substantiate the claim that Beer’s Law breaks down for nonmonochromatic light, and that is to prove it to you from first principles, so that no question of fraud can arise, and ask you which step(s) in the proof you don’t accept.

        If that doesn’t work either, then we have no alternative but to agree to disagree, and go off and live in our separate alternative universes, you in yours where Beer’s Law works for all radiation and AGW is false, and us in ours where the opposite holds in both cases.

        Stated most simply, Beer’s law says that when unit intensity radiation enters a uniform medium (such as a gas of uniform pressure and temperature) with absorption coefficient α, the radiation remaining after travelling distance ℓ has intensity exp(−αℓ).

        Now a gas such as CO2 will have different absorption coefficients at different wavelengths. Consider a sample of gas with absorption coefficients α and rα at two different wavelengths, such that r ≠ 1. If we inject the former wavelength at unit intensity, the remaining intensity at distance ℓ into the gas will be exp(−αℓ). If we do the same with the other wavelength we obtain exp(−rαℓ).

        Now suppose we inject unit intensity of radiation consisting of equal amounts of both, that is, intensity 0.5 of each. Then the intensity at ℓ will be

        (exp(−αℓ) + exp(−rαℓ))/2.

        We can rewrite this as

        exp(−αℓ)(1 + exp((1 − r)αℓ))/2.

        Now in order for Beer’s Law to hold for this nonmonochromatic radiation, there must exist an absorption coefficient β such that

        exp(−αℓ)(1 + exp((1 − r)αℓ))/2 = exp(−βℓ).

        We shall suppose there exists such a β and derive a contradiction, from which we can conclude that no such β can exist.

        Now we can rewrite this equation as

        1 + exp((1 − r)αℓ) = 2*exp((α − β)ℓ)

        Since 1 − r is nonzero by hypothesis, the left hand side depends on ℓ, whence so does the right hand side, whence α − β must also be nonzero.

        Now exp(x) is less than 0.5 for all x less than ln(0.5) = −0.6931… . Hence we can find ℓ such that the right hand side is less than 1. But for no ℓ can the left hand side be less than 1, since exp(x) is positive for all x. This is the promised contradiction. QED

        Which is the first step in this argument that you don’t accept?
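
        (A numerical companion to the argument above, with α and r chosen arbitrarily: if a single coefficient β existed for the mixture, then −ln T(ℓ)/ℓ would be the same for every ℓ; it visibly is not.)

```python
import math

alpha, r = 1.0, 10.0   # arbitrary absorption coefficient and ratio (assumptions)

def T(l):
    """Transmissivity of an equal mixture of the two wavelengths after path length l."""
    return 0.5 * (math.exp(-alpha * l) + math.exp(-r * alpha * l))

for l in (0.1, 0.5, 1.0, 2.0, 5.0):
    print(l, -math.log(T(l)) / l)   # the implied "beta" drifts from ~4.5 down toward ~1
```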

      • Vaughan Pratt 1/12/11 at 9:30 pm,

        1. You deliberately misconstrue what I said to make a faux point. I did not claim that papers, refereed or not, or anyone or anything else, were false and a fraud. I said only that AGW is false and a fraud. I used it as an example of how a large number of people, like the flat Earthers, can be wrong. Science is not about voting, majorities, or unanimity. It is about models with predictive power.

        2. You suggest that you, Pekka, Fred, and DeWitt may have been going about substantiating your arguments the wrong way. Indeed! I have asked repeatedly for the source, not “a source” but “the source”, for the claims that Beer’s Law has been invalidated, and the claim that it is incompatible with calculations using HITRAN. I asked where in the output of calculations with HITRAN the logarithmic relationship arises, and whether the log dependence has been validated.

        DeWitt first gave me a reference to Petty’s First Course text; I spent too much time searching it as he suggested, and found that it contradicted his position. 12/17/10, 7:23 pm. Subsequently, he produced a number of vague citations, meaning with no indication of what he found significant in them, nor where the significant parts could be found. His response was no better than a Google map turn-by-turn direction to a local library. He finally gave up. Even at that, though, he was way ahead of the others. Pirilä finally said that he was convinced that humans were affecting climate.

        3. Your derivation does not apply to Beer’s Law. The most elementary form, which I addressed to you directly, is that t(n1+n2) = t(n1)*t(n2), where t is the transmissivity and n_sub_i are the number of absorbers. 12/16/10 at 9:35 pm. You responded to say you didn’t understand. 12/18/10, 5:21 am. The number of absorbers is the independent variable in Beer’s Law. Your manipulation of exponentials contains no reference to the number of particles, aka the concentration of the gas. With no reliance on the concentration, you are not dealing with Beer’s Law.

        By the way, Pekka Pirilä acknowledged that my “derivation is mathematically the same one that everyone gives, but it applies only to monochromatic radiation.” 12/23/10, 4:48 pm. That derivation places no reliance on the spectrum of the light. He has posted that monochromatic qualification repeatedly here, but never with an authority for its origin. He also wrote, “It is not correct to replace a varying absorption coefficient by its average. Doing so gives wrong results.” 12/18/10, 5:06 am. In the latter, he is quite correct, but note that I did not do that.

        Let me give you a Beer’s Law thought experiment taking clues from your derived contradiction. Start with any kind of light source you want. It could be monochromatic, or polychromatic, or Planck’s law of blackbody radiation. But to fit your model, let it be bi-chromatic, i.e., comprising two different monochromatic components. They could have any relative intensity, but to fit your model, imagine that each has an intensity of I0/2 (a number that is not needed). Now direct this bi-chromatic light through a gas filter of unit length, and containing n absorbers, say of CO2. Measure the output intensity, I. The absorption coefficient will be α = log(I0/I), by definition. We will also define a cross section for the gas filter, σ = α/n. Beer’s Law says, for example, that I(n1+n2) = I(n1)*I(n2). It says transmissivity multiplies. The absorption coefficient or, equivalently, the cross section is not exactly frequency dependent. More precisely, they are spectrum dependent, and empirical.

        What you have done is try to formulate a joint absorption coefficient from individual absorption coefficients for each color. Pekka, above, warned against that. Beer’s Law doesn’t pretend to do it, and can’t.

        Beer’s Law applies to a single absorption coefficient for the spectrum of the light, an empirical constant. Once again, the independent variable in Beer’s Law is the number of absorbers, and not some vector of absorption coefficients for bands or lines. It is not restricted to monochromatic light. And being dependent on gas concentration, it matches what is needed for the AGW model.

      • I did not claim that papers, refereed or not, or anyone or anything to be false and a fraud. I said only that AGW is false and a fraud.

        Ok, but now I’m very confused. You wrote “After all, AGW is supported by tens of thousands of refereed papers, and it is false and a fraud.” Are you saying that those papers are true and only AGW is false? If the tens of thousands of papers support AGW and they are all true, then how could AGW possibly be false?

        Your manipulation of exponentials contains no reference to the number of particles, aka the concentration of the gas.

        As explained at the relevant Wikipedia article, “[Beer’s Law] states that there is a logarithmic dependence between the transmission (or transmissivity), T, of light through a substance and the product of the absorption coefficient of the substance, α, and the distance the light travels through the material (i.e. the path length), ℓ.”

        For gases it then gives the transmissivity T as exp(−α′ℓ). Except for the ′, which is only there to avoid confusion with the liquid α, this is exactly what I wrote for the intensity resulting from inputting unit intensity, which is what transmissivity means.

        It goes on to explain that “the absorption coefficient [α] can, in turn, be written as a product of either a molar absorptivity (extinction coefficient) of the absorber, ε, and the concentration c of absorbing species in the material, or an absorption cross section, σ, and the (number) density N’ of absorbers.” The concept of absorption coefficient α is the appropriate way to combine the two notions into a single more abstract one. α depends on concentration, equivalently on number density. For the proof I gave either notion works, as can be seen from the fact that it only needed to refer to α and not to its deconstruction as either α = εc or α = σN.
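
        A small Python sketch of that point, with purely hypothetical numbers: whether α is assembled as εc or as σN, the transmissivity only ever sees the product αℓ.

          import math

          # Hypothetical values chosen so that eps*c equals sigma*N; nothing here comes from
          # the article, the point is only that T depends on alpha alone.
          ell = 2.0                 # path length, assumed units
          eps, c = 0.3, 1.5         # molar absorptivity and concentration (assumed)
          sigma, N = 0.09, 5.0      # absorption cross section and number density (assumed)

          T_from_concentration = math.exp(-(eps * c) * ell)
          T_from_number_density = math.exp(-(sigma * N) * ell)
          print(T_from_concentration, T_from_number_density)   # both are exp(-alpha*ell)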

        With no reliance on the concentration, you are not dealing with Beer’s Law.

        The concentration is a factor of α. That is, α depends on concentration.

        The absorption coefficient or, equivalently, the cross section is not exactly frequency dependent.

        I don’t know what “not exactly” means in this context. Either α is frequency dependent or it isn’t. If it weren’t, there would be no concept of an atmospheric window allowing some of Earth’s radiation to leak to space while blocking other radiation.

        3. Your derivation does not apply to Beer’s Law. The most elementary form, which I addressed to you directly, is that t(n1+n2) = t(n1)*t(n2), where t is the transmissivity and n1, n2 are numbers of absorbers.

        I used the definition straight out of the Wikipedia article. The property t(n1+n2) = t(n1)*t(n2) follows as a consequence of Beer’s Law but the converse is false: you cannot deduce Beer’s Law from that property alone because it fails your own test: it has no reliance on the concentration. Therefore t(n1+n2) = t(n1)*t(n2) cannot be Beer’s Law because it is an incomplete expression of the law.

        What you have done is try to formulate a joint absorption coefficient from individual absorption coefficients for each color. Pekka, above, warned against that. Beer’s Law doesn’t pretend to do it, and can’t.

        Could you please clarify two things for me? (Since you don’t like working with the absorption coefficient α, let me expand it as α = σN so that the number N of particles is now explicit.)

        Question 1. Does T = exp(−σℓN) express Beer’s Law for you? (This is straight out of the Wikipedia article.)

        Question 2. Assuming it does, does T = (exp(−σℓN) + exp(−rσℓN))/2 express Beer’s Law for radiation with two wavelengths of equal intensity with associated absorption cross sections σ and rσ respectively? (Here r is simply the ratio of the two absorption cross sections, that is, r = σ′/σ where σ′ is the absorption cross section associated with the second wavelength.)
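
        One way to probe Question 2 numerically, with hypothetical numbers: if Beer’s Law in the single-exponential form T = exp(−kℓN) held for the two-wavelength mixture, the effective coefficient k_eff = −ln(T)/(ℓN) would come out the same at every N.

          import math

          # Hypothetical values: cross section sigma for one wavelength, r*sigma for the other.
          sigma, r, ell = 1.0e-3, 20.0, 1.0

          def T(N):
              """Transmissivity of an equal-intensity two-wavelength beam through N absorbers."""
              return 0.5 * (math.exp(-sigma * ell * N) + math.exp(-r * sigma * ell * N))

          for N in (10, 100, 1000, 10000):
              k_eff = -math.log(T(N)) / (ell * N)   # would be constant if a single exponential fit
              print(N, T(N), k_eff)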

      • Vaughan Pratt 1/13/11 at 2:29 am,

        1. You wrote, >>Are you saying that those papers are true and only AGW is false? If the tens of thousands of papers support AGW and they are all true, then how could AGW possibly be false?

        Many otherwise acceptable papers have a compulsory, perfunctory, deferential salute to AGW in the opening paragraphs. This respect for the dogma-du-jour is what professional journals now require.

        2. You misquoted Wikipedia. I can hear Reagan saying, “There you go again.”

        You wrote, >>As explained at the relevant Wikipedia article, “[Beer’s Law] states that there is a logarithmic dependence between the transmission (or transmissivity), T, of light through a substance and the product of the absorption coefficient of the substance, α, and the distance the light travels through the material (i.e. the path length), ℓ.” Where you inserted “[Beer’s Law]”, the article said “The law”, and at the top of the page you’ll see that it is referring to the “Beer-Lambert Law”.

        So the answer to your Question 1 is no, or, in the laboratory idiom, not exactly.

        The Beer side of the Law says that transmittance is proportional to the number of absorbers. The Lambert side says that transmittance is proportional to path length. You’ve managed to turn Beer’s Law into Lambert’s Law.

        You wrote, >> t(n1+n2) = t(n1)*t(n2) … has no reliance on the concentration.

        The numbers n1 and n2 stand for the concentration. They are the number of absorbers encountered by the light passing through the medium.

        You say, >>Therefore t(n1+n2) = t(n1)*t(n2) cannot be Beer’s Law because it is an incomplete expression of the law.

        Not true, and for the reason above. The formula holds for Beer’s Law, not the Beer-Lambert Law.

        3. As to your Question 2: No. Your expression is equivalent to asking if T = T1 + T2 under Beer’s Law. It should read T = T1*T2, as you yourself showed in your paragraph 3. More specifically in your notation, T(N+rN)=T(N)*T(rN).

        4. You >>don’t know what “not exactly” means in this context.

        The absorption coefficient depends on the spectrum of the light source and the type or composition of the absorbing gas. It is not linear in the light source spectrum or gas mix, nor does its dependence follow any simple mathematical relationship. It is an empirical number, meaning that it is the result of an experiment. If you varied the light source spectrum to measure various absorption coefficients, you could not graph the results because you wouldn’t be able to order the spectra.

      • Many otherwise acceptable papers have a compulsory, perfunctory, deferential salute to AGW in the opening paragraphs. This respect for the dogma-du-jour is what professional journals now require.

        That may be, though as a journal editor myself (in a different field), I would reject any paper that “deferentially saluted” a false theory, on the ground that this made the paper itself false.

        But it’s interesting to see how your earlier “supported AGW” has weakened to “compulsory, perfunctory, deferential salute to AGW.” That’s not support, that’s intellectual dishonesty.

        You hold that all the editors of the journals that publish papers that “deferentially salute” AGW, as you call it, are in on this together, and conspire to select only referees who will turn a blind eye to “deferential saluting of a false theory.” That’s what’s known as a conspiracy. That makes you a conspiracy theorist.

      • Vaughan Pratt 1/13/11 at 12:07 pm,

        1. The time has come to re-post, on Climate Etc. for the first time, a canonical critique of modern peer-reviewed journals, this from Richard Horton, MD, editor of one of the most prestigious professional journals, The Lancet.

        >>The mistake, of course, is to have thought that peer review was any more than a crude means of discovering the acceptability – not the validity – of a new finding. Editors and scientists alike insist on the pivotal importance of peer review. We portray peer review to the public as a quasi-sacred process that helps to make science our most objective truth teller. But we know that the system of peer review is biased, unjust, unaccountable, incomplete, easily fixed [ jiggered, not repaired], often insulting, usually ignorant, occasionally foolish, and frequently wrong. Id.

        http://en.wikipedia.org/wiki/Peer_review, citing from http://www.mja.com.au/public/issues/172_04_210200/horton/horton.html .

        2. We’re (editorial we) not necessarily talking about a false theory here, but a theory that has risen to a dogma, a belief system beyond criticism. That moves the model out of science and into a belief system. The evidence for this transition includes the castigation of scientific skeptics, practitioners of a virtue in science, as deniers. This immediately casts the castigator as a believer. These gnarly-fingered priests might better use the word heretic, but that would be too honest and forthright. Instead they hurl skeptic as an epithet, which is actually humorous from outside the temple. This accusation is akin to the scientifically illiterate dismissing a scientific proposition as being no more than a theory, which is actually a category of honor for great achievements in science that are not yet laws.

        3. Little recognized is the degree of damage that a single error or omission can do under the scientific method. Failure to validate a possible consequence of what would otherwise be a scientific theory limits the model to a hypothesis. Omission of an element of the model limits it further to a conjecture. When a paper inserts a gratuitous and dubious identification with a dogma, an otherwise perfectly good work is suspect. When Pekka Pirilä confesses that he is convinced that humans are harming the climate, he puts everything that he writes under a cloud. To read his comments, I imagine that each begins, “Assuming humans can cause global warming, … “.

        The path of science can prove too narrow for wannabe scientists.

        You wrote, >>That’s what’s known as a conspiracy. [True.] That makes you a conspiracy theorist. [I don’t think so.]

        As in psychiatry, it’s not paranoia when they are actually out to get you.

      • Vaughan Pratt 1/13/11 at 12:07 pm,

        P.S.

        Here’s an explicit example of a decent paper being polluted with the obligatory salutes to AGW, made all the more interesting because of its authorship, its early date, and its relevance on this thread. In an otherwise sound paper on radiative transfer using a 1D model are the following remarks, from Lacis, A., J. Hansen, P. Lee, T. Mitchell, and S. Lebedeff, “Greenhouse Effect of Trace Gases, 1970-1980”, Geophys. Res. Lett., v. 8, no. 10, 9/29/81:

        >>Several other trace gases MAY have increased in the 1970’s as a result of MAN’S ACTIVITIES. Caps added, citation deleted, id., p. 1036.

        and

        >>These results underline the importance of UNDERSTANDING MAN’S IMPACT on tropospheric composition. It is now clear that several trace gases SIGNIFICANTLY impact the radiation and energy budget at the earth’s surface and are capable of modifying our climatic environment. It is thus imperative to develop a basic UNDERSTANDING of the factors determining the abundance of such trace gases and their sensitivity to ANTHROPOGENIC INFLUENCE. Caps added, id., p. 1038.

        The first citation (from p. 1036) is a dangling insertion. Deleting it has no impact on the paragraph, which ends inconclusively with, “Clearly a continuing inventory of atmospheric composition is needed.”

        The second citation (from p. 1038) is the last paragraph of the paper, and the last of three paragraphs comprising the Discussion section, which serves as a Conclusion. The discussion teaches the following:

        • The greenhouse effect of the trace gases in the 70s was 70% as large as that due to CO2.

        • Little evidence exists that the trace gases added much prior to 1970.

        • Nevertheless, and notwithstanding the uncertainties in the increases, the authors conclude the trace gas effect was comparable to that of CO2.

        • “[T]he combined growth of CO2 and other trace gases in the 1970’s was sufficient to cause a net computed greenhouse warming for the decade similar in magnitude to natural decadal temperature variability”.

        • “[T]he combined warmings in the 1970’s and 1980’s should exceed natural variability”, notwithstanding that “measurements, especially of trace gas abundances and solar irradiance, are needed during the 1980’s to permit cause and effect association of observed warming with the greenhouse gases.”

        The concluding paragraph is not just irrelevant; it is contraindicated by the rest of the discussion. The twin qualifications, from the importance of human effects to the importance of understanding them, seem an attempt to rescue impractical subject matter by appealing to higher, academic values of knowledge for its own sake.

        The authors’ RT calculations show little greenhouse effect, the results are inadequate to assign cause and effect to temperature and gas concentrations, and the results cannot distinguish among competing sources for the gas or for the warming. These are conclusions drawn from a study of total climate effects, necessarily a composite of natural and anthropogenic forces. One might better conclude that modest effort should be expended on understanding human contributions to global warming until the human effects can be weighed against natural increases in the gases, and until an excess warming can be estimated above the background of natural, and especially solar, forces.

        The gratuitous bits on human causes may not have been peer-review obligatory in 1981, but they have been for decades since. They diminish what might otherwise have been a provocative and unexpectedly frank paper.

      • As in psychiatry, it’s not paranoia when they are actually out to get you.

        Maybe you can find ways not to be discovered…
        We know plenty of ‘them’ hide their inconvenient discoveries behind ‘consensus-accepted’ words.

        See this one, how beautiful (read carefully the introduction and the last page, not only the conclusion):

        http://www.atmos-chem-phys.org/10/10941/2010/acp-10-10941-2010.pdf

        Poor guys, they just discovered that cosmic rays could explain about 4/5 of the changes in ‘SLAT anomalies’… However, the most incredible thing is the way they’ve hidden, with a very careful choice of words (some even contradicting their results), how disturbing their study is to the orthodoxy…

      • Sorry, could be 3/4 instead of 4/5. Now, I don’t have enough information on ‘SLAT’ data to be sure (if you have some, don’t hesitate).

      • The numbers n1 and n2 stand for the concentration. They are the number of absorbers encountered by the light passing through the medium.

        In order for this to be equivalent to Beer’s Law there must be a translation between the standard formulation as T = exp(−αℓ) (the way the Wikipedia article for example expresses it) and your formulation, in both directions. How do you persuade people that your idea of Beer’s Law is equivalent to T = exp(−αℓ)?

        (I have one possible guess but I wouldn’t want to misrepresent you by guessing wrong.)

        3. As to your Question 2: No. Your expression is equivalent to asking if T = T1 + T2 under Beer’s Law. It should read T = T1*T2, as you yourself showed in your paragraph 3. More specifically in your notation, T(N+rN)=T(N)*T(rN).

        My apologies if I did not make myself clear. I was not asking about the effect of composing two absorbers sequentially, which is the situation described by T = T1*T2. In that situation T1 is the fraction coming out of the first absorber and going into the second, and T2 is the fraction of that coming out of the second. Obviously those fractions have to be multiplied.

        I was asking about the effect of injecting two different wavelengths of equal energy into a single absorber. That effect has to be additive or you will violate both energy conservation and units. If 0.5 watts enter the gas at each of the two wavelengths then the total light entering is 1 watt (unit intensity), not 0.25 square watts.

        The absorption coefficient depends on the spectrum of the light source and the type or composition of the absorbing gas. It is not linear in the light source spectrum or gas mix, nor does its dependence follow any simple mathematical relationship. It is an empirical number, meaning that it is the result of an experiment. If you varied the light source spectrum to measure various absorption coefficients, you could not graph the results because you wouldn’t be able to order the spectra.

        I think I understand what you’re saying. But that last sentence is a strawman. People don’t just randomly “vary the light source spectrum,” they sweep a very narrowband signal across the spectrum and measure the absorption at each frequency. Whenever a frequency is encountered at which measurable absorption occurs, that is a line. A number of properties of that line are then carefully measured and reconciled with a quantum mechanical analysis of the Hamiltonian for the wavefunctions for the vibrational quanta associated with that line.

        A typical entry for a line in the HITRAN08 database, with added explanatory annotations, is as follows.

        Molecule 2 (CO2)
        Isotopologue number 1 (first species, ¹²C¹⁶O₂)
        Transition wavenumber (cm⁻¹) 2361.465809 (ν̃, = ν/c)
        Line intensity 3.524E-18 (the strongest line of all)
        Einstein A-coefficient 2.066E+02 (emission rate, 206 photons/sec)
        Air-broadened width .0734 (line width in STP air)
        Self-broadened width 0.101 (air-independent)
        lower-state energy 106.1297 (in cm⁻¹)
        Temperature dependence 0.73 (exponent for air width)
        Pressure shift -.002280 (goes with air width)
        upper vibrational quanta 0 0 0 11 (transitions)
        lower vibrational quanta 0 0 0 01 (transitions)
        Upper local quanta
        Lower local quanta R 16e
        Error codes 465554 (6 uncertainties)
        References codes 6 2 1 1 1 1 (Hitran sources)
        Flag for line-mixing
        upper statistical weight 35.0 (for scaling)
        lower statistical weight 33.0

        This is atypical only in that it is the strongest of the 128,170 recorded lines of the ¹²C¹⁶O₂ species of CO2. 206 photons per second is close to as high as it gets; much weaker lines generally have much slower emission rates.

        There is a separate line spectrum for each of the seven CO2 species, and together these constitute the line spectra for the CO2 molecule, consisting of 419,610 lines at wavenumbers from 5 to 12,784 cm⁻¹. Altogether 42 molecules, each with various species, are catered for, with H2O heading the list of course and CO2 placed second.
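
        To connect such an entry to the absorption coefficients discussed above, here is a hedged Python sketch: it turns the quoted line centre, intensity and air-broadened width into a monochromatic absorption cross section, assuming a simple Lorentz pressure-broadened shape at the reference conditions and ignoring the pressure shift, temperature scaling and Doppler broadening. Feeding σ(ν) into exp(−σ(ν)·column) wavenumber by wavenumber is the line-by-line use of the database.

          import math

          # Parameters copied from the entry above; the Lorentz shape and the neglect of Doppler
          # broadening, pressure shift and temperature scaling are simplifying assumptions.
          nu0 = 2361.465809    # line-centre wavenumber, cm^-1
          S = 3.524e-18        # line intensity, cm^-1 / (molecule cm^-2)
          gamma = 0.0734       # air-broadened half-width, cm^-1

          def cross_section(nu):
              """Approximate absorption cross section (cm^2/molecule) at wavenumber nu (cm^-1)."""
              return S * (gamma / math.pi) / ((nu - nu0) ** 2 + gamma ** 2)

          for offset in (0.0, 0.1, 1.0, 10.0):   # stepping away from the line centre
              print(offset, cross_section(nu0 + offset))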

        Regrettably I will be away for a few days on an urgent project and unable to respond to further messages until I return.

      • Vaughan Pratt 1/13/11 at 2:41 pm,

        1. First, the relationship T = exp(−αℓ) is the mathematical consequence of Beer’s observation in two steps. It needs more because the number in the exponent has the dimension of length. It must be dimensionless, so read the equation to include some scale factor with dimension per length.

        According to Pfeiffer and Liebhafsky, “Beer’s law in analytical chemistry”, J. Chem. Educ., 1953, the modern form of Beer’s Law was not formulated by Beer, and they suggest the attribution is a misnomer. Beer experimented with light filtered to make it nearly monochromatic (1854); however, the reasons, if known, may be hidden in his 1865 paper, available only in German. A century earlier, Lambert had discovered the dependence of absorption on path length, and observed that the number of particles was a factor in absorption, but did not formulate it. Source: Ball, W. D., “The basics of spectroscopy”, Google Books. Lambert’s work was before the invention of the diffraction grating (Rittenhouse, 1785), before the discovery of absorption lines in solar radiation (Wollaston and Fraunhofer, 1814), and before the discovery that substances had individual spectra (Kirchhoff, 1859). The fact that Beer was experimenting with narrowband light does cast doubt on the idea that Beer’s Law is free of frequency. On the other hand, Lambert’s broadband experiments and observations support that notion, and modern derivations of Beer’s Law are not restricted to monochromatic light. Furthermore, I have been able to show, in a yet-to-be-published paper, the most encouraging result that the MODTRAN output comprises the sum of a few Beer’s Law responses.

        This does not answer the question how well Beer’s Law fits with spectrally complex radiation, or how one might rationalize a total, empirical absorption coefficient with a set of empirical absorption coefficients covering the spectral density.

        2. You ask, >How do you persuade people that your idea of Beer’s Law is equivalent to T = exp(−αℓ)?

        To derive Beer’s Law in the modern form, first recognize that the transmissivity is a number less than one, and from assumed experimentation with a pair of gas filters the total transmissivity must be the product of the two individual transmissivities. Next note from assumed empirical evidence that the transmissivity depends on the gas concentration, or, equivalently, on the number of absorbers in the path of the light through the filter. These provide the basic functional equation t(n1+n2)=t(n1)*t(n2), for which the solution is unique – it is the exponential: t(n) = exp(−σn), where σ, a free constant called the cross section, is any positive number, and we’re free further to define α, called the absorption coefficient, as σn. I think this answers your question.
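
        For completeness, the uniqueness step can be made explicit; assuming only that t is differentiable and that t(0) = 1, the functional equation forces an exponential:

          \[
          t(n_1+n_2)=t(n_1)\,t(n_2),\qquad t(0)=1
          \;\Longrightarrow\; t'(n)=\lim_{h\to 0}\frac{t(n+h)-t(n)}{h}=t(n)\lim_{h\to 0}\frac{t(h)-1}{h}=t'(0)\,t(n)
          \;\Longrightarrow\; t(n)=e^{t'(0)\,n}\equiv e^{-\sigma n},\qquad \sigma:=-t'(0).
          \]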

        3. You use the word “absorber” where I have been using “filter”. In the usual analysis of Beer’s law, the word “absorber” refers to a molecule of the absorbing gas. So when you write about

        >>injecting two different wavelengths of equal energy into a single absorber,

        you are talking about a light source comprising two distinct spectral lines passing through a gas filter, right?

        I agree with the power you assign to the light entering. I have been giving it the symbol I0. I don’t see any problem with it. When we add EM energy that is incoherent, the powers of the components sum. If the energy is coherent, the components must be added as potentials and then the power computed from the total. Just keep in mind that the absorption coefficient is an empirical parameter according to the experiment, and it may well vary with temperature, pressure, or the presence of other gases.

        Yes, that last sentence was a straw man. It was a thought experiment to make a point.

        4. Thanks for the reference on the HITRAN entry. Some time ago, I downloaded Rothman’s files on HITRAN and HAWKS, but haven’t had time to dig into them. David Archer’s 1995 on-line MODTRAN was just too convenient. I still want to learn more about other features, especially how the continuum and other structures are represented.

        5. Meanwhile, I want to pick a nit with you. Science has to be perfectly precise with language. It is tolerant of no ambiguity. That is why I insist on spectral density vs. spectrum and a proper use of the word line, regardless of the idiom we use in casual conversations. Further, a goodly number of empty words creep into scientific dialog and papers. Significant, for example, should be used only for statistical significance. Experienced writers used to tell tyros to use a profanity in place of very because the editor will delete the profanity. Lastly, for the moment, you use the word carefully, but it is best reserved to distinguish an action from others which are done carelessly.

      • Sorry, ignore my remark a few minutes ago about “square watts,” I just got caught up with this post and I guess we’re in agreement on that point.

        Regarding monochromatic light, when the absorber consists of aerosols then Beer’s Law is true for broadband radiation because aerosols aren’t terribly frequency selective. I very much doubt the early developers and users of Beer’s Law had paid any attention to absorption spectra.

        CO2 is not at all like aerosols, as one might imagine from how perfectly transparent it looks compared to smog etc. Beer’s Law fails very badly for greenhouse gases like CO2 when applied to nonmonochromatic radiation.

      • The Beer side of the Law says that transmittance is proportional to the number of absorbers. The Lambert side says that transmittance is proportional to path length. You’ve managed to turn Beer’s Law into Lambert’s Law.

        Jeff, you misquoted Wikipedia. I can hear Reagan saying, “There you go again.”

        The Wikipedia article says very clearly, right at the beginning, “In optics, the Beer–Lambert law, also known as Beer’s law or the Lambert–Beer law or the Beer–Lambert–Bouguer law…”

        I’m afraid you’re going to have to come up with some other reference than Wikipedia to support your novel claim that Beer’s Law and the Beer-Lambert Law differ in any way whatsoever.

        3. As to your Question 2: No. Your expression is equivalent to asking if T = T1 + T2 under Beer’s Law. It should read T = T1*T2, as you yourself showed in your paragraph 3. More specifically in your notation, T(N+rN)=T(N)*T(rN).

        Could you explain once again how injecting 0.5 watts of power of one wavelength and 0.5 watts of power of a second wavelength into an absorbing medium become 0.25 square watts of something? I could have sworn that 0.5 watts plus 0.5 watts was 1 watt, how did my math turn out so horribly wrong?

      • Vaughan Pratt 1/20/11 at 5:18 am,

        You, not I, relied on Wikipedia in this instance. 1/13/11, 2:29 am.

        I pointed out that you misquoted Wikipedia. 1/13/11, 10:11 am.

        Now you rely on another section of Wikipedia, which you did not quote, for an ambiguity to support your misquote.

        Wikipedia has no standards for definitions. Anyone can write anything there, which, contrary to your opinion, is why I wrote,

        >>Most often I start research with Wikipedia. For that it is a fine source. I never rely on Wikipedia. Some of the articles are horrible and wrong. 12/22/10, 1:34 am.

        I went on to give examples. Wikipedia is encyclopedic in scope, not accuracy.

        Now you ask for

        >>some other reference than Wikipedia to support your novel claim that Beer’s Law and the Beer-Lambert Law differ in any way whatsoever.

        Here they are, both from the Handbook of Chemistry and Physics, 34th Ed.:

        >>Beer’s Law (1852). If two solutions of the same salt be made in the same solvent, one of which is, say, twice the concentration of the other, the absorption due to a given thickness of the first solution should be equal to that of twice the thickness of the second.

        >>Lambert’s Law of absorption. [Circa 1750]. Each layer of equal thickness absorbs an equal fraction of the light which traverses it.

        Note that the independent variable in Lambert’s Law is the distance traveled. While the formulation given here for Beer’s Law seems to rely on Lambert’s Law, what is novel is that Beer introduced concentration as the independent variable. The Beer-Lambert Law combines them.

        Note also that neither law depends on frequency (for engineers) nor wavelength (for physicists) nor AGW (for climatologists).

      • Note also that neither law depends on frequency (for engineers) nor wavelength (for physicists) nor AGW (for climatologists).

        Be careful to distinguish broadband absorption from absorption lines. When all wavelengths find it equally easy to make their way through something like a weak milk or ink solution then there is little or no dependence on wavelength.

        CO2 does not absorb anything like milk or ink, it is hugely frequency selective. This is the point of having the HITRAN data.

        Furthermore the lines vary by many orders of magnitude, which means that if you have a given number of CO2 molecules and a beam of radiation consisting of just two frequencies that land on very different absorption lines in the CO2 spectrum, it looks to each wavelength as though there is a hugely different number of molecules, even though there is not. That’s what the strength concept is all about.

      • Vaughan Pratt 1/21/11 at 2:21 am,

        What I said and you quoted was an observation about Beer’s and Lambert’s Laws. Are you saying that my observation was in error, suggesting that these laws do have a frequency dependence?

        You draw information from colloidal suspensions (ink, milk), but these laws apply instead to fluid solutions.

        The following underscores Beer’s contribution in showing concentration as the independent variable and transmittance as the dependent variable:

        >> the transmittance of a concentrated solution can be derived from a measurement of the transmittance of a dilute solution (Beer, 1852).

        http://www.canberra.edu.au/irps/archives/vol21no1/blbalaw.html

        Are you saying that Beer was wrong? Or, when you observe that “lines vary by many orders of magnitude”, are you implying that Beer’s Law has an accuracy limitation, that it exhibits low order effects?

        You refer again (1/12/11, 2:27 am) to HITRAN data, and I answered then that HITRAN is used for Lambert-Beer Law calculations (1/12/11, 5:04 pm), quoting Laurence Rothman. You never responded to that point. Are you implying, as others posting here have done, that calculations with HITRAN somehow invalidate Beer’s Law?

        Some write Beer’s Law, a wide band, total power relationship, converting it into a monochromatic relationship. If it’s a monochrome law, why use a spectral database, i.e., HITRAN, when a single absorption coefficient is sufficient? Alternatively, if Beer’s Law is not monochrome but instead is narrowband, why don’t investigators write it as a narrowband law?

        The powerful interest in disassembling Beer’s Law arises from the fact that it confounds a vital assumption in AGW, namely that CO2 radiative forcing depends on the logarithm of CO2 concentration. If you would emancipate yourself from that belief system, that defunct conjecture, you’d be rewarded with the opening of a panoply of physics.

      • Are you saying that my observation was in error, suggesting that these laws do have a frequency dependence?

        Yes. Pekka and Dewitt have said the same repeatedly, long before I joined their chorus.

        You draw information from colloidal suspensions (ink, milk), but these laws apply instead to fluid solutions.

        By “fluid solution” I’m guessing you must mean “liquid solution” in this context. A fluid can be either a liquid or a gas.

        The principle is the same for all fluids, whether liquid or gas, but if you must have it for gases then use soot as an aerosol in place of milk or ink.

        In the case of a kind of soot that absorbs all frequencies equally, Beer’s Law need not depend on frequency. Many aerosols however do show some frequency dependence, with absorption lines several cm^-1 wide. But greenhouse gases have absorption lines that are about 0.07 cm^-1 wide.

        The following underscores Beer’s contribution in showing concentration as the independent variable and transmittance as the dependent variable:

        Ok, now here’s the biggest mystery of all. In “the following” that you pointed to just now, one reads,

        “In 1852, Beer published a paper on the absorption of red light in coloured aqueous solutions of various salts. Beer makes use of the fact, derived from Bouguer’s and Lambert’s absorption laws, that the intensity of light transmitted through a solution at a given wavelength decreases exponentially with the path length d and the concentration c of the solute (the solvent is considered non-absorbing).” (Emphasis added)

        Yet you continue to deny the “at a given wavelength” part. How can you possibly continue to ignore Pekka and Dewitt and me and deny this dependence on wavelength, and yet quote from a source like this that directly contradicts you on this point? Have you ever experienced any difficulty in maintaining attention in classes?

        Are you implying, as others posting here have done, that calculations with HITRAN somehow invalidate Beer’s Law?

        (a) I’ve never implied that. (b) Who are these “others” you speak of who did deny it? If they did then they’re wrong.

        Some write Beer’s Law, a wide band, total power relationship, converting it into a monochromatic relationship. If it’s a monochrome law, why use a spectral database, i.e., HITRAN, when a single absorption coefficient is sufficient? Alternatively, if Beer’s Law is not monochrome but instead is narrowband, why don’t investigators write it as a narrowband law?

        Beer’s Law is monochrome. It is not a narrowband law. And a spectral line is not a delta function, it has a shape. You cannot accurately incorporate that shape into Beer’s Law as a single coefficient; instead you have to treat every wavenumber across that spectral line separately, deriving a separate absorption coefficient for each wavenumber (but it is a reasonable approximation to consider each of those wavenumbers to itself be of finite width so long as the shape does not vary in strength by more than a factor of about 2 within that range).
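
        Here is a sketch of that per-wavenumber procedure in Python, with made-up line parameters: one Lorentz line sampled on a fine grid, Beer’s Law applied at each grid point and the results averaged over the band, compared against a single band-average coefficient. The gap between the two numbers illustrates the warning, given earlier in the thread, against replacing a varying absorption coefficient by its average.

          import math

          # All numbers are hypothetical. One Lorentz line, sampled at 0.001 cm^-1 resolution.
          S, gamma, nu0 = 1.0e-20, 0.07, 667.0   # line intensity, half-width, centre (cm^-1)
          column = 1.0e21                        # absorber column, molecules per cm^2 (assumed)

          grid = [nu0 - 1.0 + 0.001 * i for i in range(2001)]   # +/- 1 cm^-1 around the line
          sig = [S * (gamma / math.pi) / ((nu - nu0) ** 2 + gamma ** 2) for nu in grid]

          T_per_wavenumber = sum(math.exp(-s * column) for s in sig) / len(sig)
          T_single_coeff = math.exp(-(sum(sig) / len(sig)) * column)
          print(T_per_wavenumber, T_single_coeff)   # the two treatments differ substantially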

        The powerful interest in disassembling Beer’s Law arises from the fact that it confounds a vital assumption in AGW, namely that CO2 radiative forcing depends on the logarithm of CO2 concentration. If you would emancipate yourself from that belief system, that defunct conjecture, you’d be rewarded with the opening of a panoply of physics.

        Why are you bringing in the Arrhenius law all of a sudden? It has nothing whatsoever to do with these more fundamental aspects of absorption. I could “emancipate” myself from it and it would change nothing in the above.

        In the meantime you still haven’t satisfactorily answered the following question I posed to you over a week ago.

        “Question 2. Assuming it does, does T = (exp(−σℓN) + exp(−rσℓN))/2 express Beer’s Law for radiation with two wavelengths of equal intensity with associated absorption cross sections σ and rσ respectively? (Here r is simply the ratio of the two absorption cross sections, that is, r = σ′/σ where σ′ is the absorption cross section associated with the second wavelength.)”

        You first mistook it for a question about two absorbers in series, but later agreed it was about two frequencies being input into one absorber. However you did not answer the original question. Does that expression express Beer’s Law? Yes or no?

      • Vaughan Pratt 1/21/11 at 3:26 am,

        1. You say,

        >>Pekka and DeWitt have said the same repeatedly, long before I joined their chorus.

        That is not true. What they DID say happens to have been wrong, just like the error you repeat. But the subject at hand was whether my observation about the definitions I gave you was in error. You wanted definitions, I gave them to you, and they say nothing about frequency. The definitions I supplied are evidence that Beer’s Law and Lambert’s Law are (a) different and (b) independent of frequency.

        2. You write,

        >>By “fluid solution” I’m guessing you must mean “liquid solution” in this context. A fluid can be either a liquid or a gas.

        You guess wrong. I meant fluid, and chose the word with care. The original experiments may have been done with liquids, and some even with narrowband light. The laws generalize from those experiments and are not restricted to them. One law refers to solution, and the other is ambiguous. They apply to fluids.

        3. You write,

        >>… but if you must have it for gases then use soot as an aerosol in place of milk or ink.

        The laws apply to fluids, not colloidal suspensions. Having it for gases is according to the laws. It does not extrapolate to suspensions, which are dominated by particulate scattering.

        4. You quote correctly from the paper. I used that paper NOT as a source for what the author, Leif Gerward, said, but for a paragraph, “the following”, that he quoted in translation from Beer’s paper. This is because I don’t read German. You have changed the meaning of the word following.

        When “Beer makes use of [a] fact” (Gerward), that is not the same as saying his law was restricted to that fact. I thought you specialized in logic! These laws are broader than the experiments, as is always the case in science. Beer’s law is restricted to a given wavelength, or to red light, only if it says so. It doesn’t.

        5. You write,

        >>Beer’s Law is monochrome. It is not a narrowband law.

        Beer’s Law says nothing about frequency, so it is neither. When it is reduced to its exponential form, a factor called the absorption coefficient must be introduced to keep the exponent dimensionless. In the form of exponentials, sums of Beer’s Law functions are closed (meaning the sum is also a Beer’s Law function) if and only if the absorption coefficient is a constant (generally not zero). So Beer’s Law, when reduced to the exponential form, covers as many frequencies as have the same, or approximately the same, absorption coefficient. That is, it is (a) not monochrome and (b) narrowband, approximately or allowing for disjoint frequencies.

        6. You ask,

        >>How can you possibly continue to ignore Pekka and Dewitt and me and deny this dependence on wavelength, and yet quote from a source like this that directly contradicts you on this point? Have you ever experienced any difficulty in maintaining attention in classes?

        6.1 Rather than ignore Pekka, DeWitt and you, I have spent a great deal of time detailing corrections to the thought processes you three practice. It is not to educate you, the students in my class, but the readership here. They need to be able to sort out rational argument from belief systems.

        DeWitt Payne provided a reference for his argument that Beer’s Law had been invalidated. I spent precious time reviewing that source, and detailing my results to him. It contradicted his conclusion. He gave me several other sources, but was unable to say what I was supposed to find in those sources, nor to direct me to the pages. He believes what he believes and cannot support it.

        Pekka Pirilä on the other hand finally admitted that he was personally convinced that humans were affecting climate, and left it at that – unsupported, a belief.

        6.2 My source was August Beer, not Leif Gerward. My reference was neither ambiguous nor did it contradict my point.

        7. You wrote,

        >>>> Are you implying, as others posting here have done, that calculations with HITRAN somehow invalidate Beer’s Law?

        >>(a) I’ve never implied that. (b) Who are these “others” you speak of who did deny it? If they did then they’re wrong.

        7a. You wrote,

        >>David, what would a lab experiment add to what you could not already infer from the HITRAN database? 12/19/10, 11:30 pm.

        >>>>Presumably, these investigators are equating the real world of gas absorption and their HITRAN based models of it.

        >>How are these different? I thought we’d dealt with that already. 12/24/10, 12:58 am.

        >>Let me see if I can convince you that Beer’s Law does not predict saturation. … HITRAN table … . 1/12/11, 2:27 am.

        >>CO2 … is hugely frequency selective. This is one point of having the HITRAN data. 1/21/11, 2:21 am.

        You didn’t exactly say HITRAN calculations invalidate Beer’s Law. Because you don’t realize that Beer’s Law demands saturation, I asked if you were implying that calculations with HITRAN invalidate it. Clearly that is what you are doing.

        7b. See DeWitt Payne, this thread, 12/17/10, 8:44 pm. See Pekka Pirilä, Radiative transfer discussion thread, 12/22/10, 4:16 am.

        In case there is a semantic confusion here, I used the word validate in the scientific sense: verification of prediction with measurements (facts).

        8. You ask,

        >>Why are you bringing in Arrhenius law all of a sudden?

        I didn’t even mention it, much less bring it up suddenly. But I think I see your confusion: Arrhenius said that Earth’s surface temperature, not CO2 radiative forcing, varied with the logarithm of CO2 concentration. For a complete explanation, see my post on this thread on 12/16/10, 6:12 pm.

        9. You wrote,

        >>In the meantime you still haven’t satisfactorily answered the following question I posed to you over a week ago.

        >>“Question 2. Assuming it does, does T = (exp(−σℓN) + exp(−rσℓN))/2 express Beer’s Law for radiation with two wavelengths of equal intensity with associated absorption cross sections σ and rσ respectively? (Here r is simply the ratio of the two absorption cross sections, that is, r = σ′/σ where σ′ is the absorption cross section associated with the second wavelength.)”

        >>You first mistook it for a question about two absorbers in series, but later agreed it was about two frequencies being input into one absorber. However you did not answer the original question. Does that expression express Beer’s Law? Yes or no?

        I answered the question “No.” 1/13/11, 10:11 am. Even with your explanation, the answer is still no. Assuming that your T stands for transmittance, Beer’s Law states T = exp(−kN), for some k. Your question asks, does the equation exp(−σℓN) + exp(−rσℓN) = 2exp(−kN) have a solution for all N? The answer is no. Without the factor of 2, the answer is yes, when r = 1, and neglecting some singular solutions with zeros, and when it does hold, it holds for any intensity values.

        Your question leads to the proof that Beer’s Law holds for other than monochrome light USING ABSORPTION CROSS SECTIONS, and it does so when the absorbing cross sections are equal.

        Note that by definition the absorption cross section is the integral of the product of an empirical or estimated spectral intensity function, sometimes S, and a wave shape function, sometimes L for Lorentz, over some two-sided band interval, 2λ. So what has to be equal, to some approximation to be determined, are two integrals. The spectral intensity function need not be the same, nor the wave shape function, nor the bandwidth interval at the two wavelengths.

      • 3. Contrary to your statement, Beer’s Law does indeed predict saturation. The question is how much saturation would exist in the atmosphere.

        I will accept this if you can argue that Beer’s Law is independent of wavelength. Simply referring to thousands of papers supporting your position won’t work because, as you have said, there are tens of thousands of fraudulent papers out there, so how do I know you’re not simply referring to those papers?

        This leaves you with no alternative but to convince me from first principles.

        If Beer’s Law does depend on wavelength, then saturation need not occur because increasing CO2 only closes off more of the atmospheric window while still leaving some of it open, part of which will be closed off by yet further increases.

  144. One can compare the lack of saturation of CO2’s influence to the lack of saturation in the flow of ever more skeptics, who do not want to believe even scientific facts known long before anybody was worried about climate change.

    Perhaps the logarithmic law applies also to the flow of skeptics.

    • Pekka, it only takes a level of 50% of skeptics to vote out of office those who are working to address the negative impact of CO2. At 60% the rest of the planet’s fossil fuel can be put into the atmosphere and the temperature raised to a level sufficient for a new mass extinction.

      Most humans have never had the opportunity to witness a mass extinction at first hand. I would love to see one, and therefore am throwing in my lot with those trying to stop the greenies. My goal is to convince people how wonderful it would be to witness a mass extinction at first hand instead of a mere Disney simulation thereof.

      • and that comment was just dumb

      • Ron, you are irony impaired. I think there must be an irony pill to address that.

      • Since we’re already well into a major mass extinction, Paul, the only question is whether we’re a quarter of the way into it or three-quarters of the way into it.

        If the latter then there’s little left to wax ironic about. Only for the former could one reasonably suspect me of irony.

        I’m not telling. ;)

      • And your pleasant sense of humor aside, that you believe this is rather pathetic:
        You seem to be attempting to imply that
        1- CO2 is going to kill us in massive ways
        2- that it is going to do so or start irrevocably doing so in your lifetime
        all because you seem to believe that the equations you outline mean that a dramatic, dangerous change in the climate is going to happen due to CO2 increasing.
        So we can put you down in the Lovelock school of climate hysteria.
        Thank you very much for playing.

      • hunter, are you denying that mass extinctions have been happening for some time now? If so then you truly have your head in the sand. At the very least you could park it where liberals do, which would be somewhat higher.

        The world’s lion population shrank by a factor of 10 in the two decades prior to 2003.

        Heard of wild dogs? Elephants? Gorillas? Rhinos? All almost extinct by now. Same article.

        Coincidence that all their populations would diminish to a negligible fraction of their former size all at the same time?

        I think not.

        Sarah Palin wants to wipe out the wolves and the grizzly bears. Other humans fear other animals and want to wipe them out. The movie Jaws prompted people to eliminate the world’s shark population. Millions of whales were killed in the past century; the century before that killed hardly any.

        And all that without having to refer to the many ocean species dependent on the ocean’s carbonate levels, which are being depleted by CO2 as we speak. The pH barely changes, yet each tiny decrease in pH corresponds to gigatonnes of calcium carbonate in the ocean being converted to soluble calcium bicarbonate, which makes it useless to marine life dependent on it.

        And you think no mass extinction is going to happen in our lifetime?

        It’s already happening, hunter. Get real.

      • Vaughan,
        You have packed so many fallacies into your post that I am sure it will take more than one reply.
        ‘Mass extinction’…and then you point out that wild African lions have lost population. Not extinct, fewer.
        If you could list the species that have massively gone extinct, maybe you would not look so foolish?
        Elephants? Not extinct.
        Mountain lions? increasing. Wolves? increasing. Polar bears? Increasing. Whales? increasing.
        And then of course the gratuitous Palin jab. Busy after causing that shooter to kill 6 people and shoot many more, she is now out wanting to drive wolves and grizzly bears into extinction.
        And I am certain you can find some evidence of this somewhere?
        And of course as we speak ocean carbonate levels are doing… nothing out of the ordinary.
        No wonder you are so overheated: your apocalyptic claptrap faith is failing to deliver that end-of-the-world panache, and you are tired of waiting on it.

      • Not extinct, fewer.

        By a factor of 10 in 20 years. Does that strike you as unremarkable?

        What is your prognosis for the next 20?

        Your denial that these populations have decreased dramatically in recent decades is completely consistent with your denial that humans are having any perceptible influence on the planet. You belong to the modest-and-humble crowd, that cannot picture humans harming a flea.

        If you could list the species that have massively gone extinct, maybe you would not look so foolish?

        I’d be happy to answer this if I knew what it meant. By “massively gone extinct” for a species, do you mean something different from “gone extinct?” Would you say Elvis Presley is merely dead but not yet massively dead, for example?

      • Vaughan,
        I nearly missed this part.
        You say that because of the movie Jaws, we ‘eliminated’ sharks from the oceans.
        Not Chinese overfishing pressuring shark populations, Jaws. And eliminate, not pressure.
        I was not aware of sharks being eliminated from the ocean.
        I am pretty certain no one else is, either.
        But I ate some shark steak just this past year. Glad I was able to get a last taste!

      • I was not aware of sharks being eliminated from the ocean

        I said it prompted people to eliminate them, I didn’t say they succeeded. Nevertheless the great white shark, the subject of that movie, is now considered endangered, as of 2004.

        But you’re wasting your time mocking me when much bigger targets await your judgment such as overfishing.org. You could do worse than to start with this page.

        Ah, what’s the point, you’ll just keep contradicting all this until the cows come home. Just keep your head stuck in the sand, hunter, it’s probably safer there.

  145. tallbloke 12/18/10 12:35 pm

    A note or two of caution. First the little model I gave you postulated that the surge in CO2 to 8000 ppm might have come from volcanoes, not from outgassing from the ocean.

    Second, solubility is proportional to the partial pressure of the gas above the liquid, and you used the maximum estimate of 8000 ppm. The constant of proportionality decreases with the temperature of the solvent. Let me guess that Ferdinand’s outgassing number of 35 ppm/ºC is based on current concentrations, and that at 8000 ppm it would be 20 times as large.
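
    A minimal sketch of that proportionality, using commonly quoted round numbers for CO2 in water that are assumptions here rather than anything from this thread: a Henry’s-law constant of roughly 0.034 mol/(L·atm) at 298 K and a van ’t Hoff temperature dependence of about 2400 K. It shows dissolved CO2 scaling linearly with the partial pressure, with the proportionality constant falling as the water warms.

      import math

      # Rough, commonly quoted values; treat them as illustrative assumptions only.
      KH_298 = 0.034       # Henry's law constant for CO2 in water at 298 K, mol/(L*atm)
      VANT_HOFF = 2400.0   # temperature-dependence parameter, K

      def henry_constant(T):
          """Approximate Henry's constant (mol/(L*atm)) at water temperature T in kelvin."""
          return KH_298 * math.exp(VANT_HOFF * (1.0 / T - 1.0 / 298.0))

      def dissolved_co2(p_co2_atm, T):
          """Dissolved CO2 (mol/L), proportional to its partial pressure above the water."""
          return henry_constant(T) * p_co2_atm

      for T in (273.0, 288.0, 303.0):
          print(T, dissolved_co2(390e-6, T), dissolved_co2(8000e-6, T))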

    Third, the model for oceanic outgassing is incomplete. The Ekman pump draws deep water to the surface, mostly in the Eastern Equatorial Pacific. That water was charged with CO2 at about 0ºC and at the prevailing CO2 concentration roughly a millennium ago, and held under pressure until rising to the surface to outgas. It was then warmed to the prevailing tropical Sea Surface Temperature, making the local air heavy with CO2 but net lighter because of high humidity. The CO2 rich, moist air rises in the atmosphere, cooling on the way, to enter the Hadley cells, shifted with the trade winds, and to descend on Hawaii. So MLO sits in the plume of the most intense outgassing on Earth. How much is outgassed depends on the prevailing concentration 1000 years or so ago, and that is unknown. The problem appears to stress the solubility model to account for the MLO concentration. More research is needed.

    P.S. Gaia sounds supernatural to me, putting it beyond science.

    • Jeff,
      Thanks for the great explanation on the co2 outgassing issue, I understand this better now.

      On Lovelock’s book, it’s a baby out with bathwater problem. There is some great science in there, but because of the ‘it’s as if….’ metaphor he uses, the valuable detail is dismissed along with the ‘holistic’ concept. This is unfortunate, because Lovelock made a valuable contribution to our knowledge of anti-entropic dynamic systems.

      If you can overcome your preconceptions, please do read it. In that early work, Lovelock himself is highly sceptical of the capacity of human effects on the global environment to cause any serious problem to a planetary atmosphere largely controlled by enormous natural cycles of fluxes emitted by microscopic lifeforms which agglomerate and outweigh us in sheer tonnage terms by many orders of magnitude.

      It seems his elevation to semi-deity by the greens got to him and he pandered to his audience in the end, but that doesn’t detract from the original book IMO.

    • P.S. Gaia sounds supernatural to me, putting it beyond science.

      I had the same reaction when I first heard about the concept, which was quite recently (we theoretical computer scientists can be pretty insular sometimes). After thinking about it for a while I concluded it was related to a number of natural phenomena such as Le Chatelier’s principle, osmosis, homeostasis, etc.

      However it takes life into account in explaining more complex adaptations than would be found on a lifeless planet. As I wrote at the talk page on the relevant Wikipedia article, “Life on Earth is without doubt a complex system. To the extent that life is in equilibrium, the mechanisms by which this is achieved would presumably constitute relatively sophisticated instances of the Gaia hypothesis in action.”

      On the one hand, “that’s all there is to it.” On the other, life is so complex by comparison to simple chemical reactions as to make the Gaia hypothesis itself incredibly sophisticated when viewed as a homeostat analogous to the thermostat attached to your house furnace.

      This was precisely what Lovelock had in mind when he first formulated his hypothesis (and I’m perfectly fine with continuing to call it a hypothesis since that term seems to have stuck). He developed the idea in the context of looking for ways to tell whether there was life on Mars. He felt that if Mars exhibited behavior more complex than that of a lifeless planet then that would increase the probability that it harbored life. He then elaborated that idea into a theory that has turned out to have remarkably good predictive value, as you can read at the relevant Wikipedia article.

      Viewed from the perspective of any other species on Earth, humans are supernatural. Any doubts you might have about this must surely be dispelled by this movie (which is great fun, btw, even a little romance in it).

      Those who deny global warming are in a sense denying that we are supernatural. So a positive way of viewing global warming deniers is that they are humble and modest about our position in the grand scheme of things.

      We AGW affirmers on the other hand view humans as uniquely capable of inflicting unprecedented technology on the planet, via the product of an exponentially growing population (6.7 billion humans weighing about a gigatonne, comparable in mass today to that of the shrinking Antarctic krill population) deploying technology that consumes exponentially increasing fuel per capita. The krill population is shrinking, perhaps exponentially, though their fuel consumption per capita seems to be holding up well, neither increasing nor decreasing.

      AGW affirmers are supremely arrogant compared to the humble and modest AGW deniers. The question is whether our arrogance is misplaced.

      • The irony of a self-declared theoretical computer scientist asserting that humans are ‘inflicting’ technology on the planet is noted for its richness.

      • The irony of those who can’t walk and chew gum at the same time inferring that everyone else on the planet must be living with the same handicap is even richer, hunter.

  146. Pekka Pirilä 12/18/10 17:04

    I will most certainly accept the failure of Beer’s Law when presented with empirical evidence of the fact.

    That acceptance is most difficult because the derivation of Beer’s Law does not require knowledge of the spectrum. It is further difficult because violation of Beer’s Law is necessary, though not sufficient, for the AGW model to be valid.

    Empirical evidence does not include computations with unmeasured band wings. Empirical evidence of those wings would go a long way to accepting the possibility of limitations on Beer’s Law. Further, empirical evidence does not include calculations with computer models alone. Those models reflect the desires and biases of the programmers. Until proven otherwise, I must be skeptical of those models because as described they serve the purpose of the AGW promoters.

    You mention an “approximately logarithmic relationship”. That is simple enough, and not questioned because you set no tolerances on the fit. In the end, though, we know the logarithmic assumption must fail. It produces the raft of nonsense I explained for Fred Moolten at 5:13 pm, above, and previously as well, but which no one has responded on point.

    Invalidating a law in physics is a major event. Yet the IPCC discusses radiative transfer but never mentions Beer’s Law (among other omissions). Furthermore, it only briefly mentions the logarithmic dependence in its reports. And in reports packed with thousands of references, it gives no reference whatsoever for the log model. And IPCC’s AGW model turns on the concept of a climate sensitivity, which is the logarithmic model, a fact never mentioned in its reports, and a surprise to some readers here. The backbone of AGW is without a reference. The task to validate whatever law applies is not mine as IPCC’s harshest critic, but a challenge to it for AR5.

    • There is absolutely no doubt that the theoretical derivation of Beer’s law assumes that the absorption coefficient is at each point in space the same for all radiation considered (but may vary from point to point, as you commented earlier). It is also easy to prove that Beer’s law always fails when we have radiation with different absorption coefficients. This is elementary physics and mathematics. It is actually nothing more than observing that the sum of two or more exponential functions with different coefficients in the exponent can never be an exponential function.

      When the coefficients in the exponents are negative, as they are in Beer’s law, the deviation in the shape of the function is in a sense always in the same direction, which could also be described as the direction toward the logarithmic function, but this is a vague description and should not be taken as anything more precise.

      The sum of many exponential functions with appropriate weights may approximate a logarithmic function well over a very large range of values of the argument. It has been observed that for the absorption by CO2 in the atmosphere this happens over the whole range relevant for climate discussions. It is not exactly logarithmic, and it is not even close to logarithmic at extreme values of the concentration. At very small concentrations the effect is linear, and at extremely large values it is certainly also something different, although it is difficult to guess without a proper calculation how it will deviate.

      The main point is that reliable, well understood calculations show that the effect is close to logarithmic over the range of interest. Even this fact is not really used for anything more than giving the general picture of the situation. Every analysis that aims at something more precise is independent of this observation and uses Beer’s law directly for each line, or parameterizations checked against LBL calculations.
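
      A minimal numerical sketch of this point, using hypothetical absorption coefficients spread over several orders of magnitude (the values and weights below are illustrative only, not taken from any spectral database):

```python
import numpy as np

# Hypothetical "band": absorption coefficients spanning six decades, equally
# weighted.  Each term on its own obeys Beer's exponential law exactly.
k = np.logspace(-3, 3, 601)            # illustrative coefficients, arbitrary units
w = np.full_like(k, 1.0 / k.size)      # equal weights

def absorbed_fraction(u):
    """1 - sum_i w_i * exp(-k_i * u) for absorber amount u."""
    return 1.0 - np.sum(w * np.exp(-k * u))

for u in (0.01, 0.02, 0.04, 0.08, 0.16, 0.32, 0.64, 1.28):
    print(f"u = {u:5.2f}   absorbed fraction = {absorbed_fraction(u):.3f}")
```

      Over the middle decades each doubling of u adds a roughly constant increment to the absorbed fraction, i.e. roughly logarithmic behaviour, even though no single term is logarithmic; at very small and very large u the behaviour departs from the logarithm, exactly as described above.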


      • The sum of many exponential functions with appropriate weights may approximate a logarithmic function well over a very large range of values of the argument.

        Exponential functions with negative exponents are bounded, and so is any sum of them. The logarithmic function of course has no bounds, and I am unaware of any limitation placed on the range of the function in the literature.

      • Being bounded is not all. Although the sum of negative exponentials is bounded and gradually approaches zero, it is impossible for Beer’s law to be valid when two or more wavelengths are absorbed with different coefficients. It is also very easy to see that the deviation is not small in practice but very substantial. Beer’s law fails totally in describing the absorption by CO2 in practice when all relevant wavelengths are taken into account. There is no doubt about these facts. Everybody can easily check them from reliable and non-controversial data.

        That the logarithmic formula is a good approximation is not something that can be proven from first principles. It depends on many things that happen to combine in such a way that the approximation is rather good. It is naturally clear that the radiative forcing due to CO2 cannot grow without limits, but must ultimately saturate more rapidly than logarithmic law saturates. There is, however, still plenty of potential for continuing the approximately logarithmic trend.

        The ultimate limit is reached when no IR can get through even a large fraction of the atmosphere. Looking at where this limit would be, based on applying Beer’s law to all wavelengths separately, the final limit is really far away. Thus Beer’s law does not save us. It does not save us even taking into account only wavelengths with known absorption by CO2, as there are so many of them where the absorption is just starting to gain significance at the present concentrations and is therefore growing almost linearly for a while.

      • Being bounded is not all.

        No, but it does effectively eliminate a function that is unbounded as describing a bounded or saturating process.

        it is impossible for Beer’s law to be valid when two or more wavelengths are absorbed with different coefficients.

        I don’t recall anyone suggesting that Beer’s law is valid for anything but monochromatic light; you seem to be setting up a strawman argument.

        It is naturally clear that the radiative forcing due to CO2 cannot grow without limits, but must ultimately saturate more rapidly than logarithmic law saturates.

        The problem is that the logarithmic function is unbounded, so it never saturates, and the sentence is nonsense.

        Looking at where this limit would be, based on applying Beer’s law to all wavelengths separately, the final limit is really far away. Thus Beer’s law does not save us.

        Actually we can and do use Beer’s law line by line to determine where each spectral line for which the atmosphere is opaque is extinguished. Fact of the matter is that it’s less than 100m for all absorbing species.

        You can see in figure 3 here that it’s actually all over in less than 100 m.
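
        A purely illustrative Beer’s-law sketch of why that distance varies so much from line centre to wing (the cross-sections below are placeholders, not HITRAN values; only the 1/(n·σ) arithmetic is real):

```python
# Illustrative only: the 1/e extinction length at a single wavelength is
# 1/(n * sigma).  n is a rough CO2 number density near the surface (~390 ppm
# of ~2.5e25 molecules/m^3); the cross-sections are hypothetical, chosen only
# to show the spread from strong line centres to weak wings.
n_co2 = 1.0e22                                # molecules per m^3, rough
for sigma in (1e-24, 1e-22, 1e-20, 1e-18):    # hypothetical cross-sections, m^2
    print(f"sigma = {sigma:.0e} m^2   1/e length = {1.0 / (n_co2 * sigma):.2e} m")
```

        With these made-up numbers the extinction length runs from a tenth of a millimetre to about a hundred metres, which is the sort of spread behind both the “less than 100m” and “less than a foot” remarks.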

      • I don’t recall anyone suggesting that Beer’s law is valid for anything but monochromatic light; you seem to be setting up a strawman argument.

        Jeff Glassman was.

      • Fact of the matter is that it’s less than 100m for all absorbing species.

        Define “it”.

        At the wavelength of the strongest line it is all over in less than a foot.

        At most wavelengths the fat lady hasn’t even hit the first high C.

  147. tallbloke and others,

    Ferdinand Engelbeen’s work can be found here:

    http://www.ferdinand-engelbeen.be/klimaat/climate.html

    The CO2 Measurements in Detail is particularly good.

    http://www.ferdinand-engelbeen.be/klimaat/co2_measurements.html

  148. What is the best evidence to date for Arrhenius’s logarithmic dependence of surface temperature on CO2? Andy Lacis said earlier near the start of this thread that Hansen 1988 had derived it, but all I could find there was a derivation in Appendix B of the CO2 etc. forcing based on the Arrhenius law composed with a cubic fit to the Keeling curve (this was 20 years before Hofmann, it hadn’t occurred to them back then to link the growth of CO2 to the exponential population explosion).

    I found a purported derivation of it on, of all places, Lubos Motl’s blog. However I found Motl’s statement “A linear decrease of the temperature means that the radiation that is emitted by the tropopause decreases by a linear term” rather off-putting without some reason why the fourth-power law can be neglected—if he’s basing that on operating over a small-hence-linear portion of the curve he should say so. He also ignores spectral lines altogether. Since one would expect the distribution of spectral lines to play a big part I found his reasoning quite unconvincing.

    In his book Principles of Planetary Climate, starting on p. 241, Ray Pierrehumbert gives an excellent discussion of what’s involved in building a homebrew radiation model that Jeff Glassman would benefit greatly from. On page 225 he (presumably) uses it to plot the absorption coefficient of CO2 as a function of wavenumber and CO2 density in kg/m³, and infers the logarithmic law from the straightness of the two broad lines defining what he calls the “ditch”. This is somewhat more convincing than Motl’s derivation, though as I said earlier I don’t trust these bottom-up analyses, the TOA approach Chris Colose was advocating seems to me the way to go for this sort of thing.

    In any event it’s not a derivation, it’s still just fitting a line to data, admittedly vastly more data than Arrhenius had. Arrhenius based his argument on a table of estimates of CO2 and temperatures he worked up for hypothetical CO2 levels in proportions of 0.67, 1, 1.5, 2, 2.5, and 3 to the then current level. After discussing the numbers, his argument reads in full, “We may now inquire how great must the variation of the carbonic acid in the atmosphere be to cause a given change of the temperature. The answer may be found by interpolation in Table VII. To facilitate such an inquiry, we may make a simple observation. If the quantity of carbonic acid decreases from 1 to 0.67, the fall of temperature is nearly the same as the increase of temperature if this quantity augments to 1.5. And to get a new increase of this order of magnitude (3.0° – 3.4°), it will be necessary to alter the quantity of carbonic acid till it reaches a value nearly midway between 2 and 2.5. Thus if the quantity of carbonic acid increases in geometric progression, the augmentation of the temperature will increase nearly in arithmetic progression. This rule – which naturally holds good only in the part investigated – will be useful for the following summary estimations.”

    (I changed 3°.4 to 3.0° – 3.4° because I couldn’t parse the former and the latter made sense.)
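
    A one-line check of the arithmetic in the quoted passage: if the temperature change is proportional to ln(CO2 ratio), Arrhenius’s ratios should give equal temperature steps. A minimal sketch, using only the ratios he names (with “nearly midway between 2 and 2.5” taken as 2.25):

```python
import math

# Ratios named in the quoted passage: 0.67, 1, 1.5 and ~2.25.
for r in (0.67, 1.0, 1.5, 2.25):
    print(f"ratio = {r:4.2f}   ln(ratio) = {math.log(r):+.3f}")
# The logarithms step by roughly 0.40 each time: a geometric progression in
# carbonic acid giving an arithmetic progression in temperature, as he says.
```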

    So the only “derivations” of the logarithmic law that I’ve seen so far, with the exception of Motl’s, seem to have the form, “here’s some calculated estimates of what changing the CO2 would do, notice how the plot seems to be roughly straight.” Motl and Pierrehumbert can’t both be right because Motl doesn’t use spectral lines while Pierrehumbert’s plot depends in an essential way on them, which I’m sure it must.

    So is there no actual derivation of the law in the usual sense of “derive?” Appealing to the approximate straightness of plotted data is not exactly a derivation.

    • The best explanation is found in Pierrehumbert’s Figure 4.12 and accompanying text, which shows an approximately exponential decline in CO2 IR absorption coefficients as one moves from the center of the band into the wings. This is based on spectroscopic measurements, and I’m not aware of any theory from which it can be deduced from first principles. It’s true for CO2 because of the change in the probability and strength of photon absorptions as a function of the different quantum energy states that are excitable at different wavelengths. Apparently, however, it is not universally true for all GHG IR absorbers, although water appears to behave in a roughly similar fashion. It seems therefore that it would be hard to generalize. I doubt that Arrhenius arrived at his conclusions in this way, and was probably correct mostly by accident. Also, of course, the logarithmic relationship, which is approximate, only applies within certain regions of the entire range of possible CO2 concentrations, but this is the range of interest to climate science.

      • I doubt that Arrhenius arrived at his conclusions in this way, and was probably correct mostly by accident.

        If you want to know how Arrhenius came to his conclusions, his April, 1896 paper is available ( http://www.rsc.org/images/Arrhenius1896_tcm18-173546.pdf ). It seems quite sophisticated given the relatively primitive instrumentation available at the time.

      • “If you want to know how Arrhenius came to his conclusions, his April, 1896 paper is available”

        You seem not to have read my post.

      • (Meaning that I quoted the whole of Arrhenius’s reasoning as to how he came to his conclusion.)

      • I’m in full agreement with Fred. (I referenced page 225 of Ray’s book but neglected to say that the main content of that page was Fig. 4.12 and accompanying text or I could have saved Fred a sentence or two.)

        I’m not aware of any theory from which it can be deduced from first principles.

        Right, it’s a very interesting question why the lines are so straight.

        Apparently, however, it is not universally true for all GHG IR absorbers, although water appears to behave in a roughly similar fashion.

        What about CO? That comes under suspicion for being a diatomic greenhouse gas.

        Also, of course, the logarithmic relationship, which is approximate, only applies within certain regions of the entire range of possible CO2 concentrations, but this is the range of interest to climate science.

        While this is true in general, a notable exception is explanations of SBE, snowball Earth. In 2005 RP published “Climate dynamics of a hard snowball Earth” in Journal of Geophysical Research, Vol 110, doi:10.1029/2004JD005162, confirming earlier arguments from the 1960s that CO2 could not melt an SBE. RP’s simulations with the FOAM 1.5 GCM showed that even 0.2 bars of CO2 could not get Earth to within 20 °C of melting the snowball.

        I am not entirely convinced that RP’s analysis at these high concentrations of CO2 is meaningful. This is something I’m looking into. I don’t even have an opinion either way so far, other than that I don’t trust the methodology he’s using. Something more along the lines of what Chris Colose was talking about is needed.

    • Here is my simplistic explanation, which I just came up with, so I may be wrong. As I mentioned just earlier, the lapse rate is an important part of it.
      Looking from above, the CO2 path goes as pressure.
      Temperature in the upper troposphere goes as height, which is roughly log pressure.
      Doubling CO2 means you cover the same path in half the pressure (i.e. you saturate at half the pressure).
      The temperature change for this pressure change would go as log pressure.
      Therefore the temperature change goes as log CO2.
      Radiation change is fairly linear with temperature change.
      Therefore the radiation change goes as log CO2.
      QED. Maybe too simple?
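
      A toy rendering of that chain of reasoning, with assumed round numbers for the scale height and lapse rate (a caricature to show the shape of the dependence, not a forcing calculation):

```python
import math

H = 7.6        # km, assumed scale height, so height ~ -H * ln(p/p0)
gamma = 6.5    # K/km, assumed tropospheric lapse rate

def toy_emission_temperature(q, T0=255.0):
    """Toy emission-level temperature when the emission pressure scales as 1/q."""
    return T0 + gamma * H * math.log(1.0 / q)

for q in (1, 2, 4, 8):
    print(f"CO2 x{q}:  T_emit = {toy_emission_temperature(q):6.1f} K")
# Each doubling lowers the toy emission temperature by the same step
# (gamma * H * ln 2), so the change goes as log(CO2).  The step here is far
# too large because the whole spectrum is treated as one saturated band.
```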

  149. DeWitt, on a different note, you wrote back on 21/10/2007 that “my personal opinion is that a sensitivity to CO2 doubling of 1.2 K is likely to be the upper limit” (easily googled for).

    Is that your current estimate? And is it your no-feedback calculation or are you taking the main feedbacks into account?

    Of course if it is merely an “opinion” as you say then we are all entitled to our opinion. It is my opinion that the number of gods is a perfect square, ruling out 2 and 3 while accepting 4, and that is based on no calculations at all.

    (If no-feedback I’d actually go along with that myself, or maybe even a little less. What the HADCRUT record shows includes feedbacks; it’s rather hard to see how to confirm non-feedback theories empirically.)

    • In terms of my understanding of the science, that’s a long time ago. When the question was asked recently at The Air Vent of what would the temperature be in 2100, I agreed with the poster who said he didn’t know and neither did anyone else. I do think that the no feedback sensitivity to doubling is about 1.2 K, but that’s as far as I’m willing to go now. I just get annoyed when people waste time and bandwidth on attacking the parts of the theory that are bullet proof rather than the much better targets represented by IPCC Working Groups II and III. It makes it too easy to dismiss valid criticism.

      As far as policy, significant mitigation or rapid decarbonization of the economy isn’t going to happen any time soon barring a breakthrough in alternative energy like the Bussard fusion reactor or a complete collapse of the global economy. Pielke, Jr. has several relevant posts on why. I am more worried about the short term economic and political consequences of the fact that conventional petroleum production peaked in 2005 or 2006, depending on who you ask. Peak Coal will happen sooner than most people think too. Shale gas production had better work. The problem will be that we haven’t started development soon enough.

      • When the question was asked recently at The Air Vent of what would the temperature be in 2100, I agreed with the poster who said he didn’t know and neither did anyone else.

        It’s a fair point. If fusion (inertial confinement of course, tokamak and Bussard don’t stand a prayer) kicks in at 2060, by 2100 the temperature will be back to that of the 1990s.

        I very much doubt the “business as usual” scenario is sustainable beyond 2060, with or without the cooperation (?) of the US Republican Party. But on the off chance that it is, the temperature in 2100 will be pretty close to 2 °C higher than today. That is, unless the permafrost releases a large chunk of methane, in which case I could imagine 4 °C higher.

        So there’s the range: from 1990s temperature to 4 °C higher. No one knows, although the 1910 temperature 0.8 °C below current seems unlikely, as does 6 degrees higher.

        I do think that the no feedback sensitivity to doubling is about 1.2 K, but that’s as far as I’m willing to go now.

        Oh good. So add in the evident water vapor feedback we’ve been seeing and that brings the total climate sensitivity up into the vicinity of 2 K, which I find perfectly reasonable.

        If furthermore you try to define a “transient climate response” along the IPCC lines you can kick it up to close to 3 K. Then everyone would be happy and we could work on something more useful than arguing over something so undefinable as “climate sensitivity.”

        I just get annoyed when people waste time and bandwidth on attacking the parts of the theory that are bullet proof rather than the much better targets represented by IPCC Working Groups II and III. It makes it too easy to dismiss valid criticism.

        With you on that.

        Peak Coal will happen sooner than most people think too.

        But won’t that merely make coal more expensive, while doing nothing about its impact on global warming? So far people have seemed prepared to shell out for energy, and I don’t see the coal companies complaining about that prospect.

      • But won’t that merely make coal more expensive, while doing nothing about its impact on global warming?

        You can only burn the amount of coal or oil that can be produced. The price doesn’t matter. You’re also neglecting the fundamental relationship between energy and economic activity. Limit energy supply and you limit economic activity. The case has been made that, like what happened to the global economy during the Oil Shocks of the 1970’s, the recent economic problems are the result of the limited supply of petroleum available. We’ll adjust eventually, but the short term could get even uglier than it is now.

      • Limit energy supply and you limit economic activity.

        Source?

  150. Vaughan Pratt,

    How were you able to infer a temperature of 3644 K without knowing the distribution over the spectrum of my 10⁷ W/m² power density?

    That’s the minimum temperature. It could, of course, be higher. It’s the temperature of your 0.01 m2 absorber assuming it was insulated on the other side and was radiating to a background at 0 K. It’s the same principle as used in a pyrgeometer for measurement of whole sky incident LW radiation. You measure the temperature of a small black disk covered with a dome that transmits radiation between 3.5 and 50 μm. After correcting for various interferences such as the temperature of the dome, one converts the measured temperature to incident power with the Stefan-Boltzmann equation. So the temperature of the disk corresponds to the brightness temperature of the sky. Of course, that’s all built into the circuitry so the readout in voltage from a thermopile attached to the disk is converted to W/m2 rather than temperature. So regardless of the structure or bandwidth of the radiation, at the point of absorption it has a brightness temperature of 3644 K.
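
    A minimal check of the 3644 K figure, assuming only what is stated above (a black absorber, insulated on the back, radiating to a 0 K background), i.e. the Stefan-Boltzmann law inverted:

```python
SIGMA = 5.670374e-8   # W m^-2 K^-4, Stefan-Boltzmann constant

def equilibrium_temperature(flux_w_m2):
    """Temperature at which emitted sigma*T^4 balances the absorbed flux."""
    return (flux_w_m2 / SIGMA) ** 0.25

print(equilibrium_temperature(1.0e7))   # ~3644 K
```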

    The existence of and data from pyrgeometers is sufficient in itself to refute Gerlich and Tscheuschner’s paper, certainly section 3.7.2 where they try to trash energy balance diagrams by hand waving them away.

    • It’s the temperature of your 0.01 m2 absorber assuming it was insulated on the other side and was radiating to a background at 0 K.

      But that’s nothing like the definition of “brightness temperature.”

      The existence of and data from pyrgeometers is sufficient in itself to refute Gerlich and Tscheuschner’s paper, certainly section 3.7.2 where they try to trash energy balance diagrams by hand waving them away.

      The challenge with the G&T paper is finding data from anything relevant to climate that isn’t sufficient to refute it.

      • Sorry, that should have been effective temperature rather than brightness temperature. The brightness temperature would be much higher and depend on the bandwidth of the radiation. I also left out the geometry. For a complete calculation of brightness temperature, the solid angle subtended by the beam as seen from the source would need to be known as well. I have occasional problems with lack of rigor in terminology.

        I haven’t plugged in the numbers, but I think that’s also the difference in the interpretation of solar radiation. The brightness temperature of the sun would be 5700 K while the effective temperature at 1 AU would be 394 K.

      • Sorry, that should have been effective temperature rather than brightness temperature. The brightness temperature would be much higher and depend on the bandwidth of the radiation.

        Careful, DeWitt. Jan Lapompe might take us for Orcas ganging up on him. ;)

        (JL outdoes the divine RSC in pretending to understand physics. That’s what I meant by conversations with JL being the closest thing to a Monty Python skit for physicists. Nonphysicists will simply be mystified how two “physicists” can be so far apart, we’ve already seen some examples of that on this blog.)

        But yes, you’re exactly right in every detail there. Your figure of 3644 K (3644.15 K to be more precise) is indeed the effective temperature of a black body absorbing (and hence radiating) 10⁷ W/m². And yes the brightness temperature would require knowledge of the bandwidth, as I pointed out earlier. Doubling the bandwidth halves the brightness temperature.

        The annoying thing about Lapompe (who somehow seems imprinted on my brain like a mother duck on her duckling) is that he just goes on being stupid while pretending to have outsmarted the scientists. To his credit Ferenc Miskolczi had the good grace to back off when confronted with unassailable physics. Lapompe’s problem is he can’t follow even simple reasoning, witness his insistence that he meant the Wien displacement law back when he said the Wien law only held at short wavelengths, even after I (thought I) had carefully explained both the history and science behind the Wien distribution law and the Wien displacement law. Even a history major ought to have been able to follow that distinction (well, the history majors around these parts anyway).

        The brightness temperature of the sun would be 5700 K while the effective temperature at 1 AU would be 394 K.

        Now that I’ve been schooled by Pekka on that, this is the effective temperature of the sunny side of a flat insulating black body constantly facing the Sun, whose dark side therefore radiates nothing. The more usual definition of effective temperature assumes equal temperature around the object, whether by virtue of being a perfect conductor or a fast spinner. In that case the effective temperature is 394/√2 = 279 K. An albedo of .3 further drags this down to the familiar figure of 255 K often cited as Earth’s temperature without greenhouse gases.
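
        The three figures in that paragraph follow from the same Stefan-Boltzmann inversion under different geometric assumptions; a small sketch, taking the solar constant at 1 AU to be about 1361 W/m² (an assumed value):

```python
SIGMA = 5.670374e-8   # W m^-2 K^-4
S = 1361.0            # W/m^2, assumed solar constant at 1 AU

flat_plate   = (S / SIGMA) ** 0.25              # sunlit face only, dark side insulated
fast_rotator = (S / (4 * SIGMA)) ** 0.25        # same flux spread over the whole sphere
with_albedo  = (0.7 * S / (4 * SIGMA)) ** 0.25  # fast rotator with albedo 0.3

print(flat_plate, fast_rotator, with_albedo)    # ~394 K, ~278-279 K, ~255 K
```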

      • Doubling the bandwidth halves the brightness temperature.

        Forgot to qualify that with “at long wavelengths relative to that given by the Stefan-Boltzmann law.”

    • DeWitt Payne

      ………”The existence of and data from pyrgeometers is sufficient in itself to refute Gerlich and Tscheuschner’s paper, certainly section 3.7.2 where they try to trash energy balance diagrams by hand waving them away”……

      As you know these instruments have had a very troubled history.
      If you google pyrgeometers and inaccuracies you will get several references.
      So much so that any historical data based on previous models is almost worthless.
      Why should that be so?
      They are calibrated against idealised black body sources which don’t exist in nature.
      Often a single figure is used for emissivity.

      However if you read a radiative transfer textbook you will find;

      (a) there is no perfect black body emitter so use gray body (e<1)

      (b) there is no gray body emitter that has a reliable reduced black body spectrum.

      What apparently is best practice is to make a page for each material noting how it differs from the black body envelope and then recalibrate the measuring instruments.

      Look at the problems encountered using this instrument.
      Look at the way emissivity varies.
      http://www.pyrometer.com/Tech/emissivity.html

      • You’re grasping at straws. No instrument is perfect. So what? There is reasonable agreement between pyrgeometric measurements of total incident LW radiation and high resolution FT-IR spectrophotometric measurements as well as agreement with line-by-line calculated spectra integrated to give total intensity. The errors you mention are a small percentage of the measurement and do not invalidate the method, especially since those errors have been identified and corrected. A pyrgeometer doesn’t care about the shape of the incident spectrum. It integrates over the full spectral range.

      • DeWitt Payne

        …….”No instrument is perfect”……but some are more perfect than others.

        Contrast the K&T Earth upward surface figure of 390 W/m2 (a continuous spectrum centred around 288 K) with the “backradiation” figure of 324 W/m2, from line spectra emitted by at best 1% of the atmosphere at a temperature of around 250 K.
        This back radiation, with its random direction (x, y, z; half up, half down) implications, gives a highly questionable magnitude when measured.
        There have been a number of past instances in science where falsely calibrated instruments gave a misdirected idea of the underlying physical reality.

        http://www.iitap.iastate.edu/gccourse/forcing/images/image7.gif

      • The back radiation in K&T is the integration over the visible sky and all wavelengths between 4 and 50 μm. The arrow represents both the direction and the perpendicular to the plane of the surface (or detector). Because the atmosphere contains molecules that can radiate in the thermal IR, the sky does have an effective temperature greater than the effective temperature of deep space. It does radiate. Essentially all the radiation from the sky that impinges on the surface will be absorbed by the surface. In order for the surface to lose the incident solar energy by radiation and convection, the surface temperature must be higher than the effective temperature of the sky. The greenhouse effect in a nutshell. If you want a more sophisticated treatment, try Arthur Smith’s paper.

        http://arxiv.org/PS_cache/arxiv/pdf/0802/0802.4324v1.pdf

      • DeWitt Payne

        The K&T figure for back radiation just cannot be justified.

        Of course CO2 and H2O radiate in the IR, but that hardly justifies the grand title of a “greenhouse effect”.

        Let’s, however, give this “effect” the benefit of the doubt for a moment and give generous estimates for any parameters that influence it.
        A local flat Earth is assumed (a 10 km troposphere is not so high that the earth’s curvature is significant).

        1. Instead of the limited spectra of CO2 and H2O we will assume a continuous spectrum centred around 250 K as an average tropospheric temperature.
        2. Instead of only 1% of the atmosphere radiating we will assume that the atmosphere radiates as well as a solid or liquid.
        3. We will assume that the Stefan-Boltzmann equation can be used for gases.
        4. We will give the atmosphere the highest possible value of emissivity, which is unity.
        We then calculate the radiation towards the Earth surface.
        The figure we obtain after all these hyper-generous assumptions

        …. is 221 W/m2, compared to the K&T value of 324 W/m2.

        An answer to the Arthur Smith paper by Gerhard Kramm and Ralph Dlugi is given below.

        arXiv:1002.0883v1 [physics.ao-ph]

        A further paper (April 2010) gives a critical review of the evidence for the Greenhouse Hypothesis as well as the Arthur Smith paper.

        http://arxiv.org/abs/0802.4324

      • Correction for sources above;

        Comments on the Arthur Smith paper
        http://arxiv.org/abs/0904.2767

        Comments on the Greenhouse Theory
        http://arxiv.org/abs/1002.0883

      • Your average temperature is too low. Most of the radiation seen by the surface comes from very close to the surface where the air temperature is very close to the surface temperature. Cloud cover would be a more appropriate example. A cloud is effectively a black body for LW IR. So the back radiation from a low cloud is almost exactly equal to the surface emitted radiation. That’s why the K&T and KT&F back radiation is as high as it is. It would be much lower for clear sky. Example: 1976 standard atmosphere clear sky:
        0 km looking up: 258.673 W/m2 or Teff = 254 K (actually a little higher as MODTRAN doesn’t calculate the full spectral range)
        0 km looking down 360.472 W/m2 (for the full spectral range with an emissivity of 0.98 and T = 288.2 K that should be 383.3 W/m2)
        Add cumulus cloud cover with a base at 0.66 km and top at 2.7 km:
        0 km looking down doesn’t change
        0 km looking up 359.216 W/m2 or about the same temperature as the surface. Assuming 60% cloud cover, the average radiation seen by the surface in the 100-1500 cm-1 range is 319 W/m2.
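
        Two of the numbers above can be checked with elementary arithmetic (the MODTRAN figures themselves are taken as given):

```python
SIGMA = 5.670374e-8   # W m^-2 K^-4

# "with an emissivity of 0.98 and T = 288.2 K that should be 383.3 W/m2"
print(0.98 * SIGMA * 288.2**4)          # ~383.3 W/m^2

# 60% cloudy sky (359.216 W/m2) plus 40% clear sky (258.673 W/m2)
print(0.6 * 359.216 + 0.4 * 258.673)    # ~319 W/m^2
```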

  151. Vaughan Pratt on December 19, 2010 at 12:52 am wrote:

    Would it be reasonable to say that the concept of brightness temperature was invented in part to avoid the sorts of violations of the 2nd law of thermodynamics that would arise with a definition of temperature based solely on wavelength while neglecting intensity?

    I doubt very much that 2nd law violations figured very much at all and it’s just to ensure that it is known that brightness temperature rather than colour temperature (Wien Displacement applies) is being talked about.

    A graphic from one of DeWitt’s favourite texts might do to illustrate the difference. I’ve taken the trouble to make some extra labels. If you look at the rightward shift with temperature of the peak of the Planck curve you can pick the colour temperature. For example, at ~530 cm^-1 the colour temperature is ~270 K. In the CO2 band you see the brightness temperature is ~222 K; it’s pretty cool at the tropopause, which is what I think the interferometer is seeing at 20 km altitude. In the CO2 band in the upward-looking panel below that you may notice that the brightness temperature is ~268 K, which is also the given surface temperature at the time the measurement was taken. That’s interesting because, even though we have evidence like this in the textbooks, people are still saying, to the contrary, that Miskolczi (2007) has it wrong.

    • Miskolczi is only approximately correct about clear sky conditions. My problem with him is not the Virial Theorem part but how he gets from the Virial Theorem to radiation balances. Then there’s the problem of clouds, which he seems, at least to me, to think don’t matter.

      Remember Nasif Nahle? I had another go round with him at the Science of Doom site.

      • Miskolczi is only approximately correct about clear sky conditions. My problem with him is not the Virial Theorem part but how he gets from the Virial Theorem to radiation balances.

        Actually he went from observed radiation balances (the red ones apply) to the Virial. Initially he only wanted to publish the empirical results (he is essentially an empiricist) but was pressured by an astrophysicist reviewing the paper into adding the theoretical explanations, which were at the time mainly conjecture. I understand that the version of the paper that was withdrawn by his NASA supervisor did not have so much in it. He keeps looking for error, but the empirical result, even with the more extensive TIGR2 data set, is that the ratio of surface radiation to atmospheric radiation to space is 2:1. (I must ask him if he has recently done an error estimate.) Nevertheless the ratio seems to me at least to be constrained, and the question is how?

        Then there’s the problem of clouds, which he seems, at least to me, to think don’t matter.

        They don’t seem to affect the result; whether the radiosonde ascents occurred on clear or cloudy days is unknown, and I don’t know if it can be known.

      • They don’t seem to affect the result; whether the radiosonde ascents occurred on clear or cloudy days is unknown, and I don’t know if it can be known.

        Interesting. I wonder if there’s some way of estimating cloud cover from the readings. There should be a sharper drop in temperature going through a cloud than in clear sky. Furthermore the temperatures above clouds should be relatively consistent with or without clouds, while the temperatures below clouds should be higher than when there are no clouds.

        There’s quite enough radiosonde data to be able to tease out some sort of pattern here.

      • “temperatures below clouds should be higher than when there are no clouds.”

        Wouldn’t that depend on time of day and humidity?

      • Wouldn’t that depend on time of day and humidity?

        In what way?

        In any event the radiosondes include the time of day and humidity, so in addition to deducing presence of cloud cover one could also work out very accurately how time of day and humidity influence the detectability of clouds in this way.

      • Some parts of the world where regular radiosonde balloons are released have fairly predictable and stable weather. Antarctica, for example, has very low humidity and precipitation.

        I don’t know how to access metadata for radiosonde records. Do you?

        It would be of value perhaps to derive a model for cloud from the data as you propose, and then compare the model output for the post-1980 period with ISCCP data.

      • I don’t know how to access metadata for radiosonde records. Do you?

        Sure, it’s at the National Climate Data Center’s Integrated Global Radiosonde Archive, IGRA.

        Bear in mind that some 1500 stations send up balloons twice a day, some since as far back as 1970. Each balloon sends back on the order of a hundred readings each containing the instrument readings at that point during the ascent, which typically goes to some 30 km before the balloon bursts. So if all 1500 stations had been active throughout that period each returning 5 readings you’d have 1500*2*365*40*100*5 = 20 gigasamples to deal with, which with 64-bit doubles would occupy 160 gigabytes. That’s a considerable overestimate however, and it should fit on even a small hard drive, perhaps even a 64 GB thumb drive.
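
        The back-of-envelope volume estimate above, spelled out with the same assumed counts:

```python
stations, launches_per_day, days_per_year, years = 1500, 2, 365, 40
levels_per_ascent, values_per_level = 100, 5

samples = (stations * launches_per_day * days_per_year * years
           * levels_per_ascent * values_per_level)
print(samples / 1e9)        # ~21.9 gigasamples, i.e. on the order of 20
print(samples * 8 / 1e9)    # ~175 GB as 64-bit doubles (quoted above as ~160 GB)
```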

        You might imagine writing a program to sort through all that data, but it would have to be a fairly elaborate program as it would have to allow for variation in the instruments carried, time of launch (whether based on local time or UTC), and activity of station. Collating all that material would be a major exercise in handling sparse datasets.

        Fortunately NCDC has integrated this data to a considerable degree, performing fundamental “sanity” checks, checks on the plausibility and temporal consistency of surface elevation, internal consistency checks, checks for the repetition of values, climatologically-based checks, checks on the vertical and temporal consistency of temperature, data completeness checks, etc. Given the staggering volume of data involved, this is not something one would want to undertake on one’s own singlehandedly, it’s a major undertaking.

        One thing to watch out for however is that NCDC has not attempted to compensate for systemic changes in instrumentation and observing practices resulting from improvements in technology and choices of convention. You might therefore want to check out the 74-page ERA-40 Project Report Series No 23, “Homogenization of radiosonde temperature time series using ERA-40 analysis feedback information” by Leopold Haimberger at the Reading-based (west of London) European Centre for Medium-Range Weather Forecasts (ECMWF).

        Fun stuff.

      • “Fun stuff.”

        I can imagine. :)

        I’m more likely to try to find other ways of evaluating the NCEP re-analyses. So far, my finding is that they are probably better than some seem to think.

        There has been an exciting development in comparing TOA radiative balance and ocean heat content which will hopefully enable me to back project some data to create a better OHC reconstruction which I can compare with my solar model.

        See my blog for more on this, if you are interested.

      • Almost forgot. How can one forget Nasif? Did you have fun? Talking of those guys I haven’t seen much of Hans Erren of late.

      • I did my best to work through that exchange between DeWitt and Nasif. There were strong points made by both parties, and a degree of talking past each other, and there did seem to be some unanswered issues at the end, as always!

        http://scienceofdoom.com/2010/06/03/lunar-madness-and-physics-basics/

      • Remember Nasif Nahle? I had another go round with him at the Science of Doom site.

        It would be interesting to plot the number of people changing sides in this debate over the past five years. I’m guessing it’s now approaching zero. Everyone who had any doubts either way will have resolved them one way or the other by now. The returns on further debate are no longer justified by the investment of time. We all need to move on to more productive pursuits.

  152. I doubt very much that 2nd law violations figured very much at all and it’s just to ensure that it is known that brightness temperature rather than colour temperature (Wien Displacement applies) is being talked about.

    That’s very sensible.

    If you look at the rightward shift with temperature of the peak of the Planck curve you can pick the colour temperature.

    You do realize you’re picking the frequency peak here and not the more usual wavelength peak, right? In terms of energy percentiles the wavelength peak is at the upper 25 percentile while the frequency peak that you’re showing here is at the 64.6 percentile, 17.7 microns or 565 cm⁻¹ at 288 K.
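
    For what it’s worth, both peaks can be computed directly from the standard Wien forms (constants quoted from memory; treat the exact digits as approximate), for the 288 K case here and the 270 K case discussed a little further down:

```python
C = 2.99792458e8       # m/s, speed of light
B_LAMBDA = 2897.77     # um*K, Wien displacement constant (wavelength form)
B_NU = 5.8789e10       # Hz/K, peak of the Planck curve in frequency

for T in (288.0, 270.0):
    lam_peak_um = B_LAMBDA / T                 # wavelength-form peak
    freq_peak_as_um = C / (B_NU * T) * 1e6     # frequency-form peak, expressed in um
    print(T, round(lam_peak_um, 1), round(freq_peak_as_um, 1))
# 288 K: ~10.1 um vs ~17.7 um (~565 cm^-1);  270 K: ~10.7 um vs ~18.9 um
```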

    That’s interesting because, even though we have evidence like this in the textbooks, people are still saying, to the contrary, that Miskolczi (2007) has it wrong.

    Bzzzt. That does not compute.

    • You do realize you’re picking the frequency peak here and not the more usual wavelength peak, right?

      What difference is that supposed to make? The wavelength is shown at the top of the graph.

      BTW the maximum temperature curve shown is 270 and looking at the scale at the top the wavelength is closer to 19 micron.

      Bzzzt. That does not compute.

      I expected it wouldn’t.

      I’m wondering if you have worked out yet that it’s the brightness temperature rather than colour temperature that gives us the heating potential. If not consider that the full moon has approximately the same colour temperature as the sun.

      • Me: You do realize you’re picking the frequency peak here and not the more usual wavelength peak, right?

        You: What difference is that supposed to make?

        I don’t understand the question. Are you asking whether the frequency and wavelength peaks are different, or whether the color temperature depends on which one you use?

        BTW the maximum temperature curve shown is 270 and looking at the scale at the top the wavelength is closer to 19 micron.

        Well observed. But one does not need to look at the curve to know that the frequency peak for 270 K is at 18.886801383026… microns. Exercise: for the same temperature calculate the wavelength peak in microns to one decimal place.

        I’m wondering if you have worked out yet that it’s the brightness temperature rather than colour temperature that gives us the heating potential.

        I’m wondering if you have worked out yet that for high-intensity (10⁷ W/m²) low-frequency (1 GHz) situations such as my example, the brightness temperature is far higher than the heating potential. Do you even understand the concept of brightness temperature, which I explained above after DeWitt had brought it up?

        You may be confusing brightness with heat, they are very different things in general.

        Now that I think about it, the fact that you continue to defend Miskolczi even after I gave two independent and very easy ways of calculating the kinetic energy of the atmosphere, which agree with each other but not with Miskolczi’s calculation of it (by a factor of 5!) suggests that you have difficulty following technical arguments in general. The KE calculations were somewhat simpler than my explanation of brightness temperature so I could well believe you didn’t follow the latter either.

  153. Pekka Pirilä 12/19/10 3:42 pm

    You write,

    >>It has been observed that for the absorption by CO2 in the atmosphere this [logarithmic approximation] happens over the whole range relevant for climate discussions. …

    Where and how exactly was this observed?

    If it was observed by simulation, it is a curiosity, but has fully negligible scientific weight. Simulations are always suspect as self-fulfilling predictors.

    I asked you yesterday (12/18/10 17:04) for empirical evidence, and you have not responded. Your silence is growing louder.

    • I asked you yesterday (12/18/10 17:04) for empirical evidence, and you have not responded. Your silence is growing louder.

      He’s supposed to have this site under 24-hour surveillance?

      Some of us do have a life outside Judy’s blog. Patience!

    • Anyway it’s obvious from context that by “observed” Pekka meant “remarked on”, not “observed empirically.” How could one possibly empirically observe the extreme ranges he talked about? Those were obtained by calculation, with Pierrehumbert being the one who has been pushing the range up to a CO2 level of 0.2 bars, some 550 times any level we’ve been in any position to observe empirically. 0.2 bars is relevant to climate discussions, in particular discussions of how SBE might have melted.

  154. Vaughan Payne 12/19/10 4:21 pm

    You wrote, >> Anyway it’s obvious from context that by “observed” Pekka meant “remarked on”, not “observed empirically.” How could one possibly empirically observe the extreme ranges he talked about?

    In a laboratory.

    • Whoops! Vaughan Pratt. Apologies all round.

    • VP: How could one possibly empirically observe the extreme ranges he talked about?

      JG: In a laboratory.

      Ah, good idea. You’d need a container of air whose pressure was 1 bar at one end and 100 mb or less at the other. The temperature of the air at the 1 bar end would need to be 288 K and at the 100 mb end around 200 K. And you’d need a light source approximating the black body radiation of the Sun illuminating the 100 mb end.

      But I’m not sure exactly how I’d construct it. What did you have in mind?

      • David L. Hagen

        Vaughan
        Be serious or learn how experiments are done. See HITRAN database etc.
        I recall seeing geological evidence for 6000 ppm CO2 etc.

        As to “high pressure”, 100 mb is not “extreme”. A quick search brought up evaluating CO2 absorption at 10 bar, (100 X 100 mb)
        “Measurements of high-pressure CO2 absorption near 2.0 μm
        and implications on tunable diode laser sensor design”
        http://www.springerlink.com/content/lu8x2um3l34v55v4/

        For experimental method, try a pressure vessel and run 10,000 experiments measuring absorption while varying the pressure by 0.01%. Use a digital quartz pressure sensor with ppm resolution. See Digiquartz. Use a 24 bit a/d. etc.

      • David, what would a lab experiment add to what you could not already infer from the HITRAN database?

      • Vaughan,
        For many people it is sufficient that well known basic physics (text book level theories combined with values determined accurately by laboratory measurements) describes the situation and allows for reliable calculations.

        For people who absolutely refuse to believe, there is always a way of denying the understanding and insisting that they can set the rules for proper evidence – and modify the rules whenever necessary for their goals. How could you ever convince such people?

        If these people wanted to learn, they would already have learned, but they want only to contradict, and that they can do forever. I believe that many of them actually know the right answers, but they like to play this game, some just for fun, others to maintain confusion in less educated readers.

      • Pekka,

        So your answer remains, trust you.

        That’s not how science works. We validate one another’s work. Where’s the work?

        Science is not about believing. It’s about models with predictive power, validated by experiment.

        Of course, when IPCC says that radiative transfer theory is the greatest source of uncertainty in modeling CO2 forcing, I tend to believe that. Of course, IPCC also says that cloud modeling is the greatest source of uncertainty, and it isn’t even including the dominating effect of cloudiness!

      • I believe that many of them actually know the right answers, but they like to play this game, some just for fun, others to maintain confusion in less educated readers.

        I’m certain you’re right about that for some of them. But I also think there are some who aren’t particularly good at physics yet who through much experience with their specialty at least have gotten the hang of their little corner. This emboldens them to pronounce confidently on areas in which they’re obviously weak. Why they can’t see they’re weak in those areas is a good question, but I have the feeling they’re not doing it as a game.

        For many people it is sufficient that well known basic physics (text book level theories combined with values determined accurately by laboratory measurements) describes the situation and allows for reliable calculations.

        It certainly would be remarkable if greenhouse warming involved any new physics. I assume like you that it doesn’t.

        What is not so clear is the greenhouse mechanism itself, without which one can’t do reliable calculations. I would analyze it in terms of the distribution of photons leaving Earth. In equilibrium (which we aren’t in, since we’re warming, but let’s assume it anyway) the total OLR energy equals insolation times (1 - albedo). What changes is the spectral distribution of the OLR, moving energy out of the blocked parts of the spectrum into the unblocked parts to maintain equilibrium.

        What’s tricky about this analysis is that the photons in the shrinking blocked part come from two places, Earth’s surface and the atmosphere. The former are black body photons, the latter are at the emission lines of the GHGs. (I don’t agree btw that Stokes shift only happens at high energy, we only think that because we use high energy in order to see it better. Why should the effect change at low energies? Because the internal transitions would be forbidden?)

        Pierrehumbert’s analysis based on Fig. 4.12 in his book only considers the black body photons, arguing that progressively more of them are blocked with increasing CO2. This would be valid if increasing CO2 had relatively little impact on the OLR coming from the atmosphere. But that’s not obvious: it may conceivably decrease it by a significantly larger amount than the mechanism RP limits himself to, which would tend to invalidate his reasoning.

        Do you have any reason to assume that increasing CO2 impacts primarily the OLR originating from the Earth’s surface, and not the OLR emitted by the atmosphere?

      • Vaughan – Maybe I didn’t understand your point, but I don’t believe Figure 4.12 implies that only surface-emitted black body radiation is involved in evaluating the greenhouse effect and its logarithmic relationship to CO2 concentration. What Pierrehumbert and others point out is that the different absorption coefficients at different distances from band center demand that radiation escaping to space at the different wavenumbers must emanate at different mean altitudes. At low CO2 concentrations, wavenumbers near the center dominate, but as CO2 increases, the part of the spectrum that determines maximum OLR change moves outward in the wings. The most important wavenumbers are those with an optical thickness, tau, of approximately one. The atmospheric window surface-emitted radiation is a passive bystander – its radiating altitude can’t change as a function of CO2 concentration, but only as a function of temperature dictated by the effects of CO2 within its absorption bands. All of this means that radiation emitted by the atmosphere is the primary determinant of greenhouse effects via the change in mean radiating altitude.

      • Maybe I didn’t understand your point, but I don’t believe Figure 4.12 implies that only surface-emitted black body radiation is involved in evaluating the greenhouse effect and its logarithmic relationship to CO2 concentration. What Pierrehumbert and others point out is that the different absorption coefficients at different distances from band center demand that radiation escaping to space at the different wavenumbers must emanate at different mean altitudes.

        You’re right, I didn’t read enough of the context. I could well believe that playing with the CCM that RP refers to in Section 4.5 would give more insight into what actually happens in practice than trying to work it out theoretically.

        On the other hand it would be nice to know whether the CCM gets significantly closer to reality than what can be achieved by an algorithms expert with a sufficient grasp of the basic physics. Granted the task is at least ten times as complex as Jeff Glassman seems to think, but is that factor closer to ten or a hundred? That is, is the extra work of going from ten to a hundred (or a thousand) worth the candle?

        The atmospheric window surface-emitted radiation is a passive bystander.

        Yes but not all surface-emitted OLR photons are in the passive part. I divided OLR photons into two types, and so do you, but not the same way. Your division is into photons in the atmospheric window, and photons emanating from altitude. Mine replaces the atmospheric window with photons emanating from the surface, a proper superset of the atmospheric window.

        On reflection I’d like to divide wavenumbers into three kinds, those with optical thickness near unity, and those on either side. Those well below are what I’m assuming you’re calling the atmospheric window. For those well above we can assume the OLR photons were emitted by the atmosphere.

        Those in the middle are more interesting because some come from the surface and some from the atmosphere. This distinction is important because surface-emitted photons obey a black body distribution while atmosphere-emitted photons congregate at the emission lines. (I understand Pekka to be claiming that emission strengths coincide with absorption strengths but in light of Kasha’s rules and Stokes shift I’d very much like to see some support for that.)

        The most important wavenumbers are those with an optical thickness, tau, of approximately one.

        Right, the ones in the middle. You didn’t say anything about the two distinct distributions in that portion of the OLR. You seem to be implying that only those emitted by the atmosphere matter, or did I misunderstand you?

      • There are lots of difficult problems in understanding climate, and many more in understanding how it interacts with nature and human societies, and further in deciding what should be done and even whether anything specific should be done at all.

        To what extent you or I can improve anybody’s understanding by continuing endless arguments on the basics, with people who for one reason or another are unwilling to accept that there are also things which really are well known, is a different issue.

        It is a pity that the real problems are not discussed nearly as much as the points where no significant problems remain and where everybody with sufficient understanding of physics can verify this personally.

        Judith seems to be moving step by step towards the big open issues. Whether the discussion can keep focus all the way remains to be seen.

      • There are lots of difficult problems in understanding climate, and many more in understanding how it interacts with nature and human societies, and further in deciding what should be done and even whether anything specific should be done at all.

        Besides “should” there is also “could”. The former is policy, the latter is engineering. Neither one is science, which is responsible for the understanding part.

      • It is a pity that the real problems are not discussed nearly as much as the points where no significant problems remain and where everybody with sufficient understanding of physics can verify this personally.

        My interest until recently was in better ways of packaging those points so that the public could accept them more readily. I saw this as a strategy for dispelling the confusion being deliberately created by those who feel threatened by AGW.

        Some feel threatened by it the way one might fear an oncoming train. Like an ostrich hiding its head in the sand, they seem to hope that merely by denying AGW it will go away of its own accord, somehow wilting under the combined willpower of the planet’s massive population.

        Others seem to see it more pragmatically as threatening their livelihood as the world rejects their products in favor of more Earth-friendly ones.

        And then there are the game players, as you say, but I suspect many of those could be put in one of the two preceding camps.

        Lately the packaging has been shading slightly into research for me. I guess I’ll know more about that after I’ve had a climate paper accepted by a reputable journal, if ever.

        Judith seems to be moving step by step towards the big open issues. Whether the discussion can keep focus all the way, will be seen.

        With one definition of focus, namely whether Judith can keep track of it all, I’d say it’s already lost focus. Someone should pick the eyes out of the discussions and edit them into something (a) semi-readable and (b) credible.

      • Actually, this REALLY needs to be done for this thread, any chance you (vaughan and pekka) can summarize so we can pull it over to the other thread? I have definitely lost track, far more efficient for those involved to try to sort through this. thx.

      • Hopefully Fred could help too, and DeWitt if he’s so inclined. Definitely worth doing. I’d be inclined to set the threshold for what to carry over rather high. So maybe the easiest approach is to start with what clearly should be retained, then lower the bar by degrees until the threshold of manageability has been exceeded.

        Spoilers not welcome (some of us know who you are even if you don’t).

        I don’t see Jeff Glassman as a spoiler, what I don’t know is where he sits relative to the threshold of manageability. On the one hand he makes elementary mistakes, on the other these may be sufficiently common mistakes as to be worth addressing. Perhaps on a separate thread addressing common misconceptions? I would think this thread is large enough by now to warrant forking off more than just one new thread.

        (Me being blunt as usual.)

      • Vaughan Pratt 12/21/10 10:44 pm

        You say with regard to me, “he makes elementary mistakes”.

        Your generalization and unsupported charge is arrogant and truly offensive. It is an obnoxious tactic of the AGW movement, repeated frequently on these threads, to place yourself above those with whom you disagree and dismiss them with disdain and empty generalizations. I can only believe that this tactic arises out of an indefensible scientific position and fear of exposure.

        Bluntness is no excuse. You can apologize by doing me, and the readers here, the courtesy of listing categorically those elementary mistakes of mine that you perceive, providing explicit references to each. Leave nothing out. Let’s be finished with this despicable tactic.

        I participate here on the hope that Judith Curry’s intent is to raise the level of controversy on AGW out of the gutter of the movement, and into the open for a scientific cleansing. This would be a proactive step fully in keeping with her position, “troubled by the ‘tribal nature’ of parts of the climate-science community”, “that climatologists should be more transparent in their dealings”, and that they “should engage with those skeptical of the scientific consensus on climate change.” Wikipedia. No one else connected to the movement is doing so, and she is especially well placed as a GaTech climatologist and professor, and a TAR Contributing Author. The others seem to be cloistered for the most part.

        Many of the contributors to her blog seem to speak from the shadows, intent on stifling opposition and defeating Dr. Curry’s efforts to raise climatology to the level of science. I will not name them at this point, but I have been responding to such postings explicitly for all to see and constructively for them to respond. I may use sarcasm when they get truly tiresome or out of bounds, but I do not stoop to their level. Instead I respond in the disciplined manner I ask of them.

      • Your generalization and unsupported charge is arrogant and truly offensive.

        I would agree with the second (no one here doubts that you have taken offense) but not with the first (since all I did was to repeat what people had been telling you all along, that you make elementary mistakes).

        It’s also an elementary mistake not to pay attention when people tell you this. If you would pay more attention I would not be so hard on you.

      • As a first step towards Judith’s plan to “pick the eyes out of” this thread, I enumerated the 590 or so posts and their sources. I’m very embarrassed to find myself at the top of the list, especially since I’m supposed to be writing a paper for a robotics conference and writing slides for an invited talk at a distributed computing conference. Evidently CO2 acts like a magnet on me (MV will appreciate that one).

        Here’s the breakdown of comments by their commenters.

        195 Vaughan Pratt
        63 Pekka Pirilä
        53 Jan Pompe
        52 tallbloke
        49 Jeff Glassman
        34 Fred Moolten
        30 curryja
        21 vukcevic
        17 willard
        13 Jim Cripwell
        11 Al Tekhasski
        9 John N-G
        7 David Wojick
        6 Nick Stokes
        5 scienceofdoom
        5 Dan Hughes
        4 jstults
        4 chriscolose
        4 Miklos Zagoni
        3 PaulM
        3 John Eggert
        2 Chris Colose
        1 Berényi Péter

        Next step is to coordinate these nearly 600 comments into something that can form the basis of a new thread under the same heading.

        I have the advantage of being an outsider here, so no one can accuse me of getting part of the $47 trillion that I hear is being spent on the insiders (though I’ll gladly take 1 ppm of that if I can figure out how).

        But I’m also obviously (I hope) not from the Fear, Uncertainty, and Doubt gang, so no one can accuse me of pretending that the basic science is up for grabs.

        Down to work.

      • Thank you!

      • Hmm, the blog itself claims to have 935 responses. Either the blog or I can’t count. Sorry about that, I’ll try again.

      • In the meantime I realize that the new thread has become unstoppable, so I’m
        abandoning the above plan as being overly ambitious and unworkable.

      • Vaughan,
        Some comments on your text.

        What is not so clear is the greenhouse mechanism itself, without which one can’t do reliable calculations. I would analyze it in terms of the distribution of photons leaving Earth.

        The starting point of this chain is to look at what happens to radiation when the atmosphere does not change in any way other than in its CO2 concentration. This is the problem of radiative transfer models. This part is the easiest and least controversial part.

        The temperature of the surface is still assumed to be the same (it is going to warm, but has not yet done so). Thus it radiates as before.

        The troposphere is also assumed to have the same temperature profile (no time yet to change). Increased CO2 leads to stronger absorption at some wavelengths. The atmosphere also radiates more (in all directions), because the emission by CO2 at any single point is proportional to its concentration when the temperature is fixed.

        Because temperature drops with increasing altitude and because a large part of the radiation originates from the surface, the net change in outgoing radiation is a reduction at the TOA or at the tropopause. This is the radiative forcing, and this is the thing that the radiative transfer models really can calculate without large uncertainties.

        What’s tricky about this analysis is that the photons in the shrinking blocked part come from two places, Earth’s surface and the atmosphere. The former are black body photons, the latter are at the emission lines of the GHGs. (I don’t agree btw that Stokes shift only happens at high energy, we only think that because we use high energy in order to see it better. Why should the effect change at low energies? Because the internal transitions would be forbidden?)

        Stokes shift is not important here, because the emission related directly to preceding absorption is not the important part of emission. Most of the emission comes from thermally excited molecules, and the thermal excitations keep all states essentially equally occupied (there is no significant preference for the lowest states of a group). This is different from the excitations of the electrons, which are at an energy level much higher than typical for the thermal excitations. For the excited electronic states higher energy radiation is needed for excitation, and the upper levels are not kept occupied by molecular collisions of thermal origin.

        Pierrehumbert’s analysis based on Fig. 4.12 in his book only considers the black body photons, arguing that progressively more of them are blocked with increasing CO2. This would be valid if increasing CO2 had relatively little impact on the OLR coming from the atmosphere. But that’s not obvious: it may conceivably decrease it by a significantly larger amount than the mechanism RP limits himself to, which would tend to invalidate his reasoning.

        Do you have any reason to assume that increasing CO2 impacts primarily the OLR originating from the Earth’s surface, and not the OLR emitted by the atmosphere?

        Considering only photons radiated by the earth surface is not enough. Correct calculation includes also the radiation by the atmospheric greenhouse gases. The radiation codes that are used for the real calculations of radiative forcing include also the radiation originating in the atmosphere. One of the early such codes is GENLN2, which was used by Myhre in his calculations of the radiative forcing in 1990’s. Also MODTRAN includes both absorption and emission by atmospheric gases.

        Details in the description of the atmosphere naturally affect the results, but not their basic features. The final number changes a little when these details are modified, but not essentially.

        More complications come into the calculations when changes in temperatures are allowed to proceed, but then the calculation is no longer the radiative transfer calculation of radiative forcing, but something else, like climate sensitivity with or without feedbacks.
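        To put a number on the size of this effect: the band-integrated result that such radiative transfer calculations produce is commonly summarized by the simplified fit ΔF ≈ 5.35 ln(C/C0) W m^-2 (Myhre et al. 1998), about 3.7 W m^-2 per doubling. The sketch below is just that curve fit, not a radiative transfer calculation; it is offered only to make the magnitude concrete.

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Simplified CO2 radiative forcing in W/m^2.

    This is the Myhre et al. (1998) curve fit to detailed radiative
    transfer output, valid only over roughly the 1x-4x pre-industrial
    range; it is not itself a radiative transfer model.
    """
    return 5.35 * math.log(c_ppm / c0_ppm)

print(co2_forcing(560.0))   # one doubling: ~3.7 W/m^2
print(co2_forcing(390.0))   # ~2010 concentration: ~1.8 W/m^2
```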

      • Vaughan Pratt 12/19/10 5:53, David L. Hagen 12/19/10 11:30 am

        Enough could be done in the laboratory to create a commendable, but alas most likely unpublishable, dissertation.

        One does not have to emulate the atmosphere to study Beer’s Law and the claims made for it by the radiative transfer and AGW promoter specialists. Use a rather ordinary gas filter. Fill it with a transparent, non-absorbing gas, i.e. some mix of N2 and O2, to nominal pressure. Add CO2 in varying amounts in parts per million. Illuminate with IR of varying temperatures and with one or several monochromatic sources. Measure the intensity through the filter for the different sources, determining the form of its dependence on CO2 concentration and its spectrum, and the effects of the different source spectra.

        Run tests where the CO2 concentration replicates the total CO2 encountered from the surface to the top of the atmosphere, the zenith transmittance. See Petty, First Course in Atmospheric Radiation, p. 179, Figure 7.6 for CO2; see also Radiation Transmitted by the Atmosphere, Wikipedia. http://en.wikipedia.org/wiki/Absorption_band .

        Build additional filters and test absorption through layers of filters configured in interesting ways to replicate the atmosphere or GCMs. Validate or invalidate Beer’s Law, and determine whether the order of the filters has an effect. Estimate where Earth’s atmosphere might be today on the CO2 absorption curve.

        One would think that this has already been done, and that it might even be a demonstration in a first year lecture on light. But if so, Lacis, Pirilä, and Payne remain silent about it, even when asked to supply empirical evidence to support their claims. Maybe the results aren’t being reported because they confirm Beer’s Law and so contradict IPCC’s model.

        If these experiments do uphold Beer’s Law, they would not be publishable in the climate journals, nor in Nature or Science. That would be a dead end for the career of a budding climatologist.

      • Beer’s law cannot be valid at the same time for many wavelengths with different absorption coefficients and for their sum. This is not something that needs to be settled experimentally, because simple mathematics shows it is impossible. There is no need for any physics or any experiments in it, just simple mathematics.

        It is not possible to invent even artificial results that would keep Beer’s law valid for both the total and its parts at the same time.
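        This point is easy to check numerically. In the hypothetical two-wavelength example below (my own illustration, not anyone’s model), each wavelength obeys Beer’s law exactly, yet the summed transmission does not: the effective absorption coefficient -d(ln T)/du changes as the absorber amount u increases.

```python
import numpy as np

# Hypothetical example: two monochromatic components, each obeying
# Beer's law exactly, with different absorption coefficients.
k1, k2 = 5.0, 0.1                    # arbitrary illustrative values
u = np.linspace(0.0, 2.0, 9)         # absorber amount (concentration x path)

T_total = 0.5 * np.exp(-k1 * u) + 0.5 * np.exp(-k2 * u)

# If Beer's law held for the total, ln(T_total) would be linear in u and
# the effective coefficient below would be constant.  Instead it drifts
# downward toward the weaker coefficient as the strongly absorbed
# component is used up.
k_eff = -np.gradient(np.log(T_total), u)
for ui, ki in zip(u, k_eff):
    print(f"u = {ui:4.2f}   effective k = {ki:.3f}")
```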

      • Pekka Pirilä 12/21/10 1:53 am,

        Who and where was the claim made that Beer’s Law might be valid both for the total intensity and for narrow bands or lines? No one would be likely to have experimented on a phantom conjecture, but such experiments often provide unexpected information. Which is why the data need to be examined.

        Beer’s Law says nothing about the spectrum of the EM. It’s a law about the total intensity. It is statistical in nature, dealing with the probability of extinction in its derivation. This is not unusual in physics. The laws of thermodynamics are macroparameters and statistical, too, and should be expected to apply at some microparameter level, e.g., photon-by-photon.

        You are now dismissing the request to provide experimental data for your claim that logarithmic dependence applies and not Beer’s Law. You do so by creating a novel straw man problem. Can we now assume from your diversion that you have no empirical justification for your claim?

      • Beer’s Law says nothing about the spectrum of the EM.

        Where did you read that? (There’s a 100% rigorous proof that it’s false, no two ways about it.) Would you say your source for it is more reliable than the Wikipedia article on Beer’s law?

        Note in particular prerequisite 4:

        “4. The incident radiation should preferably be monochromatic, or have at least a width that is more narrow than the absorbing transition;”

        This rules out “total intensity.” If your mileage varies you need to take your odometer in to the shop.

        But why am I bothering to tell you this when it seems to have no effect?

      • Vaughan Pratt 12/21/10 10:59 pm

        You write in response to me, >>>>Beer’s Law says nothing about the spectrum of the EM.

        >>Where did you read that? (There’s a 100% rigorous proof that it’s false, no two ways about it.) Would you say your source for it is more reliable than the Wikipedia article on Beer’s law?

        1. I haven’t read it, at least in recent decades. I derived it the way August personally explained it to me, and I posted the derivation on this thread on 12/15/10 at 7:56 am. You can read it there. No response to the derivation. I linked to it again on 12/17/10 at 8:44 pm. Pirilä responded, “Your derivation must contain an absorption coefficient that is constant for all radiation”. My derivation didn’t use and didn’t need absorption coefficients. I know from another post by Pirilä that he is laboring under the misinterpretation that Beer’s Law applies to spectral lines as well as the total radiation when it applies only to the latter.

        2. What does 100% rigorous mean, especially in the context of science? Why do you add “no two ways about it”, and what does that mean?

        3. I assume you meant that you found proof that Beer’s law was false, not what I had written about it, meaning you had proof that Beer’s Law said something about the spectrum of EM. Fine. Kindly provide the citation containing that proof. If it’s freely available to the public, I’ll read it and get back to you.

        4. Most often I start research with Wikipedia. For that it is a fine source. I never rely on Wikipedia. Some of the articles are horrible and wrong. The Wikipedia article on global warming is an excellent example. What it attributes to IPCC is, as far as I have been able to discern, true in the sense that that is what IPCC said. Sometimes the article steps out of that context to repeat what IPCC said as if it were true.

        In checking this article for you, I found this amusing: “The scientific consensus is that anthropogenic global warming is occurring.” The words “scientific consensus” were a hot link to an article with the title, “Scientific opinion on climate change”, a long article exclusively on climate change. This seems to confirm that no branch of science, other than climatology, considers “scientific consensus”. This is one of the reasons that climate thinking lies “outside the box”, that box being science.

        Also, one of the Wiki references for the scientific consensus conjecture was to Naomi Oreskes and her famous article in which she claimed, as quoted in Wikipedia, “Such statements suggest that there might be substantive disagreement in the scientific community about the reality of anthropogenic climate change. This is not the case.” This was her conclusion from a survey she made of 928 refereed journal articles having the key words “global climate change”, and in these none disagreed with the alleged consensus. Her conclusion, popular within the AGW movement, of course didn’t follow from her research. She did not survey the scientific community, and that should have been obvious. She surveyed refereed journal articles. A correct conclusion from her effort is that refereed journals do not publish papers critical of AGW dogma. Wikipedia and a host of organizations, including even Scripps Institution of Oceanography, were suckered.

      • Vaughan Pratt 12/20/10 1:24 am

        What you can infer from HITRAN is how the programmer made his program work, and maybe something about his assumptions. This is computer science.

        What you can learn from a lab experiment is how a sample of nature works in the laboratory. This is physics.

      • HITRAN is a database of spectral lines, not a program. Line-by-line programs like SpectralCalc use the data from HITRAN to calculate spectra for almost any set of conditions you can imagine including the surface of Venus. The spectral line data were, in fact, measured in the laboratory and the field as well as calculated. It’s not made up out of whole cloth. See for example this article describing HITRAN 2008, the most recent version of the database. The database is also updated between major revisions as new data become available.

        With a lot of work, you could even write your own line-by-line program. It’s a lot easier, though, to subscribe to SpectralCalc.

        Your implication that this is all some grand conspiracy, which would have to involve thousands of working scientists outside the climate science community, is tin foil hat level.

      • DeWitt Payne 12/20/10 1:28 pm,

        Thanks for the correction. Clearly I am not a user of HITRAN, HAWKS, or any other program computing absorption spectra. I interpreted Vaughan Pratt’s question, “what would a lab experiment add to what you could not already infer from the HITRAN database?”, to mean that he was thinking about a spectrum created in the lab as opposed to a spectrum created with HITRAN, intermediate program or not, and not a spectrum compared with a data base of line coefficients. I’m not doing the lab experiments either.

        I had thought that HAWKS (HITRAN Atmospheric Workstation) was an add-on to HITRAN. Seems like it would be from the name, doesn’t it? Following the lead from your post, I learned that HAWKS is officially the parent, and HITRAN the child. Sorry about that. No offense intended.

        Still, the computation of absorption spectra seems rather straightforward. A matter of quadrature, some would say. Therefore the fact that HAWKS (not HITRAN) seems to produce a result at odds with Beer’s Law (a fact not recognized even by Petty in his introductory text) must have its cause in the database! As we have already determined, the apparent logarithmic region is due to band wings, not to the integration, line-by-line. Thus the presumptions of the programmers (and programmers do create computer databases) are those of whoever created that database, with its band wings developed from some third party’s unvalidated absorption model.

        I didn’t mean to imply that some grand conspiracy exists here. I meant to state it clearly. It doesn’t lie in radiative transfer, or just in radiative transfer. It is known as AGW. Thousands of scientists subscribe to it, tarnishing their credentials everyday. Many are frequent posters here.

        Some wag said the problem is not in the things that we don’t know, but in the things that we know that aren’t true. Newton wasn’t wrong – Einstein just shrank his domain. Flat Earthers did fine until Pythagoras, and in fact we still use a flat Earth today for local navigation.

        Climatology, though, is over-brimming with things that aren’t true – roiling thermodynamic equilibrium, feedback without flow variables, missing natural forcings attributed to man, revised laws of physics, a constant climate sensitivity parameter. One more isn’t going to hurt, especially considering how far down in the noise that radiative transfer is – the largest source of uncertainty in CO2 radiative forcing, hiding behind a horribly misunderstood, overestimated, upside down feedback coefficient (ΔF/ΔT), dependent on dynamic cloudiness but reduced by The Leader to a static parameterization.

        We are deluged everyday with conspiracy theories and the ranting of conspiracy theorists, some with their own television shows, and some sporting high honors. But to consider the existence of conspiracy theories proof that conspiracies don’t exist is the same as saying noise exists, therefore signals don’t.

      • Some wag said the problem is not in the things that we don’t know, but in the things that we know that aren’t true.

        stochasm
        n
        What we think we know, minus what we actually know.
        [Greek stokhazesthai, to guess at]

        sarcasm
        n
        Clinical procedure, used in the treatment of stochasm; rarely effective, deprecated.
        [Greek sarkasmos, from sarkazein, to bite the lips in rage]

        stochasmectomy
        n
        Surgical alternative to sarcasm. Four-year operation leading to a college degree.

  155. I don’t understand the question.

    I’m not surprised since you obviously didn’t notice the wavelength scale at the top of the plot.

    You may be confusing brightness with heat, they are very different things in general.

    What? Like you confuse energy and heat? Not bloody likely.

    Miskolczi even after I gave two independent and very easy ways of calculating the kinetic energy of the atmosphere,

    Why don’t you try again when you work out that vectors in general do not have an orthogonal component.

    • VP: You do realize you’re picking the frequency peak here and not the more usual wavelength peak, right?

      JP: What difference is that supposed to make?

      VP: I don’t understand the question. Are you asking whether the frequency and wavelength peaks are different, or whether the color temperature depends on which one you use?

      JP: I’m not surprised [that you didn’t understand the question] since you obviously didn’t notice the wavelength scale at the top of the plot.

      What I noticed was that the wavenumber scale at the bottom is linear, the wavelength scale at the top is not. Linearizing the latter changes the location of the peak at 270 K from 18.886801383026… microns to 10.73247513517118543… microns, which you may or may not recognize as the more usually given peak for 270 K. (Planck’s law is a mathematical idealization of black body radiation which explains why the frequency and wavelength peaks can be given exactly, taking 270 K to mean 270.000000000… K. Paradoxically this precision is valid even though the universal constants appearing in Planck’s law are only known to a handful of decimal places. In its usual application to stochastic phenomena only the first few digits of these two peak wavelengths are physically significant.)
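      For anyone who would rather verify the two peaks than take either of us on faith, here is a short sketch (my own, purely illustrative) that solves the two standard transcendental equations from Planck’s law (x = 5(1 − e^−x) for the wavelength peak of B_λ, x = 3(1 − e^−x) for the frequency peak of B_ν) and converts both to wavelength at 270 K.

```python
import math

h = 6.62607015e-34    # Planck constant, J s
c = 2.99792458e8      # speed of light, m/s
kB = 1.380649e-23     # Boltzmann constant, J/K

def peak_x(a, x=5.0, iters=60):
    """Solve x = a*(1 - exp(-x)) by fixed-point iteration."""
    for _ in range(iters):
        x = a * (1.0 - math.exp(-x))
    return x

T = 270.0
x_lam = peak_x(5.0)   # peak of B_lambda  (Wien displacement law)
x_nu = peak_x(3.0)    # peak of B_nu

lam_peak = h * c / (x_lam * kB * T)        # ~10.73 microns
lam_of_nu_peak = h * c / (x_nu * kB * T)   # ~18.89 microns

print(f"wavelength peak:                {lam_peak * 1e6:.3f} um")
print(f"frequency peak (as wavelength): {lam_of_nu_peak * 1e6:.3f} um")
```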

      Thank you, you’ve answered my original question.

      JP: I’m wondering if you have worked out yet that it’s the brightness temperature rather than colour temperature that gives us the heating potential.

      VP: You may be confusing brightness with heat, they are very different things in general.

      JP: What? Like you confuse energy and heat? Not bloody likely.

      Ah, then you do agree that brightness is not the same as heat. Good, I was worried for a minute you were going to stick to your guns. Though I see you’re still sticking to your guns for energy and heat. Heat is a form of energy. Heat is not a form of brightness, nor is brightness a form of heat, they’re very different things. Heat is much closer to energy than it is to brightness.

      Why don’t you try again when you work out that vectors in general do not have an orthogonal component.

      I knew your physics was above my pay grade (I’d have to be paid a lot more before I could understand it), but I see this is also the case for your mathematics. You’ll have to forgive me but I’m unfamiliar with that branch of linear algebra in which a component of a vector can be orthogonal. Please school me.

      • What I noticed was that the wavenumber scale at the bottom is linear, the wavelength scale at the top is not.

        Hardly surprising given that the one is the reciprocal of the other. So let’s see if we can get on the same page here.

        Linearizing the latter changes the location of the peak at 270 K from 18.886801383026… microns to 10.73247513517118543… microns, which you may or may not recognize as the more usually given peak for 270 K

        I don’t think so. If you linearise one in a chart the other will be nonlinear, but the relationship remains the same. For instance, a peak at a wavenumber of 530 cm^-1 corresponds to 1/530 cm ≈ 0.00189 cm ≈ 18.9 microns. This is a measured peak using a Michelson interferometer. You will find that is in the ballpark for every spectrum obtained that way. The number that you quote is the Wien’s law result, 2.9×10^3 μm·K / 270 K ≈ 10.74 microns. So which is it to be, the measured or the computed?

        I think you’ll find if you look into it that Wien’s law only holds for short wavelengths, just as Rayleigh-Jeans only holds for long wavelengths.

        Ah, then you do agree that brightness is not the same as heat.

        Brightness, like luminance and intensity, is an indicator of heating potential: obviously, if an object is put into an EMR field of higher brightness or intensity, the object will be considered cold and the field hot, and heat will flow from the field source to the object.

        You’ll have to forgive me but I’m unfamiliar with that branch of linear algebra in which a component of a vector can be orthogonal.

        Angular momentum and rotational motion have rules where the momentum does not just have an orthogonal component but actually is orthogonal to the plane of rotation.

      • If you linearise one in a chart the other will be nonlinear but the the relationship remains the same.

        Usually before I demonstrate my ignorance (though sometimes I forget) I take the precaution of googling something I’m about to get wrong at the top of my voice. You could have avoided this here by putting the following line in the search bar:

        “peak wavelength” “peak frequency”

        Google’s first hit is a conversation much like the one we’re having here, with your part being played by grav and Jeff Root who are getting schooled by the others on the distinction between these two concepts.

        I take it you don’t work professionally in this area. Confirmed by:

        I think you’ll find if you look into it that Wien’s law only holds for short wavelengths like Raleigh-Jeans for LW.

        That would have been true in 1899. You’re mixing up physics and history, and also the two Wien laws. The Wien law you’re speaking of, nowadays called the Wien approximation or the Wien distribution law, is the expression c/(λ⁵ exp(d/λT)) for the energy E(λ) of a black body at temperature T as a function of wavelength λ, where c and d are empirically determined constants. German physicist Wilhelm Wien came up with this law in 1896, based on adiabatic invariance, as his answer to the problem raised by Kirchhoff in 1859: what was the law governing the dominant frequency of black body radiation as a function of wavelength and temperature? Even though his answer was wrong at long wavelengths it earned him the 1911 Nobel Prize as being a major step towards Planck’s 1900 reconciliation of Wien’s law and the Rayleigh-Jeans law you mentioned. Both Wien’s 1896 law and the Rayleigh-Jeans law were based on classical (non-quantum) considerations. Even though both laws are wrong, both are still in use today for wavelengths respectively well below and well above the dominant wavelengths of the radiation. Both laws are wrong in the vicinity of the dominant wavelengths, where Planck’s law must be used, and each is badly wrong in the domain of validity of the other.

        Planck’s law permits the correct derivation of Wien’s other law, his earlier (1893) displacement law, which says that the dominant wavelength of radiation at temperature T is inversely proportional to T. So even though Wien’s 1896 distribution law was wrong, his 1893 displacement law was right, and is still in use today across the whole EM spectrum (well, above VHF anyway, I have no idea whether the statistics still holds up in the medium frequency band, .3-3 MHz).

        The law I was referring to was the latter, his displacement law. This law standardly recognizes very different peaks, for respectively wavelength and frequency. There are also two more useful peaks, both intermediate between the first two, a wavelength-frequency-neutral peak and the median-energy peak, see the second and third paragraphs of this section of the Planck’s law article in Wikipedia (or just take Google’s first hit with the phrase, peak wavelength peak frequency, without any quotation marks). You may also find the tables there very useful for getting a sense of how each peak splits up the total energy.
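        A quick numerical check of those regimes, for anyone who wants one: the sketch below (illustrative only) evaluates Planck’s law, the Wien approximation, and the Rayleigh-Jeans law at 270 K. The ratios show Wien accurate well below the peak wavelength, Rayleigh-Jeans accurate well above it, and both wrong near it.

```python
import math

h, c, kB = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def planck(lam, T):            # spectral radiance B_lambda
    x = h * c / (lam * kB * T)
    return (2 * h * c**2 / lam**5) / math.expm1(x)

def wien(lam, T):              # Wien 1896 approximation (drops the -1)
    x = h * c / (lam * kB * T)
    return (2 * h * c**2 / lam**5) * math.exp(-x)

def rayleigh_jeans(lam, T):    # classical long-wavelength limit
    return 2 * c * kB * T / lam**4

T = 270.0
for lam_um in (2.0, 10.0, 100.0, 1000.0):
    lam = lam_um * 1e-6
    b = planck(lam, T)
    print(f"{lam_um:7.1f} um   Wien/Planck = {wien(lam, T)/b:10.3g}   "
          f"RJ/Planck = {rayleigh_jeans(lam, T)/b:10.3g}")
```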

        Angular momentum and rotational motion has rules where the momentum does not just have orthogonal component but actually is orthogonal to the plane of rotation.

        I may not be able to predict the weather one month from now, but I predict you’ll go on saying mathematically meaningless things like “momentum does not just have orthogonal component” 18 months from now, based on the following exchange seen on scienceblogs in June of last year:

        BPL: I debated Jan Pompe and his friends for a long time on landshape.org, but I finally had to give up–I was drowning in the great floods of stupidity.

        Nathan: I have a quick read of the stupidity and it seemed like someone called Steve Short was vigorously debating Jan Pompe, and basically backed him into a corner. I agree that they live in a different universe with different physics, but it does look like the penny is starting to drop over there. It’s kind of funny.

        Nathan’s complaint echoes mine: you use different physics. To which I would now add, different mathematics.

        I have difficulty picturing you working at a reputable lab, but if you do you must be on extremely good terms with the director.

      • note we have a new thread for discussion on radiative transfer

      • Oh but Judith, that particular train of thought you picked up on was on its last legs, not for any lack of staying power on Jan Pompe’s part, who I imagine could keep it up for decades, but on mine. I called it quits just now and will not be responding to Jan Pompe any more.

        I don’t think you’re doing Jan any big favors by embarrassing him in this way (even if he can’t tell he should be embarrassed). How about deleting that thread before it gets under way (so far I don’t see any responses there that aren’t duplicates of those on this thread) and picking a more productive part of this thread to fork it from, for example the recent discussions between Fred Moolten, Pekka Pirilä, and myself, which are trying to focus on more substantive issues concerning radiative transfer models.

        What do you think, Fred and Pekka?

      • To which I would now add, different mathematics.

        Indeed it’s rather peculiar that you think wavelength is not the simple inverse of frequency. How you managed to confuse yourself is beyond me.

        As far as BPL is concerned, people usually drown because they can’t swim and they are out of their depth.

        Both laws are wrong in the vicinity of the dominant wavelengths, where Planck’s law must be used, and each is badly wrong in the domain of validity of the other.

        Glad that despite an overly verbose attempt to show the opposite you are getting the message.

      • Indeed it’s rather peculiar than you think that wavelength is not the simple inverse of frequency. How you managed to confuse your self is beyond me.

        Right, anything beyond trivialities like c = νλ seems to be beyond you. It would be a waste of your time trying to follow the discussion where grav and Jeff Root were having the same conceptual difficulties as you. I have no idea whether those two ever eventually got it, but you’ve made it clear you will never get it.

        Glad that despite an overly verbose attempt to show the opposite you are getting the message.

        Stop digging yourself in deeper and admit that you didn’t know the Wien displacement law was different from the Wien distribution law. You’ve made it obvious to everyone at this point that you don’t understand the difference. Nor the difference between the wavelength peak and the frequency peak, nor the meaning of orthogonality.

        At this point I will follow the advice of PDA and Willard and look for more productive conversations on this blog, this is a waste of everyone’s time. Goodbye.

      • Right, anything beyond trivialities like c = νλ seems to be beyond you.

        Don’t do that again; I nearly choked laughing. You can’t see the problem with that equation in the context of frequency being expressed in cm^-1. In that equation it’s s^-1.

        Stop digging yourself in deeper and admit that you didn’t know the Wien displacement law was different from the Wien distribution law

        Clearly Wien’s displacement law gives wrong results for low frequency or long wavelength radiation. It was Wien’s displacement law I meant; it’s Wien’s displacement law that fails.

      • You can’t see the problem with that equation in the context of frequency being expressed in cm^-1. In that equation it’s s^-1.

        You could complete that to a coherent sentence by supplying the requisite coefficient. I will send you US$2 by surface mail if you can. That’s meant as an incentive, not a putdown, please don’t make it the latter.

        Clearly Wien’s Displacement law gives wrong results for low frequency or long wavelength radiation. It was Wien’s displacement law I meant it’s Wien’s displacement law that fails.

        You don’t say.

        If there were such a thing as a Monty Python sketch for physicists, this would be it.

      • Since the coefficient for c = νλ has to be 1, let me be clear what I’m asking you for: I want the coefficient a in the formula c = aν̃λ. It must furthermore meet the following requirements.

        1. In SI units, not CGS.

        2. A real number, not a symbol.

        3. Accurate to a hundred decimal places.

        If you think these are difficult conditions, welcome to physics, it’s harder than you think (but easier than most people think).

      • Oops, forgot I already said goodbye. Sigh, a rounding error.

      • You can’t see the problem with that equation in the context of frequency being expressed in cm^-1. In that equation it’s s^-1.

        You could complete that to a coherent sentence by supplying the requisite coefficient

        It’s simpler than that
        s/

      • try again
        s/. I/ while i/

      • Doesn’t meet conditions 1-3 for the $2.

  156. The logarithmic dependence has nothing to do with Beer-Lambert. For weakly absorbing lines B-L applies; as the lines become more strongly absorbing (and lines broaden) the logarithmic regime is entered; with even more strongly absorbing lines the square root dependence applies. This has been known and used for many years (e.g. by astronomers) and isn’t peculiar to CO2.

    • For a weak isolated line with a broad band source, the response is linear with concentration or mass path. That isn’t B-L either. As the center of the line becomes saturated, the response becomes square root as long as the bandwidth of the radiation is much wider than the line wings. But that’s an isolated line with a Lorentz line shape in a homogeneous medium with constant temperature and pressure. The behavior in the wings of the CO2 band in the atmosphere is complex with contribution from different altitudes with different temperature and pressure resulting in different line widths. Below 10 ppmv, the response becomes linear. Above 10 ppmv, the emission is a logarithmic function of partial pressure up to at least 1000 ppmv and possibly much higher.
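      Since the curve of growth keeps coming up, here is a minimal numerical sketch (toy units, a single isolated Lorentz line in a homogeneous path, my own illustration) showing the weak-line linear limit and the strong-line square-root limit of the equivalent width. The intermediate and band behavior discussed above depends on line overlap and band wings and is not captured by this single-line toy.

```python
import numpy as np

# Single isolated Lorentz line, homogeneous path (toy units).
S, gamma = 1.0, 0.1                          # line strength, half-width
nu = np.linspace(-500.0, 500.0, 2_000_001)   # wavenumber grid about line center
dnu = nu[1] - nu[0]
phi = (gamma / np.pi) / (nu**2 + gamma**2)   # normalized Lorentz line shape

for u in (0.01, 0.1, 1.0, 10.0, 100.0, 1000.0):       # absorber amount
    W = np.sum(1.0 - np.exp(-S * phi * u)) * dnu       # equivalent width
    print(f"u = {u:8.2f}   W = {W:8.4f}   "
          f"weak limit S*u = {S*u:8.2f}   "
          f"strong limit 2*sqrt(S*gamma*u) = {2*np.sqrt(S*gamma*u):7.3f}")
```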

  157. Phil. Felton 12/21/10 12:04 pm,

    Phil. Felton said: >>The logarithmic dependence has nothing to do with Beer-Lambert. For weakly absorbing lines B-L applies; as the lines become more strongly absorbing (and lines broaden) the logarithmic regime is entered; with even more strongly absorbing lines the square root dependence applies. This has been known and used for many years (e.g. by astronomers) and isn’t peculiar to CO2.

    Where can the data be seen?

    We need to be sure that the data are for the total intensity, I(C), and not the slope, ΔI/ΔC. You say “e.g. astronomers”, not “i.e. astronomers”. In what other applications do you find the data? Why do astronomers care about a variable C? Do they have occasion to model the atmosphere in layers? But don’t let these concerns worry anyone. They will be resolved once the data can be examined.

  158. DeWitt Payne 12/21/10 3:51 am,

    Where exactly in the HITRAN model or in HAWKS, or elsewhere, are the features you discuss, i.e., line saturation, square root dependence, the Lorentz shape, the CO2 band wings, altitude dependence, linear behavior below 10 ppmv, and logarithmic behavior to at least 1000 ppmv? How is “at least” mechanized or determined?

    Previously you were careful to say this information was all in the line-by-line data base, and not in the summing. Now your results depend on the total bandwidth of the application and in the homogeneity of the medium. How is your information stored in the line-by-line data base?

    You say the logarithmic function is good “to at least 1000 ppmv and possibly much higher”. That number of 1000 is certainly important because it accounts for IPCC’s essential conjecture about CO2. We need to see the data to be sure you’re talking about total intensity, I, and not ΔI/ΔC, and that the break doesn’t occur in a lower region, e.g., between about 400 and 600 ppm. What is the standard deviation of the logarithmic fit at 1000 ppmv? Would it be correct to assume you’re talking about a fit of the logarithmic function to the output of HAWKS or some such calculator? If so, what is the standard deviation of the error between your calculator and the real world, measured absorption? Why do you say, “at least and possibly much higher”? Hasn’t the test been conducted yet?

    Where can your data be examined? Or is it to remain secret like the raw data at MLO? Or the temperature data at CRU?

    • The data’s available at HITRAN; you can calculate it yourself. The transition from logarithmic to a square root dependence doesn’t occur at a single point but is spread out over a range. Look up ‘curve of growth’ to see the behavior for a single line. If you want to look at the ‘real world’ go find a nice FTIR with a long path cell; the experiment shouldn’t take more than a day.

      • Phil. Felton 12/21/10 9:52 am,

        Phil Felton said,

        >> The data’s available at HITRAN; you can calculate it yourself. The transition from logarithmic to a square root dependence doesn’t occur at a single point but is spread out over a range. Look up ‘curve of growth’ to see the behavior for a single line. If you want to look at the ‘real world’ go find a nice FTIR with a long path cell; the experiment shouldn’t take more than a day.

        First I have no intention of repeating your work. I wouldn’t ask anyone else to repeat your work with no more information than you seem willing to divulge.

        DeWitt Payne claims that the various shapes of the curve are in the HITRAN database and not in the summing algorithm. Do you agree? If so, how do you think they got there? Or, what is the mathematics by which the curve is sometimes linear, sometimes square root, and sometimes logarithmic? (But never saturating!)

        As we say in bridge, it’s time to review the bidding.

        IPCC claimed that the radiative forcing of CO2 in particular is logarithmic in the gas concentration. It provided neither data, nor references, nor science for its claim, regardless that the claim is essential to the entire AGW model. One highly knowledgeable poster here claimed that IPCC did not make the broad claim of logarithmic dependence, and had to be shown that the notion of a constant climate response to a doubling of the concentration is identically equivalent to a logarithmic assumption.

        So this discussion on this thread, as on other blogs around and about, has resulted in people coming forward to do exactly the same thing that IPCC did. Assert the truth of the claim, adding that the knowledge of the claim is too widespread to be questioned, but still without references or data or physics.

        This should be obvious: the radiative forcing of CO2 cannot exceed the total OLR. Nor can it exceed the smaller number given by the OLR multiplied by the fraction of the total OLR band covered by the sum of all absorbing CO2 lines or bands. Yet the logarithmic dependence allows that to happen.

        The logarithmic dependence allows one to solve for the CO2 concentration that produces any RF from zero to infinity. If we all agree that that is impossible, then we need to know the limits of some better model that does saturate.

        A linear curve is the wrong answer. So is the square root. And the logarithm, too. And a transition from one to the other is equally wrong. The Beer Law result is on point and solves the problem, but the AGW movement doesn’t like it (obviously). IPCC’s model requires knowledge of where climate is on the curve, whatever model fits. Where, quantitatively, is the climate today with regard to saturation? What is required is ΔI/ΔC through the region of interest, apparently from 280 ppm to 560 ppm. An average would do nicely. Even knowing the value at 390 ppm would be reassuring.

        IPCC linearizes the climate problem as follows. ΔT = λ*ΔF, where λ is called the “climate sensitivity parameter”. AR4, ¶2.2, p. 133. (Previously, it was T2x = F2x/α. TAR, ¶9.2.1, p. 533.) IPCC applies the logarithmic dependence on CO2 concentration to set ΔF = k*log_2(C/C0), where k = +3.7 W m^-2. See AR4, ¶2.3.1, p. 140. IPCC appears not to identify k as a parameter, giving it neither a name nor a symbol. Net, IPCC’s model is ΔT = λ*k*log_2(C/C0).
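        Purely to make the arithmetic of that formulation explicit (not to endorse it), here is the chain as stated, with λ taken as the roughly 0.5 K/(W m^-2) figure mentioned in the TAR passage quoted below; the numbers are illustrative.

```python
import math

def delta_T(c_ppm, c0_ppm=280.0, k=3.7, lam=0.5):
    """IPCC-style linearized chain as described above (illustrative only):
    dF = k * log2(C/C0) with k ~ 3.7 W/m^2 per doubling,
    dT = lambda * dF with lambda ~ 0.5 K per W/m^2."""
    dF = k * math.log2(c_ppm / c0_ppm)
    return lam * dF

print(delta_T(560.0))   # one doubling -> ~1.85 K
print(delta_T(390.0))   # 280 -> 390 ppm -> ~0.9 K
```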

        The parameter λ is not a tangential concept to the big picture. It is the core, the foundation of IPCC’s radiative forcing paradigm. IPCC said,

        >>In the one-dimensional radiative-convective models, wherein the concept was first initiated, λ is a nearly invariant parameter (typically, about 0.5 K/(Wm^−2); Ramanathan et al., 1985) for a variety of radiative forcings, thus introducing the notion of a possible universality of the relationship between forcing and response. It is this feature which has enabled the radiative forcing to be perceived as a useful tool for obtaining first-order estimates of the relative climate impacts of different imposed radiative perturbations. Although the value of the parameter “λ” can vary from one model to another, within each model it is found to be remarkably constant for a wide range of radiative perturbations (WMO, 1986). The invariance of λ has made the radiative forcing concept appealing as a convenient measure to estimate the global, annual mean surface temperature response, without taking the recourse to actually run and analyse, say, a three-dimensional atmosphere-ocean general circulation model (AOGCM) simulation. TAR, ¶6.2 Forcing-Response Relationship, ¶6.2.1 Characteristics, p. 354.

        The climate sensitivity parameter is a modeling parameter. It has recently been estimated for the first time from ERBE data. Otherwise, it is a direct or calculated output of the GCMs, but fatally flawed. IPCC runs its models open-loop with respect to cloud albedo, the most powerful feedback in climate, and the negative feedback that regulates climate against warming from any cause. Open loop λ is too large by a factor between 7 (ERBE estimation) and 10 (from an estimate of the threshold of albedo measurement accuracy) compared to its value closed loop.

        The factor k depends on the accuracy and formulation of the logarithmic approximation. When those who post here say the intensity is approximately logarithmic, without being faulted they could well mean that I = a + b*log(C/C0) ± err(C). What IPCC needs for its formulation is ΔI/Δ(C). That depends on the value of b and the nature of the unknown error, err(C).

        Some have responded here that, well, the logarithmic dependence is an approximation (not at all disputed or doubted), AND that it is valid throughout the IPCC range, or from some nominal concentration to 1000 ppm. This nicely covers what IPCC has done, but does it have a basis? This answer shifts the problem to the accuracy of the logarithm approximation. Some seem to suggest that it is accurate with respect to the output of models. That just pushes the problem down again, this time to the output of the models.

        We have yet to get these experts or students in radiative transfer to say how the models store or reproduce their claimed results. The quantity of experts or students who believe in something is not a basis in science. “Trust us. We know best.” The flat Earthers didn’t get to vote down Pythagoras. The Newtonian physicists didn’t get to vote down Einstein. As Einstein reportedly said, it only takes one. Getting more and more people to post the same thing, sans references, sans data, and sans science, is what the lawyers call cumulative evidence. It is a morphing of Einstein’s remark: one is enough. And the one that has published the conjecture, and relied on it for recognition, to frighten the public, to loosen government coffers, and to attack Western Democracy at its economic core, is the IPCC.

        If the logarithmic model has any validity by any route whatsoever, the validity must rest on measurements of actual absorption of CO2, either in the atmosphere or in the laboratory. If it is an a priori model, that is, based on reasoning alone, then it would be acceptable as a hypothesis, not yet a theory. When the a priori basis is established, scientists can then challenge the reasoning and the accuracy.

        Experimental validation, one way or the other, direct or indirect, is an imperative in science, honored in the breach by people here writing about science.

      • To be precise, the data for calculating line shapes is in the HITRAN database. See Tables 1 and 2 here for the record format for HITRAN 2004: http://eprints.ucl.ac.uk/1327/1/350.pdf

        There’s more data for calculated vs observed spectra in that paper too.
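        For readers who want to look at the database itself: the 2004-format .par records are fixed-width text, one transition per line. The sketch below parses only the leading fields; the column widths are my reading of the format tables in the paper linked above, so verify them against that reference (the file name in the commented-out example is hypothetical).

```python
# Sketch of a parser for the leading fields of a HITRAN 2004-format record.
# Field widths follow my reading of the tables in the paper linked above;
# check them against that reference before relying on this.

def parse_hitran_line(rec: str) -> dict:
    return {
        "molecule_id":  int(rec[0:2]),      # e.g. 2 = CO2
        "isotopologue": int(rec[2:3]),
        "wavenumber":   float(rec[3:15]),   # line position, cm^-1
        "intensity":    float(rec[15:25]),  # line strength at 296 K
        "einstein_A":   float(rec[25:35]),
        "gamma_air":    float(rec[35:40]),  # air-broadened half-width
        "gamma_self":   float(rec[40:45]),  # self-broadened half-width
        "E_lower":      float(rec[45:55]),  # lower-state energy, cm^-1
        "n_air":        float(rec[55:59]),  # T dependence of gamma_air
        "delta_air":    float(rec[59:67]),  # air pressure-induced shift
    }

# with open("02_hit04.par") as f:           # hypothetical CO2 file name
#     lines = [parse_hitran_line(rec) for rec in f]
```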

    • Judith Curry, et al.

      DeWitt Payne 12/21/10 3:51 am on the Confidence… thread claimed that the response of some absorption function (apparently the same function as Phil. Felton’s on 12/21/10 12:04 am) was linear at low concentrations, turning into logarithmic in a middle range, then becoming square root at high concentrations (when the center of the line “becomes saturated” (an impossibility; saturation is an asymptote)). Presumably, these investigators are equating the real world of gas absorption and their HITRAN based models of it.

      I wrote the most elementary equation for the logarithm of concentration, log_2(x), choosing base 2 because (a) it doesn’t make any difference and (b) log base 2 counts doublings, which IPCC prefers to use. I then wrote a simple equation for a linear, unity-slope function, offset by a constant parameter, O(x) = x + φ. I did the same for the square root function, similarly offset by a constant, R(x) = SQRT(x) + ψ. I calculated the φ that produced the minimum root mean square (rms) error for O(x) – log_2(x) for x ∈ (1,2,0.1), and the ψ that produced the minimum rms error for R(x) – log_2(x) for x ∈ (6,10,0.1). The region (1,2) is a normalization of the region (280 ppm, 560 ppm), the CO2 concentration range from the estimated pre-industrial level to a doubling. The linear function and the square root function blend smoothly into the logarithm in the low and high concentration regions, respectively. Both O(x) and R(x) are convex up and, as optimized, lie above log_2(x), tangent to it at low concentrations and high concentrations, respectively.

      The linear error for φ = 0.9848 was between -0.05 (-9.35% of the average of log_2(x) in the sub-region) and 0.03388 (6.142%), with σ = 7.98E-8 (0.0000145%). The square root error for ψ = 0.1633 was between -0.0275 (-0.933%) and 0.00883 (0.2691%), with σ = 1.236E-7 (0.0000041%). The point here is the exceptional accuracy possible by limiting the domain.

      I next computed a Beer’s Law response, RF = a + b(1-exp(-kx)), and computed a minimum rms for a, b, and k for x ∈ (1,2), the same region as used for O(x). A minimum rms error for RF – log_2(x) occurred for (a,b,k) = (-1.991, 3.6977,0.6959). The minimum error was -0.00290 (-0.5252%) and the maximum error was 0.002754 (0.4994%), with σ = 6.398E-7 (0.0001160%). Again, an exceptional accuracy is possible.

      The optimum Beer’s law approximates (a,b,k) = (-2,4,ln(2)), and the Beer’s Law response is quite close to RF’ = 2*(1-2^(1-x)), a result likely to be found by analysis.
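      For anyone who would rather reproduce this comparison than argue about it, here is a minimal sketch of the same exercise: a least-squares fit of the saturating form a + b(1 − e^(−kx)) to log_2(x) over x ∈ [1, 2]. It uses scipy’s generic curve fitter, so the fitted numbers may differ slightly from those quoted above, but the quality of the fit can be checked directly.

```python
import numpy as np
from scipy.optimize import curve_fit

x = np.arange(1.0, 2.0001, 0.1)    # normalized concentration, 1 = 280 ppm
target = np.log2(x)                # the logarithmic dependence

def beer_form(x, a, b, k):         # saturating, Beer's-law-like form
    return a + b * (1.0 - np.exp(-k * x))

popt, _ = curve_fit(beer_form, x, target, p0=(-2.0, 4.0, 0.7))
resid = beer_form(x, *popt) - target

print("fitted (a, b, k) =", popt)                       # near (-2, 4, ln 2)
print("max |error| over one doubling =", np.abs(resid).max())
```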

      So to the extent that a model for the atmosphere might be logarithmic in the gas concentration, a linear fit in the low concentration regime and the square root fit in the high concentrations are mathematical consequences. The linear and square root fits are not independent results obtained from calculations using HITRAN data, but instead are results of the logarithmic function.

      Furthermore, a Beer’s Law fit is quite as good as the linear fit, that is, exceptionally close (no worse than a half percent), in the low region, but covering a full doubling of CO2 concentration. These are all mathematical results, providing insight into approximation claims for the linear and square root, and for the logarithm vs. Beer’s Law. They say nothing about the atmosphere, about a laboratory environment, nor about calculations with HITRAN.

      The intensity computed from HITRAN data is claimed by investigators here to be logarithmic. However, the linear and square root claims add nothing but mathematical tautologies. These asymptotes are a distraction. They are not exceptional, nor are they validating of anything.

      Beer’s Law on the other hand tracks the logarithm result until it breaks away to approach its horizontal asymptote, the saturation level for the gas. In the example above, that level is 2 (log_2(4) =2, and log_10(4) = 0.60206). At that breakpoint, Beer’s Law in the example is 1.74, an 18% loss due to saturation, an essential effect not found in the linear, log, or square root functions.

      This little analysis around the logarithmic function does not validate the band wings which some claim create the logarithm, nor does the hypothesis that those band wings exist, whether by analysis or extrapolation, validate the logarithm function. Validation awaits measurements of the band wings or measurement of the logarithm.

      The example can’t inform us what the saturation level is for CO2. That must be done by experiment, too. No evidence is to be found, certainly not by atmospheric measurements, but not even by laboratory measurements, to indicate where the atmosphere is with respect to the Beer’s Law, or to any other hypothetical saturation. The Beer’s Law curve fits the logarithm well over a region much greater than the increase in CO2 from pre-industrial times to the present. Whether by Beer’s Law, or some other saturating curve, a knee will exist in the radiative forcing as the curve switches from its nearly linear asymptote at low concentrations to the constant asymptote corresponding to saturation. Where that knee lies, and the asymptotic level of saturation, are critical for the radiative forcing model. However, IPCC waves the problem away with the unsupported logarithm dependence, a model which happens also to be physically impossible.

      • Presumably, these investigators are equating the real world of gas absorption and their HITRAN based models of it.

        How are these different? I thought we’d dealt with that already.

      • > I thought we’d dealt with that already.

        That’s a recurrent problem with comment threads. In fact, blogs do not help that either. Telling people to “read the blog” never really helps. We should be able to easily link to relevant discussions. Hence the need to have a repository of these little questions.

        More on this next year.

    • Vaughan Pratt 12/24/10 at 1:49 am, posting on Confidence … thread.

      Quoting me, you wrote,

      >>>> Presumably, these investigators are equating the real world of gas absorption and their HITRAN based models of it.

      >>How are these different? I thought we’d dealt with that already.

      Muy excellent question.

      I don’t know what you might have thought, but the real world and scientific models are quite different. Physicists rather routinely make the mistake of confusing the two. Here is a list of things found in models but not found in the real world:

      Parameters, values, numbers, coordinate systems, rates, ratios, density, scales, units, equations, graphs, infinity, infinitesimal, categories, taxonomies, dimensions, weights, measures, standards, thermometers, meter sticks, clocks, calendars, uncertainty, mathematics, logic, language.

      The real world, whether from space or from a sample in the laboratory, provides a limited range of signals for our senses and our instruments. Science builds models from these intercepted signals, using things from the list, which you will note are all man made, and makes predictions using things from the list, with tolerance bands. The validation comes from real world measurements not used in forming the model that fit the predictions within the stated tolerance.

      So what is needed here are HITRAN predictions validated by real world measurements, and more particularly, what we’re looking for is how the total intensity of radiation after gas filtration depends on gas concentration. This is not a question about the spectral density of the filtered radiation, nor how well the modeled spectral density fits a measured or derived real world spectral density.

      The difference between the real world and models contains the essence of science. The model tracks the real world as is known by data – measurements, observations quantified and compared to standards, also known as facts. When predictions have proven accurate, we have validation in basic science and closure in technology.

      When a scientist begins to believe his models are the real world, he begins to rely on model runs as if they were real world experiments.

      We have a number of posters here who are relying on the output of calculations with HITRAN as validation. They find models or analysis validating models and analyses. IPCC regards GCMs producing similar results as validation, or toy models validating AOGCMs. By reliance on this method, these investigators are not dealing with science.

      • So what is needed here are HITRAN predictions validated by real world measurements,

        How would this differ from the current HITRAN database? Extra columns? Someone’s stamp of approval affirming that they’d validated the database using real world measurements? Something else?

        and more particularly, what we’re looking for is how the total intensity of radiation after gas filtration depends on gas concentration

        You may be underestimating the complexity of that dependence. Here are some of the more important factors.

        1. Dependence of line widths on (total) pressure on account of pressure broadening. (This dependence btw invalidates Beer’s law for monochromatic radiation because the absorption coefficients of the constituent GHGs decrease with decreasing pressure.)

        2. Direction of the radiation. (Photons going straight up encounter the fewest opportunities for capture before reaching space; the probability of capture increases with increasing angle from the vertical.)

        3. Variations in lapse rate and tropopause altitude as a function of latitude and season. (Lapse rate and tropopause altitude are significantly less at the poles than the equator.)

        There is no one uniform dependence on mixing ratio. The goodness of fit of the logarithmic dependence of surface temperature on CO2 level to the actual total dependence is an extremely complex thing to estimate.
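        As a concrete illustration of factor 1, the air-broadened Lorentz half-width of a line is conventionally rescaled from its reference value as γ = γ_ref (p/p_ref)(T_ref/T)^n. The sketch below (illustrative numbers only; γ_ref and n are typical orders of magnitude, not values for any particular line) shows how much narrower a line becomes near the tropopause than at the surface.

```python
def lorentz_halfwidth(gamma_ref, p_hpa, T, n=0.7, p_ref=1013.25, T_ref=296.0):
    """Conventional pressure/temperature scaling of the air-broadened
    Lorentz half-width (gamma_ref in cm^-1 at p_ref, T_ref).
    n ~ 0.5-0.8 is typical; all numbers here are illustrative."""
    return gamma_ref * (p_hpa / p_ref) * (T_ref / T) ** n

gamma_ref = 0.07   # cm^-1 -- a typical order of magnitude for a CO2 line
print(lorentz_halfwidth(gamma_ref, p_hpa=1013.25, T=288.0))  # near the surface
print(lorentz_halfwidth(gamma_ref, p_hpa=200.0,   T=220.0))  # near the tropopause
```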

        Note that one never measures anything in the real world exactly. The best one can ever do is estimate it to within whatever your direct measuring and indirect calculating tools permit.

        Indirect calculation as a supplement to direct measurement is useful when it makes the estimate more accurate than the direct measurement. For complex things like dependence of surface temperature on GHG mixing ratios, good estimates depend crucially on supplementing measurement with calculation.

      • Dr. Pratt

        I will answer you on the new Radiative transfer discussion thread, which I think is Dr. Curry’s wish.

      • ”Indirect calculation as a supplement to direct measurement is useful when it makes the estimate more accurate than the direct measurement”

        Can anybody explain why this statement is any different from

        ‘Forget the friggin’ real world, our theories tell us the right answer’?

        Because if there is a difference it is so subtle as to have completely escaped me.

      • Vaughan Pratt

        ……”Indirect calculation as a supplement to direct measurement is useful when it makes the estimate more accurate than the direct measurement.”……..

        …..What !!!!!!!!!!!!!!
        Do you live in the “Twilight Zone” ….?

      • Latimer Alder 12/26/10 at 8:54 am, Bryan 12/26/10 at 8:00 am,

        You picked on Vaughan Pratt’s statement, >>Indirect calculation as a supplement to direct measurement is useful when it makes the estimate more accurate than the direct measurement.

        I didn’t mention it in my response because in many contexts, the statement is true. Here are a couple of examples.

        We detect and track an object in space with radar. At the same time, we detect and track an object at similar angles with IR, and ask if they are the same object to be tracked with both sensors. We create a hypothetical model in space with realistic inertia and fit it to the data from both sensors. Then so long as the sensor information remains consistent with the calculated model, we track with both sensors and estimate the parameters of the object from the calculated model, predicting its mass and trajectory.

        We measure Earth’s bond albedo via satellites and from Earth shine on the moon (the Twilight Zone, Bryan?). We need to know the global average albedo for climate modeling. We use the calculations to supplement the estimate. At the next step, we need to separate cloud albedo from surface albedo because of the differences in feedback. For that we include cloudiness measurements and cloudiness modeling.

      • Jeff Glassman

        If we have no reason to doubt the accuracy of the direct measuring device then the direct measurement will always trump the model prediction.
        For instance, with a well-known model such as the kinetic theory of gases we most often use PV = nRT.
        Let’s say we are dealing with a gas in a cylinder with a piston.
        If the model predicts a volume of 2 litres yet we are perfectly sure that the volume is not this value, then we need to change the model.
        The van der Waals equation gives a better account; however, even this equation has its limits.
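        A quick numerical sketch of the piston example (the a and b constants are the van der Waals values for CO2; the chosen pressures and temperature are illustrative only): the two equations agree closely at ordinary conditions and diverge as the gas is compressed, which is the regime where a trusted measurement forces a change of model.

```python
# Minimal sketch comparing the ideal-gas and van der Waals volumes for a gas
# in a cylinder.  Constants a, b are the van der Waals values for CO2; the
# chosen pressures/temperature are illustrative only.
from scipy.optimize import brentq

R = 0.08314              # L bar / (mol K)
a, b = 3.640, 0.04267    # van der Waals constants for CO2
n, T = 1.0, 300.0        # 1 mol at 300 K

def p_vdw(V):
    """van der Waals pressure for volume V (litres)."""
    return n * R * T / (V - n * b) - a * n**2 / V**2

for p in (1.0, 10.0, 50.0):                       # bar
    V_ideal = n * R * T / p
    V_vdw = brentq(lambda V: p_vdw(V) - p, n * b * 1.01, 10.0 * V_ideal)
    print(f"p = {p:5.1f} bar   ideal gas {V_ideal:7.3f} L   van der Waals {V_vdw:7.3f} L")
# At 1 bar the two agree to a fraction of a percent; at 50 bar (300 K is not
# far from CO2's critical point) they differ substantially, so which model a
# careful measurement 'trumps' depends on the regime being probed.
```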

      • See the Radiative transfer discussion thread for a response.

      • (I’ll post this here rather than chasing threads.)

        If we have no reason to doubt the accuracy of the direct measuring device then the direct measurement will always trump the model prediction.

        Bryan, let’s take a bathroom scale as a simple example. You have no reason to doubt its accuracy or you would not have purchased that brand. You weigh yourself, now you have a direct measurement, now you know your weight.

        But suppose you think to weigh yourself a second time and it registers 0.3 lbs heavier. (I have never owned a bathroom scale for which this has not occasionally happened.) What to do?

        1. Ignore the first reading as probably wrong?

        2. Use the average of the two weights?

        3. Weigh yourself a third time and use the median?

        While methods 1 and 3 might not seem like they involve much calculation, there is no doubt that method 2 does.

        Does method 2 improve the accuracy of the direct readings?

        Theoretically n weighings produce a more accurate result than one weighing, by a factor of √n. That’s an example where calculation can improve on direct measurement. No single direct measurement is trustworthy, but their arithmetic mean is √n times as accurate as any single one of those measurements. Yet not a single one of those measurements might have been the mean that you calculated.
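        A minimal numerical check of the √n claim (all numbers made up: a 'true' weight, Gaussian per-reading noise, and many simulated mornings at the scale):

```python
# Minimal sketch: averaging n noisy scale readings shrinks the scatter of the
# estimate roughly as 1/sqrt(n).  True weight and noise level are made up.
import numpy as np

rng = np.random.default_rng(0)
true_weight, noise_sd, trials = 180.0, 0.3, 100_000   # lbs

for n in (1, 2, 4, 16):
    readings = rng.normal(true_weight, noise_sd, size=(trials, n))
    err_sd = (readings.mean(axis=1) - true_weight).std()
    print(f"n = {n:2d} weighings   scatter of the average = {err_sd:.3f} lbs "
          f"(theory: {noise_sd / np.sqrt(n):.3f})")
```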

      • So what is needed here are HITRAN predictions validated by real world measurements, and more particularly, what we’re looking for is how the total intensity of radiation after gas filtration depends on gas concentration. This is not a question about the spectral density of the filtered radiation, nor how well the modeled spectral density fits a measured or derived real world spectral density.

        The total intensity of radiation is simply the integral of the spectrum over the frequency or wavelength range of interest. For atmospheric IR radiation that would be 4-50 μm or 20-2500 cm-1. If the calculated spectrum matches the observed spectrum then the integrals will match as well. For a gas it’s not simple absorption either: the gas emits as well as absorbs, so the spectrum of the radiation emerging from the end of the cell depends on the temperature of the source as well as the temperature of the gas in the cell. If the source (emissivity ~1) and gas are at the same temperature, the spectrum observed will be exactly the same as if the gas cell were not present: a blackbody spectrum for the source temperature. This fact is the basis for measurement of temperature by optical pyrometry, btw. If the source is cooler than the gas, there will be increased emission at the line wavelengths for the gas. If the source is warmer, there will be a reduction in emission at the same places. The Fraunhofer lines in the solar spectrum are an example of this.
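        The gas-cell behaviour described above can be written as the standard single-layer (isothermal slab) solution, I(ν) = B(ν, T_source)·exp(−τ) + B(ν, T_gas)·(1 − exp(−τ)), which reduces to B(ν, T_source) whenever the two temperatures are equal, whatever the optical depth. A minimal sketch (the optical-depth profile is invented purely for illustration):

```python
# Single-layer 'gas cell' sketch:
#   I(nu) = B(nu, T_source)*exp(-tau) + B(nu, T_gas)*(1 - exp(-tau))
# Equal temperatures -> the cell is invisible; cooler gas -> absorption dip;
# warmer gas -> emission spike (Fraunhofer-style behaviour).
# The optical-depth profile below is made up for illustration, not from HITRAN.
import numpy as np

h, c, kB = 6.626e-34, 2.998e10, 1.381e-23      # J s, cm/s, J/K (nu in cm^-1)

def planck(nu, T):
    """Planck radiance per unit wavenumber (consistent but arbitrary units)."""
    return 2 * h * c**2 * nu**3 / np.expm1(h * c * nu / (kB * T))

nu  = np.linspace(600.0, 700.0, 501)                   # cm^-1, around 15 um
tau = 3.0 * np.exp(-((nu - 667.0) / 5.0) ** 2)         # invented optical depth

def cell_output(T_source, T_gas):
    return planck(nu, T_source) * np.exp(-tau) + planck(nu, T_gas) * (1.0 - np.exp(-tau))

idx = np.argmin(np.abs(nu - 667.0))                    # sample at the band centre
for T_gas in (250.0, 300.0, 350.0):
    ratio = cell_output(300.0, T_gas)[idx] / planck(nu[idx], 300.0)
    print(f"T_source = 300 K, T_gas = {T_gas:5.1f} K  ->  radiance / source = {ratio:.3f}")
# Prints < 1 for the cooler gas, exactly 1 for equal temperatures, > 1 for the
# warmer gas, in line with the description above.
```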

        Try reading Chapter 31 in Vol. I of The Feynman Lectures on Physics.
        page 31-2:

        Now all the oscillations in the wave must have the same frequency

        [ emphasis added]

      • That would be 500 μm not 50. MODTRAN calculates for the range 100-1500 cm-1 or 100-6.7 μm, which gets about 90% of the spectral energy. Most of the rest is on the high frequency side.
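        That fraction is easy to check numerically; the sketch below integrates a Planck spectrum over 100-1500 cm^-1 and divides by the total. The 288 K surface-like temperature is an assumption (none was stated above), and the exact fraction moves around with it.

```python
# Quick check of what fraction of a Planck spectrum falls inside 100-1500 cm^-1.
# The 288 K temperature is an assumed, surface-like value.
import numpy as np

h, c, kB = 6.626e-34, 2.998e10, 1.381e-23      # J s, cm/s, J/K (nu in cm^-1)

def planck(nu, T):
    return 2 * h * c**2 * nu**3 / np.expm1(h * c * nu / (kB * T))

def trap(y, x):
    """Simple trapezoid integral."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

T  = 288.0
nu = np.linspace(1.0, 6000.0, 200_000)          # effectively the whole spectrum at 288 K
band = (nu >= 100.0) & (nu <= 1500.0)
frac = trap(planck(nu[band], T), nu[band]) / trap(planck(nu, T), nu)
print(f"fraction of blackbody energy in 100-1500 cm^-1 at {T:.0f} K: {frac:.3f}")
```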

      • Your comment has general value for a scientific dialog, so I posted a reply on the Radiative transfer discussion thread. Dr. Curry wants this thread transferred there.

  159. DeWitt Payne 12/21/10 10:56 pm

    The e-mail notice of your post says that it is in response to me at 12:36, and that is where it appears in the threading. However, the e-mail shows your response, but after a segment from my post citing a passage from Phil Felton.

    You say, >>To be precise the data for calculating line shapes is in the HITRAN database.

    Without the context shown in the e-mail, I can’t know what you’re being precise about. The question had been about using HITRAN data to calculate “the transition from logarithmic to a square root dependence”. This must be a reference to the total intensity across the entire spectrum. Is your response about line shapes supposed to be responsive? Are you suggesting that the total intensity can’t be calculated from HITRAN data? You emphasized the word calculating, but I don’t see what distinction you are making by that emphasis.

    If you really want to be helpful and in the process contribute to Dr. Curry’s noble objectives on this blog, you might be specific in categorically answering the eight questions that I posed to you at 9:11 am, above. The ninth and tenth questions about secret data from MLO and CRU you may take as rhetorical if you wish.

    • DeWitt Payne claims that the various shapes of the curve are in the HITRAN database and not in the summing algorithm.

      My reply was a correction to that statement. I said no such thing, btw. I thought it was obvious.

      • DeWitt Payne 12/22/10 11:13 am

        DeWitt Payne: >>My reply was a correction to that statement.

        Do you see now how your reply came out ambiguous on-line? That was because you didn’t cite what you were talking about.

        D. P. >>I said no such thing, btw.

        Not literally, but that’s why my statement was not in quotation marks.

        D. P.>>I thought it was obvious.

        You made it obvious as well, hence my attribution to you. See DeWitt Payne 12/20/10 1:28 pm with D. P., 12/21/10 12:27 am. According to you, we can use any line-by-line program, even a home grown one, to calculate the claimed results. I deduce from this that the logarithmic dependence is not a result of the calculation, but a consequence of the nature of the database. What I attributed to you was more than just logarithmic. It was instead a claim about “various shapes of the curve”, linear, square root, as well as logarithmic. For that, see D. P., 12/21/10 3:51 am. Altogether, I deduce from what you said that the various shapes of the RTE solution arise from the HITRAN database.

        Why don’t calculations with the HITRAN database saturate?

        By the way, you should be congratulated for showing a measure of responsiveness when on 12/17/10 8:44 pm you provided three links, including one to MODTRAN output spectra, and another to atmospheric measurements, looking down, both from unnamed sources (unfortunately). That’s a start. Now, where is your promised comparison between the two, showing the fit? A visual comparison won’t substitute for a calculation, though. As Turner, et al., Curry’s authority, above, and Lacis’s authority on the new “RT discussion thread”, said, RT has to be accurate to within 1% for GCMs.

        Do the model and measurements represent the same atmospheric profile?

        Next, how do these results extend to the computation of the global average OLR?

  160. Curryja 12/21/10 Radiative transfer discussion thread,

    On 21st at 2:49 pm on the AGU … Part III thread, I posted a comment on albedo in response to Hunter. It didn’t translate correctly, so I added an explanatory footnote. Then at 3:44 pm, you replied,

    >>heres to hoping that we can move this discussion over to the new radiation thread

    If you literally want to MOVE discussion, I think you’re going to have to do that as the administrator, deleting there and reposting here. We posters may, with some difficult self-discipline, drop discussions there and continue them here, and in that spirit, and for other reasons, I will repair and repost the albedo comment here.

    By the way for those who might want to post with some ordinary symbols, I want to share how I believe WordPress misinterpreted my albedo post. I had posted this sentence, with the stuff in the square brackets, [], in symbol form, angular parentheses:

    >>Albedo warms when ΔT [less than symbol] 0 because ∂A/∂T [greater than symbol] 0, and when ΔS > 0 because ∂A/∂S < 0.

    WordPress dropped everything between the first [less than symbol (open right angular parenthesis)] and the first [greater than symbol (open left angular parenthesis)] as if the enclosed material were a non-printing comment, or an html phrase it couldn't decipher. The next two such symbols reproduced correctly I think because they did not form a pair around enclosed material.

    I wonder if WordPress will accept a slash, \, or some other symbol, as a command to interpret the next character literally? Next I will write html commands using &, ;, and lt and gt, using this post to test WordPress before rewriting the albedo post. Is this < the Less Than symbol? Is this > the Greater Than symbol?

    I was encouraged that the normal html code for ∂, comprising an &, the word part, and a semicolon, translated correctly except where dropped in the enclosed angular parentheses.

  161. Jeff Glassman | December 21, 2010 at 12:36 pm | Reply
    A linear curve is the wrong answer. So is the square root. And the logarithm, too. And a transition from one to the other is equally wrong. The Beer Law result is on point and solves the problem, but the AGW movement doesn’t like it (obviously). IPCC’s model requires knowledge of where climate is on the curve, whatever model fits. Where, quantitatively, is the climate today with regard to saturation?

    The Beer Law can’t be the answer since we’re dealing with broadband illumination. When dealing with broadband radiation you have to take the width of the lines into account, which depends inter alia on temperature and pressure. The line width is customarily described by the Voigt profile, which is a convolution of a Gaussian and a Lorentzian. Under optically thin conditions we get a linear dependence on concentration; once the gas is no longer optically thin, the line center saturates and the dependence on concentration comes mostly from the Gaussian ‘wings’ and flattens off; with further increase in concentration the dependence switches to the Lorentzian ‘wings’ and increases again, but never returns to linear. This is the ‘curve of growth’. In the case of the 15 μm band of CO2 there are many lines, some of which are very strong (near the core) and some very weak (out on the edge of the band); the overall dependence is less than linear and at our atmospheric conditions is conveniently represented as logarithmic. This is the result of saturation; your naive concept of saturation doesn’t happen.
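    The curve of growth described above is easy to reproduce numerically for one line. The sketch below uses an illustrative Voigt line (parameters invented, not HITRAN values) and computes the equivalent width W(N) = ∫[1 − exp(−N·S·φ(ν))]dν as the column amount N increases: linear while optically thin, flattening as the core saturates, then roughly √N once the Lorentz wings dominate.

```python
# Curve-of-growth sketch for a single Voigt line (illustrative parameters,
# not HITRAN data): equivalent width vs. column amount N.
import numpy as np
from scipy.special import voigt_profile

S = 1.0                               # line intensity, arbitrary units
sigma_D, gamma_L = 0.02, 0.002        # Doppler (Gaussian) and Lorentz widths, cm^-1

nu  = np.linspace(-5.0, 5.0, 200_001)             # detuning from line centre, cm^-1
phi = voigt_profile(nu, sigma_D, gamma_L)         # normalised Voigt line shape

def equivalent_width(N):
    absorbed = 1.0 - np.exp(-N * S * phi)
    return float(np.sum(0.5 * (absorbed[1:] + absorbed[:-1]) * np.diff(nu)))

for N in (1e-3, 1e-2, 1e-1, 1.0, 10.0, 100.0, 1000.0):
    W = equivalent_width(N)
    print(f"N = {N:8.3g}   W = {W:10.4e}   W/N = {W / N:9.3e}")
# W/N is nearly constant at small N (linear, Beer-like regime), drops sharply
# once the core saturates, and at large N the growth goes roughly as sqrt(N)
# through the Lorentz wings.  A real band sums many such lines of very
# different strengths, which is what yields the roughly logarithmic
# band-integrated behaviour discussed above.
```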

  162. Hunter 12/21/10 10:52 am, on the AGU … Part III thread, asked

    >>How is albedo a strong positive feedback?

    >>Does albedo now warm things?

    Taking the second first, albedo does not warm in the sense of a heat source. It warms things as a feedback. And even at that, it is in the backward sense of cooling things less as albedo decreases. Albedo warms when ΔT < 0 because ∂A/∂T > 0, and when ΔS > 0 because ∂A/∂S < 0.

    Albedo is the most powerful feedback in all of climate because it directly multiplies solar radiation to produce insolation at the surface. That power is known by examination of the radiation budget IPCC uses to begin its AGW model. See the ever-popular Kiehl and Trenberth (1997) diagram, e.g., AR4, FAQ 1.1, Figure 1, p. 96.

    Surface albedo is a powerful negative feedback causing Earth to lock into its cold state: a climate warmed slightly by internal heat, a small solar contribution, and orbital stresses, but with heat lost to deep space for lack of greenhouse gases. When, for reasons that are a matter of speculation, Earth starts to come out of its cold state, the white begins to disappear and for a while the unveiling of darker surfaces is a powerful positive feedback. In a few score millennia, surface albedo effects lessen as the white covering retreats to the poles, and surface albedo becomes eclipsed by clouds.

    Cloud albedo is a powerful negative feedback to warming, and serves to regulate Earth’s temperature in the warm state. Surface warming increases humidity, which increases cloudiness when cloud cover is humidity limited, not CCN limited (as is usually the case), turning down the closed loop gain to the Sun. This is a slow process, governed by the heat capacity of the ocean.

    Cloud albedo is a powerful positive feedback to solar radiation. Cloud cover appears to be proportional daily to solar activity. Radiation from the Sun alone (no dependence on greenhouse effects) accounts accurately for Earth’s complex record of surface temperature (σ = 0.11ºC) over the last century and a half with just a handful of constants. See SGW, http://www.rocketscientistsjournal.com . However, that model shows that an amplifier to solar variability exists in Earth’s climate, and cloud albedo is the perfect, otherwise unused, candidate. The decrease in peak daily cloudiness is no more than the commonly witnessed burn off effect.

    IPCC does not model cloud albedo feedback. It parameterizes cloud albedo, apparently giving it a constant but statistically reasonable value. Being parameterized means that the loop response to the Sun and the loop response to surface temperature are both open, and open with respect to the cloud albedo feedback. Because that feedback is the strongest in climate, since it gates the Sun on and off, a fair characterization of IPCC’s GCMs is that they run open-loop. As a result it underestimates solar influences. This is because ∂A/∂S < 0. As a further consequence, IPCC attributes measured global warming from the Sun wrongly to human activity.

    At the same time, IPCC also overestimates its climate sensitivity parameter, λ, which it puts nominally at 0.5 ºC/Wm^-2. TAR, ¶6.2.1, p. 534. This is because ∂A/∂T > 0. The overestimation appears to be by a factor between 7 and 10. The cloud albedo feedback, omitted by IPCC, regulates Earth’s surface temperature, mitigating all warming effects as long as the ocean remains liquid, and most importantly, it mitigates the greenhouse effect.
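    For readers following the arithmetic: combining the nominal λ quoted above with the commonly cited simplified CO2 forcing expression ΔF = 5.35·ln(C/C0) W m^-2 (Myhre et al. 1998; an external assumption, not something asserted in the comment itself) gives the implied warming per doubling, before any dispute about the value of λ.

```python
# Arithmetic behind the quoted lambda: with dF = 5.35*ln(C/C0) W m^-2 for CO2
# (the standard simplified forcing expression, an external assumption here),
# a doubling gives ~3.7 W m^-2, and lambda = 0.5 K/(W m^-2) implies ~1.9 K.
import math

lam   = 0.5                        # K per (W m^-2), nominal value quoted above
dF_2x = 5.35 * math.log(2.0)       # W m^-2 for a CO2 doubling
print(f"dF(2xCO2) = {dF_2x:.2f} W m^-2   ->   dT = {lam * dF_2x:.2f} K per doubling")
```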

    The point with regard to this thread is this. Even though radiative transfer fails to meet its goal of 1% accuracy (in estimating radiative forcing for the GCMs (Turner, et al. 2005)), the importance of that failure, and thus of the significance of radiative transfer itself, are strongly mitigated by cloud albedo feedback.

  163. Okay, up front, need to admit that it took New Year’s
    Eve to bring me to this, but – it is like this – The physical models have accounted for all that you harp on. If it is physical, it is in the GCM models unless it is parametrized to some awful extent. The basic physics are known, it is just a matter of how it is brought to a level of computational reality. We have to find a path forward as Judith so profoundly advocates and most of you so desperately seek, as do we all.

    As for an introduction here – I suspect Judy knows who I am, I’ve been dealing with this all for a while.

    Lurking-are-us,
    Ells

    • If it is physical, it is in the GCM models.

      Boy, that was a bolt from the blue. What does it even mean?

      Particle physics, quantum mechanics, Ohm’s law, are all physical. What does it mean that they’re in the GCM models? All models? At least one model? And why does membership in one or more models matter?

      And what question are you addressing here? Apparently not whether CO2 is a serious concern since the connection is not obvious. If how to “find a path forward,” do you have a suggestion?

      • Vaughan,

        Yes, a bit of hasty typing there. Should have read more like, “The relevant physics as we best know them are included in most (or many of the) models.” And yes, GCMs.

        A path forward, for me at least, would be to have a message previewer available here.

      • I may have missed it, but I haven’t seen much discussion of GCMs on this blog. I know very little about them myself. Are you up to offering Judith a post on the topic? (The lead item on a thread.)

        I don’t know about others but I’d be interested in knowing what are the most important physical phenomena to include in a GCM, and are they adequately represented in the preferred (or any) GCMs?

        (Presumably not the three I mentioned, particle zoo, QM, Ohm’s law.)

        Also, where would the physical principles underlying the hydrologic cycle rank among your top candidates?

        And are the Hadley cells relevant to GCMs? If not, why not? If so, is their physics understood yet? Is it clear why there must be three per hemisphere? What would it take to make it two, or four?

        Is, or should there be, any difference between GCMs for studying climate change, vs. those for climate?

        Maybe others will have questions about GCMs they’d like to see answered too, before you’ve finalized your post.

      • actually i am planning a post on climate model parameterization (the stuff not explicit in the dynamical core), but i am falling pretty far behind here. If someone wants to do a guest post, that would be great.

      • Steve Fitzpatrick is summarizing important aspects of radiative energy transport and its interaction with material in the Earth’s atmosphere, at Jeff Id’s tAV.

        I think many of these phenomena and processes are represented by parameterizations.

      • Very interesting material by Fitzpatrick. His reasoning in “If aerosol effects are in fact small (for example, near 0.4 Watt/M^2), climate sensitivity is almost certainly quite low” points up a fundamental weakness of the theoretical computation of climate sensitivity based on physics: that we likely don’t have either enough data or enough insight into how the many complex agents of climate change interact to make reasonable predictions.

        This is why I like working directly from long-term data, incorporating just enough reliable theory into it as needed to round it out to a workable model capable of reliably answering a very limited range of questions. That’s the basis behind the model used in my “prediction” of the last 30 years of global temperature based solely on the data up to 1980, and the barely perceptible effect on the model when the (strikingly different) data between 1980 and now is used to retrain the model. Note in particular how the instantaneous climate sensitivity changed only from 1.832 to 1.837 °C per CO2 doubling, a difference of .005, despite being fed three decades of temperature data that no one would have considered credible in 1981. (However the respective 20-year-delay transient climate responses as defined by the IPCC and computed by the model were 2.808 and 2.692 degrees per doubling, a much bigger change of 0.116, which shows amongst other things that the IPCC’s notion of transient climate response is harder to estimate purely empirically than the naive instantaneous notion based on zero-year-delay.)
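        For concreteness, here is a minimal sketch of the general kind of fit being described, not Vaughan Pratt's actual model or data: regress a temperature series on log2(CO2) plus a sinusoidal 'oscillation' term by ordinary least squares, with the log2 coefficient playing the role of the instantaneous sensitivity per doubling. The series here is synthetic, generated from a known sensitivity, purely to show the mechanics.

```python
# Minimal sketch of a purely empirical fit: temperature ~ log2(CO2) + oscillation.
# Everything here (CO2 history, 'true' sensitivity, 65-yr cycle, noise) is
# synthetic and illustrative; it is not Vaughan Pratt's model or data.
import numpy as np

rng   = np.random.default_rng(1)
years = np.arange(1850, 2011)
co2   = 285.0 + 105.0 * ((years - 1850) / 160.0) ** 2        # made-up smooth CO2 curve
true_sens = 1.8                                              # K per doubling (synthetic truth)
temp = (true_sens * np.log2(co2 / 285.0)
        + 0.10 * np.sin(2 * np.pi * years / 65.0)            # made-up 65-yr oscillation
        + rng.normal(0.0, 0.08, years.size))                 # observational noise

# Design matrix: intercept, log2(CO2), sine and cosine of the oscillation.
X = np.column_stack([np.ones(years.size),
                     np.log2(co2 / 285.0),
                     np.sin(2 * np.pi * years / 65.0),
                     np.cos(2 * np.pi * years / 65.0)])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
residue  = temp - X @ coef     # analogue of the 'Residue' curve mentioned below
print(f"fitted instantaneous sensitivity: {coef[1]:.2f} K/doubling (truth {true_sens})")
print(f"residual standard deviation: {residue.std():.3f} K")
```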

        The black curve labeled “Residue” is the black mark against the model, showing where it and reality part company. The departures could be caused by any or all of suboptimal choice of parameters (maybe the 75-year PDO is really 70 years, maybe we should be allowing 20 years instead of zero for CO2 to impact temperature), faulty reasoning about the principles (maybe ocean oscillations are systematically non-sinusoidal, maybe anthropogenic CO2 didn’t grow exponentially before the start of the Keeling curve), and unplanned impacts (volcanoes occur so often as to be almost planned, not so for the Tunguska event and World War II).

        What I believe this shows is that if you base your model largely on past history, backed up only with a little elementary theory that you believe to be accurate beyond reasonable doubt, and if you permit your model to answer only a very limited range of questions (such as “what will the global climate averaged over the last 12 years be in 2047?”), you have a shot at getting reliable answers to them.

        These answers don’t depend on Steve Fitzpatrick’s concern as to whether we’re taking sufficient notice of aerosols because we let nature calculate that dependence to date for us.

        Granted some new factor may enter the picture to change that dependence in the future, e.g. permafrost meltdown, but what are the odds of any computer model introducing that factor on its own initiative before it enters? When that happens we’ll know the Kurzweil singularity has arrived.

      • I found Vaughan’s reconstruction highly valuable for pedagogical purposes. I even found it relevant ‘here’, given that, facing our common difficulties with the theory, Vaughan recommends relying on a few good strong foundations, which he places in the field of observation.

        I must say I haven’t got any degree nor job labelled ‘pure and applied logic’. However, my own understanding of basic logic tells me that a correlation by itself doesn’t provide any evidence, whereas the lack of any correlation indeed proves something. I even think those are two good strong principles of which there can’t be any doubt. Ignoring the unknown is something that nobody can avoid, and this kind of basic logic stuff precisely takes away the problem: sure, it won’t help much in finding the Grail, it just keeps you on the grounds of genuine science. Annoying details.

        That’s one thing. But cherry-picking implies another: of all the known but unproven candidates as causes of climate variability, which one(s) do you prefer? For the main – exponential – shape in such graphs, why not take a look at the trend in solar magnetic intensity, for example?

        Sorry it was hard for me not to think about Mann’s (and others’) methods to get hockey sticks. Not least because of the strong feeling I had when looking at the striking correlation between CO2 and temperatures in the first place.

        The main inconvenient ‘details’ were of course to be found in the ‘principal components’. I’d say the first two at least, possibly at odds with the history of science and with what we know about past climate. Jackpot?

        Second top rank, a scoop: I didn’t know we’d discovered such a striking correlation between oceanic oscillations and a time function merely composed of two sinusoids. Now, even the oceanic oscillation frequencies are found in the spectral analysis of solar activity all the same. So I suppose combining gravitational effects of planets on the dynamo, even for half a dozen planets, could give such a very smooth function… as long as you take a small window, let’s say a dozen decades. Why not?

        As for the blade (nice one), we’re left with the same old two-fold trillion-dollar question: why on Earth were temperatures so many times higher than now (about every 1000th year in the Holocene, for example), and why are the reconstructions, at any time scale, so obstinate in showing a lack of correlation between CO2 levels and temperatures most of the time, and, the rest of the time, nothing to suggest that a sharp increase in CO2 levels in the atmosphere can trigger a significant (detectable) increase in temperatures?

        (It’s far beyond the field of our basic logic stuff to suggest, as Miskolczi did, that the former could even be a damper for the latter.)

      • (Dummy reply to terminate the italics (phew). Judith, if you fix Sam’s comment please delete this one.)

    • hi ells

    • Ells,

      You write: “The basic physics are known, it is just a matter of how it is brought to a level of computational reality.”

      I’m sorry, but I call BS. The basic physics are not known. There is a great deal we do not know about clouds. The CERN experiment is just starting to make progress. Roy Spencer has put forward a cloud hypothesis that may help, but it has not really been tested yet. LeMouel just found a solar signature in the temperature data. People are still arguing about aerosols (although I honestly think most of the uncertainty there has been cleared up by Chylek’s work). Plus, there could be dozens of other factors we don’t know about.

      The observation system for the ocean is much improved over 10 years ago, but there are still uncertainties. If the basic physics were known, then we would be able to explain where the heat was going. The best we can tell, the heat is not going into the oceans the way the theory calls for.

      The basic physics are not known. We don’t even know all the unknowns. Quit overstating our knowledge. You only embarrass yourself.

      • Maybe Ells meant that we know the governing equations for fluid flow, so we don’t need any “basic research” to discover new physics. But we don’t have the computing horsepower to directly simulate across all of the relevant scales, so we do need plenty more experimental work, and the things you mention are among the reasons why.

        Assume good faith every now and then. It will surprise people. Happy New Year!

      • jstults,
        I don’t know what Ells meant to say. I just know what he said wasn’t true. I hope nothing I wrote could be seen as assuming bad faith. How is one to react when someone writes 2+2=5?

        This overstatement of the physics is the biggest reason physicists tend to be skeptical of climate change. The statement made by Ells does more damage to his side than good. Every time someone makes a statement like that, the IPCC cause loses credibility.

        I did not wish to be ungracious. Ells appears to be someone who should know better. My attempt to wake him to the facts may have been unkind. For that I apologize.

        Will you forgive me, Ells?

      • No need to apologize, yours was not an unreasonable interpretation of what I wrote, although that was not exactly what I meant. I did not mean to indicate that all the physical understanding is included (sort of took that for granted), but instead that GCMs constitute an assemblage of all the relevant physics that are known. However, there are limitations due to numerical computation realities and some rather crude parameterizations. As you point out, there is plenty that is not known (and I did not make that explicit). Nonetheless, the best known and arguably the most important components of the models are what lead to the suggested range of warming results, which need to be improved upon. My question was how best to achieve this? (Here I am using GCM to refer to the comprehensive 3-D General Circulation Models, not all of what might be considered Global Climate Models, which can also be useful.)

    • The basic physics are known, it is just a matter of how it is brought to a level of computational reality.

      I will say, there is no single physical phenomenon or process modeled in any GCM, or even in more focused models / codes, that is based on the complete and un-modified basic physical principles for that phenomenon or process. Consider two simple examples: the equation of state for air and heat conduction in solid materials. The former always, as far as I know, is assumed to be an ideal gas, and at the same time some constituents and their effects are completely ignored; the latter generally requires simplifications / idealizations of both the geometry and the properties of the materials.

      The statement is another example, in my opinion, of the extreme over-simplification and incompleteness associated with the public face of almost all aspects of the global climate change problem: from the concept of equilibrium radiative energy balance and idealization of the Earth as a black body, to equilibrium of the Earth’s climate systems, to radiative-energy transport in a homogeneous mixture of gases. The Earth’s climate systems correspond to none of these.

      Many important aspects will never be attacked from a first-principles basis; the range of spatial scales will always be beyond resolution by discrete approximations and numerical solution methods. Some phenomena and processes associated with small spatial scales have equally small temporal scales, and these, too, will never be resolved. Some of these might not be critically important. But some will be.

      The original statement could perhaps be modified to limit its range of applicability to only the critically important basic physics. But in my opinion the mathematical models and calculated results so far presented do not yet indicate the correctness of such a limitation.

      • Dan,
        You picked somewhat surprising examples, as the equation of state of air is very close to that of an ideal gas as long as the relative humidity does not reach 100%, and heat conductivity in solid materials is also one of those things where the idealization gives very good results for normal homogeneous materials. You could equally well have picked the Navier-Stokes equations, turbulence, and the dynamics of condensation of water in atmospheric conditions. Calculation of atmospheric processes from first principles is very much based on these phenomena, where the distance from first principles to real-world models is significant.

        I have learned that the details of GCMs follow well-known physical principles and models to a reasonable extent, but this is indeed quite different from being directly based on the basic laws of physics.

        Your comments about unavoidable problems present in all complex continuum models are also true. How much these and numerous similar issues affect the reliability of climate models is certainly a very difficult question. The climate modelers know well the difficulties, but public discussion on these issues is not as common and open as I would like to see. I have read some articles on the reliability of the models, but all too few to form a good picture of the status of the field.

      • How much these and numerous similar issues affect the reliability of climate models is certainly a very difficult question. The climate modelers know well the difficulties, but public discussion on these issues is not as common and open as I would like to see.

        And this is the reason why discussions of climate models on the internet are full of so many uncritical cheerleaders who have no idea what they are cheering for. Even many “climate scientists” only have what I’d consider an informed layman’s understanding of what goes on “under the hood” of a GCM. Lots of the outspoken but peripherally involved activist types (e.g. computer scientists, biologists, ecologists, various impacts researchers, well-meaning science communicators) have no freaking clue what it takes to solve the Navier-Stokes equations or what a “first principles” code actually is (or how very far GCMs are from ever doing/being that).

      • or what a “first principles” code actually is

        Ok, I’ll bite. What is it?

        If “first” means coming even before solving the Schroedinger equation for the hydrogen atom I’ll be suitably impressed, especially if the code takes less than a week to predict the weather ten seconds from when it starts running (i.e. less than five orders of magnitude short of real time).

      • I took a cut at differentiating between a computational physics approach and a process model approach here.

      • Just as I’m thinking about the comments by Pekka and Josh, it occurs to me that the public face of every aspect of climate science is the spherical-cow version: starting with temporal chaos, through “it’s a boundary-value problem,” and on to radiative-energy transport in homogeneous mixtures of gases and equilibrium states.

        The IPCC reports are very thin on characterizations of the distance between the spherical-cow versions and the nitty-gritty of actual implementation into mathematical models and solution by numerical methods. The software itself is mentioned only in passing and does not receive any consideration at all relative to its critical importance.

    • Ells’ “bolt from the blue” distracted me from his next sentence,

      The basic physics are known, it is just a matter of how it is brought to a level of computational reality.

      until Ron Cram’s objection to it drew my attention to it. While I disagree with Ron that science is most effectively based on the analysis of other people’s emails (I find tea leaves work slightly better), I find myself in agreement with him on Ells’ stand on the place of physics in climate modeling.

      For me the problem with Ells’ sentence is the same as if a physicist had written that predicting the susceptibility of James Watson to certain diseases from his complete genome can with a sufficient level of computational reality be done on the basis of physics alone. Or to take a time-reversed example, predicting the rate of recovery of the economy can with enough computational reality be done on the basis of macroeconomic and microeconomic principles alone.

      I predict with considerable confidence that no one is going to assassinate Osama bin Laden during the next seven days. No one would dream of making that prediction by simulating the physics of all the people likely to come in contact with him in order to establish for each the probability of a successful attack. Instead I base my prediction on the special care reportedly taken to protect him, and the apparently low incidence of enemies getting anywhere near him.

      If Albert Smith of Woodside, CA, impressed by my track record at predicting Bin Laden’s survival, hired me to predict his safety from murderers, I’d tell him he had an even better chance, despite the fact that he has no bodyguards and I have no information on how many attempts have been made on his life in the last ten years. Instead I’d point out that he’s too harmless to have enemies and he lives in a very low crime rate neighborhood.

      If you argue that climate is different from people, being much simpler than them, I’d first point out that my arguments about Bin Laden’s and Albert Smith’s safety were already pretty simple. I’d then ask whether anyone has worked out even the basic structure of the Hadley cells from first principles, let alone its fine structure in any useful sense. The whole climate system is somewhere between one and four orders of magnitude more complex than the Hadley cell machinery, so if you haven’t yet accounted for the latter it is wishful thinking imagining you have even a prayer of accounting for the former in a useful way based just on the physics.

      Much as I love physics (as hopefully my posts have made clear), I am a realist when it comes to its limitations. Ells seems to be claiming that we will be able to predict what the climate will be generally like based on physics alone. I beg to differ.

      Instead we should study the climate and determine what factors influence what observables. We can then use our understanding of the physics (broadly construed) to come up with plausible explanations of those influences, leading to techniques for accurate estimation of the relevant parameters.

      This is very different from thinking about the physics of the situation and predicting what ought to happen. For one thing if factor X has no empirically discernible influence on observable P, even though theory offers us a wonderfully detailed account of that indiscernible influence, it is a waste of time for a computer program to spend many hours estimating the parameters of that influence, only to discover at the end that they’re irrelevant. Or worse yet, to continue to believe in their relevance thereby pointlessly slowing down further modeling.

      I should clarify that I am not making any claim here about what GCMs do or do not do. I am merely disputing the feasibility of Ells’ apparent program of basing climate modeling first and foremost on physics. For something as complex as climate it should be based on our empirically determined understanding of the agents of climate change.

      Physics can only make reliable predictions about well-understood systems. It is an elementary mistake to think that we can reliably make the passage from a poor understanding of a system to a good one by theory alone, which is what Ells appears to be arguing for when he says “it is just a matter of how it is brought to a level of computational reality.”

      Until a system is intimately understood, theory is at risk of explaining phenomena that on inspection turn out not to even exist in practice.

      • Vaughan,

        You might want to take a look at my reply to Ron above for some further clarification on this. In any event, thanks for your comments, I do agree that it is always good to have a simpler approach.

      • I did not mean to indicate that all the physical understanding is included (sort of took that for granted), but instead that GCMs constitute an assemblage of all the relevant physics that are known.

        The key word in your response to Ron that addresses my point about empiricism being preferable to pure science in the case of climate modeling is “relevant.” It would be nice to see some elaboration on it.

        If the purpose of GCMs based on physics in the way you seem to be implying is to demonstrate that physics can handle climate modeling on its own, then I have no problem with that, especially if it turns out to work better than other approaches to climate modeling.

        If however the rationale for GCMs based on physics is that this is the best approach, then I would find that hard to accept. I would distinguish such GCMs from celestial navigation programs that are based on physics, which I could more readily imagine as the best way to go because there are far fewer open problems in celestial navigation, or so I imagine.

        I do agree that it is always good to have a simpler approach.

        Which in fact you are likely to get if you emphasize your term “relevant.”

        But for me simplicity and relevance are just nice spinoffs of a more fundamental point, that the empirical must take priority over the physics in modeling so complex a system as climate. That’s not to say there will be any Nobel prizes in physics coming out of the empirical contradicting the physics, but only that such an inconsistency may be indicating a bug in how the relevant physics was applied. Applying physics to simple systems should not have too many bugs, but for complex systems it is easy even for the professionals to make mistakes, which may pass unnoticed until some discrepancy emerges.

        Observation and theory can guide each other, but in some situations one of them has to take the lead. QED can predict some things with stunning accuracy such as the natural width of spectral lines (relevant here), but not the mass of the top quark, which is outside the scope of QED and requires observation.

        It seems to me that climate science is one of those areas where observation needs to take the lead.

      • Vaughan:

        Observation and theory can guide each other, but in some situations one of them has to take the lead. QED can predict some things with stunning accuracy such as the natural width of spectral lines (relevant here), but not the mass of the top quark, which is outside the scope of QED and requires observation.

        It seems to me that climate science is one of those areas where observation needs to take the lead.

        QED is a complicated theory that adds few predictions to earlier theories. It can predict purely electromagnetic details with extreme accuracy. Its validity is confirmed strongly already by one single experimental comparison, because of the accuracy of the most stringent comparisons. QED is also a theory that cannot in any sense be derived from earlier theories, but introduces new basic principles.

        Climate science is in a sense in the opposite extreme. There is a huge amount of empirical observations about atmosphere and other earth systems, but no single decisive proof of validity. It is expected that no new basic physical principles are involved, the well known principles are taken into account as far as scientists are able to do it, but very many details must anyway be described by semi-empirical correlations and simplifying parameterizations. Whatever can be kept of firm theoretical understanding is one component for building trust in the models. The rest comes from empirical comparisons.

        The comparisons cannot be described by one single number as for QED, but are numerous and diverse. Each single comparison proves very little, but their totality perhaps more. The totality of very diverse observations is, however, impossible to judge or even use at all without the help of models that connect them to each other and to the underlying theory. This is the reason for dependency on large GCMs.

        You may optimistically assume that a simple time series is informative and gives useful understanding about the future, but this approach does not offer any means of judging where the limits of validity are, or whether some new processes are becoming important to the extent that a historical correlation fails badly already in the near future.

        Back to the big models. Keeping the numerical methods stable enough for giving any meaningful results sets limits on their structure. It is known that these choices make the models incapable of describing well some types of real physics, but it is difficult to judge how much of significant real dynamics is suppressed and how much this affects the results relevant for climate.

        Even basic physical principles like exact conservation laws cannot be wholly included in the models. The finite-element approach is favorable for handling conservation laws, but in a complicated system not all of them can be implemented exactly for finite volumes; only in the limit of zero volume and zero time step can all be simultaneously valid. Grid-based models have different advantages and weaknesses. The deviations from exact conservation laws are typically corrected by some averaging method, but this is not fully equivalent to the real physics.

        The atmosphere is not a single-phase system, but a two- (or three-) phase system with phase transitions and flows involving gases, liquid droplets and solid particles. The real dynamics of phase transitions is complicated. The problems of handling cloud formation and its connection with aerosols are well known to be very difficult and lacking a full theory even at the microscopic level. All kinds of turbulence and eddies are impossible to handle fully even in purely aerodynamic flows, etc.

        One of the sources of trust is said to come from the similarity of results of different groups, but this tells little about difficulties common to all models due to the common deficiencies in understanding of basic processes or inherent in the requirements of getting the models to converge.

        How much the climate modelers really know about the problems of the above type, and how well they can justify the validity of the results in spite of the open issues, are questions that I would like to learn much more about. I would expect that modelers have good answers to some of these questions, but less convincing ones to others; I do not know whether I can find a good discussion of these issues anywhere in a form understandable to anyone but the very best among the active modelers themselves.

        My purpose is not to add to the doubts about the value of the models, but rather to say what kind of information would be needed to influence people who have a fairly strong background in physics and in modeling. If these people had a better basis for their judgments, they might gradually contribute to the trust of a wider audience. As long as only very few people deeply involved in the research can really judge the value of the model results, it is difficult to add to the trust. This is true in particular because it is impossible to judge even who among the modelers really understands the limits of his or her own work. From other fields I know all too well that many scientists use complicated models without even a rudimentary understanding of the limits of their validity. Only a fraction of them are really up to the task.

      • Pekka:

        I was unable to find a single essential point of yours on which I disagree–unusual since I can usually do much better. I’ll try again tomorrow, but for the moment I doubt if any disagreement I come up with will be terribly significant. I get the feeling we see pretty much eye to eye on these things, difficult though they may be conceptually.

        Vaughan

      • Richard S Courtney

        Pekka and Vaughan:

        I am an AGW-skeptic but I agree with your views on this.

        Perhaps we are at last starting to find the common ground that Dr Curry established this blog to attain?

        Richard

      • I am an AGW-skeptic

        Me too. I’m sceptical that modeling should begin with physics rather than with observation, and I’m sceptical of any conclusion obtained from first principles.

        As far as explaining observations goes, I have no objection to using physics for that, or any other aspect of our understanding of nature that might account for the observations.

  164. I found it to have high pedagogical value (and in that way relevant ‘here’). I must say I haven’t got any degree nor job in pure or applied logic. However, my own understanding of basic logic tells me that a correlation by itself doesn’t provide any evidence. I even think this is a principle of which there can’t be any doubt. I also ‘believe’ that the lack of any correlation indeed proves something.

    Ignoring the unknown is something nobody can avoid, and this kind of basic logic stuff precisely takes away the problem: sure, it won’t get you the Grail, it just keeps you on the grounds of genuine science. Annoying details… That’s one thing. Sometimes cherry-picking partly implies preferences.

    Among all the known but unproven candidates as causes of climate variability, why not take a look at the trend in solar magnetic intensity, for example?

    The main inconvenient ‘details’ were of course to be found in the principal components. I’d say the first two at least, respectively at odds with the history of science and with correlations to what we know about past climate. Jackpot?

    Second top rank, I was amazed to learn we’d discovered such a striking correlation between oceanic oscillations and a time function merely composed of two sinusoids. Now, oceanic oscillation frequencies are found in the spectral analysis of solar activity all the same. So I suppose combining gravitational effects of planets on the dynamo, even for half a dozen planets, could give such a very smooth function… as long as you take a small window, let’s say a dozen decades. Why not?

    As for the blade (nice one), we’re left with the same old two-fold trillion-dollar question: why on Earth were temperatures so many times higher than now (about every 1000th year in the Holocene, for example), and why are the reconstructions, at any time scale, so obstinate in showing a lack of correlation between CO2 levels and temperatures most of the time, and, the rest of the time, nothing to suggest that a sharp increase in CO2 levels in the atmosphere can trigger a significant (detectable) increase in temperatures? (It’s beyond the field of our basic logic stuff to suggest, as Miskolczi did, that the former could even be a damper for the latter.)

    NOTE: this landed in spam, showed up as one big link, i took out all of your formatting to fix it

    • why on Earth were temperatures so many times higher than now (about every 1000th year in the Holocene, for example)

      Well, what do you know, I had no idea. Where can I apply to study under the people that might bring me up to speed on this very interesting point of view, about which I know very little? I confess I have been operating under the delusion that 2010 is the hottest year on record for many millions of years.

      and why are the reconstructions made at any time scales so obstinate to show lack of correlation between CO2 levels and temperatures most of the time, and the rest of the time, nothing to suggest that sharp increase in CO2 levels in the atmosphere can trigger a significant (detectable) increase in temperatures ?

      Even more remarkable. You evidently move in a circle of people who are a veritable fount of novelty.

      If I were only as clear-eyed and wise as you I would see the world as it is, and make better decisions. You can help me there by giving me insight into where I have gone astray, so as to most efficiently set me on the path to the truth.

      The best way to understand how I came to my deluded state of mind is to invest the time required to enjoy Richard Alley’s talk “The Biggest Control Knob: Carbon Dioxide in the Earth’s Climate History” at the 2009 AGU meeting in San Francisco. This is a talk dumbed down to the kinds of people with nothing better to do than attend the annual American Geophysical Union meeting. You should therefore find it a little slow paced, but you may enjoy it nonetheless if you should happen to be interested in the question of how CO2 and temperature have been related in “deep time,” meaning a great many millions of years ago.

      At the 6-minute point of this talk, Professor Alley says “CO2 keeps inserting itself everywhere we look. If you leave CO2 out nothing makes sense. If you put CO2 in a whole lot of it makes sense. And then you can put the other pieces into the puzzle and make it work.
      CO2 keeps being the only explanation for a lot of what happened which is validated, that works.”

      I have no idea how I would refute that, especially after seeing the rest of his talk where he backed this up. I would therefore greatly appreciate it if you could help me see your point of view by pointing out all the things that are wrong in his talk, which currently I find very convincing.

    • I’ve watched the AGU video, Vaughan. ‘Interesting times’, yes…

      Is he on drugs or something? Sorry, I sincerely don’t like these ways (especially for an introduction – please see the ‘rest’), but this guy’s show (and sharp voice) awfully resembles that of a tele-evangelist (he leads the public applause in the first place when saying that sceptics should loud; is he too mad to be so unconscious of his own behaviour?) So you must understand that it was very painful for me to watch that all the way. Not to mention the cascade of flaws in what he says, and the fact that he doesn’t provide the slightest evidence for ‘his’ big conclusion. Let alone the intrinsic terror in the message. In short: scary.

      Of course it’s impossible for me to give detailed replies to even 1/10 of what he claims. I’m much too busy for that, with my own job (engineering – mainly modelling vibrations, thermo-mechanical fatigue, and design of turbo compressors for automotive use; that’s what pays my bills these days), political tasks (several big issues such as national and EU institutions, currency policies and other economic matters), family… and climate.

      However, I do maintain that you’ll find remarkable information and, all in all, striking refutation of this show’s content and conclusions in innumerable papers, and even a great many books, written by internationally known geologists, physicists, …
      As our over-excited R. B. Alley talks a lot about geology, I would especially recommend Ian Plimer’s ‘Heaven and Earth‘ (2,300 references, 99% of them scientific publications).

      When you talk about ‘2010 [being] the hottest year on record for many millions of years‘, I suppose you’re kidding? But you and I somehow started with humour… and as a result I have a real doubt… As far as I remember, one of the most extensive collections, in a single book, of thousands of studies blasting away that ridiculous idea is F. Singer and D. T. Avery, ‘Unstoppable Global Warming: Every 1,500 Years‘.

      Knowing that you’re mostly interested in observation, I underline that numerous studies involve the ‘closest possible proxies’, from incredibly patient work with human archives (and old languages) to signs of tropical plants and animals in Siberia 8,000 years ago, apart from much longer and more diverse series than ice cores.

      Please believe me that it’s just for lack of spare time that I won’t give more direct links. At least see:

      – these reconstructions of temperatures and ice accumulation in central Greenland for the last 10,000 years (up to 1900, bad luck): http://www.pensee-unique.fr/images/sam1.jpg I made this graph from data provided by the NOAA ftp site (indicated on the picture – last time I intended to send the address of the data, but that’s what made my post fail). I strongly recommend that you make your own graphs from those data, especially including the glaciation period (from 50,000 years ago), in order to look at the striking events (violent warming – violent return to cold) which occur every 1,500 years on average (I forgot their name, associated with two researchers with complicated surnames). So I add the direct address here in the text, cut in two in order to avoid post failure:
      .ftp://ftp.ncdc.noaa.gov/pub/data/paleo/icecore/greenland/
      summit/gisp2/isotopes/gisp2_temp_accum_alley2000.txt

      – Maybe a good start for a personal inquiry in the issue of correlations CO2 / temperatures in the last million years, there: http://wattsupwiththat.com/2010/12/30/the-antithesis/

      Now, I feel mostly engaged to try a more direct answer to the other face of the two-fold trillion-dollar question, concerning the lack of correlation between atmospheric CO2 and temperature levels when looking at reconstructions of past climates at any time scale. Of course, I was struck by what R. B. Alley says looking at his big knob. Precisely, I’ll now have a look at one sequence of the AGU video.

      Here we have another big issue with elementary logic. I’m afraid it’s worth repeating the main point of my previous post, because that’s exactly the point here: 1. a correlation by itself proves absolutely nothing, whereas 2. the lack of any correlation indeed proves something.

      At 19:18, our climate evangelist tells a ‘very nice story’ (his own words), gesticulating in front of a graph showing reconstructions of atmospheric (?) CO2 levels with indicators for glaciations.

      1. At best, the graph suggests that a low atmospheric CO2 mean level sustained for a very long time could contribute to triggering a period of glaciation(s).

      2. A more valuable piece of information, at least regarding the conjecture of ‘man-made global climate change’: for all we can see in that same graph, one deglaciation clearly started well before the CO2 levels got higher; for the others, there’s no link at all.

      Absolutely nothing there to suggest (let alone demonstrate) that CO2 contributes to triggering temperature elevations or even to maintaining high temperature levels. Though incidentally, our professor uses that in order to ‘demonstrate’ what his knob wants. The point is not what the graph says, nor is it what it doesn’t show. The point is what the actor says the graph shows. And that’s glaring evidence of scientific misinformation and abuse (whether it’s deliberate or not is another question). Just divide the time scale by 500 and you get a mere remake of a famous scene of misinformation in Al Gore’s best-selling film (including the actor’s show, minus the unbearable sharp voice). In fact, it’s even worse, far worse.

      Among big ‘details’: Alley’s graph involves absolutely no proxy for high or even medium levels of temperatures. Listening properly (remember I’m French) I noticed what a careful reading should have told me: here the only vague index of ‘global temperatures’ doesn’t even concern ice thickness at the poles but ‘paleolatitudes’ of ice presence.

      Incidentally, you can see in that very same figure that the mean value of the ice extent towards the equator for the last 10 million years or so is shown to be about 40 degrees (North or South)… And you're telling me that the people providing your information claim 2010 is the hottest year ever 'recorded' in many millions of years? No, seriously…

      Besides, I'd rather insist on the fact that the time scales in this graph are incredibly long and, of course, far from us (let's forget the issue of confidence in the reconstructions). According to that graph, 'low CO2 levels' in my point #1 above means less than 1 500 ppmv (a level which the human race is most probably unable to make happen even in a very distant future). As for the time scale, 'a very long time' in my point #1 is of the order of 10 million years. Our terrifying joker himself, in the video, evokes (no doubt with similar 'half reading' there) some of the chemical processes which are good candidates to explain the low trends of atmospheric CO2 over geological times. And that is one of the big problems with our professor here: the data he uses to show that 'all works when you take into account the CO2, and all is fuzzy if not' are essentially as irrelevant as his equally domineering gestures. 'Plus c'est gros, plus ça passe' – the bigger it is, the more easily it goes down.

      In other words, all we 'see' there is information about situations which are clearly incomparable to the possible present and far-future problem for mankind. But those weighty, irrelevant figures are somehow effective… – the appropriate expression in French for this top-rank technique of any good propaganda (misinformation) is 'T'as vu l'avion ?' ('Did you see the airplane?'). Apart from that, all those details about this 'crushing' stuff only reinforce, a fortiori, my point, which is about basic logic and the correlation issue, and which applies equally to the four different time scales I considered: those of that graph, of Gore's, of Mann's vs. Mauna Loa plus ice-core 'measures' for CO2, and of yours.

      One (more) very strange thing with this graph is that, if the professor wanted curves of temperature reconstructions at the same scales, it would have been no more difficult to collect them than the CO2 ones. So why didn't he give that relevant superimposition instead of the one with his index of 'ice paleolatitude'? The big clue is of course to be found in the range of high and medium temperatures. In order to shorten this already long reply, may I suggest that you try this very interesting exercise yourself?

      Yet another thing that single graph doesn't show – I'm afraid 'hides' would be more relevant here. In fact, for the CO2 part, this graph is quite well known. But this one has smoothed values (he says '5-point smoothing'… everybody understands). The very high range of uncertainty and the differences between curves (you'll have noticed Alley says it's rather good) are one thing, but it's worth noting that other well-known versions of this CO2 graph show a much higher time resolution. Just take a look at them and then tell me if you see anything to support the idea that variations in atmospheric CO2 levels induce temperature variations. And please, just do it before you want to have those mysterious signals unfiltered with some of your nice techniques (of course, I've got nothing against those attempts to draw away residues, as long as you don't do that instead of whatever else can be done).

      • > I would especially recommend Ian Plimer’s ‘Heaven and Earth‘ (2 300 references, 99% of them being scientific publications).

        George Monbiot has a series on Plimer's work:

        http://monbiot.com/index.php?s=plimer&sbutt=Find

        The most interesting one is certainly his Showdown with Plimer:

        http://www.monbiot.com/archives/2009/12/17/showdown-with-plimer/

      • Willard,

        excuse me, but this information is of relatively low interest. Of course Ian Plimer, like everybody, makes mistakes (even I saw several candidates when reading his book – though I must say I can't remember Plimer claiming that satellites and radiosondes show no warming; I thought he only said that they show less warming in the middle of the troposphere than at the surface). And those ones, if true, are really minor. I proposed this book because of the huge quantity of geological issues in it.

        I think the series of posts I proposed (before and after this reply) raises much more decisive questions regarding the problem as a whole. Nobody will say you're wrong for focusing on particular mistakes. But we're still waiting for the first bit of proof that we indeed have a problem with anthropogenic global warming. Sorry, I think we'd rather start from there. And I even suggested that if we don't, then we are probably facing an endless issue. Tell me it's just my opinion, if you think so; if you simply drop the question, you'll understand that I would think you're essentially wasting your time running after an infinite number of inquiries.

        Unfortunately for academics ('believers' or 'deniers'), they just don't have the freedom to put things this way; they're forced to go on researching the details. And there's no question that they make mistakes; everybody does. Whereas I (and you?) are free even to question the way the general question is put.

      • Jacques,

        Not only does Ian Plimer say things that seem to be false, but he repeats them and does not back away from them when confronted with their falsehood. The errare humanum est seems to beg for the usual ending of that Latinism.

        If we were to start with the overall argument, and not the details, as you seem to wish, we might as well try to argue against Alley's formulation of the argument:

        > CO2 keeps inserting itself everywhere we look. If you leave CO2 out nothing makes sense. If you put CO2 in a whole lot of it makes sense. And then you can put the other pieces into the puzzle and make it work. CO2 keeps being the only explanation for a lot of what happened which is validated, that works.

        Please bear in mind that I do not want to argue with you, but only want to understand how all these questions fare against Vaughan's main argument, which seems to me to be that bit quoted from Alley's presentation.

        Speaking for myself, I'll simply note that talking of a "decisive" question hints at the myth of the experimentum crucis, which kinda tells me that your position rests on shaky epistemological ground. This impression is reinforced when reading the speechifying surrounding the proof you are asking for regarding CO2. Evidence-based reasoning, and most of the empirical sciences, are best construed abductively.

        Dismissing most of the empirical sciences by demanding that their theories be supported by formal deductions is a trivial exercise.

      • Waughan, Willard,

        I was happy this morning to discover that my real identity is in fact Jacques Duran (http://www.pensee-unique.fr/auteur.html)! Thanks a lot. At least you've offered me an additional occasion to laugh today. As a matter of fact, I have never even had a personal acquaintance with that man.

        Having no weblog myself (let alone a blog specialized in climate matters), I once sent him my graphical treatment of some data, which he put (among others) in an appendix to one of his online articles for laymen… It was I who indicated the link to J. Duran's site… Do you think this man (or I) is so stupid? You sympathetic detectives may want to check it yourselves, before I once again evoke certain researchers who often seem satisfied to make their own reality happen… So what about searching for the words "merci Sam" in this page: http://www.pensee-unique.fr/oceans.html ?

        Once again, regarding our common concern here, you're essentially wasting your time prattling about people rather than about scientific content.

        Please call me Sam; it's simply what my family calls me, and my colleagues as well.

        Yours sincerely,

        Samuel Schweikert

      • Sam,

        Sorry for my misunderstanding. It was only meant to acknowledge the fact that you linked to Jacques's site when showing Alley's graph. Happy to know that only the graph (and perhaps other small bits) was authored or edited by you on that site. Pensée unique is certainly a website that merits due diligence. Another time.

        Sadly, I see that you have taken this as an opportunity to forgo all the arguments in my reply. You fail to acknowledge that Plimer has been caught misrepresenting. You still proffer that Popperian claptrap, which incidentally is not quite what Popper held.

        Your argument rests on concepts of causation and proof that merit due diligence.

        Until next time,

        w

      • You’ve gone in the wrong direction all the way, here, Waughan. I’ve rarely seen such misfortune: obviously mistaken from the beginning to the end.

        For a start, see Richard S Courtney's answer (just below), which is as good as it is short. All of the rest follows.

        Then see mine (below Richard's), also addressed to people having difficulties with reading and elementary logic, and also bringing us back to the big issue: not only are we still waiting for the first bit of proof that CO2 has any consequences for surface temperatures in the real world but, while waiting for it, scientifically speaking, there is already enough proof of the contrary.

        A nice story

        It's equally worth recalling how the IPCC dealt with that serious matter.

        Knowing that you can't fool anyone entering those marshy grounds, the institution chose another approach, the Big Bertha, consisting of entirely avoiding the issue while focusing on an entirely irrelevant one (which, incidentally, is nonsense).

        In short, the IPCC said: from those curves we can't draw any conclusion, given that the ice cores were taken in different places.

        Let's forget the fact that other conclusions of that same report indeed rely on the same curves. Obviously, the point is that, for a given time, each point in both the CO2 and the temperature proxy curves comes from the same bubble, whatever the location of the ice core. (And even if we consider the issue of uncertainties in absolute dating, it is of very low importance here.)

        Re-using Alley’s words: a nice story.

        Willard,

        regarding the main issue about causation and correlation, see the other posts I’ve sent this morning, in particular the one sent on ‘January 6, 2011 at 6:54 am’ and the one after that.

        As for discussions about mistakes made by anyone, I thought my previous answer said:

        – you're probably right there, though the author could also have interesting things to say;

        – anyway, please understand that I consider it not to be a decisive point. My own conclusion at this stage is that the whole problem is probably an endless issue. Sorry, I don't want to 'kill the debate', but this is what I think. The ability to analyze the questions, and not only the answers, is a very important aspect of democracy. Here, one of the consequences of my 'hypothesis' (in terms of methodology, but also for the conditions and even the future of the debate) is that it could be very important to restart from basic principles. So, please, don't focus on people. People are of course the big point in the end (including in the issue of a hypothetical problem with CO2-induced warming), but this doesn't change what I say about the starting point.

      • Pretty sorry: yet another problem with the content of my posts… (only the answer to Willard was to be sent here). Maybe it’s time for me to have a short pause. :)

      • Let me take advantage of the pause to apologize for going along with willard’s identification of you with Jacques Duran and making far too much of it. I also apologize to Dr. Duran for taking out on him, as a consequence of this misidentification, my frustration with your line of reasoning.

        I did look for confirmation that willard had made the right connection, and found it in your "http://www.pensee-unique.fr/images/sam1.jpg I made this graph from data provided by NOAA ftp", from which I inferred it was very likely to be your website. Now that I understand how your name ended up on M. Duran's website it greatly changes the probabilities, which I readily acknowledge and for which I'm sincerely sorry.

      • I again encountered Plimer’s name recently:

        > Plimer often simply reverses the conclusions of papers cited when it suits his purposes, a fact he didn’t deny when it was put to him three times on ABC television. Astrophysicist Michael Ashley described his book Heaven and Earth as “scientifically worthless” in The Australian.

        http://scienceblogs.com/deltoid/2011/01/tolgate

      • Sam,

        I agree that Plimer might have interesting things to say. Even his claim that volcanoes emit more CO2 than humans is interesting, in my opinion. It is both interesting and wrong.

        Moreover, Plimer still maintained that claim in front of the camera when questioned by Monbiot. This is more than being wrong, or it is not wrong in the same sense. The evidence is such that the only reason to refuse to admit that humans emit more CO2 than volcanoes bears a name in psychology:

        http://en.wikipedia.org/wiki/Denial

        That is not to say that Plimer can't say things that are true, or cite interesting works among the 2 300 references (99% of them scientific publications) he cites, or provide interesting contrarian hypotheses, or whatnot.

        But you have to admit that saying simply:

        > Plimer’s book has 2 300 references, 99% of them being scientific publications.

        certainly does not portray the same kind of dedication to the pursuit of truth as saying

        > Plimer's book contains many claims that are already contradicted by the best evidence we have, claims that Plimer still maintains when confronted by journalists, on TV.

        Details matter here. Being wrong is not the same thing as denying the evidence. Citing an author's book and supporting its importance by the number of references is not the same thing as mentioning that the author is known to deny the evidence.

        Details also matter regarding causation. Vaughan Pratt has a very strong rebuttal there:

        http://judithcurry.com/2010/12/05/confidence-in-radiative-transfer-models/#comment-28894

        I might be able to add some more general remarks regarding causation another time.

      • A reviewer of Plimer’s book:

        > The arguments that Plimer advances in the 503 pages and 2311 footnotes in Heaven and Earth are nonsense. The book is largely a collection of contrarian ideas and conspiracy theories that are rife in the blogosphere. The writing is rambling and repetitive; the arguments flawed and illogical.

        Source: http://www.theaustralian.com.au/news/ian-plimer-heaven-and-earth/story-e6frg8no-1225710387147

        Michael Ashley is professor of astrophysics at the University of NSW.

      • Willard,

        once more: this thread is about confidence in radiative models; dealing with evidence (or the lack of evidence) that those models describe valid theories is of the utmost importance; so are counterarguments to counterarguments… as long as they're linked to the issue.

        In citing Plimer's book to reply to secondary arguments, I, like almost everybody else, made a mistake. If you're only interested in that point, then I consider you off the subject. Anyway, you'll find delectable counterarguments to claims from 'both sides'.

        Now, with or without relation to the issue in this thread, I suppose it's worth investigating further into this particular affair. However, I'm surprised to see you sending links like that very short and empty comment from Michael Ashley (then Tim Lambert only quoting the former), who invokes peer review and lines up unsupported claims when broadcasting in the mass media… Have you realized that, in 'The Australian' article, our astrophysicist hasn't even found it useful to give a single point in his own speciality out of those numerous Plimer claims that are supposed to be laughable? Nothing about the Sun, nothing about extraterrestrial dusts and comets, … and of course nothing about the relations between solar activity and climate, though only uninformed laymen would think that is not a question for astrophysicists. The only thing he says is about one other book saying silly things about what's inside the Sun… And Ashley also found a lot of room in the newspaper to talk about other such silly books, about his own speciality, and about an uninteresting development in Plimer's book introducing his foreword… Maybe you should read Plimer's book to realize how feeble those comments are, and that it is doubtful Ashley has read more than a few pages of Heaven and Earth. Sorry, this is laughable. Now please, go on like that if your intention here is to show that Sam would rather avoid answering when asked to: you're sure to win by exhaustion.

      • > The only thing he [Ashley] says is about one other book saying silly things about what’s inside the Sun…

        Here is the relevant paragraph from Ashley’s review:

        > Plimer probably didn't expect an astronomer to review his book. I couldn't help noticing on page 120 an almost word-for-word reproduction of the abstract from a well-known loony paper entitled "The Sun is a plasma diffuser that sorts atoms by mass". This paper argues that the sun isn't composed of 98 per cent hydrogen and helium, as astronomers have confirmed through a century of observation and theory, but is instead similar in composition to a meteorite.

        In that paragraph, Ashley notes that Plimer is reproducing "almost word for word" a contrarian paper in astrophysics, a paper that contests that the Sun is made of hydrogen and helium…

        This suffices to show that to say, as Sam does, that Ashley did say there is

        > Nothing about the Sun […]

        is plainly false. It also suffices to show that to say, as Sam does, that

        > [O]ur astrophysicist hasn't even found it useful to give a single point in his own speciality out of those numerous Plimer claims that are supposed to be laughable?

        is equally false. We clearly see that Ashley mentioned that the Sun is considered to be composed of hydrogen and helium. In fact, Ashley comments thus:

        > It is hard to understate the depth of scientific ignorance that the inclusion of this information demonstrates. It is comparable to a biologist claiming that plants obtain energy from magnetism rather than photosynthesis.

        ***

        Finally, we acknowledge Sam’s admission:

        > In citing Plimer's book to reply to secondary arguments, I, like almost everybody else, made a mistake.

        We agree.

        If details matter, we can see that

        > Sam est dans les détails. [Sam is in the details.]

        is less certain than Vaughan Pratt might have suggested.

      • Willard,

        I'm still waiting for you to come back to the issue of this thread in one way or another. Perhaps you could do as announced and add more general remarks regarding causation. Of course, I'm 'here' to discuss the proofs that we have a problem with the CO2 level in the atmosphere.

        In the meantime, you're acting exactly as Ashley did with Plimer's book (which you haven't read). The method consists of taking very small bits of what someone has said, even if entirely irrelevant to the issue, and even distorting references to other people who themselves have said something wrong, even if the statement you're supposed to criticise has no relationship with the invalid words of the latter. Then, just because someone has made a mistake somewhere, you try to discredit everything he says. Moreover, while behaving like an authoritarian detective… you ask the others to do the job properly themselves. I've already asked you to allow me to focus on that and avoid wasting a lot of time on irrelevant issues. You're carrying on, though. That's a harassing method, which speaks for itself.

        You're right, there was not zero but one bit of the book's content in Ashley's paper. Now, having read the passage at stake (it seems you didn't do so yourself), I judge that Ashley's reference doesn't deserve to be called a quotation or even a faithful reference. Not to mention its relevance to the issue of the climate on Earth.

        Ashley notes that Plimer is reproducing "almost word for word" a contrarian paper in astrophysics, a paper that contests that the Sun is made of hydrogen and helium

        As Ashley put it himself, what Plimer reproduced is a passage from the abstract of that paper (see footnote #516 in Plimer's book, page 120: Manuel, O., Kamat, S. A. and Minoza, M. 2007: The Sun is a plasma diffuser that sorts atoms by mass. Astrophysics 654: 650-664). Ashley hasn't said what the passage quoted by Plimer says; rather, he lifted something from the content of that 'plasma diffuser' paper. However, in Plimer's passage there is nothing contradicting the idea that the Sun is composed of 98 per cent hydrogen and helium, or claiming it is similar in composition to a meteorite. This is only a very short passage (9 lines), providing nothing but a general view of a sorting of lighter and heavier elements between the Sun's core and surface, and it is drowned in long developments about the Sun's conveyor belt, magnetic field, internal structure, etc., using plenty of other references. Plimer's chapter entitled 'The Sun' contains 294 references in footnotes, associated with studies concerning the Sun but also other celestial agents and, of course, (cor)relations with the climate on Earth.

        This very passage says that 'light elements like hydrogen and helium and lighter isotopes of other elements' are conveyed toward the surface; that hydrogen ions [are] generated by emission and decay of neutrons at the core, are accelerated upward by deep magnetic fields, thus acting as a carrier gas that maintains the separation of lighter from heavier components in the Sun, etc. As you can see, this describes a "living" process nobody would imagine in "dead" bodies like stony or iron meteorites… Anyway, there's obviously nothing there to explain or contradict anything about the variations of climate on Earth.

        I thought you'd have understood that conclusion of mine, but perhaps I'd better express it clearly: not only are those methods clearly unworthy of science and fair debate, but they also show the authors' smugness and faith. In short, ideology.

        What Ashley chose to pick and say, or not say, to fill the format of a newspaper comment is something that 'speaks for itself'. The same goes for your behaviour, as far as I know it.

        Of course it’s not laughable at all… knowing that the issue has become a very serious political matter.

        Last thing, Willard: for the same reason, I'm not at all surprised that Plimer had difficulties with journalists' interrogation about those unfortunate passages.

        Now, thinking about similar situations for "global warmists", I don't think one can reasonably expect them to 'completely lose face' before the public. As science always progresses through errors, it is obvious that human dramas of that kind happen all the time. Not to mention the cases where scientists are deeply involved with political decisions that have already been taken (think about Mann, for example). So, unlike you, I try to forbid myself from focusing on those human aspects (I'd rather consider human aspects as the central issue in the end, not in that part of the scene) and instead choose to analyse the content of the arguments from both sides.

      • Ok, close to midnight and I’ve managed to get enough stuff off my desk for the moment so I don’t feel guilty about returning to Climate Etc.

        Regarding Sam’s point,

        a correlation itself proves absolutely nothing,

        I knew from Jean-Yves Girard’s work on logic that France has a very different view of logic from the US. This seems to extend to the concept of correlation.

        In France, apparently correlation proves absolutely nothing. In Australia, the US, and I would hope the UK, scientists take correlation very seriously. Otherwise why would they be interested in orthogonality of vectors as an indication of lack of correlation? Why would quantum mechanics take the bra-ket concept seriously if correlation meant nothing in physics? Correlation is also fundamental to statistics: without covariance matrices statistics would not exist as a subject. In medicine correlation is tremendously important, in many situations a matter of life and death. In systems theory homeostasis is an expression of correlation in which the agents take turns being cause and effect, just as with temperature and CO2, and with credit card debt and interest payments. Maybe when credit cards are introduced into France, Alley’s use of them to illustrate his point will make more sense to Sam.

        Supposedly Sam has a background in quantum mechanics. That he seems unaware that correlation is fundamental to quantum mechanics suggests that perhaps his role was more administrative than technical, as one might infer from his former title of Director of Studies (1996-2003) at the Ecole Supérieure de Physique et Chimie de Paris (ESPCI).

        This is further borne out by his mindless repetition of all the standard denier cliches at his pensee-unique website. Sam, if you have an original thought buried in all the usual strawman arguments and misquotations of climate scientists, you could save us all a lot of searching by pointing it out. For example you attribute “Unless we announce disasters, no one will listen” to Sir John Houghton, which neither he nor you came up with, he because it would make no sense and you because you’re just mindlessly parroting what your fellow deniers say.

        Such misattributions are routine in this debate. I’ve been surprised at how many things I myself am supposed to have said that I can find no evidence for.

        Then there’s the “discredited hockey stick.” Gee, how creative of you, Sam. However did you come up with the brilliant idea that maybe there was some problem with Mann’s hockey stick analogy?

        Had Sam had something original to say it might be worth coming up with an original rebuttal, whence my interest in his pointing me at any such. Everything I’ve found so far has been rebutted so many times that it’s a waste of time doing it yet again.

        Re-reading his responses in this thread more carefully, I can now see what I misinterpreted in my earlier too-hasty reply, for which I apologize. You can ignore my request for a comparison with the scale of GISP2 and global Holocene, Sam, it’s clear from your replies that I’m not going to get a straight answer from you on that one, perhaps because you’re wondering whether the answer might undermine the relevance of that graph to global warming. Your caution is well placed.

        I was thrilled that I was even able to get Sam to watch the Alley talk at all. So you can imagine my disappointment that he was unable to follow its logic. The only explanation I could come up with was that to appreciate the reasoning underlying Alley’s talk involves an understanding of the concept of feedback that physicists and chemists are less familiar with than credit card companies.

        I realize Santa will consider me more naughty than nice to put it this way, but under the circumstances I’ll forgo a toy or two next Christmas and do it anyway. You can lead horses to water, but you can’t make them drink. And you can cast swine before pearls, but you can’t make them think.

      • Richard S Courtney

        Vaughan Pratt:

        Well, you used many words to present nonsense.

        The facts are:
        Absence of correlation proves lack of causation.
        Presence of correlation proves nothing about causation.

        There, that corrects your misunderstanding.

        Richard

      • I believe that Vaughan replied to what Sam said earlier:

        > However, my own understanding of basic logic tells me that a correlation itself doesn't provide any evidence, whereas the lack of any correlation indeed proves something. I even think those are two good strong principles of which there can't be any doubt. Ignoring the unknown is something that nobody can avoid, and this kind of basic logic precisely takes away the problem: sure, it won't help much in finding the Grail, it will just keep you on the grounds of genuine science. Annoying details.

        Annoying details indeed.

        I hope this corrects any misunderstanding.

      • The facts are:
        Absence of correlation proves lack of causation.
        Presence of correlation proves nothing about causation.
        There, that corrects your misunderstanding.

        Thank you, Richard, for adding another item to my growing stack of things I haven’t said. I have never said anything to contradict your first statement.

        I must however take exception to your second statement. Presence of a reliable correlation between two variables is a strong indicator of a causal effect relating the two, either by one causing the other, each causing the other, or the two having a common cause. This is particularly true when you can control one of the variables yourself, since you can then use the correlation to implement a communication channel, with the controlled variable as the transmitter and the other as the receiver. When you have the choice of which you can control you have a half-duplex or two-way channel (full-duplex is an opposite pair of simplex or one-way channels), and for such a correlation obviously each variable has to act causally on the other. If you have a counterexample to any of this I’m sure we’d all be very interested in seeing it.

        Here’s a simple experiment anyone can do with two equal weights, a cord, and a pulley suspended from something. (For best effect use a large diameter pulley with a low friction bearing; for this experiment I found a pulley on a Bowflex exercise machine and two 1 lb weights worked well.) Run the cord over the pulley and tie one weight to each end. Use a flat surface to raise both weights a few inches then let them slide off together. Whichever leaves first will end up being the one moving down when the cord tightens. The height above the floor of one is beautifully correlated with the depth below the ceiling of the other.

        But what is causing what, would you say? Presumably the falling weight caused the other to rise; on the other hand the rising one is causing the falling one to fall at a steady speed instead of accelerating. Beautifully correlated, yes, causation, who can say, yet the mechanism is patently clear!

        One can use either weight to send PCM (pulse-code modulation) messages to the other. This setup therefore can function as a half-duplex channel.
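        Here is a minimal numerical sketch of that pulley setup (all numbers are assumed purely for illustration, not taken from the experiment above); it simply checks that the height of one weight and the depth of the other move together almost perfectly:

        ```python
        import numpy as np

        # Toy model of the pulley demo: an inextensible cord couples the two weights,
        # so the height of weight A above the floor and the depth of weight B below
        # the ceiling rise and fall together. Numbers are illustrative only.
        rng = np.random.default_rng(1)
        t = np.linspace(0.0, 10.0, 500)
        height_A = 1.0 + 0.5 * np.sin(t) + 0.01 * rng.normal(size=t.size)  # metres, with measurement noise
        depth_B = 0.3 + height_A + 0.01 * rng.normal(size=t.size)          # cord constraint plus its own noise

        print(np.corrcoef(height_A, depth_B)[0, 1])  # ≈ 0.999: beautifully correlated
        ```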

        Had you paid close attention to Alley’s talk you would realize that he was describing a relationship between CO2 and temperature that works essentially the same way. They move together, like tango partners, though far harder to see.

        The example Alley used was in some respects even better: credit card debt and interest, where it is clear that the debt kicks off the interest, just as the Earth’s orbit kicks off the deglaciation, but once you’re in debt they drive each other, just as CO2 and temperature drive each other.

        It's the same deal with the math for the debt-interest relationship in home mortgages. People think of their mortgage payments as paying off their house, but initially most of it is interest and relatively little goes towards what you actually paid for the house. Only when the principal gets so low that you don't owe that much interest on it does most of your monthly payment go towards the house itself.

        A mortgage is an example of an amplifier whose gain is increased by a positive feedback. The gain is the ratio of principal-plus-interest to the principal alone. The gain is greater for 30-year mortgages than 15-year. In the pulley system the gain is unity, or slightly less with friction, because the feedback is neutral, being neither positive nor negative.

        This example of Alley’s is slightly harder to visualize than my pulley-and-weights example if you’re more of a physicist than an accountant, but it’s the same general idea: debt causes interest, but interest also causes debt, so there’s a correlation there that isn’t strictly causal in either direction.
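        To put rough numbers on the mortgage analogy, here is a minimal sketch (the principal and rate are assumptions chosen for illustration, not figures from the discussion) computing the "gain" as defined above for 15- and 30-year terms:

        ```python
        def mortgage_gain(principal: float, annual_rate: float, years: int) -> float:
            """Ratio of total principal-plus-interest paid to the principal alone."""
            r = annual_rate / 12.0                              # monthly interest rate
            n = years * 12                                      # number of monthly payments
            payment = principal * r / (1.0 - (1.0 + r) ** -n)   # standard amortization formula
            return payment * n / principal

        # Assumed example: a $300,000 loan at 5% per year.
        for years in (15, 30):
            print(years, "years: gain ≈", round(mortgage_gain(300_000, 0.05, years), 2))
        # Prints roughly 1.42 for 15 years and 1.93 for 30 years: the longer loan has the higher gain.
        ```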

        This indefinability of causation was the whole point of Alley’s talk. He pointed out some lingering doubts in the scientific community about just how well correlated CO2 and temperature were in “deep time,” and discussed recent work narrowing that gap.

        To believe that two signals can remain locked in remarkable phase with each other over hundreds of thousands of years without some common cause is to think like Leibniz, the coinventor with Newton of calculus. Leibniz’s theory of monads postulated that mind and body were set in parallel motion eons ago by a perfect God who timed them so perfectly that whenever your brain tells your arm to rise it does so with no causal connection, even 6,000 years after God started the monads running. The deglaciations of the 2.6 million year Quaternary Period have been synchronized in that way over 400 times longer than the Biblical age of the Earth.

        Their synchrony is remarkable. To use slight imperfections in the phase locking of CO2 and temperature that are down in the noise as determining which is cause and which effect is grasping at straws. By that logic one can argue that the falling weight is the cause and the rising weight the effect because the former was travelling slightly faster when the cord tightened. Fine, but that doesn’t mean the rising weight has no effect on the falling one.

        Your notion of cause and effect in nature is too simplistic.

      • Absence of correlation proves lack of causation.

        I have never said anything to contradict this because I have never had occasion to address this particular denier argument before.

        The bottom line is that this argument displays a woeful lack of understanding of statistics, physics, electrical engineering, and anywhere else that either orthogonality or oscillations can arise. Let me address it now.

        If you have two variables f and g such that f(t) = c*g(t) for some constant c then we would both agree that there is a clear correlation.

        If however the integral of f(t)*g(t) over their whole common domain is zero, then we would also both agree that they are orthogonal as standardly defined.

        Where we part company is on whether orthogonality is a sufficient condition for lack of correlation. Here are two counterexamples.

        1. f(t) = sin(t), g(t) = cos(t). Here f and g are without doubt orthogonal. But in what sense could anyone think they are uncorrelated?

        2. Take two sticks and nail them together at right angles. They are now orthogonal, but if I wave one around the other waves too, while the two sticks remain at right angles. Orthogonal yet correlated.

        The second example becomes the same as the first when the sticks are rotated anticlockwise with unit angular velocity in the XY plane about their common point, taken to be the origin. This is because the y coordinate of the two non-fixed ends traces out the respective functions cos(t) and sin(t), with the leading stick tracing out cos(t).
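        A quick numerical check of the first counterexample (my own sketch, taking a full period [0, 2π] as the common domain): the inner product is essentially zero, the zero-lag correlation is essentially zero, yet shifting cos by a quarter period reproduces sin exactly, so the two are perfectly "correlated" once the lag is accounted for.

        ```python
        import numpy as np

        t = np.linspace(0.0, 2.0 * np.pi, 10_000, endpoint=False)
        f, g = np.sin(t), np.cos(t)

        inner = np.trapz(f * g, t)                                  # ≈ 0: orthogonal over a full period
        zero_lag = np.corrcoef(f, g)[0, 1]                          # ≈ 0: no correlation at zero lag
        quarter_lag = np.corrcoef(f, np.cos(t - np.pi / 2))[0, 1]   # = 1: cos delayed by 90° is sin

        print(inner, zero_lag, quarter_lag)
        ```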

        This is relevant to temperature and CO2, because the action of each on the other necessarily has some delay. There is no such thing in nature as perfectly timed coupling between correlated variables; that only happens in mathematics.

        Were CO2 to consistently lag temperature, this would not prove absence of causation from CO2 to temperature. It would at best prove that, if CO2 influenced temperature, then the associated delay was longer than for the converse influence.

        I would accept claims that CO2 consistently lagged temperature from the scientific community, but not from the denier community. In neither case however would I accept that a consistent lag in either direction established the absence of a causal connection, any more than I would accept that a consistent lag of 90 degrees by the trailing stick in example 2 demonstrates that it cannot act causally on the leading stick.

        Such a situation is encountered in a “tank” circuit or tuned LC circuit. Thinking of the inductor L as a flywheel, it can be seen that current lags voltage since voltage is needed to get the current started. But in the capacitor current leads voltage because current is needed to get the capacitor to charge: initially the voltage is zero, and as the current rushes in at top speed (cos(t)) the voltage gradually rises (sin(t)) while the current is falling, until the voltage reaches its peak at almost the instant the current has dropped to zero (mathematically the exact instant, but not quite in practice due to inevitable imprecisions in definition, measurement, etc).
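        In symbols (a standard textbook relation, added only to restate what the paragraph says): for the capacitor branch, i = C dv/dt, so a current i(t) = I cos(ωt) gives

        ```latex
        v(t) = \frac{1}{C}\int_0^{t} I \cos(\omega\tau)\,d\tau = \frac{I}{\omega C}\,\sin(\omega t),
        ```

        so the voltage peaks a quarter period after the current, which is exactly the cos(t)/sin(t) pairing described above.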

        In any system capable of oscillatory behavior, all bets are off as to what is causing what in the oscillation. You can't just say that the leading variable is the cause and the lagging variable the effect when there are multiple variables differing in phase, such as current and voltage in the LC tank example.

        But harmonic motion is not the only such situation. The functions exp(t) and exp(t + 1) − 5.05366896… are orthogonal on the unit interval [0,1]. Yet any value of one determines the corresponding value of the other, so if you could control one you could control the other.
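        A quick numerical check of that constant (my sketch; the constant is just the value that makes the inner product on [0, 1] vanish):

        ```python
        import numpy as np

        # Orthogonality on [0, 1] requires  ∫ exp(t) * (exp(t+1) - c) dt = 0,
        # which gives  c = e*(e^2 - 1)/2 / (e - 1) = e*(e + 1)/2.
        c = np.e * (np.e + 1.0) / 2.0
        print(c)  # 5.05366896...

        t = np.linspace(0.0, 1.0, 100_001)
        inner = np.trapz(np.exp(t) * (np.exp(t + 1.0) - c), t)
        print(abs(inner) < 1e-6)  # True: numerically orthogonal on the unit interval
        ```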

        On the face of it one would think exp(t+1) must be leading exp(t) by one time unit. However exp(t+1) = exp(2)*exp(t-1), a multiple of a function lagging exp(t) by one time unit.

        So just as with oscillations, exponential decays and rises that are orthogonal can be ambiguous as to which is leading and which lagging.

        This is particularly relevant to our exponentially growing population with its exponentially growing per capita consumption of fuel, which becomes an exponentially growing CO2 contribution to the atmosphere. In this case one would like to say that CO2 surely leads temperature. But even here that cannot be said reliably, since each acts on the other as we well know.

        So we cannot meaningfully say that CO2 leads temperature even in the scenario in which we are responsible for CO2, since we lack a mathematical definition of “leads” that gives an unambiguous answer.

        I’m sure deniers will figure out some way to turn this observation to their advantage. Whatever their shortcomings in scientific rigour, they more than make up for them in the department of creative writing.

      • It would at best prove that, if CO2 influenced temperature, then the associated delay was longer than for the converse influence.

        Oops, got that backwards: shorter, not longer.

        Suppose we observed temperature leading CO2 by 100 years. This could happen because the actual delay from temperature to CO2 was 500 years while the action of CO2 on temperature took only 300 years. Or with the same mathematics, because the actual delay from temperature to CO2 was 100 years while the converse delay was −100 years, but of course that’s just mathematics, not physics. If CO2 acted instantly on temperature, we would only need the converse action to take 200 years.

        These long delays would be typical of the slow rises and falls found in the geological record. 800 year delays would not be at all unreasonable. With the unprecedented modern rate of around 0.2 °C/decade seen from 1980 to 2000, much shorter delays in each direction are to be expected.
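        One way to summarize the arithmetic in the two paragraphs above (my reading, consistent with all three numerical examples given) is that the observed lead is the half-difference of the two one-way delays:

        ```latex
        \text{observed lead of } T \text{ over } \mathrm{CO_2} \approx \tfrac{1}{2}\left(d_{T \to \mathrm{CO_2}} - d_{\mathrm{CO_2} \to T}\right),
        \qquad \tfrac{500 - 300}{2} = \tfrac{100 - (-100)}{2} = \tfrac{200 - 0}{2} = 100 \text{ years}.
        ```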

      • Have you heard of a guy called William of Occam?

      • Bryan, if you had to choose between Newton’s theory of light as consisting of corpuscles, and Maxwell’s equations, which would you pick and why?

      • You’ve gone in the wrong direction all the way, here, Waughan. I’ve rarely seen such misfortune: obviously mistaken from the beginning to the end.

        For a start, see Richard S Courtney’s answer (immediately above), which is as excellent as it is short. All of the rest follows.

        Then see mine (below), also addressed to people having difficulties with reading and elementary logic, but also going on to bring us back to the big issue: not only are we still waiting for the first bit of proof that CO2 has any consequences for surface temperatures in the real world but, while waiting for it, scientifically speaking, there is already enough proof of the contrary.

      • Sam – does your keyboard lack a V?

      • Awfully sorry, Vaughan.

      • Carefully and meticulously opens mouth and inserts foot… all the way up to the knee:)

        Crow fork anyone??

      • You really do believe that denier mantra, "Absence of correlation proves lack of causation. Presence of correlation proves nothing about causation," don't you. I would guess you're better suited to creative writing than a scientific career. There is not a shred of scientific content in either of those two sentences; it is just stuff deniers like to repeat.

        I hope you feel that “crow fork” works both ways. It would hardly seem fair otherwise.

      • Vaughan,
        The tail of this thread had me in stitches. What a riot! You are certainly a bright guy who could probably run rings around me in integral calculus, interpersonal skills… not so much.

        Gentle suggestion: More science and less snark will serve you well. Peace:)

      • Sure. You go first.

    • Fuzzy logics

      Waughan, Willard,

      regarding Alley's formulation of [his 'knob'] argument, I already sent a long comment. Now I understand it was diluted among too many other thoughts, so let's make it a bit shorter and more direct.

      It is no surprise that 'CO2 keeps inserting itself everywhere we look' in past temperature signals: temperature changes induce changes in atmospheric CO2 levels, in the same direction.

      Waughan's quick reading and thinking led him to claim I said correlations are uninteresting. I've said exactly the opposite: they are of the utmost importance. And that distortion of my writings (let's drop the silly argument about the non-universal view) mostly aims to mask the main point in what I've said, to which Waughan precisely does not want to answer:

      A correlation itself provides no proof. One trivial demonstration is of course that one can reverse cause and consequence.

      – Moreover, the lag obstinately reveals itself in the same direction: temperatures → CO2. Any available refined analysis on the scale of ten millennia shows that persistent phenomenon.

      A lack of correlation proves something. For example, there are plenty of times in the GRIP data where violent D-O (Dansgaard–Oeschger) events are in no way correlated with CO2.

      Not only do those correlations – and lacks of correlation, and lags – matter, but this very argument is sufficient to demonstrate that it would be plainly wrong to claim that a CO2 increase triggers a temperature increase.

      Most interestingly, Alley himself clearly admits it, even publicly (see the AGU video, just before that other global affair with bank credit enters the game…), and so do many others (alarmists as well as 'deniers'), for example J. Jouzel in France, but Waughan and a lot of other skilled laymen seem not to hear.

      People like Alley and Jouzel were especially involved with ice cores showing a persistent lag, or a simple lack of any correlation, on the scale of ten millennia. Moreover, both of them in particular had arguments on how to explain this correlation matter. In short, those alarmists re-use (here) the very argument that a correlation itself proves nothing… but they also bury an even more decisive, inconvenient part of the analysis.

      No wonder they prefer not to show those analyses to laymen: doing so, they would unearth not only very inconvenient questions but also thoughts accessible to anybody.

      Besides, this simple case would be enough to demonstrate that, far from understanding the effects of CO2 on climate (at the scale of ten millennia), the researchers are precisely unable to provide a bit of convincing theory, given that there is not any bit of proof to show in the real world. However, Alley prefers to insist on the fact that 'we have no reasonable explanation' other than CO2. Up to this stage, you can consider it still OK, though not convincing at all. But it gets even worse (for them and for any CO2 fanatic).

      So, without any valid reason, they would rather keep that peculiar affair in the obscure field of some vague top-level science, and imagine things they themselves cannot understand, invoking dark concepts such as 'stochastic resonance' (do we need to underline that if such a concept is clear and relevant here, it is only to strengthen the idea of the apparently random character of the input variables)… Incidentally, if scientists like Alley and Jouzel are specialists in certain compartments of geology, there's no reason to believe they are top-rank in statistics.

      Now let's give a more precise view of that: here, the alarmists themselves (re-using the very argument that a correlation itself proves nothing…) temporarily conclude that, even if an increase in CO2 never triggers any increase in temperatures, it could contribute to… somehow 'maintaining temperatures' after that. For how long, exactly?

      See Solé, Turiel & Llebot (2007): after several centuries (possibly about one millennium), higher CO2 levels could contribute to preventing temperatures from getting back to a lower level! Good grief.

      In other words: those carbocentrist priests treat us, the people, like idiots and invent an initiatory branch of science: the higher you get into it, the fuzzier it becomes.

      After having been so obstinate in refusing to put any a priori confidence in Nature's incredible 'negative-feedback' mechanisms, they invoke a new mysterious force of Nature to add some hypothetical 'positive feedback', once again because they need it… First hypothesis: those charming people sowing the germs of climate wars are silly enough not to see that, with such a 'resonant' mechanism, the Earth would have turned into a burning hell. Second hypothesis: they are not that stupid, so they would have concluded that, if ever their hypothetical exponential-building mechanism made some physical sense, it has no effective impact in the real world; so they would also have deduced that at least one unknown phenomenon was at play. But this would precisely sweep away their own 'theories' and terrifying claims.

      Now, given that kind of endless affair, I suggested restarting from basic principles. What about this one: in science, a single disproof is enough to invalidate any theory (needless to add: until further analysis).

      • A nice story

        It's equally worth recalling how the IPCC dealt with that serious matter.

        Knowing that you can't fool anyone entering those marshy grounds, the institution chose another approach, the Big Bertha, consisting of entirely avoiding the issue while focusing on an entirely irrelevant one (which, incidentally, is nonsense).

        In short, the IPCC said: from those curves we can't draw any conclusion, given that the ice cores were taken in different places.

        Let's forget the fact that other conclusions of that same report indeed rely on the same curves. Obviously, the point is that, for a given time, each point in both the CO2 and the temperature proxy curves comes from the same bubble, whatever the location of the ice core. (And even if we consider the issue of uncertainties in absolute dating, it is of very low importance here.)

        Re-using Alley’s words: it is ‘a nice story’.

      • It's equally worth recalling how the IPCC dealt with that serious matter. Knowing that you can't fool anyone entering those marshy grounds, the institution chose another approach, the Big Bertha, consisting of entirely avoiding the issue while focusing on an entirely irrelevant one (which, incidentally, is nonsense).

        You’re hardly the one to complain about this given that you were the one who diverted the discussion about modern warming to Quaternary glaciation when you first posted here. You’re now further diverting it by addressing what the IPCC has to say about Quaternary glaciation.

        Since we’re now two steps away from the topic you initially engaged with, modern warming, this is at least one step too far for me. Consider the floor yours concerning the IPCC, they’re professionals and can represent themselves directly just fine, as they have done in their reports. It would be presumptuous of an amateur climatologist like me to pretend I could speak for them.

        My interest is in understanding the science. I don’t consider memorizing the IPCC report and being able to quote it chapter and verse “understanding.” I want to work out every step for myself, with the literature as a guide but not as a slave-driver.

        I welcome criticism of my own attempts at understanding from both sides of the present debate. At one point I was interested in criticisms by deniers of the work of others, but gradually lost that interest because none of those criticisms turned out to improve my own understanding (though I found them helpful in understanding the denier movement/party/whatever). I find the published literature in general a very helpful source of data, techniques, and ideas that I can integrate into my own understanding. When inconsistencies arise I try to work out who or what is at fault; most often I misunderstood something or reasoned fallaciously, but every now and then I see something that I think improves on the literature. If and when I have a paper’s worth of these I’ll write them up as such.

        I do this primarily for my own satisfaction, so that I feel confident no one on either side of this debate has pulled the wool over my eyes. And I do it secondarily in case other like-minded individuals find my conclusions helpful hints for their own work.

        That’s all I have to say about the IPCC in this context. I’ll address some of your non-IPCC points in a separate response.

      • Pratt-made global warming

        Such misattributions are routine in this debate. I’ve been surprised at how many things I myself am supposed to have said that I can find no evidence for. (Vaughan)

        Dear Vaughan,

        that's more than enough for me. I don't have so much spare time that I can waste it on patent deniers. Properly speaking, one can't deny something that is unreal (i.e. a hypothetical phenomenon lacking evidence of its supposed consequences in the real world), whereas one can deny things that indeed happened.

        So, my advice to you for the future: apart from reconsidering your absurd position on the matter of the CO2/temperature correlation, at least try to avoid revisionism when your own (very recent) history is at stake, especially when your interlocutors can find the disproof on the same page.

        As you know, the first message I sent to you (which is also my first post on this blog) was about Flaubert, ‘the détails’, ‘good God’ and ‘the Devil’…

        Let's forget some of the silly claims about my identity (excused), about credit cards and over-indebted people supposedly not yet discovered in France, …

        > 'Absence of correlation proves lack of causation'
        >
        > I have never said anything to contradict this because I have never had occasion to address this particular denier argument before.

        As anyone can read.

        Here is only the latest case of prattling about that, on this very same page (Vaughan, January 6, 2011 at 6:59 pm):

        > The facts are:
        > Absence of correlation proves lack of causation.
        > Presence of correlation proves nothing about causation.
        > There, that corrects your misunderstanding.
        >
        > Thank you, Richard, for adding another item to my growing stack of things I haven't said. I have never said anything to contradict your first statement.
        >
        > I must however take exception to your second statement. Presence of a reliable correlation between two variables is a strong indicator of a causal effect relating the two, either by one causing the other, each causing the other, or the two having a common cause.

        Then back again (Vaughan to ivpo, January 6, 2011 at 6:59 pm):

        > You really do believe that denier mantra, "Absence of correlation proves lack of causation. Presence of correlation proves nothing about causation," don't you. I would guess you're better suited to creative writing than a scientific career.
        … and again:

        > Regarding Sam's point,
        >
        > a correlation itself proves absolutely nothing,
        >
        > I knew from Jean-Yves Girard's work on logic that France has a very different view of logic from the US. This seems to extend to the concept of correlation.
        >
        > In France, apparently correlation proves absolutely nothing. In Australia, the US, and I would hope the UK, scientists take correlation very seriously.

        … and again:

        > [Very long prattling about orthogonal functions, quantum physics and whatever. Conclusion:] Your notion of cause and effect in nature is too simplistic.

        … and again, with still increasing patent deafness:

        > Correlation is not causation.
        >
        > I never claimed otherwise. Maybe the temperature is driving up the CO2. Or maybe Leibniz's monads are at work here. (Remember them?)
        >
        > 4) Changes in CO2 level lag behind changes in temperature at all timescales. You can prove this to yourself on woodfortrees too.
        >
        > What are you talking about? You seem wedded to the concept that CO2 cannot raise temperature. Do you imagine either Miskolczi or Zagoni believes that?
        >
        > […] With the exception of Lindzen and Michaels (and for all I know our host, whose position I can't quite figure out), the scientists on the Congressional panel the other week, along with most actively publishing climate scientists, believe that the current correlation between rising temperature and rising CO2 should be attributed to the causal influence of the latter, not the former.

        Possibly total and irreversible deafness:

        > (Sam) '[…] and why are the reconstructions made at any time scale so obstinate in showing a lack of correlation between CO2 levels and temperatures most of the time and, the rest of the time, nothing to suggest that a sharp increase in CO2 levels in the atmosphere can trigger a significant (detectable) increase in temperatures?'
        >
        > (Vaughan) Even more remarkable. You evidently move in a circle of people who are a veritable fount of novelty.

        Who will put an end to this carnage scene on air?

        > (Sam:) Not only do those correlations – and lacks of correlation, and lags – matter, but this very argument is sufficient to demonstrate that it would be plainly wrong to claim that a CO2 increase triggers a temperature increase. Most interestingly, Alley himself clearly admits it, even publicly (see the AGU video, just before that other global affair with bank credit enters the game…), and so do many others (alarmists as well as 'deniers'), for example J. Jouzel in France, but Waughan and a lot of other skilled laymen seem not to hear.

        They would rather go back to their endless attempts to demonstrate something which is not only supported by no evidence in the real world but has already been proved wrong.

        I was thrilled that I was even able to get Sam to watch the Alley talk at all. So you can imagine my disappointment that he was unable to follow its logic.

        Incidentally, I did, and I demonstrated that his logic is nonsense. A more notable fact is your own inability to understand even the climate priests' messages. But no surprise there: as I said, one symptom of scientism, as of any initiatory process, is that the higher you get into it, the fuzzier it becomes. One last good reason to land, Vaughan. Not at last, but as soon as you can. Once again, in science a single piece of contrary evidence (a disproof) is enough to invalidate any theory.

        Had Sam had something original to say it might be worth coming up with an original rebuttal, whence my interest in his pointing me at any such. Everything I’ve found so far has been rebutted so many times that it’s a waste of time doing it yet again.

        Drop it, Vaughan: the fact is that you simply don’t want to deal with genuine rebuttal.

        Now, I have something original: I suggest that the famous correlation between onanism and deafness be reconsidered: possibly the causal relation at stake is reversed…

      • Now, I have something original.

        Perhaps, but it would seem to have more to do with your interest in Flaubert than with climate.

        Sam is raising so many side issues that we’re at risk of losing sight of the woods for the trees. Let me try to get it back on track by narrowing the discussion to Sam’s claim that there’s nothing to worry about. He bases this claim on the following two claims.

        (i) CO2 does not influence temperature.

        (ii) The Medieval Warming Period was warmer than now.

        I don’t recall seeing any other argument by him except these two standard denier arguments.

        In support of (i) Sam points to CO2 lagging temperature in the ice core records, and infers that there cannot exist any causal relationship “CO2 à temperature.” He has also expressed doubt that there is a strong correlation between the two (in opposition to the main point of Professor Alley’s AGU 2009 lecture, that the correlation is excellent not just in the Quaternary but in “deep time”). However if I understood him correctly he seems willing to grant that temperature can increase CO2.

        Sam’s argument depends crucially on the idea that if A causes B then B cannot cause A. In communications theory this is the assertion that only simplex (one-way) communication channels can exist. That is certainly true in some cases, such as firing a bullet into a target: the bullet entering the target cannot cause the gun to fire. But in more tightly coupled systems causation can be bidirectional. In the example I gave earlier of two weights on a cord running over a pulley it is easily seen that the weights act causally on each other, and one cannot single out one weight as the cause and the other as the effect.

        More generally, in any equilibrium situation all the active constituents can be considered to play causal roles. When you shift one constituent the other constituents adjust in order to restore equilibrium. The CO2-temperature relationship is such an equilibrium situation. If you add CO2 the temperature adjusts to bring things back into equilibrium. If you warm the system the CO2 adjusts to bring things back into equilibrium. It’s much like moving the two weights under the pulley—pull either one down and the other goes up. Adding reagents to a chemical reaction is another example of shifting the equilibrium.

        In the presence of bidirectional causation, all bets are off as to which variable leads and which lags until one understands the delays in each of the two feedbacks reinforcing each other. Were the delays equal one would not be surprised to find perfect synchrony. But when they are not, which Sam most assuredly has failed to rule out, then one should be equally unsurprised to find one variable leading the other. If CO2 lags temperature then we can only conclude that temperature takes longer to influence CO2 than vice versa. We cannot conclude that CO2 does not influence temperature. And furthermore we have reasons based on physics why CO2 does act causally on temperature, first observed empirically by John Tyndall in the 1850s and further analyzed in detail by Arrhenius in 1896.
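
        As a purely illustrative aside, the delay argument can be made concrete with a toy numerical sketch. This is not a climate model and not either commenter’s calculation; every parameter value below is an arbitrary assumption. Two variables relax toward each other, each with its own delay, and both feel the same slow external push; the variable receiving the longer-delayed influence is the one that appears to lag.

        import numpy as np

        n = 5000
        d_tc = 40      # delay (steps) for T to influence C -- assumed longer
        d_ct = 5       # delay (steps) for C to influence T -- assumed shorter
        k = 0.02       # relaxation rate toward mutual equilibrium (assumed)
        push = 0.01 * np.sin(np.linspace(0, 20 * np.pi, n))   # slow common forcing

        T = np.zeros(n)
        C = np.zeros(n)
        for t in range(max(d_tc, d_ct), n):
            T[t] = T[t-1] + k * (C[t - d_ct] - T[t-1]) + push[t]
            C[t] = C[t-1] + k * (T[t - d_tc] - C[t-1]) + push[t]

        # lag at which the lagged correlation peaks; positive means C lags T
        lags = np.arange(-200, 201)
        corr = [np.corrcoef(T[200:-200], np.roll(C, -L)[200:-200])[0, 1] for L in lags]
        print("C appears to lag T by about", lags[int(np.argmax(corr))], "steps")

        Swapping the two assumed delays makes T the lagging variable instead, without changing the fact that each variable drives the other, which is exactly the point being argued here.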

        (ii) Sam’s other point was the standard denier argument that Earth is now much cooler than in the Medieval Warming Period. In this case he’s using Greenland data up to 1905 (95 years before the publication date of 2000), and pointing out that 1905 is colder than the MWP, which no one has denied.

        The proper question for that graph, which I asked Sam when he first brought it up, is how to scale the approximately 1 °C rise in global temperature from 1905 to now in order to embed it correctly in Sam’s graph. This is easily answered by matching it to Easterbrook’s graph making the same point as Sam, that the MWP was hotter than now.

        It is plain to see that the Arctic fluctuations correspond pretty well to the black curve on Easterbrook’s graph except for scale. In particular the peak at 8 kya is -28.7 °C while its counterpart on Easterbrook’s graph is at 0.3 °C (as an anomaly, not an absolute temperature). Sam’s graph has one earlier trough, at -32.4 °C, a downwards swing of 3.7 °C. There are two troughs on Easterbrook’s graph; let’s be generous and pick the lower one, which is at -0.3 °C, a downwards swing of 0.6 °C.

        So this tells us that the correct way to scale global temperature fluctuations when comparing them with Greenland fluctuations at -30 °C (brrr!) is to multiply them by 3.7/0.6 ≅ 6.2.

        From the HADCRUT3 data we see that the 1905 end of Sam’s graph corresponds more or less to −0.4 °C on the HADCRUT3 data, maybe −0.6 °C in 1905 but let’s be generous and stick to −0.4. Several of the recent hot years are a full 1 °C higher. This scales to 6.2 °C when transplanted to Sam’s graph. But 1905 there is −31.6 °C in Greenland. So we add 6.2 to −31.6 to get −25.4 °C.

        But while still numbingly cold, this is a full 3.3 °C warmer than anything Greenland has seen in the last 50,000 years! That’s almost as far again above the hottest temperature in Sam’s graph, at 7.8 kya (and again at 3.3 kya), as Sam’s hottest temperature is above his coldest.
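
        For readers who want to retrace the arithmetic, here is a minimal sketch using only the figures quoted above (graph readings as stated in this comment, not values re-derived from the underlying datasets):

        # back-of-envelope check of the scaling argument, using the quoted figures
        greenland_swing = 3.7            # °C, trough-to-peak swing on Sam's graph
        easterbrook_swing = 0.6          # °C, corresponding swing on Easterbrook's graph
        scale = greenland_swing / easterbrook_swing                  # ~6.2

        global_rise_since_1905 = 1.0     # °C (HADCRUT3, as quoted above)
        greenland_1905 = -31.6           # °C on Sam's graph
        transplanted_2010 = greenland_1905 + scale * global_rise_since_1905   # ~ -25.4

        hottest_in_graph = -28.7         # °C, the peak at 8 kya
        print(round(scale, 1), round(transplanted_2010, 1),
              round(transplanted_2010 - hottest_in_graph, 1))        # 6.2  -25.4  3.3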

        This is not a new point. Deniers doggedly keep insisting that the MWP is hotter than “today,” which they always do in the same way Sam did, by defining “today” to be the most recent time for which these ice core data are available. That’s how Easterbrook did it, and now Sam.

        What we see here is that the rise in global temperature since 1905 is twice the entire range shown in Sam’s graph.

        One wonders how Sam managed to overlook this little detail of getting the scale correct, given the obvious huge differences in scale involved.

      • Vaughan’s habit of distorting others’ claims and reasoning forces me to reply once again.
        Once again, Vaughan’s post is a bright demonstration of his inability to understand (even) the basic principles of science, like so many others who reverse the burden of proof (Arrhenius being the first one in this affair – see the quotation in DT09). The point is: any theory is to be considered wrong as long as there is no evidence that the phenomenon it describes has the claimed effects. Unless the people who make a statement bring evidence for it, it is wrong. Bringing thousands of counterproofs to every counterargument wouldn’t lead to the slightest reduction of the problem: it would be like replying that Dracula or God have longer arms than previously claimed by anyone.
        Sam’s claim that there’s nothing to worry about. He bases this claim on the following two claims.
        (i) CO2 does not influence temperature.
        (ii) The Medieval Warming Period was warmer than now.
        I don’t recall seeing any other argument by him except these two standard denier arguments.

        Needless to be an expert in logic to see that statement (ii) belongs to the category of underlying statements leading to (i). In other words, statement (i) should be sufficient to describe a ‘denier’ position… which can’t properly be called ‘denial’, as one can’t deny something that is unreal (the point being that we’re still waiting for evidence that CO2 levels in the atmosphere influence temperatures).

        Even (i) is an erroneous statement, as I didn’t dispute (and then admitted) the possibility that atmospheric CO2 could contribute to slowing down a temperature decrease – by the way, that’s exactly what I would expect a ‘blanket’ to do when no additional heat source is involved.

        Statement (ii) is of course only one example out of many others in the category of underlying points. Of course, one of them (I forgot to add) is that temperature reconstructions from satellites and radiosondes [from 1979] fail to show that the higher you climb in the troposphere, the faster the temperature increase.

        Moreover, it is not necessary for (ii) to be true (unlike the big claim that ‘we have a problem with CO2’), if other statements in that same category of underlying statements leading to (i), even only one of them, were shown to be true.
        However, statement (ii) could at best be only a part of any reasoning leading to such underlying statements. Even if Vaughan’s reasoning on this point made sense, he would be offering anything but a decisive demonstration there.

        Anyway, it’s easy to check that I didn’t focus on a so-called “MWP” (and never used those words); rather, I evoked a great number of temperature peaks in past times, the most recent ones having occurred roughly every 1,000 years during the Holocene.

        Moreover, the main point I underlined was not about comparing maximum temperatures at peaks, but the fact that the available reconstructions show a great number of past (natural) events, like D-O oscillations, involving a violent increase in temperatures which can’t be explained by variations in atmospheric CO2 levels. Vaughan’s compulsion for cherry-picking of course leads him to drop those kinds of developments (he says he doesn’t recall seeing them) and rather focus on less important and isolated statements.

        [Sam] has also expressed doubt that there is a strong correlation between the two [CO2 and temperatures] (in opposition to the main point of Professor Alley’s AGU 2009 lecture, that the correlation is excellent not just in the Quaternary but in “deep time”). However if I understood [Alley] correctly he seems willing to grant that temperature can increase CO2.

        Obvious distortion of my writings. Such a reply is even clearly dishonest. I said that the lack of correlation appears when one observes the curves at the scale of several millennia.

        Moreover, this is a diversion. The fact that Vaughan doesn’t point this out demonstrates either dramatic deafness or cherry-picking even in what Alley said. See the AGU video (http://www.agu.org/meetings/fm09/lectures/lecture_videos/A23A.shtml) from 36:55.

        Alley himself says without any ambiguity that the [supposed] CO2 effect on temperatures only enters the game later, after its own level is increased by an increase in temperatures (so we can say more precisely: 400 to 2,000 years later).

        Alley indeed admits that the temperatures increase first. Interestingly, his document puts that increase down as a consequence of ‘orbits’, which is a mere lie: he, like any scientist, knows that ‘orbits’ cannot explain at all the violent temperature changes in the glaciation pseudo-cycles, the D-O oscillations occurring every 1,470 years on average (and the peaks which happened in the Holocene). Let alone the fact that some orbital components supposed to have a major role do not show up in the effects (the 400,000-year component of the eccentricity ‘cycle’; the major change in the influence of precession over millions of years).

        Sam’s argument depends crucially on the idea that if A causes B then B cannot cause A.

        Entirely untrue (and ridiculous). At best, there’s only one interesting point here, which is what your own understanding shows about your own reasoning.

        The CO2-temperature relationship is such an equilibrium situation.

        I’d expect radiative phenomena to play their role at the speed of light, not centuries later.

        And if you want to add the thermal inertia of the oceans (by far the main factor among the rare natural agents you choose to take into account), why not recall that your own ‘reconstructions’ put a sinusoid with a period of about 60 years at the front of the stage?

        In the presence of bidirectional causation, all bets are off as to which variable leads and which lags until one understands the delays in each of the two feedbacks reinforcing each other. Were the delays equal one would not be surprised to find perfect synchrony. But when they are not, which Sam most assuredly has failed to rule out, then one should be equally unsurprised to find one variable leading the other. If CO2 lags temperature then we can only conclude that temperature takes longer to influence CO2 than vice versa. We cannot conclude that CO2 does not influence temperature.

        Silly development, belonging to the category of onanism induced by deafness.

        Your argument is qualitatively valid but entirely irrelevant here as soon as one looks at the magnitudes of the CO2 and temperature variations, and at the magnitudes of the delays as well. Once that is taken into account, it is perfectly obvious that the effect of CO2 on temperatures, if it exists, is far less important than the effect of temperatures on CO2. This is especially obvious when looking at D-O oscillations (the Younger Dryas being the last one). And as a single counterproof is enough, your argument fails.

        Your development regarding statement (ii) is just as wrong, though even more stupid and of very low importance.

        The main point of the GRIP data is to show violent events like the D-O oscillations and to address the issue of correlation between CO2 and temperatures.

        Of course, my intention when pointing to those data was not to suggest that we are short of data supporting the idea that there were other periods, even in recent history, when temperatures were higher than now (in 2000). There are innumerable studies for that. Besides, there are hundreds of studies demonstrating that those warm periods involved not local but global warming.

        Now, let’s have a quick look at your arguments about the amplitudes of temperature variations in central Greenland and at the global scale (let’s keep forgetting that a ‘mean temperature’ makes no physical sense).

        First, when looking at the GRIP data, why forget that the last increase in temperature in central Greenland (like the melting of so many glaciers) started near 1850, and was remarkably sharp from the beginning?

        Your merely virtual factor of 6.2 (3.7/0.6) comparing the local and global scales has no significance, of course, but, FYI, a similar reasoning would lead to about 3 (1.4/0.4) when using the last 30 years of temperature reconstructions by UAH and NCDC, with the assumption that the trend in the North Pole ‘mean temperature’ is close to the trend for central Greenland.

        Now, apart from this silly reasoning, we have much bigger problems and more significant bases and data. For a start, according to Hansen et al. (2001),
        the temperature trend for central Greenland is about zero for the 20th century, and less than zero (around −0.5 °C) for the whole 1950–1999 period… (http://pubs.giss.nasa.gov/abstracts/2001/Hansen_etal.html – see page 23)

        Speaking of local trends and of hazardous comparisons: one could also consider the fact that Hansen et al. (2001) show an increase of nearly 1.0 °C in Berlin (Germany) and in Le Bourget (near Paris, France) between 1900 and 1999, and also notice that this is in good accordance with the Hadley subset. However, those same Hadley subset data show that the temperature (11-year sliding mean) in Berlin in 1761 was the same as in 2001, and that the temperature in Le Bourget in 1770 (11-year sliding mean) was only 0.1 °C less than in 2000… (see Jones & Anders, 2009 release of station data, respectively files # 103 840 and 71 500). Unfortunately, those raw data are said by Jones to be private, though provided by public institutions from other countries. Just ask them… In the meantime, it’s up to you whether or not to rely on the files collected by certain hackers. Of course, one would legitimately expect other scientists to have processed the same data.
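
        For reference, the ‘11-year sliding mean’ mentioned here is simply a centered moving average over eleven annual values. A minimal sketch (the series below is synthetic, purely so the example runs; it is not the Hadley or Jones station data):

        import numpy as np

        def sliding_mean(x, window=11):
            """Centered moving average over `window` samples, e.g. an 11-year
            sliding mean of annual station temperatures."""
            kernel = np.ones(window) / window
            return np.convolve(x, kernel, mode="valid")

        rng = np.random.default_rng(1)
        annual_temps = np.sin(np.linspace(0, 6, 300)) + rng.normal(0, 0.3, 300)
        smoothed = sliding_mean(annual_temps)     # 290 centered 11-year means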

      • Sorry, Sam, you’ll need to either speak up or turn down the background noise. Your signal is getting drowned out by the noise of your abuse, which is making it impossible to tell whether you’re arguing that I’m wrong because there’s a genuine error in my reasoning or because I’m physically and mentally handicapped, which seems to be the main cornerstone of your arguments. A vague handwaving dismissal is no substitute for a precise identification of my error.

        Eliminating the noise would have the further benefit of considerably shortening your comments, making it easier to identify their substance. However if you believe that your SNR is no worse than mine then we should simply agree to disagree on the ground of a clear failure to communicate.

        ivpo’s suggestion below that it is I and not you that is the one being rude here is a particularly transparent example of denier illogic, not to mention ironic in light of ivpo’s own rudeness.

      • Dear Vaughan,

        maybe I have a problem (with English) understanding ivpo’s apology below, but it seems to me that his post didn’t express the idea that one of us is a bad guy. But there’s something I’m quite sure of about his post: it was sent earlier than the messages of mine you referred to. So your claim about ‘a particularly transparent example of denier illogic’ is (once again) obviously inexact (is that formula polite enough?)

        As for the important issue in our discussion (the one directly linked to this thread, of course): sorry, Vaughan, you’ll also need to either speak up or turn down the background noise. As I said already, I’m not asking to go on with this dialogue, and I (also) consider that my interlocutor doesn’t play fairly (which is why the last 2 posts were indeed beyond the limit of courtesy).

        Now if you want to resume our discussion (on the issue of the correlations), it’s clear to me that you should go beyond the qualitative approach (not to mention philosophical developments involving matters like quantum physics, which are obscure to me), introducing real cases from existing data in order to be precise about the values and shapes of both the CO2 and temperature curves (for GISP2 in particular), and taking the same quantitative approach for the lags. For a start, do me the favour of referring to the figures in Solé, Turiel, Llebot (2007).

        Remaining with your abstractions / qualitative approach, you seem to suggest that, since one can find a nearly constant value for the lags from temperature to CO2, one can also find some nearly constant delays for the reverse possible causation. But this could only be the case if the curves were mostly periodic, which most of the time they are not. Worse (for you), in most of the sequences there is nothing like a ‘delayed correlation’.

        As for Alley’s public position on that issue, I won’t comment on it again as long as you don’t want to reply to the precise explanation I gave about it. Maybe I’ve been impolite, but you can see that I’ve kept on dealing with the material: from this viewpoint, it seems you have given up for the second time.

        I wanted to send you some data by email but failed (I had an error message when trying your email at Standford).

      • (I had an error message when trying your email at Standford).

        Entirely understandable. Little did the good senator realize in 1890 how well he would be protecting the teachers at his institution against spam 120 years later.

      • Thank you Sam. For what it’s worth I think you have made many valid arguments and posed some key questions regarding our current state of climate science. Allow me to express an apology for the absence of polite discussion by a few of the notable Denizens here at Climate Etc.
        Till we meet again…

      • Allow me to express an apology for the absence of polite discussion by a few of the notable Denizens here at Climate Etc.

        Were you including yourself here?

      • Till we meet again.
        Coming back last week.

      • … next week. :) (Sometimes hard to be quick with translations…)

      • Here’s the “separate response” I promised at the end of this.

        I apologize again for attempting to respond under pressure of time, which predictably produced bad results. Hopefully the following will be at least slightly more careful.

        today I spent more than 2 hours looking at that video you proposed, then replying to you. I think a great deal of the response was replying to you directly. Not to mention every other point in my last posts.

        I appreciate that enormously. I apologize for my earlier snarky remarks about pearls etc.

        For a start, see Richard S Courtney’s answer (immediately above), which is as excellent as it is short. All of the rest follows.

        You and Richard certainly are on the same page. I’d be more interested in hearing things I didn’t already know, please.

        let’s make it a bit shorter and more direct.

        17 paragraphs and 5200 words. I’ll try to keep my response roughly the same size.

        Moreover, the lag obstinately reveals itself in the same direction: temperatures à CO2.

        (For the benefit of those whose browsers showed this symbol as a box (IE and Chrome) or F0E0 (Firefox), this mysterious symbol in “private Unicode space” everywhere except French-speaking countries denotes the ordinary à, the one-letter French word for “to.” Why the French don’t use the standard Unicode symbol for it remains a mystery to me.)

        In my reply to Dr. Courtney I pointed out that in the presence of a positive feedback of the kind we know has to exist between CO2 and temperature (or are you proposing to doubt even the most elementary facts of physics?), if the delay from A to B exceeds the delay from B to A then B will appear to follow A. (This confused me when I first tried to work it out, until I realized that feedbacks are backwards and hence reverse the effects of delay.) Hence if our understanding of the physics tells us that CO2 and temperature have to drive each other, and we see CO2 lagging temperature, we do not need to infer an inconsistency compelling us to revise our assumptions, which is what you’re proposing. Instead we need merely infer that temperature takes longer to impact CO2 than vice versa.
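
        One way to move from this qualitative point toward the quantitative treatment Sam keeps asking for is to estimate the lead/lag directly from a pair of series: find the lag at which the lagged correlation peaks. A minimal, generic sketch follows; it is applied to synthetic data purely for illustration and is not Alley’s, Vaughan’s, or anyone’s published analysis.

        import numpy as np

        def lead_lag(a, b, max_lag):
            """Lag (in samples) at which corr(a[t], b[t+lag]) peaks.
            A positive result means b lags a."""
            lags = range(-max_lag, max_lag + 1)
            return max(lags, key=lambda L: np.corrcoef(
                a[max_lag:-max_lag], np.roll(b, -L)[max_lag:-max_lag])[0, 1])

        # synthetic example only: b is a delayed, noisy copy of a
        rng = np.random.default_rng(0)
        a = np.cumsum(rng.normal(size=2000))                    # random-walk stand-in
        b = np.roll(a, 30) + rng.normal(scale=0.5, size=2000)   # lags a by 30 samples
        print(lead_lag(a, b, max_lag=100))                      # expect about +30

        Applied to real temperature and CO2 series, the sign and size of the recovered lag say which series leads in the data; as argued above, they do not by themselves say which direction of causation dominates.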

        For example, there are plenty of times in the GRIP data where violent D-O events are in no way correlated to CO2.

        Those events are very interesting and well worth understanding. But you can’t have your cake and eat it too. If you are going to accept that temperature causes CO2, as you seem to be prepared to do, then you are going to have to come up with some explanation for why those correlations disappeared on those occasions.

        Most interestingly, Alley himself clearly admits it, even publicly (see the AGU video, just before that other global affair with banking credit entered the game…), and so do many others (alarmists like ‘deniers’), for example J. Jouzel in France, but Vaughan and a lot of other skilled laymen seem not to hear.

        If I said the opposite at some point (where?) then I retract it. In any event I don’t think we have any disagreement here.

        In short, those alarmists re-use (here) the very argument that a correlation itself proves nothing… but they also bury an even more decisive and inconvenient part of the analysis.

        You raise an excellent point here. Not one that contradicts the physics so much as one that questions the logic of the experts. This would be very interesting to dig deeper into.

        No wonder they prefer not to show those analyses to laymen: doing so, they would unearth not only very inconvenient questions but also thoughts accessible to anybody.

        One rule of this blog (can’t remember if it’s a written one) is not to attribute motives, though it’s often broken. I’m only able to resist doing so because I can think of so many very different motives that I never know which one to pick.

        the researchers precisely are unable to provide a bit of convincing theory, given the fact that there is not any bit of proof to show in the real world

        Again you’re assuming a nonexistent inconsistency.

        Now let’s give a more precise view of that: here, the alarmists themselves … conclude that, even if an increase in CO2 never triggers any increase in temperatures,

        You must be a fan of Niki de Saint Phalle, suddenly you’re looking like one of those figures in La fontaine Stravinsky. Are you saying there exists an alarmist who says CO2 does not increase temperature? Name him or her and we’ll round him up for brainwashing right away. ;)

        Incroyable. (Exit right muttering.)

        The rest of your message got even more extreme and I was unable to follow any of it. If you feel some important point was thereby lost you might try to rephrase it.

        For the record here is my own view of the relevance of the Quaternary glaciations to modern global warming.

        1. The former shows that CO2 and temperature track each other very well, modulo a great many details.

        2. The anthropogenic component of CO2 rose from (387.9-280) = 107.9 ppmv to (390-280) = 110 ppmv during the past 12 months. That’s an increase of 2% p.a. Furthermore this rate would appear to have been sustained for at least the past two centuries. Since nothing remotely like that rate has ever been seen in the geological record before, going back not just millions but billions of years, the fine structure of the details of just how CO2 and temperature tracked each other thousands or millions of years ago is completely irrelevant to modern global warming.

        Those two short paragraphs constitute my complete story on the relationship between the Quaternary glaciation and modern warming. This makes me a large target, and moreover one that is not moving. This should make it much easier for you to shoot me down than if I were a small and moving target. Go for it.
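
        As an arithmetic aside on point 2, the growth rate can be checked in a few lines using only the ppmv figures quoted there; this is a sketch, not a re-derivation from any CO2 record.

        import math

        preindustrial = 280.0                      # ppmv, as quoted
        anth_last_year = 387.9 - preindustrial     # 107.9 ppmv
        anth_now = 390.0 - preindustrial           # 110.0 ppmv

        growth = anth_now / anth_last_year - 1.0
        print(round(100 * growth, 1), "% per year")     # about 1.9, i.e. roughly 2% p.a.
        # at a sustained rate near 2% p.a. the anthropogenic component doubles in
        # roughly ln(2)/ln(1.02) ≈ 35 years
        print(round(math.log(2) / math.log(1 + growth)), "years to double")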

  165. The message above is a reply to Vaughan’s post of January 3, 2011 at 3:53 pm.

    Sorry I wasn’t able to properly use the HTML tags for references on this site (hope I’ll soon fix that).

    • Quick remarks in addition.

      Re-reading my post: is it possible that ‘alley’ at the end of the address I indicated refers to our brilliant prophet? If so, why doesn’t he show this beautiful graph, showing a striking systematic lag plus lack of CO2/temperature correlation over the period?

      26:20 As his new graph shows, in the last 50 million years the ‘global temperature’ underwent a dramatic drop. Do you imagine what a 15 °C drop means for the temperatures of the deep ocean? But the drop in the last 2–5 million years is already remarkable. As for the sharp increase just at the end, I suspect misleading manners: 1. the mean value doesn’t seem to correspond to the range in his own graph; 2. looking at other long series such as sediment cores (see for example the link to WUWT I gave), you can see that, for the last few million years, there was indeed a significant increase in maximum (interglacial) temperatures… but also a drop in minimum temperatures, which is larger and relates to longer periods (glacials).

      31:40 At first glance, nothing to say except, of course : ‘which of the egg or the hen was there first?’ (Sorry, straight translation from French…)

      37:20 …. and ‘How do we know CO2 adds to warming? We can’t explain the size of the warming without CO2’. As you can see, Mr Alley, like everybody else, is forced onto this endless ground, precisely because there’s no proof. And that’s exactly the point. But the situation is far worse: the extended meaning of that approach ‘by exclusion of the rest’ is that we’ll never know about CO2’s effects on the climate, since we would have to wait to identify, quantify, combine… every other possible agent in the universe. Of course, that will never happen. And even if we could come close to it, as science is never settled, the results would never be sure and final. So we can go on endlessly postulating the CO2 problem

      … This wouldn’t be a problem… if there were no claims of important consequences. But of course, there are. Terrifying ones.

      Are you aware that these are exactly the conditions of any scientism? And if the matter involves terrifying hypotheses, you just add the building of a monstrous social machine around it. One last thing: if both the problem and the solutions are fictions and if, additionally, the solutions themselves maintain the problem (I’m afraid there are plenty of reasons to say we’re on that track too), then you enter the dream of any average guru or, when extended to the entire society, what we call totalitarianism.

      So of course we need to be very careful with the data and the theory, but as this has already been shown to be an endless issue, I mostly recommend being careful with logic. And that’s your job, Vaughan: good news. :)

  166. Vaughan,

    I indeed find Alley’s choices strange when, speaking about CO2/temperature correlations, he focuses either on the distant past or on the very recent past, bravely short-cutting the thousand-to-million-year scale, for which there’s no doubt he has plenty of data.

    I found a free online version (http://webs2002.uab.es/jellebot/documents/articles/Phis.Lett.A_2007.pdf) of J. Solé, A. Turiel & J. E. Llebot’s paper ‘Classification of Dansgaard–Oeschger climatic cycles by the application of similitude signal processing’. See what they say about a refined analysis of the CO2/temperature correlation during the last glaciation.

    • Well, you certainly have a lot to reply to, so let me take your most compelling argument for 2010 not being the hottest year in several million years, namely this graph of GISP2 ice core temperatures that you referred to.

      Would you say the amplitude of the temperature swings over the last 10,000 years as shown in those cores was equal to that of the globe as a whole for the same period, or half of it, or double, or more? I’ll await your answer before continuing that train of thought.

      A more challenging task: find the data on which your graph is based and note that it actually goes back 50,000 years, not 10,000. You’ll find the period 10,000-20,000 a real eye opener, and very relevant to my question above.

      I indeed find Alley’s choices strange when, speaking about CO2/temperature correlations, he focuses either on the distant past or on the very recent past, bravely short-cutting the thousand-to-million-year scale, for which there’s no doubt he has plenty of data.

      Sam, here’s an exercise for you. The surname of a world expert on the Younger Dryas (the most recent cold snap in the Holocene) can be found on your graph (it is the only surname there, besides the initials of a well-known prophet). See if you can find a connection between that surname and the televangelist you so dislike.

      It would be a little odd if he hadn’t considered your point himself, don’t you think?

      He’s just the same clown in classrooms, if not more so, so it would be interesting to see his (American?) student evaluations, which I’m sure are easily found online, and compare them with the reaction of a French observer.

  167. Vaughan,

    today I spent more than 2 hours looking at that video you proposed, then replying to you. I think a great deal of the response was replying to you directly. Not to mention every other point in my last posts.

    The 1st part of your answer (about the 50,000-year extent) just shows you haven’t even read the beginning of the first of my posts addressed to you… As for the second part, I can’t see any technical argument in it, and I don’t care about that kind of social-network approach (and I’m not so far removed from academia as to judge researchers on the basis of whether their students like or dislike them, even if that is sometimes partly relevant). I’m here to argue on a scientific basis.

    May I conclude you’re not interested in discussion (with me)? Sorry to say you’re not playing fair (to stay polite). Of course I won’t go on spending hours answering you until you try and offer a fair reply.

    (*) By the way, I had no climate dialogue on the internet today or this week other than with you – as far as I’m concerned, I don’t take people for fools. As for the site hosting that graph I made from NOAA data, it’s a very good French one, but needless to say it’s yet another blog (its author is an ex-colleague of Pierre-Gilles de Gennes’, if such details are valuable to you).

    • Sam, sorry that I haven’t been able to spend as much time as you on the net the past couple of days, things have gotten quite hectic at school and I also have some other backed up tasks I need to do. I had barely enough time to dash off a short couple of replies to you this morning before I had to go to a meeting (which I ended up being late for). Will get back to Climate Etc. when this all calms down.

    • The 1st part of your answer (about the 50,000-year extent) just shows you haven’t even read the beginning of the first of my posts addressed to you…

      Sincere apologies. I was racing to get to a meeting and your messages were so long that I had to go through them all too quickly for them to sink in properly. In retrospect I should have waited until I had time to do them justice.

  168. Note: it’s easy to understand why I’m sarcastic toward evangelist activists (I said ‘tele-evangelists’): I happened to grow up in an evangelical sect, and I’m glad to believe I’m cured… Yes, I admit my tolerance has certain limits. Now, there are evangelists in my own family, with whom we have very nice relations, so nothing personal. As for the term ‘climate evangelist’, of course, I was mainly thinking about their faith in a certain scientism, which a priori has no relation to born-again stuff.

  169. Coming in very late in response to A Lacis:

    “The basic point being that of all the physical processes
    in a climate GCM, radiation is the one physical process that
    can be modeled most rigorously and accurately.”

    Is it possible that because radiation is the easiest to model, the models might tend to over-state the relative importance of that process, relative to other, harder-to-model, effects?

    Even stipulating that human activities are warming the climate, there remain concerns about land-use changes, Urban Heat Island effects, deforestation, soot, dust and a dozen other hard-to-model processes. A Lomborg could argue that a dollar spent on CO2 mitigation efforts is less effective than a comparable dollar spent on soil-conservation or dust-reduction efforts. If the two concerns are not equally easy to model, and the economic questions can’t be resolved LACKING a model, doesn’t it follow that the easier-to-model problem might attract an excess of mitigation funding?

  170. What strikes me is that the issue is not really about confidence in the equations underlying radiative transfer, but about their relevance to determining whether the world is undergoing dangerous climate change.
    I believe that, based on the evidence, the answer is ‘not very’.