by Judith Curry

Albert Einstein on thermodynamics:

*A theory is more impressive the greater the simplicity of its premises, the more different are the kinds of things it relates, and the more extended its range of applicability. Therefore, the deep impression which classical thermodynamics made on me. It is the only physical theory of universal content concerning which I am convinced that, within the framework of applicability of its basic concepts, it will never be overthrown.*

**Nonequilibrium thermodynamics and maximum entropy production in the Earth system: Applications and implications**

Axel Kleidon

**Abstract **The Earth system is maintained in a unique state far from thermodynamic equilibrium, as, for instance, reflected in the high concentration of reactive oxygen in the atmosphere. The myriad of processes that transform energy, that result in the motion of mass in the atmosphere, in oceans, and on land, processes that drive the global water, carbon, and other biogeochemical cycles, all have in common that they are irreversible in their nature. Entropy production is a general consequence of these processes and measures their degree of irreversibility. The proposed principle of maximum entropy production (MEP) states that systems are driven to steady states in which they produce entropy at the maximum possible rate given the prevailing constraints. In this review, the basics of nonequilibrium thermodynamics are described, as well as how these apply to Earth system processes. Applications of the MEP principle are discussed, ranging from the strength of the atmospheric circulation, the hydrological cycle, and biogeochemical cycles to the role that life plays in these processes. Nonequilibrium thermodynamics and the MEP principle have potentially wide-ranging implications for our understanding of Earth system functioning, how it has evolved in the past, and why it is habitable. Entropy production allows us to quantify an objective direction of Earth system change (closer to vs further away from thermodynamic equilibrium, or, equivalently, towards a state of MEP). When a maximum in entropy production is reached, MEP implies that the Earth system reacts to perturbations primarily with negative feedbacks. In conclusion, this nonequilibrium thermodynamic view of the Earth system shows great promise to establish a holistic description of the Earth as one system. This perspective is likely to allow us to better understand and predict its function as one entity, how it has evolved in the past, and how it is modified by human activities in the future.

*Naturwissenschaften* (2009) 96:653–677 DOI 10.1007/s00114-009-0509-x [link to full paper].

This is the best paper that I’ve come across that clearly explains nonequilibrium thermodynamics and maximum entropy production with application to the climate system. The paper can probably be understood by anyone with an undergraduate degree in engineering, physics, or chemistry. It’s a long and complex paper; I will try to do it justice with some excerpts from the background before cutting to the part that interested me most, on feedbacks:

**Introduction**

*The parts of thermodynamics that we are usually most familiar with deal with equilibrium systems, systems that maintain a state of thermodynamic equilibrium (TE) and that are isolated, that is, they do not exchange energy or matter with their surroundings. In contrast, the Earth is a thermodynamic system for which the exchange of energy with space is essential. Earth system processes are fueled by absorption of incoming sunlight. Sunlight heats the ground, causes atmospheric motion, is being utilized by photosynthesis, and ultimately is emitted back into space as terrestrial radiation at a wavelength much longer than the incoming solar radiation. Without the radiative exchanges across the Earth–space boundary, not much would happen on Earth and the Earth would rest in a state of TE.*

*Systems that are maintained far from TE dissipate energy, resulting in entropy production.*

**Background**

*The first and second laws of thermodynamics provide fundamental constraints on any process that occurs in nature. While the first law essentially states the conservation of energy, the second law makes a specific statement on the direction into which processes are likely to proceed. It states that the entropy of an isolated system, i.e., a system that does not exchange energy or mass with its surroundings, can only increase, or, in other words, that free energy and gradients are depleted in time.*

*In the absence of external exchange fluxes, gradients would be dissipated in time, and hence, entropy production would diminish in time, reaching a state of TE. To sustain gradients and dissipative activity within the system, exchange fluxes with the surroundings are essential.*

*A steady state of a system is reached when the entropy change averaged over sufficiently long time vanishes*

*The proposed principle of MEP states that, if there are sufficient degrees of freedom within the system, it will adopt a steady state at which entropy production by irreversible processes is maximized. While MEP has been proposed for concrete examples, in particular, poleward transport of heat in the climate system, entropy production in steady state is a very general property of nonequilibrium thermodynamics, so that MEP should be applicable to a wide variety of nonequilibrium systems.*

**MEP and feedbacks**

*One of the most important implications of MEP is that it implies that the associated thermodynamic processes react to perturbations with negative feedbacks in the steady state behavior. This follows directly from the maximization of entropy production, which essentially corresponds to the maximization of the work done and the free energy dissipated by a process, as explained above. Imagine that a thermodynamic flux at MEP is perturbed and temporarily reduced. This reduction in flux would result in a build-up of the thermodynamic force, e.g., temperature gradient in the case of poleward heat transport. In this case, the process would not generate as much kinetic energy as possible. The enhanced temperature gradient would then act to enhance the generation of kinetic energy, and thereby the flux, thus bringing it back to its optimal value and the MEP state. If the boundary conditions that shape the optimum change, then a perturbation of the state would be amplified until the new optimum is reached, which could be interpreted as a positive feedback to the perturbation.*
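The negative-feedback argument can be made concrete with a toy two-box version of the poleward heat transport example, in the spirit of the classic two-box climate models used in the MEP literature (a sketch only; the forcing values and the linearized emission law below are illustrative assumptions, not numbers from Kleidon’s paper):

```python
# Toy two-box climate: a warm (tropical) and a cold (polar) box with
# illustrative absorbed solar forcings (W/m^2) and a linearized
# emission law OLR = K_EMIT * T. All numbers are assumptions.
R_WARM, R_COLD = 300.0, 250.0   # absorbed shortwave per box, W/m^2
K_EMIT = 1.0                    # emission constant, W/m^2 per K

def steady_temps(flux):
    """Steady-state temperatures when a heat flux F (W/m^2) is
    exported from the warm box and imported by the cold box."""
    t_warm = (R_WARM - flux) / K_EMIT
    t_cold = (R_COLD + flux) / K_EMIT
    return t_warm, t_cold

def entropy_production(flux):
    """sigma = F * (1/T_cold - 1/T_warm): heat leaves at high T and
    arrives at low T, so sigma >= 0 while a gradient remains."""
    t_warm, t_cold = steady_temps(flux)
    return flux * (1.0 / t_cold - 1.0 / t_warm)

# Scan fluxes between the two zero-entropy-production extremes:
# F = 0 (no transport) and F = 25 (the gradient is fully erased).
fluxes = [i * 0.01 for i in range(2501)]
sigmas = [entropy_production(f) for f in fluxes]
f_mep = fluxes[sigmas.index(max(sigmas))]
print(f"MEP flux ~ {f_mep:.2f} W/m^2, sigma ~ {max(sigmas):.5f} W/m^2/K")
```

Entropy production vanishes at both extremes (no transport, and transport strong enough to erase the gradient) and peaks in between, here near 12 W/m²; a flux perturbed below that optimum sees a steepened gradient that drives it back up, which is the negative feedback described above.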

*What MEP states is that the functional relationship itself takes a shape that maximizes entropy production and thereby results in negative feedbacks. This maximization can be understood as the direct consequence of the system achieving its most probable configuration of states, as in the case of equilibrium statistical mechanics.*

*This discussion of feedbacks and MEP is quite different from the conventional treatment of feedbacks in climatology, which are usually based on temperature sensitivities. In the usual analysis, the total change in temperature ΔTtotal is expressed as the sum of the direct response of temperature to the change in external forcing (ΔT0) and the contribution of feedbacks (ΔTfeedbacks): ΔTtotal = ΔT0 + ΔTfeedbacks.*

*If the total change in temperature is expressed as ΔTtotal = f · ΔT0, with f being the feedback factor, then a positive feedback is defined as f > 1, while a negative feedback is defined as f < 1. The feedback framework plays a very important role in the analysis of anthropogenic climatic change.*
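A worked example of this bookkeeping (the numbers here are illustrative, not from the paper): with a direct response ΔT0 = 1.2 K and a feedback contribution of 1.0 K, ΔTtotal = 2.2 K and f ≈ 1.83, a net positive feedback:

```python
def feedback_factor(dt0, dt_feedbacks):
    """f = (dT0 + dT_feedbacks) / dT0 as defined in the text;
    f > 1 is a net positive feedback, f < 1 a net negative one."""
    return (dt0 + dt_feedbacks) / dt0

# Illustrative numbers only (not taken from the paper):
dt0 = 1.2      # direct (no-feedback) response, K
dt_fb = 1.0    # contribution of feedbacks, K
f = feedback_factor(dt0, dt_fb)
print(f"dT_total = {dt0 + dt_fb:.1f} K, f = {f:.2f}")
```

A negative net feedback is the same arithmetic with dt_fb < 0, giving f < 1.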

*In principle, one could develop a similar feedback framework using entropy production rather than temperature as the central metric under consideration. The change in entropy production Δσtotal would then be expressed as the sum of the changes due to the external forcings and due to feedbacks: Δσtotal = Δσ0 + Δσfeedbacks, or Δσtotal = f · Δσ0. In steady state, MEP would be associated with f < 1, i.e., a negative feedback as discussed above. In case of changes in the external forcing, these would result in a change in the boundary conditions while the feedback would be associated with the change in internal configuration of the flux and gradients. With a change in external forcing, the tendency of systems to maximize entropy production would then state that, after the change, the feedback factor would initially be f > 1. That is, a small change in the flux would be amplified since the flux is no longer at the MEP state. This tendency would continue up to the point when the flux again reached the optimum value, at which point the feedback factor would change to values of f ≤ 1. This points out that optimality is a strong nonlinear aspect that is unlikely to be adequately treated in a linearized feedback framework. However, more work needs to be done to place MEP and optimality into the common feedback framework.*

**MEP and Earth system evolution**

*However, the Earth system has changed dramatically in the past. The early Earth very likely had an atmosphere with a high carbon dioxide concentration and in which free oxygen was basically absent. Over time, carbon dioxide was removed to trace-gas amounts, while oxygen increased substantially during the great oxidation event some 2.3 billion years ago, and again about 0.5 billion years ago, to near current levels. So how can nonequilibrium thermodynamics inform us about how the evolution of the Earth system has proceeded in the past?*

*Kleidon (2009) proposes that the Earth system over time has evolved further away from the planetary TE state towards states of higher entropy production, and suggests that this overarching trend can be used to derive how the Earth’s environment has changed through time. Central here is that the reference states of TE with respect to motion and fluxes of water and carbon, as described in the section “Entropy production by earth system processes” above, are interconnected. TE at the planetary scale would be associated with the absence of large-scale motion, since only in the absence of motion would there be no frictional dissipation, hence, no entropy production by motion. Such a state of an atmosphere at rest would be saturated with water vapor since atmospheric motion acts to dehumidify the atmosphere. A saturated atmosphere in turn would likely be associated with high cloud cover and no net exchange of moisture between the surface and the atmosphere. This implies that there is no continental runoff, and no associated cycling of rock-derived, geochemical elements. For the geologic carbon cycle, this implies no carbon sink, so that the atmospheric carbon dioxide concentration would be high, in turn resulting in a strong greenhouse effect and high surface temperatures. High surface temperatures would result in ice- and snow-free conditions. Overall, because of the high cloud cover, absorption of solar radiation would be low, as would be planetary entropy production. While it is unlikely that the Earth actually ever was in a state of TE, what is shown in Table 1 nevertheless provides an association of what the Earth’s environment should look like closer and further away from a state of planetary TE.*

*A basic positive feedback between the water, carbon, and atmospheric dynamics was also postulated to be modulated by life: Stronger atmospheric dynamics (“motion”) would result in an atmosphere in which the hydrologic variables would be maintained further away from TE, which would imply a drier atmosphere, higher fluxes of precipitation and evapotranspiration, higher ocean–land transport, etc. This in turn would drive the geologic carbon cycle to lower carbon dioxide concentrations, resulting in a weaker greenhouse effect, which in turn would cool the Earth. A cooler Earth could maintain more extensive snow and ice cover, thus enhancing the radiative forcing gradient between the tropics and the poles. This, in turn, would strengthen the atmospheric dynamics and close the positive feedback loop.*

*This positive feedback would cause fundamental, thermodynamic thresholds in the whole Earth system. These thresholds would imply that planetary entropy production would unlikely increase continuously during the evolution of the Earth system, but in a stepwise fashion. Once such a thermodynamic threshold is reached, the positive feedback would cause the Earth system to rapidly evolve to a state of higher entropy production, after which the system would be maintained in a stable, MEP state.*

*These climatic trends associated with how far the Earth system is maintained away from TE at the planetary level could help us to better reconstruct and understand the past evolution of the Earth system. This would, however, need to be further evaluated, e.g., with more detailed simulation models that explicitly consider the nonequilibrium thermodynamic nature of Earth system processes.*

**Summary and Conclusions**

*At the same time, a more solid foundation of MEP is needed. Once this foundation is successfully established, it implies that the dynamical description of complex systems far from TE follows from the maximization of entropy production. This would have quite far-reaching implications for how we model the Earth system and understand Earth system change. It will provide us with a fundamental approach to understand the success of optimality approaches that have previously been used to understand complex systems. Nonequilibrium thermodynamic measures such as entropy production may also be a more useful property to express climate sensitivity than the conventional temperature measures, as it is closely associated with the dissipative activity of the process under consideration.*

*In conclusion, nonequilibrium thermodynamics and MEP show great promise in allowing us to formulate a quantifiable, holistic perspective of the Earth system at a fundamental level. This perspective would allow us to understand how the Earth system organizes itself in its functioning, how it reacts to change, and how it has evolved through time. Further studies are needed to better establish the nonequilibrium thermodynamic basis of many Earth system processes, which can then serve as test cases for demonstrating the applicability and implications of MEP.*

**JC comments:** The 2nd law of thermodynamics is an underutilized piece of physics in climate science. It is not a simple beast to wrestle with, but I think there are some important insights to gain. Optimality, self-organizing criticality, and nonlinearity are factors that are not adequately accounted for in traditional climate feedback analyses, and an entropy-based framework would be more consistent with the climate shifts that are actually observed.

With regards to the previous too big to know post, it is this kind of analysis and conceptual framework that is needed to advance our understanding, an idea that provides a blueprint for assembling the bricks into a structure.

Thank you for the non-paywalled link here.

Yes, Axel Kleidon’s opening statement, “The Earth system is maintained in a unique state far from thermodynamic equilibrium, . . .” is beautiful and could have been referenced in the 2011 paper Karo Michaelian and I wrote on the origin and evolution of life, “Life arose as a non-equilibrium thermodynamic process to dissipate the photon potential generated by the hot Sun and cold outer space.”

Humility is the admission price to reality.

World leaders print money.

Mysterious science solved.

Leaders of nations and experimental sciences compromised observations in an attempt to control reality after ~1971*.

We have limited control over things in their own ego cages: Cause-and-effect controls everything outside – in reality.

*Science 174, 1334-1336 (1971); Nature 240, 99-101 (1972); Trans. MO. Acad. Sci. 9, 104-122 (1975); Science 195, 208-209 (1977); Nature 270, 159-160 (1977); Science 201, 51-56 (1978); Geochem. J. 15, 245-267 (1981); Meteoritics 18, 209-222 (1983); Astron. Astrophys. 149, 65-72 (1985); Meteoritics Planet. Sci. 33, A97 (1998); ibid., 33, A99 (1998); J. Fusion Energy 19, 93-98 (2001); 32nd Lunar Sci. Conf., paper 1041, LPI Contribution 1080, ISSN No. 0161-5297 (2001); J. Fusion Energy 21, 193-198 (2002); National Geographic Magazine, feature story: “The Sun: Living with the Stormy Star” (July 2004).

“Nonequilibrium thermodynamic measures such as entropy production may also be a more useful property to express climate sensitivity than the conventional temperature measures, as it is closely associated with the dissipative activity of the process under consideration.”

Separating work and entropy in the maintaining of the lapse rate is one of my issues. It is also one of the issues that seems to be throwing the Unified Climate guys and the atmospheric mass fans into a tizzy. Even Willis is bouncing around the issue: conductivity. CO2 enhances the energy transfer in collisions of molecules in a mixed gas environment.

Call me the Girma of conductivity. :)

Which brings me back to MLEV. The Arctic has large enough variations in mixed-phase clouds to be noticeable. I see no reason why the same situation, on a less noticeable scale, is not equally possible anywhere in the atmosphere approaching the same temperature range and moisture level. Low clouds in the Arctic would be similar to a little higher, but still lower with respect to the average radiant layer clouds as you approach the tropics. Virtually transparent clouds are still a bit of a thermodynamic mystery and quite capable of being mixed-phase under some conditions.

Anywho, that is a part of my crackpot theory, because below that cloud layer region, conductive transfer would be more dominant relative to radiant transfer than seems to be estimated.

“CO2 enhances the energy transfer in collisions of molecules in a mixed gas environment. ”

So, CO2 acts as an atmospheric cooler, not a heater. I agree with stefanthedenier on this. CO2 has been misconceptualized as a GHG from the very start of the theory, or should I say hypothesis.

@ “bollocks”

I’ve been wondering about this and wanting to discuss it. I don’t actually see a “refutation” of the GHGs cool the earth idea on, say, SkS, but here is what I think:

You get absorption of the IR via GHGs. That “excites” GHG molecules in various vibrational quantum states. Since they’re constantly colliding with other gas molecules, including non-GHGs, some of that energy is transferred via collisions with the non-GHGs into rotational or translational motion, which does not re-radiate significantly, leading to net warming.

OTOH, GHG molecules are constantly getting smacked by other molecules and absorbing thermal energy that way, which they are then free to re-radiate, leading to net cooling.

I think the key is that, for cooling, the probability of the kinetic absorption coinciding with an energy state that allows the GHG molecule to dissipate the energy via radiation is much lower than the probability that it will transfer some of the absorbed IR kinetically. So net warming results. Perhaps in a highly saturated, IR-opaque atmosphere, GHGs would provide a net cooling.

BillC,

Far from the surface, the warming by absorption of IR and transfer of that energy to kinetic energy is almost exactly as strong as the cooling due to the inverse process. That’s what thermal equilibrium is about. There’s a very small warming effect, because the extra radiation from warmer layers below is stronger than the reduced radiation from cooler layers above, but the net effect is small.

Near the surface there’s extra IR from the surface, and warming is significantly stronger. That’s one of the ways the surface transfers energy to the atmosphere.

Pekka,

This appears also to be the mechanism by which increasing CO2 concentrations cool the stratosphere and higher levels…basically, the CO2 is able to absorb more collisional energy here and release it by radiation, than it can absorb upwelling IR and release it via collision.

It’s interesting what it says about rates.

If I am an individual CO2 molecule near the surface, bumping into other molecules every nanosecond(?) or so, it is still more likely that I will absorb an IR photon between collisions, than collide and gain enough energy to radiate an IR photon (and even if I do, I radiate it in all directions, whereas the “incoming” radiation is greater from below than above). Yet I imagine that collisions of all sorts must happen more frequently than IR absorption (since we are talking about near-saturated conditions), and so the PDF of my energy gain from any collision must reside pretty far below the energy of the relevant IR photons.
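The “every nanosecond(?)” guess above can be checked with a back-of-envelope kinetic-theory estimate (the collision cross-section here is an assumed round number, so the result is order-of-magnitude only): collision frequency ν = n·σ·v̄, with number density n = P/(kT) and mean molecular speed v̄ = √(8kT/πm):

```python
import math

# Rough kinetic-theory collision frequency at surface conditions.
# SIGMA is an assumed round-number cross-section; the result is an
# order-of-magnitude estimate only.
K_B = 1.380649e-23       # Boltzmann constant, J/K
P = 101325.0             # surface pressure, Pa
T = 288.0                # near-surface temperature, K
M_AIR = 4.8e-26          # mean molecular mass of air, kg (~29 amu)
SIGMA = 4.0e-19          # assumed collision cross-section, m^2

n = P / (K_B * T)                                    # number density, m^-3
v_mean = math.sqrt(8 * K_B * T / (math.pi * M_AIR))  # mean speed, m/s
nu = n * SIGMA * v_mean                              # collisions per second
print(f"n ~ {n:.2e} m^-3, v ~ {v_mean:.0f} m/s, nu ~ {nu:.1e} s^-1")
```

This gives a few times 10⁹ collisions per second, i.e., several collisions per nanosecond near the surface, so collisional energy exchange in the dense troposphere is enormously faster than spontaneous infrared emission; that is why local thermodynamic equilibrium holds there and begins to break down in the thin upper stratosphere.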

Stratosphere is, indeed, different, in particular the upper stratosphere that’s heated by solar UV. There the temperature of air is significantly higher than the temperature equivalent of the intensity of IR. Gas is heated by UV and loses energy by IR; the coupling between them is rather weak, as the collision rate is relatively low.

Under these conditions the “temperatures” related to UV-induced molecular excitations, kinetic energy of the molecules, and molecular excitations that correspond to the IR energies are all significantly different. The different degrees of freedom are not in thermodynamic equilibrium or even close to it, as they are in the denser troposphere.

I put the word “temperature” in quotes, because the temperature is not perfectly defined without full local thermal equilibrium, and the deviations of the occupancies of the vibrational states are also a breakdown of local thermal equilibrium.

Pekka

Sreekanth Kolan extends Robert Essenhigh’s detailed thermo model of the lapse rate to above 11 km. See:

Study of energy balance between lower and upper atmosphere

The variation in temperature is shown in his Fig. 17 p 53.

Do you have any thoughts on his development?

David,

The Master’s Thesis that you linked is quite interesting and has some results that I haven’t seen elsewhere. The altitude profiles shown in Figure 16 are perhaps the best example.

The model used is, however, based on a simplified integral equation, which is known to be only approximately valid, as stated also in the paper and in Essenhigh’s earlier papers. It’s a nice simplification, which certainly gives qualitatively correct results and some new insight into the phenomena. Still, it’s not accurate enough to serve as a real alternative to more accurate numerical models. It may be very near the best that can be achieved by such simple methods, but not more than that. The most severe approximation is probably the use of a single gray-body equivalent absorption-emission factor for the atmospheric gas rather than a set of banded absorption factors.

As in most cases where such simplified models are used, it’s not easy to estimate how far the quantitative results are from the correct ones or where they deviate most seriously. The insight obtained by the simple models is useful, but more accurate models must be used to check the quantitative validity of the insight.

All the above applies to what we can learn from the basic assumptions by comparing simpler approximate solutions of those assumptions to more accurate calculations. The further and very essential problem concerning the validity of the basic assumptions applies to both. Therefore I consider it an objective statement that Essenhigh’s approach is less accurate than many alternatives. That statement by itself is not a claim on the good accuracy of any model in describing the real atmosphere, only a comparison of two classes of models.

The comment on the relationship between the temperature and ozone concentration appears to apply to the approach of the study rather than science in general, as there are numerous other studies where the properties of the stratosphere are analyzed, while the paper doesn’t link to any of these. Here we see again that this is a Master’s Thesis, not a fully finalized scientific paper, where such a comment without references would not be acceptable.

Thanks Pekka

These could be enhanced by Ferenc Miskolczi’s line-by-line quantitative infrared absorption models. He includes all 11 greenhouse gases across 3490 IR lines, with 9 different view directions, for 150 layers, etc.

David,

I think that it’s not possible to use Essenhigh’s approach with line by line absorption coefficients. That’s at least what I remember from the time I looked more closely at his work.

Pekka

Is the difficulty with using LBL absorptivity evaluation because that would vary the absorptivity with elevation, making it difficult to do the analytical integration?

Could the method otherwise be used with numerical integration?

The full equations with wavelength-dependent absorptions cannot be solved analytically. Solving them directly leads essentially to those numerical methods that have been developed over the years by scientists and that are in regular use, like Modtran.

One of the major differences is due to the fact that a fraction of radiation penetrates long distances in the atmosphere, or even through the whole atmosphere, without being absorbed on the way. The integral equation is based on the assumption that all radiation has a free path that’s so short that the temperature is only a little different at the points of emission and absorption.

As my Venus/Earth temperature comparison definitively demonstrates, increasing atmospheric carbon dioxide neither increases nor decreases atmospheric temperature. Adding more carbon dioxide (Venus has 96.5%, to Earth’s 0.04%) merely increases the efficiency, thus the speed, with which deviations from the thermodynamically predominant, gravitationally-imposed lapse rate structure are dissipated. The Standard Atmosphere rules, the atmosphere is stable — not balanced on a knife edge between runaway heating and ice age as “consensus” science believes — and particularly so against changes in carbon dioxide concentration. These are the simple facts provided by the proper comparison of Venus and Earth, which current science needs to face, but which everyone seems determined to ignore.

“which current science needs to face, but which everyone seems determined to ignore”

because it’s bollocks

@lolwot – “which current science needs to face, but which everyone seems determined to ignore”

because it’s bollocks

Can you elaborate on why this comment is bollocks?

yes

See my response above to Sam NC, which probably should have been a response to Capt. (or lolwot or Veritas) in order not to lose the threading.

This is an interesting perspective by Kleidon – one that he has apparently developed over many years. It will take me a while to digest the entirety of the full text version (which is worth visiting). My impression is that the MEP is reasonably consistent with existing data, but oversimplifies. For example, Kleidon illustrates one element by postulating an MEP feedback relationship in which surface warming induces an increase in cloud cover, but that appears to be an overgeneralization – see for example Variations in Cloud Cover. Apparently, the climate system is more complex than encompassed in Kleidon’s perspective. This doesn’t invalidate the MEP as a principle, of course, but it raises two questions: (1) is the MEP falsifiable, or is it so flexible that it can be adapted to fit any set of observations?; and, related: (2) is the MEP a more useful construct for understanding climate dynamics than the more conventional approaches? It should be noted that the two are not obviously in contradiction, and if the MEP is flexible enough, they never will be. (The issue of falsifiability has of course been raised many times in these threads, but relating it to mainstream climate science is a topic far too broad to resolve here)

For an alternative to Kleidon’s perspective, although not an outright rejection, see Ken Caldeira’s editorial in Climatic Change.

Kleidon discusses Caldeira’s argument in the paper

Yes, his comment is as follows: “Kleidon (2007) points out the lack of appreciation of the thermodynamic nature of Earth system processes and several misunderstandings. Caldeira (2007) raises the comment that thermodynamics and MEP ‘may be true, but trivially so.’ Without going into much further detail on these discussions, one issue that becomes clear from these criticisms is that the thermodynamic basis of the Earth system far from TE seems to be mostly misunderstood and needs to be clarified further. At this point, no one would argue that MEP is well established and that applications are without their limitations.”

I think that’s an appropriately cautious statement, although exactly by whom the thermodynamics are “misunderstood” isn’t specified. I think it’s the “limitations” Kleidon refers to that need to be better defined by testing against real world data. I also want to reread the paper to better appreciate some of the specific principles that Kleidon calls on to develop his MEP approach.

Fred, I am going to have to either get better glasses or fix my printer, but it deserves a good poring over. From what I have read so far, it tends to agree with what I would expect from a non-ergodic system. The biggest problem I see is the magnitude of the various impacts with changing conditions. If CO2 forcing is overestimated for current conditions, which is what I suspect, the relative magnitude of the various feedbacks takes on a different light.

I’m not sure what point you’re making, Dallas. Kleidon doesn’t quantitatively go into the magnitude of feedbacks, nor does he dispute mainstream estimates of their magnitude. This is one of the reasons why it’s not clear how his approach would differ in its ultimate conclusions from mainstream approaches.

Fred, Kleidon’s approach uses the 2nd law of thermodynamics. The mainstream approach uses the first law of thermodynamics. Different physics are in play here; there is no reason to expect the same answer, but also no evidence here of a different answer. The linear approach that evolves from energy balance models is almost certainly oversimplistic, IMO.

It’s my impression that mainstream approaches recognize the need to conform to both laws of thermodynamics. I’m not sure I see the particular relevance to energy balance models here, but if their limitations are a concern, this must be spelled out mathematically, explaining what and by how much a preferable approach would deviate. I actually think that those who utilize these models have done that to a commendable extent (e.g., Gregory and Forster, Padilla et al., and others), and have provided uncertainty estimates that are reasonable in regard to transient climate responses, but if anyone disagrees, then we need the exact numbers that represent the area of disagreement.

Nope, 2nd law does not appear in the traditional analyses of feedback.

The paper parallels some of the things I have been trying to quantify. When I work from glacial conditions to today, I get a better fit for the CO2 temperature change, with an indication that CO2 is approaching the limit estimated by Callendar in the 1930s and, going back to Arrhenius’ paper, his estimate once the overly optimistic H2O feedback is removed (the 1.6 (2.1) with water vapor). I don’t know if you are aware, but the approximate concentrations in his 1896 paper are 187 ppm for the 0.67K and 420 ppm for the 1.5K, where K is the CO2 concentration he used relative to his time. It is listed in the last table of his paper. You can compare the current observed to his estimates by latitude, which tends to drive me toward Callendar and Manabe.

That significantly reduces the surface impact of CO2 doubling, to approximately 0.8 to 1.2 C, which changes the relative magnitude of the feedback potential of clouds and even my radical conductivity angle, though that is an approximate millennial-scale feedback.

It is interesting to me, but I am looking at the entire range of climate, glacial to interglacial not just end of the interglacial.

Judy – I would argue that the Second Law is critical to feedback analysis, because it’s the basis of the Planck Response (via Stefan-Boltzmann) that limits feedback amplification of forcing to defined levels rather than allowing runaway climates due to positive feedbacks from water vapor and other moieties. It’s implicit in standard feedback parameter estimates, and is reflected in the Taylor series describing feedback iterations of fractional value f that lead to the factor 1/(1 - f) by which no-feedback responses are multiplied. Without the Second Law, the other feedbacks would destabilize climate rather than simply leading to stabilization at a new temperature.
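The geometric-series origin of the 1/(1 - f) amplification mentioned above can be sketched numerically; the values below (a no-feedback Planck response of 1.2 K and f = 0.4) are illustrative assumptions, not numbers from this thread:

```python
# Illustrative sketch: iterated feedbacks of fractional gain f
# sum as the Taylor/geometric series 1 + f + f^2 + ..., which
# converges to the amplification factor 1/(1 - f) whenever |f| < 1.
def amplification(f, n_terms=1000):
    """Partial sum of the geometric series 1 + f + f^2 + ..."""
    total, term = 0.0, 1.0
    for _ in range(n_terms):
        total += term
        term *= f
    return total

f = 0.4                      # assumed fractional feedback gain
dT0 = 1.2                    # assumed no-feedback (Planck) response, K
dT = dT0 * amplification(f)  # amplified response

print(round(amplification(f), 6))  # 1.666667, i.e. 1/(1 - 0.4)
print(round(dT, 3))                # 2.0 K
```

Note how the series only stabilizes for f below 1; at f = 1 or above the sum diverges, which is the runaway case the comment says the Planck Response prevents.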

I agree 2nd law is critical to feedback analysis, but I don’t see the 2nd law explicitly used in what you discuss

I don’t think he means feedback in the sense that it’s used in climate sensitivity. I think he’s using it to mean the way a steady state is achieved. I think this does need to have a few more equations.

P.E. – I agree that Kleidon is using feedback as the process generating a steady state, and that “feedback” as typically (but not invariably) used in climatology lingo only refers to part of that process. Nevertheless, the final principles are the same, because the part often excluded from the formal use of “feedback” in climatology is the Planck Response: the tendency of a warmer body to shed more heat in accordance with the Stefan-Boltzmann equation and the Second Law of Thermodynamics.

Even though it is not called a feedback, the operation of the Planck Response is fully incorporated into climate models and feedback analysis. Therefore, from both the standard and the Kleidon perspective, the climate follows what in standard control theory would be referred to as negative feedback. It’s too bad climatology has adopted its own terminology, but we have to accept it because it’s currently too deeply entrenched to change. Some references to feedback, however, do in fact include values for the Planck Response – e.g., Soden and Held 2008.

In response to Dr. Curry, I agree that the Second Law, explicitly addressed in the MEP approach, is not explicit in conventional feedback analysis.

You’re reading it the same way I am, Fred. It might have some impact on climate feedback, but it’s not obvious that it will.

All of the thermodynamics is totally dependent on the use of the second law. With the first law alone almost nothing can be said. As a specific example, the derivation of the adiabatic lapse rate is directly linked to the second law, as the concept “adiabatic” gets quantitative meaning only through the second law.

One can proceed without an explicit reference to the second law, when one uses concepts, which contain implicitly the second law, but that doesn’t mean that the second law would not be used in a totally essential way, when that’s done.

“It will take me a while to digest the entirety of the full text version.” Kleidon has obviously complicated simple thermodynamics very successfully! I admit I did not bother to go for the full text.

This paper is a good follow-up to the Too Big post because it applies free energy arguments to demonstrate how to simplify our thinking about perturbed steady state systems. Notice how Kleidon aggregates entire categories of dissipation structures to extract the pertinent info.

I use the maximum entropy principle to estimate distributions in many applications and can see how the maximum entropy production principle is a natural progression to the basic idea. The http://AzimuthProject.org blog has recent posts on this topic and the related minimum energy principle. I sense that there is momentum in the belief that the general approach will help solve some of the climate change problems mired in complexity.

As a nitpick, Kleidon probably should have used the acronym MEPP to distinguish it from MEP. There are three different principles with the same acronym (Maximum Entropy Principle, Maximum Entropy Production, and Minimum Entropy Production).

WHT, any specific links at azimuth we should be looking at?

The Azimuth post called Quantropy is interesting, and the comments are still active:

John is trying to unify the methods that get to an energy minimum (dynamics, MEPP) and the methods used at an energy minimum (statics, MEP). It’s fun to follow because they never know where the math will take them.

The believers will be displeased.

The deniers will be displeased also, CO2 has an impact, just not the exact impact predicted. If it is any consolation, land use has a much bigger impact :)

Capt, the numbers who believe CO2 has no role at all have been inflated far more than those who believe CO2 is THE driver of climate. The biggest point this paper makes, imho, is that the science is not only not settled, it is not even well defined by the AGW leaders. The other is that it also takes down the positive-feedback runaway tipping point boogeyman.

Hunter, I agree, though I am not sure how inflated the numbers may be.

From both the empirical and modelling work I’ve seen on land use, I don’t know how you arrive at such a certain conclusion. It’s hardly a settled matter.

Those who scorn the CAGW consensus, because they don’t get basic radiative physics, are right for the wrong reason. If they are voters, then we can call them useful idiots.

Steven Mosher said, “From both the empirical and modelling work I’ve seen on land use, I don’t know how you arrive at such a certain conclusion. It’s hardly a settled matter.”

It probably will never be a settled matter. It is hinted at quite strongly though. Since you have all the surface station data, if you compare minimum temperatures for true rural, the remote state and federal parks with suburban, farm land and urban, you should see a land use related trend. The UHI is obvious, but weighted properly should be included.

How significant that trend is would depend on how significant the true CO2 trend is. Your estimate is 1.5 for doubling, mine is 0.8 at the surface for a doubling.

Capt.,

I don’t think he has any data from areas that are truly rural. They did not put thermometers, where there weren’t any people to read them. Remote sensing posts in parks, etc. are a recent development. See the RAWS system:

http://www.raws.dri.edu/

Not much to work with.

Funny how any uncertainty has to work in one direction.

Once again no-one even suggests that uncertainty here might make climate sensitivity even higher. Everyone just presumes it would make it lower if anything.

lolwot, sensitivity can be greater at times and less at others, since the response times of the mixing layers vary. This just indicates that there are limits, maximum/minimum entropy, which are not exactly easy to figure out for each variable.

lolwot,

Yes, for decades the team’s consensus was never doubted.

Now it is clear that they were massively wrong. The science is not even well framed, much less settled.

It is astonishing that you have forgotten the Lovelock/Hansen school of fear mongering, with Earth becoming Venus.

We have had plenty of push to the extreme side of sensitivity.

Now we are presented with a sound case that shows this to be wrong, but you are complaining? It seems you are only really complaining about your side not controlling the conversation so much.

“Yes, for decades the team’s consensus was never doubted.”

Not true. The dominant theory has always been questioned and always will be. Every year alternative ideas are produced that either proclaim to overturn the dominant theory, or potentially could. From “Pressure-induced Thermal Enhancement” to “Skydragons” to cosmic rays. All these ideas hold the possibility of changing the mainstream position on climate sensitivity if they turn out correct.

But they also have a history of being shot down in droves, which is why the mere existence of such alternative ideas doesn’t alter my acceptance of the dominant theory one bit. Until such alternative ideas actually get traction in the scientific community and become widely accepted I will regard them as unlikely possibilities. Entirely compatible with my view that there is an *unlikely chance* that climate sensitivity is low.

Some people on the other hand will cling to these alternative ideas and overplay the likelihood of them turning out true so they can pretend the dominant theory is in severe doubt.

Yet if these alternative ideas *were* so challenging to the dominant theory then when they fall (which many of them do as mentioned above) it should provide a credibility boost to the dominant theory – ie it has just survived an important challenge.

But certain people will just throw alternative ideas down the memory hole when they fall and move on to new ones while not altering their perception of the dominant theory at all.

Is the part about “controlling the conversation” an admission that you only talk about these alternative ideas so that the subject matter is anything but the dominant theory? As if we had thousands of threads about the greenhouse effect being a fraud somehow that would rub off enough uncertainty that we could excuse ourselves for not accepting the GHE?

Certainties or uncertainties here have no effect whatsoever on climate sensitivity. In your mind you might be an all-knowing deity, but in reality, you are just a dumb human like the rest of us.

JC wrote (emphasis by this poster);

“The 2nd law of thermodynamics is an underutilized piece of physics in climate science. IT IS NOT A SIMPLE BEAST TO WRESTLE WITH, but I think there are some important insights to gain.”

With all due respect, the Second (and also the First) Laws of thermodynamics are in fact very simple beasts to deal with. If your hypothesis or analysis or model appears to violate the Laws of Thermodynamics it very likely does (99.999%) and you should step back from your computer screen and start over in a few weeks.

When I present my engineering design for a peer review (yes, we do those too) I would be ashamed if my peers suggested that my design violates the Laws of Thermodynamics, that is like the ultimate shame, I would slink away.

Apparently in the climate science field, when professionals from other fields suggest that your theories may violate the Laws, the response is to ridicule the suggestors. Well… you can take that approach if you want to, but the Laws of Thermodynamics will “bite you (someplace uncomfortable)” if you choose to ignore them.

Back in the day all engineering curriculums required a “thermodynamics 101” course which (if you passed it) gave you plenty of tools to “wrestle with the beast”. Perhaps climate science curriculums should add this course? There are lots of textbooks and the basics have been “settled science” for about two centuries.

The basic summary is; Heat (also water) flows to colder (lower) locations, it does this all the time, at all locations. It does this at different velocities depending on the material(s) it is travelling through. Electromagnetic radiation (i.e. Infrared Light) travels through the system at about the speed of light, which is SIGNIFICANTLY faster than any known velocity of heat flow.

Any climate science hypothesis that is described with or alleges “Net Energy Gains” or “Extra Energy” violates the First Law and properly trained engineers shake their heads when they hear these terms.

In summary, my hypothesis is that increases in “GHGs” in the atmosphere only cause the Gases in the atmosphere to warm up/cool down faster when the arriving amount of energy increases/decreases (i.e. sunrise/sunset).

In the Electrical Engineering field we refer to this as the “response time” of a circuit/system.

Cheers, KevinK (MSEE, Georgia Tech 1981)

I think the main issue with climate science and the second law is the merging of control theory, when they use the feedback parameter 1/(1-f). It looks perfectly proper, but f should be limited to the range 0 to 0.5, to allow for minimum entropy, or the perfect return for isotropic greenhouse gas. At least that is what I come up with using multi-disc models.

Well, your hypothesis needs work. Modifications to the Planck response by GHGs can obviously change the average temperature of an emitting body. This is perfectly in keeping with energy conservation and the laws of thermodynamics. Statistical mechanics is an outgrowth of this, and climate scientists do understand these fundamental principles.

Web, of course they can, but what is the limit? I say the limit is 2 for perfect insulation or perfect return of OLR. Where does that require modification? Venus?

Dear WebHubTelescope;

Please define exactly what you mean by “Modifications to the Planck response by GHGs can obviously change the average temperature of an emitting body.”

My understanding is that the “Planck response” is a model that predicts the spectral content of the radiation emitted by a surface. This model is in fact pretty good, but NO REAL emitting surface EXACTLY matches this model (not the SUN, not the Earth, not Edison’s lightbulb).

In any case the “GHG’s” are not capable of modifying the “Planck response” of a surface UNLESS they can raise its temperature. Per the Second law a COLDER MEDIA CANNOT RAISE the temperature of a WARMER MEDIA. Yes I know that a colder media can deliver heat to a warmer surface (i.e. “warm” it) but unless the colder media can slow the rate of cooling enough to cause the warmer media’s temperature to rise it cannot modify its Planck response.

My hypothesis states that the “GH” effect does not slow the rate at which a surface cools, in fact by displacing “non-GHG’s” the “effect” actually INCREASES the rate at which a surface cools (or ironically enough the rate at which it warms), albeit by such a small amount we will probably never be able to measure it.

Climate scientists seem to have confused two effects, one is the rate at which a surface emits radiation while cooling (the number of ping pong balls I can throw every minute) with how fast the radiation travels away from the emitting surface (the speed of the ping pong balls, some of which return (i.e. back radiation) which I then throw again). Climate scientists have done a good job on calculating effect number one, while completely overlooking effect number two. To truly know how many “extra” ping pong balls there are left at the surface you need to consider BOTH effects.

Cheers, Kevin.

The ping pong analogy is for ding dongs.

It all sounds like SkyDragon talk.

Hey there WebHubTelescope, I certainly appreciate your equating my ping pong ball analogy with the “ding dong” ball case.

However, please note that the believers in the “greenhouse effect” have failed after THREE FULL DECADES to observe their alleged effect.

So, please continue with your beliefs, but the REAL WORLD is not accommodating you at this time.

Cheers, Kevin.

I don’t quite understand this post. It is intriguing that there is another maximization principle we can use in our modeling. These usually give rise to superior methods because they conserve critical quantities. For example the Bateman variational principle in fluid dynamics.

However, and this is a critical point, these principles are ONLY useful when they are discretized and used to predict dynamics. These principles are very weak constraints in terms of what the actual behaviour is. To state that energy is globally conserved tells us virtually nothing about a system of any importance. To say that mass, momentum, and energy are conserved over EVERY control volume tells you virtually everything about the system.

One thing is indeed true and that is that formulating things in terms of an optimality condition subject to constraints can yield a tremendous variety of ways to look at the problem, most are of no utility whatsoever.

David – Like you, I’m concerned about the utility of the Maximum Entropy Production (MEP) Principle. To me, there is great appeal in the notion that the principle of maximum entropy defining an equilibrium state can be extended to MEP for a steady state. I’d like to have it turn out to be true, and I want to reread Kleidon for more insight into the evidence so far, which I take to be inconclusive. On the other hand, I suspect it’s easier to compute what maximizes entropy to produce an equilibrium than to compute what might maximize entropy production to produce a steady state – I’m not sure about this and would like to learn more, but the number of possible real world variables may be too formidable to render the concept very useful.

It’s also true that our climate is affected by external factors that follow their own entropy considerations. These include varying solar output and Earth/sun geometry, as well as tectonic shifts that affect circulation patterns and the carbon cycle. Sorting out the contributions of these factors vis-a-vis internal climate MEP will be difficult. A critical question will be whether MEP can help us understand or predict climate dynamics better than standard approaches. Predicting the past may be a rough guide, but a real test of competing theories is how well they predict future events. Given the slowness of climate change, that may require some time to find out.

When you start documenting everything that can be potentially modeled with the maximum entropy principle, the list starts to get impressive.

For example, distributions of:

Wind speed

Wave height

Atmospheric pressure

Planck response

etc

This is directly a consequence of nature tending to disorder, and as Jaynes explains the close ties between maximum entropy, statistical mechanics, the second law, and conservation laws. Applying constraints under uncertainty is what makes the approach so practical. In many cases, all we know are the moments of physical observables, such as the mean, and that is perfectly adequate for the maximum entropy principle.
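As a small check on the moments point above: among distributions on [0, ∞) with a given mean, the exponential is the maximum entropy choice (a standard Jaynes-style result). A minimal sketch, comparing it analytically against a same-mean uniform; the mean value used is arbitrary:

```python
import math

# Among distributions on [0, inf) with fixed mean m, the exponential
# maximizes differential entropy. Compare with a same-mean uniform
# on [0, 2m] as a sanity check.
def entropy_exponential(m):
    """Differential entropy of Exp with mean m: h = 1 + ln(m)."""
    return 1.0 + math.log(m)

def entropy_uniform(m):
    """Uniform on [0, 2m] has mean m and entropy ln(2m)."""
    return math.log(2.0 * m)

m = 5.0  # assumed mean of the observable, arbitrary units
print(entropy_exponential(m) > entropy_uniform(m))  # True, since 1 > ln 2
```

The gap, 1 - ln 2, is independent of the mean: knowing only the first moment, the exponential is always the least-committal (highest-entropy) choice.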

WHT,

But the question remains: How should the problem be framed? What is the system whose entropy production is considered, and what constraints are applied to restrict immediate transition to the state of maximum entropy?

The Kleidon paper is a live example that framing the problem is rather arbitrary and results are totally dependent on the particular choices made. That becomes obvious when one starts to go through its basic example and to ponder why it’s set up as it is.

I’m sure that there are problems where some particular choices are more natural than others, but the problem of arbitrariness remains at some level. I haven’t looked at your work, and don’t say anything on that, but the Kleidon paper is not in the least convincing.

Pekka, it is true that some of the variational problem-solving approaches can seem kind of arbitrary, and that is why I have been chipping around the edges instead of trying to solve the whole ball of wax (so to speak).

Take wind energy, for example. Would you tend to believe that the aggregate wind energy summed over the entire planet approaches a constant value? I can’t imagine this amount of kinetic energy fluctuating that wildly. In my interpretation, this leads to a spatio-temporal distribution of wind speeds which complies with a maximum entropy principle. That is one chip off the block, and the entire system can get similarly decomposed. Kleidon, in my view, may be treating these as interlocking pieces in the bigger puzzle.

One of the seminal papers in this field – proposing a maximum entropy principle for poleward energy movement – was by Garth Paltridge in the 1970s. Paltridge is a prominent skeptic.

Paltridge looks to have run into the same problem most scientists have: it’s complex. Different boundary conditions lead to different preferred steady states, which leads to the extremal principles.

One of the most difficult to figure out, laminar flow regions, has a long history of aggravating scientists and engineers. Another is chemical reactions that are reversible. Add the right amount of the right kind of energy and the preferred direction can change.

The Paltridge criticism is mainly that he misinterpreted the extremal principle, which I would think is open to pretty wide interpretations, since extremal principles would lead to: “At present, for this area of investigation, the prospects for useful extremal principles seem clouded at best. C. Nicolis (1999)[52] concludes that one model of atmospheric dynamics has an attractor which is not a regime of maximum or minimum dissipation; she says this seems to rule out the existence of a global organizing principle, and comments that this is to some extent disappointing; she also points to the difficulty of finding a thermodynamically consistent form of entropy production” (Wikipedia), which is funny to me, because MEP and extremal principles tend to indicate exactly what happens in the climate: non-ergodic behavior.

So the question I think remains, what trips the switches?

No one can go too far wrong starting with the fact that Einstein’s appreciation for classical thermodynamics would have from the outset distanced him from the pseudo-science runaway global warming fearmongers.

There may well be very few skeptics who actually believe that CO2 does not cause a “greenhouse” effect. There are certainly no mainstream scientists who believe that runaway global warming is likely. Dangerous warming yes but runaway no.

So Hansen is no longer mainstream?

http://www.huffingtonpost.com/dr-james-hansen/twenty-years-later-tippin_b_108766.html

Glad you cleared that up.

And this guy is not mainstream?

So will climate scientists come out and condemn this founding member of AGW?

http://www.theregister.co.uk/2011/11/30/climate_tipping_points/

The IPCC Fourth Assessment Report talks about runaway climate change: “Anthropogenic warming could lead to some effects that are abrupt or irreversible, depending upon the rate and magnitude of the climate change.” That sure sounds like a tipping point. And certainly those of us who have followed the devolution of the IPCC would agree that it is not a mainstream science group, but rather a political marketing group.

Is this one of the kook, non-mainstream scientists you were talking about?

Neither a tipping point nor abrupt climate change corresponds to runaway global warming, and saying that “over a few centuries it is conceivable that …” is not equivalent to saying that it is likely.

Maybe Kleidon and Trenberth can get together and merge their diagrams and give a better picture of the energy flows involved.

I also don’t think a first law of thermodynamics approach is better or worse than a second law approach, as both need to be observed to solve the problem.

Maximum entropy production means to me that if the earth’s systems gravitate to that state, then the earth will warm as fast as possible, since entropy can be thought of as the energy you can not get any useful work out of and just becomes waste heat. So to me, MEP means maximum heat production.

Nothing in that paper that any warmist would have any problem with, except the anti woo Gaia stuff and the wooish holistic stuff. Why did he put that into what was a pretty good read and worth further study?

bob droege said;

“Maximum entropy production means to me that if the earth’s systems gravitate to that state, then the earth will warm as fast as possible, since entropy can be thought of as the energy you can not get any useful work out of and just becomes waste heat. So to me, MEP means maximum heat production.”

LOL. You have demonstrated the mistake that many climate scientists and posters are making. Energy can take many forms (heat, kinetic, potential, etc.). What makes you think that entropy losses necessarily turn into heat in the climate system?

What part of Judith’s statements, “Nope, 2nd law does not appear in the traditional analyses of feedback”, and “Fred, Kleidon’s approach uses the 2nd law of thermodynamics. The mainstream approach uses the first law of thermodynamics. Different physics are in play here, there is no reason to expect the same answer”, do you not understand?

I don’t get the different physics are in play here part.

Maybe they should use this equation instead of picking either the first or second law

dU = TdS – PdV

“The different physics are in play here” is the fundamentally most perplexing thing Judith has written on this blog, in my opinion.

LOL, you have demonstrated the mistake many have made. Energy can take many forms (heat, kinetic, potential, entropy, etc.). What makes you think entropy can turn into any form of energy you can do anything with, such as kinetic, potential, etc.?

This equation is nice for reversible processes, but it doesn’t hold for irreversible processes, which are at issue here.
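For reference, the usual textbook statements being contrasted here are the fundamental relation for reversible changes and the Clausius inequality for the general (irreversible) case:

```latex
dU = T\,dS - P\,dV \quad \text{(reversible: } \delta Q_{\mathrm{rev}} = T\,dS,\ \delta W_{\mathrm{rev}} = P\,dV\text{)}

dS \ \ge\ \frac{\delta Q}{T} \quad \text{(Clausius inequality; equality only for reversible processes)}
```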

bob droege said;

“what makes you think entropy can turn into any form of energy you can do anything with, such as kinetic, potential, etc.?”

I never said that it did! That is what entropy is, missing energy. Energy that in the past could be observed, measured and accounted for, that after going through some process, can no longer be observed, measured and accounted for. Did you read the link I posted? Here, I’ll post it again.

http://blogs.discovermagazine.com/cosmicvariance/2009/01/12/where-does-the-entropy-go/

The title of the article is…. Where does the Entropy go?

If you know where energy is, then it is not entropy, but enthalpy.

The deep fundamental questions related to black holes and entropy are so remote from more common occurrence of the concepts of energy and entropy that they can safely be forgotten. Thinking about them will only add to the confusion that the concept of entropy creates in engineering or in understanding the atmosphere.

Entropy is related to energy, but it’s not a form of energy. The entropy can change – and tends to change – even when energy stays constant. That’s just the second law: The entropy of a closed system increases although by definition the energy of the closed system is constant. (The second law allows for constancy of entropy, but that’s only an idealization that’s never reached.)

The increase of entropy is due to two types of changes. In the first other types of energy are transformed to heat, in the second heat flows from higher temperature to lower. Both involve addition of heat somewhere but the source can be either higher temperature heat or some other form of energy like chemical energy, potential energy, kinetic energy of macroscopic volumes of matter or electricity.

As a further hint, other really practical applications of entropy exist when one starts thinking about energy spread in terms of probability distributions. Sharper distributions have lower entropy and broader distributions have higher entropy, corresponding to more disorder.
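The second route to entropy increase described above, heat flowing from a higher to a lower temperature, gives the simplest quantitative example of entropy production. A minimal sketch; the reservoir temperatures and heat amount are assumed, illustrative values:

```python
# Entropy production when heat Q flows irreversibly from a hot
# reservoir at T_hot to a cold one at T_cold:
#   dS_total = Q/T_cold - Q/T_hot > 0  whenever T_hot > T_cold.
def entropy_production(Q, T_hot, T_cold):
    """Total entropy change of the two reservoirs (J/K)."""
    return Q / T_cold - Q / T_hot

# Assumed values: 100 J moving from 300 K to 280 K.
dS = entropy_production(100.0, 300.0, 280.0)
print(round(dS, 5))  # 0.02381 J/K, positive as the second law requires
```

Energy is conserved throughout (the same 100 J leaves one reservoir and enters the other); only the entropy bookkeeping changes, which is the point being made about the first versus second law.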

This gets to the dual views of entropy, corresponding to those who want to venture down the statistical mechanics path (probabilities, etc) and those that want to stay in the short-hand thermodynamics realm.

Web said, “Sharper distributions have lower entropy and broader distributions have higher entropy, corresponding to more disorder.”

Very true, one of the reasons I mentioned the super El Nino signature. Fred interprets that as a phenomenon where the ocean releases more heat. I interpret that as where radiant forcing reached minimum entropy (or close anyway, because of the signature).

What that minimum radiant entropy value is would, I think, be a fairly valuable thing to know. This is the 185 K or 65 Wm-2 puzzle I mentioned. If Venus has THE maximum greenhouse effect, its black body temperature of 185 K may be the minimum radiant entropy. What was the minimum temperature of the tropopause during the super El Nino?

Pekka Pirilä said;

A lot of nonsense.

The purpose of posting the article was not to propose that studying entropy from black holes is pertinent to the climate debate directly, but to help explain the concept of just what one is talking about when discussing entropy. All thermodynamic processes have entropy, everything from simple flow into steam tanks to black holes. Any form of energy that you can locate is not missing and therefore is not entropy. That is why Sean Carroll (a top theoretical physicist from Caltech) is asking where the heck the missing energy (entropy) has gone. He’s asking (and stating his theory on just where he thinks it goes) because nobody knows. Get it?

“Entropy is related to energy, but it’s not a form of energy.”

Entropy is energy loss that results from any thermodynamic process. Period.

Barring the fact that entropy does not have the same units as energy.

… and barring also that energy is conserved and that therefore no thermodynamic process loses energy.

WHT said;

“Barring the fact that entropy does not have the same units as energy.”

Nonsense. The energy of the enthalpy and entropy involved can have whatever units of measure you want them to have.

Pekka Pirilä said;

“…..energy is conserved and that therefore no thermodynamic process loses energy.”

Well since you seem to think that no energy is lost, why don’t you tell us just exactly where the entropy is, so that the rest of the world can finally gain this useful knowledge. Unless of course you think that there is no entropy in the Earth’s climate system.

You stated earlier in this same thread, “All of the thermodynamics is totally dependent on the use of the second law. With the first law alone almost nothing can be said.”

Yep. The first law is only an idealized statement of little use. The second law is what applies in the real world. You seem to have forgotten this since you wrote it a few hours ago.

Entropy is not a measure of the total amount of energy, it’s a measure of how far the distribution of energy to its various forms is from the equilibrium that extends throughout the system. Its sign is defined in such a way that the maximum value is reached at the equilibrium.

As long as we are not at the equilibrium, the deviations can be taken advantage of. Temperature differences can be used to drive an engine, chemical energy to create temperature differences or perhaps to produce electricity in a fuel cell or battery; the possible uses of electricity are well known.

Deviations from equilibrium also drive natural processes like weather phenomena or ocean currents.

A closed system moves naturally closer and closer to equilibrium. In that process its entropy increases asymptotically towards its maximum value given the constitution and the total energy.

There are other concepts like free energy and exergy, which are related to the same phenomena. They have the unit of energy and they do indeed disappear, or diminish, when entropy is increasing, while the energy itself is conserved.

Pekka, good overview. I wanted to add that entropy is in some sense hierarchical as well. As you describe, the global fluctuations can give rise to wind. And this wind can also show Maximum Entropy locally via the Rayleigh distribution of wind speeds. And within a small volume of moving air, the mixing of the contents will also follow the principle of Maximum Entropy.

So this follows through many levels of coarse graining, which makes it a wonderfully intuitive concept.
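The Rayleigh wind-speed claim above can be checked with a quick Monte Carlo sketch: treat the two horizontal wind components as independent zero-mean Gaussians (the maximum entropy choice for a fixed variance), and compare the sample mean speed with the Rayleigh prediction. The spread value is an arbitrary assumption:

```python
import math
import random

# If u, v ~ N(0, sigma^2) independently, the speed sqrt(u^2 + v^2)
# follows a Rayleigh distribution with mean sigma * sqrt(pi / 2).
random.seed(0)
sigma = 5.0  # assumed spread of each wind component (m/s), illustrative

speeds = [math.hypot(random.gauss(0.0, sigma), random.gauss(0.0, sigma))
          for _ in range(200_000)]
mean_speed = sum(speeds) / len(speeds)

print(round(mean_speed, 2))                      # Monte Carlo estimate
print(round(sigma * math.sqrt(math.pi / 2), 2))  # Rayleigh prediction: 6.27
```

The two printed values agree to within sampling noise, which is the hierarchical point: a maximum entropy assumption at the component level fixes the distribution one level up.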

Pekka Pirilä said;

“Entropy is not a measure of the total amount of energy, it’s a measure of how far the distribution of energy to its various forms is from the equilibrium that extends throughout the system. Its sign is defined in such a way that the maximum value is reached at the equilibrium.”

The sign of the entropy you say? That is beyond nonsensical. I won’t belabor the point, but the idea of negative entropy is something that implies the thermodynamic processes of the universe can run in reverse to what they are doing now. This would be like driving your car and having it fill the tank by itself while you drive!!

I have explained my opinion on the subject as well as I know how. I’ll leave you to it. You can have the last word if you want.

Bye

I think Pekka is referring to the concept of relative entropy or cross entropy, where the entropy values can go negative, as it is used to compare the relative strengths of two competing probability distributions. That is related to techniques that statisticians have long used, such as log-likelihood or maximum likelihood, to establish confidence in a model.

Bottom line is that if a statistical measure can’t go negative, there is a hole in your quantitative toolbox.

This is no longer for your benefit but for those that want to advance our understanding.

WHT,

That wasn’t part of my comment. The traditionally defined entropy of a finite closed system is bounded on both sides: by zero from below and by some maximum from above. The lower limit was irrelevant for my comment, as the entropy is always increasing, moving further from zero and closer to the upper limit.

I agree if you say the sign is set by the sum of p * ln(p), and since p is between 0 and 1, the leading sign is negative to keep the absolute entropy always positive.
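As a quick numerical illustration of this sign convention, here is a small sketch (the four-state distributions are my own toy examples, not from the comments above) showing that the leading minus sign keeps the discrete entropy non-negative, with the maximum at the uniform distribution:

```python
import math

def shannon_entropy(p):
    """Discrete entropy S = -sum(p * ln p). The leading minus sign keeps
    S non-negative because each p lies in (0, 1], so ln(p) <= 0."""
    assert abs(sum(p) - 1.0) < 1e-9, "probabilities must sum to 1"
    return -sum(pi * math.log(pi) for pi in p if pi > 0.0)

uniform = [0.25] * 4               # maximally spread-out distribution
peaked = [0.97, 0.01, 0.01, 0.01]  # nearly deterministic distribution

print(shannon_entropy(uniform))  # ln(4), the maximum for 4 states
print(shannon_entropy(peaked))   # much smaller, but still >= 0
```

The uniform case gives ln(4), the maximum possible for four states, consistent with the point that entropy peaks at equilibrium.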

It looks like the CSI guy Gil thought he was onto something based on flimsy forensic evidence.

“The proposed principle of maximum entropy production (MEP) states that systems are driven to steady states in which they produce entropy at the maximum possible rate given the prevailing constraints.”

Are the given constraints and maximum possible rate measurable, so that the claim is potentially falsifiable? In my brief studies of thermodynamics, equilibria and energy balances very seldom predict rates: nitroglycerin and gasoline, to pick two examples, may sit around for a very long time without producing measurable entropy at all, and then produce it at a very high rate; sugar produces entropy at different rates depending on whether it is burning in an oxygen atmosphere or powering biological processes via enzymes. Enzymes (and other catalysts) can change the rate of entropy production by factors of about 10^15. Are enzymes “prevailing constraints”? Are blasting caps and spark plugs “prevailing constraints”?

Photosynthesis uses solar power to reduce entropy locally, though the production of the solar energy in the Sun increases entropy globally. The second law specifies an inequality, and as long as the inequality is satisfied it imposes few restraints on the rates at which mechanisms operate. In the daytime, warming and photosynthesis occur, and at night cooling occurs; depending on the rate of energy influx in the daytime and the rate of energy efflux at night, the net entropy change over many cycles may be negative on Earth, as may have happened when atmospheric CO2 was sequestered in the first place. That’s not the only way that energy inflow may produce persistent structures in place of random variation.

Is there even one potentially falsifiable new claim about the effects of changing CO2 concentration on Earth energy transfer?

Last question, nonlinear dissipative systems with fluctuating inputs don’t generally produce equilibria, so is the concept of “driven to steady state” even applicable?

“Last question, nonlinear dissipative systems with fluctuating inputs don’t generally produce equilibria, so is the concept of ‘driven to steady state’ even applicable?”

A. No. It should read “driven to unattainable equilibrium”.

Our climate is an endlessly dynamic nonlinear dissipative system with fluctuating inputs. It is forever changing, unpredictably. The fluctuations are too many to quantify within current scientific knowledge, instrumentation and modelling; basically, it is too big to understand.

A few scientists tried climate predictions using our best methods of scientific endeavour, although some say the method was corrupt. Nevertheless, the qualifications needed for such an expedition were never applied, which is why the theory produced under this method is a flawed hypothesis: not only questionable but questioned for the last decade or more, and still unresolved.

Humans will likely never have an answer for chaotic climate systems in their totality. The most we can say about when we will know a truthful theory is that it will be in the future, for the very reasons expressed here.

Einstein: “A theory is more impressive the greater the simplicity of its premises, the more different are the kinds of things it relates, and the more extended its range of applicability.”

We defer to Einstein’s theory on thermodynamics but disregard his hypothesis about the strength and applicability of scientific theory.

I’d rather roll the dice with Einstein whilst we are developing strategies to enhance the benefits of a naturally changing climate and biodiversity, rather than the current paradigm of fearing the worst and stunting humankind’s development.

Whether his statement is theory or philosophy, I do wonder what his thoughts would be on the methods of climatic reasoning today.

For ten points extra credit in a Physics final exam I once wrote that the apparent reversal of entropy on earth was a miracle consistent with the existence of God. Fortunately, I’d gauged the graduate assistant correctly; he had a sense of humour.

==============

That’s a pretty good exam question for a bot to think of.

It’s hard not to think of something as evidence for the existence of God. In this case, the miracle is the existence of the Sun at exactly the right distance from the Earth to produce an environment capable of supporting life, which itself is another whole set of miracles.

In earlier centuries, Newton’s laws were considered evidence for the existence of God.

In an NPR interview Saul Perlmuter stated something similar to your second paragraph, w/o reference to God, but pointing out that somehow humans are just the right size to observe the phenomena of the very large (relativity) and the very small (quantum mechanics).

Or, we see what we see because of where we stand?

God is it not, nor are we in earlier centuries.

Imagination predicts progress. Universal material equilibrium is possible. It has probably happened before, in a fraction of a moment before the kinetic energy of that moment dissipated into a different universal state of all the universe’s matter.

Entropy, is it the driver of big bangs?

P.S. No offence intended to God.

The paper of Kleidon has been discussed briefly in some earlier threads. I read the paper at that time and was not convinced as can be seen from this comment:

https://judithcurry.com/2011/08/16/climate-sensitivity-to-ocean-heat-transport/#comment-100428

If a behavior does show maximum entropy, then it must get to that state space through some fundamental process. It is entirely possible that maximum entropy production is analogous to the Principle of Least Action, or minimizing the free action, as the folks at the Azimuth Project are trying to frame the problem.

To side with you, I can’t read any of Roderick Dewar’s papers on maximum entropy production, in that they suffer from a circular reasoning problem. Axel Kleidon is much better because he tries to apply some numbers, and bridges the gap between understanding the climate and the laudable goal of estimating how much renewable energy we can extract from our environment.

WHT,

I have noticed that Axel Kleidon studies relevant problems. His goals are laudable, and there is nothing wrong in experimenting with new approaches to find out how far they can get. It’s, however, common in such work that the final results are meagre. My view is that the paper this thread is discussing has not reached significant results.

Kleidon states explicitly in the paper that the formal basis for the approach is lacking. Thus the paper is highly dependent on the examples. As I have written, my conclusion is that the examples testify to failure rather than success.

Some have argued that entropy arguments in general lack formality. Rota considers a dozen problems in probability that no one likes to bring up, and two of these concern entropy:

http://books.google.com/books?id=eaJyGXguokIC&pg=PA73#v=onepage&q&f=false

This doesn’t mean it isn’t practical and useful, just that people can tear their hair out trying to understand it and formalize it.

Web said, “This doesn’t mean it isn’t practical and useful, just that people can tear their hair out trying to understand it and formalize it.”

Thanks to the increase in my male pattern baldness, I can testify to that! :)

Think of the implication for just the basic up/down radiant model/kernel.

Thermal diffusion in the mid to upper troposphere to the tropopause sink. The paper provides some detail of diffusion/conduction toward the polar regions, the tropopause expands that impact. That is not a bad place for you to use your maximum entropy diffusion methods.

Thanks greatly for this excellent paper, Judith. Although I would like to have seen a much deeper investigation of the constructal law and its relation to the maximization of entropy, this is the first paper that I’ve seen that was aware of the constructal law, so we’re making progress.

As someone who has pushed for some time for greater understanding and use of the constructal law and its application to the climate, I can’t thank you enough for bringing this paper to our attention. It will take more time to digest, but the important point is clear: flow systems far from equilibrium evolve and change to maximize certain aspects of the system, including the entropy and the work produced.

My thanks as always for your site, it’s a great addition to the climate discussion.

w.

Interesting. The Constructal Law is simply an amalgamation of the well-known Principle of Least Action and the 2nd Law as exemplified by the Maximum Entropy Principle, with symmetry arguments tossed in.

This may be one of those ideas that engineers have a different name for than physicists do.

Climate science’s selection of terminology versus engineering is a great deal of the battle.

Re the constructal law, I am in discussions with Adrian Bejan re his new book and the possibility of a guest post.

There are laws that are strictly true for every system to which they can be properly applied. The First and Second Laws belong to those. That requires also that they can be formulated exactly in mathematical terms.

Then there are “laws” that classify commonly occurring situations or developments. Such laws are often ill defined. Knowledgeable people may be likely to define them roughly in the same way, when given the field of application, but not exactly. These laws are also difficult to test as they are not followed rigorously, but are rather rules of thumb. To me it appears clear that the constructal law belongs to this second class. The situation is not quite as obvious for the principle of maximal entropy production, but every practical application of the principle that I have seen has the characteristic of this second class of “laws”.

The laws of the second type may be useful, but their limits of applicability are unknown and one should never expect that they are valid for a new application until that has been verified. In that they are fundamentally different from the best established laws of physics, which are very likely to be true over a wide range of situations that have not yet been specifically studied. (They are less certain to be valid, when we proceed to essentially new areas like those that led to the breakdown of classical mechanics in areas, where QM or relativity are important.)

Very true Pekka, engineering has quite a few rules of thumb to work in the real world. Climate science may need a few rules of thumb as long as they obey the laws of thermodynamics :)

Possibly the following article shares Pekka’s viewpoint?

http://www.uvm.edu/~pdodds/files/papers/others/1999/goldenfeld1999a.pdf

From the concluding comments,

“Long ago, Katchalsky and Prigogine described the formation of complex structures in nonequilibrium systems. Their dissipative structures could have a degree of complication that could grow rapidly in time. It is believed that comparably complex structures do not exist in equilibrium. … As science turns to complexity, one must realize that complexity demands attitudes quite different from those heretofore common in physics. Up to now, physicists looked for fundamental laws true for all times and all places. But each complex system is different; apparently there are no general laws for complexity. Instead, one must reach for lessons that might, with insight and understanding, be learned in one system and applied to another.”

The related topic of fluctuation theorem – and fluctuation/dissipation theorem – may also be interesting. At least I’d like to understand a little better what they mean in climate context. For example, Kleidon references several papers by Dewar including http://arxiv.org/ftp/cond-mat/papers/0005/0005382.pdf.

Pekka

Maybe the problem is one of semantics.

Isn’t it the case that a “law” is a “law” (applies without exception to every system), while the other “rule of thumb” you cite is really not a “law”, but rather a “suggestion”, or at best an “uncorroborated hypothesis”?

Just trying to get the wording straight here.

Max

Constructal law is coming soon, hopefully via a guest post (or at least Q&A) with Adrian Bejan

Thanks Judith. Look forward to Bejan’s post.

There are numerous papers being published around maximum entropy production by Kleidon and others, which rarely overlap with or cite Bejan’s papers.

Carnot’s theorem inherently incorporates the 2nd Law of Thermodynamics:

Kleidon, Bejan and others modeling climate and winds as a Carnot heat engine driven by the temperature difference between the tropics and poles implicitly incorporate the 2nd Law of Thermodynamics.

The atmosphere and oceans are not reversible systems. Consequently, the efficiency of earth’s climate engine is always less than the Carnot efficiency. SLOT thus enforces rigorous bounds on wind efficiency and the consequent temperature distribution over the Earth and with elevation.
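The Carnot bound appealed to here is simple to state and check. A minimal sketch, with illustrative tropics/pole temperatures of my own choosing rather than any figure from the comment:

```python
def carnot_efficiency(t_hot, t_cold):
    """Upper bound on the fraction of heat convertible to work between
    two reservoirs at absolute temperatures t_hot > t_cold (Kelvin)."""
    return 1.0 - t_cold / t_hot

# Round-number tropical and polar surface temperatures (my assumptions):
eff = carnot_efficiency(t_hot=300.0, t_cold=250.0)
print(eff)  # ~0.167; a real, irreversible atmosphere must do worse
```

Since the atmosphere and oceans are irreversible, the actual conversion of the equator-to-pole heat flow into winds sits strictly below this bound, which is the constraint the comment describes.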

Note that Ferenc Miskolczi also appeals to entropy maximization – and is commonly derided as few understand the power of it. e.g. Miskolczi 2007

Woe betide anyone who even imagines violating SLOT. Russian thermodynamicist Ivan P. Bazarov (Thermodynamics, Pergamon 1964) stated:

PS The US Patent Office requires an inventor to physically demonstrate any invention that appears to require “perpetual motion” (of the 2nd kind) and violate the 2nd law of thermodynamics (SLOT).

Of course, the most profound deduction from the 2nd Law is that the common use of the ‘back radiation’ concept in climate science is bunkum except for the particular case of a temperature inversion.

In all cases the relevant radiative heat transfer phenomenon is the difference between Up and Down radiative signals!

A second issue is the claim in climate science that net present GHG warming is 33 K. Again it is bunkum, because this is lapse rate conflated with real GHG warming, which is probably about 9 K and set by the difference between mostly-H2O GHG warming and cooling by clouds.

As far as the MEP concept is concerned, this is deeply bound to statistical thermodynamics and the Gibbs Paradox. Its application to climate science is with respect to the assumption of 100% direct thermalisation of IR.

The resolution of the Gibbs paradox is to understand that at thermodynamic equilibrium, gas molecules have no specific identity. Thus when an IR photon is absorbed, an identical photon emitted almost immediately restores the Equipartition of Energy so except at high pressures and very high collision frequency, there can be little direct thermalisation. Indeed, unless otherwise proven, most thermalisation may be at second phases.

Perhaps this is why the simplistic 1990 IPCC predictions deviate so greatly from experiment: http://www.sciencebits.com/IPCC_nowarming

Be warned: non-equilibrium thermodynamics is deceptively seductive. At all times refer to direct, simple experiments and never believe propaganda like the 33 K and back radiation claims!

The effect of 0% albedo or complete absorption leads to a 9 to 10 K differential, while 30% albedo leads to 33 K for perfect black-body.
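Both differentials can be checked with the standard equilibrium-temperature formula T = [S(1 − a)/(4σ)]^¼. A minimal sketch, assuming round-number values for the solar constant and the mean surface temperature (my inputs, not the commenter’s):

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0         # solar constant, W m^-2 (assumed round number)
T_SURF = 288.0     # global mean surface temperature, K (assumed)

def t_equilibrium(albedo):
    """Blackbody equilibrium temperature for a given planetary albedo."""
    return (S * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25

print(T_SURF - t_equilibrium(0.0))  # ~9-10 K: the zero-albedo differential
print(T_SURF - t_equilibrium(0.3))  # ~33 K: the familiar greenhouse number
```

Zero albedo gives an equilibrium temperature near 278 K (a ~10 K differential from 288 K), while 30% albedo gives ~255 K (the usual 33 K), matching the two figures in the comment.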

So are you saying that all radiation is at least temporarily absorbed by the planet? And that none of the earth has mirror-reflecting properties at all?

In essence, the IPCC’s claim is that if you take away the atmosphere, the -18°C of the composite emitter at the top of atmosphere would move to the earth’s surface. Subtract [-18°C] from +15°C and you get 33 K. Lacis is on record as saying the models predict the same and that it’s all GHG warming. But that’s not true.

By eliminating clouds and precipitated water (= ice caps), albedo falls from 0.3 to 0.07, so the equilibrium radiative temperature increases to ~0°C, implying ~15 K GHG warming. Iterate to account for the residual aerosols and the low IR radiative properties of N2 and O2 and you get ~9 K.

The problem the modellers apparently have is that they fail to take into account the most basic part of the basis of their modelling. It’s that GHG warming raises the tropopause so must be separated from lapse rate warming at the surface.

The zero albedo result is spurious. GHG warming is the result of the increase of IR impedance in the atmosphere which represents itself as the apparent emissivity/absorptivity of the atmosphere near to the Earth’s surface and facing it. As the CO2 in this is near IR band saturation, you get into the realm of self-absorption and reduction of that emissivity, an increase in the opposite direction, reducing net incremental CO2 climate sensitivity because IR impedance falls as [CO2] increases.

There are other issues, e.g. the 13.1% change of CO2 Cp from 250 K to 350 K, and the variation of this in gas mixtures, which you get from partial molar Cp data. This change of Cp is the development of longer-wavelength IR absorption bands, so there is variable IR impedance as a function of temperature.

It looks like the simple but very wrong World of climate science is suddenly developing into real science with thermodynamics. About time but expect radical changes from the assumptions made by those who grew up when climate science was to real science as painting by numbers is to fine art. Welcome to the real World!

“Mydogsgotnonose | January 11, 2012 at 8:44 am |

In essence, the IPCC’s claim is that if you take away the atmosphere, the -18°C of the composite emitter at the top of atmosphere would move to the earth’s surface. Subtract [-18°C] from +15°C and you get 33 K. Lacis is on record as saying the models predict the same and that it’s all GHG warming. But that’s not true.”

If you only consider latent and sensible cooling (thermals), the effective radiant temperature of the surface is 267 K. That is 21 degrees which would be made up by radiant impact.

That makes things interesting. The total atmospheric radiant or GHG impact would actually be greater than 33 degrees. This may seem counterintuitive, but it reduces the impact that the change in CO2 has as a percentage of the total. Manabe estimated roughly 70 degrees of total radiant impact; my quick and dirty estimate is 54 degrees. Dr. Roy Spencer stated,

http://www.drroyspencer.com/2011/12/why-atmospheric-pressure-cannot-explain-the-elevated-surface-temperature-of-the-earth/

“One of the more significant aspects of the above discussion, which was demonstrated theoretically back in the mid-1960s by Manabe and Strickler, is that the cooling effects of weather short-circuit at least 50% of the greenhouse effect’s warming of the surface. In other words, without surface evaporation and convective heat loss, the Earth’s surface would be about 70 deg. C warmer, rather than 33 deg. C warmer, than simple solar absorption by the surface would suggest. “
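The 267 K figure can be roughly reproduced from a standard surface energy budget. A sketch assuming the Kiehl & Trenberth (1997)-style round numbers of 78 W/m² latent and 24 W/m² sensible flux, which may not be exactly the commenter’s inputs:

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
T_SURF = 288.0     # mean surface temperature, K (assumed)
LATENT = 78.0      # W m^-2, evapotranspiration (Kiehl & Trenberth 1997 value)
SENSIBLE = 24.0    # W m^-2, thermals (same budget)

# Flux left for radiation after subtracting the non-radiative cooling terms:
radiant = SIGMA * T_SURF**4 - (LATENT + SENSIBLE)
t_eff = (radiant / SIGMA) ** 0.25
print(round(t_eff))  # effective radiant temperature of the surface, K
```

With these assumed fluxes the effective radiant temperature comes out near 267 K, i.e. roughly 21 K below the 288 K surface, consistent with the figure quoted above.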

Fantastic post!!! Best technical thread yet, IMHO.

CurryJA said,

“Nope, 2nd law does not appear in the traditional analyses of feedback.”

I posted some comments about this last year and only got a few takers willing to discuss the fact that most climate energy balance research was ignoring the second law of thermodynamics. Link below:

https://judithcurry.com/2010/12/11/co2-no-feedback-sensitivity/

also at WUWT

http://wattsupwiththat.com/2010/12/23/where-did-i-put-that-energy/

“Fred, Kleidon’s approach uses the 2nd law of thermodynamics. The mainstream approach uses the first law of thermodynamics. Different physics are in play here, there is no reason to expect the same answer, but also no evidence here of a different answer. The linear approach that evolves from energy balance models is almost certainly oversimplistic, IMO.”

I agree with Judith.

I think Trenberth will find his missing heat hiding in the entropy room.

Wondering where entropy goes is not a problem exclusively for climate science. Example here,

http://blogs.discovermagazine.com/cosmicvariance/2009/01/12/where-does-the-entropy-go/

The Earth and Atmospheric Sciences department at Ga Tech is in great hands!!

Heat can disperse into the ocean, just like CO2 disperses into sequestering sites. Given a constant increment of thermal forcing over time, the diffusion of heat can show an understandable diversion into a thermal heat sink.

This is what the divergence from an expected temperature increase will look like given the heat sink never reaches a steady state in temperature: Alpha plot of dispersive MaxEnt diffusion.

This can turn around if the heat sink thermalizes, and the temperature gradient disappears.

I am working on this idea in more depth.

Heat dispersion in the ocean still has to follow the second law. Mass absorption, like the change in vapor pressure allowing gases to return to solution, would be tricky, but the oceans are electrolytic. That is one reason I concentrated on the 4°C density boundary. It does drive the ocean overturning current.

First off, thermal or heat diffusion is a strikingly clear manifestation of the second law. Secondly, I apply maximum entropy to the thermal diffusion coefficient because I know that it varies over the planet. So it turns into diffusion with dispersion, which is real disorder. The result is an analytical expression that I hope will get some traction because of its simplicity.
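One way to read “diffusion with dispersion” is as the ordinary Gaussian diffusion kernel averaged over a maximum-entropy (exponential) spread of diffusivities, which has a fat-tailed Laplace profile as its closed form. This is my own illustration of the general idea, not necessarily the commenter’s exact derivation:

```python
import math

def dispersed_kernel(x, t, d_mean, n=20000, d_max_factor=50.0):
    """Numerically average the 1-D Gaussian diffusion kernel over an
    exponential (maximum-entropy) distribution of diffusivities D
    with mean d_mean, using a midpoint rule on [0, d_max]."""
    d_max = d_max_factor * d_mean
    dd = d_max / n
    total = 0.0
    for i in range(n):
        d = (i + 0.5) * dd
        gauss = math.exp(-x * x / (4.0 * d * t)) / math.sqrt(4.0 * math.pi * d * t)
        weight = math.exp(-d / d_mean) / d_mean  # exponential MaxEnt prior
        total += gauss * weight * dd
    return total

def laplace_kernel(x, t, d_mean):
    """Closed form of the same average: a fat-tailed Laplace profile."""
    lam = math.sqrt(d_mean * t)
    return math.exp(-abs(x) / lam) / (2.0 * lam)

x, t, d_mean = 1.5, 2.0, 1.0
print(dispersed_kernel(x, t, d_mean), laplace_kernel(x, t, d_mean))
```

The two numbers agree closely, showing how smearing the diffusivity (the “disorder” in the comment) replaces the Gaussian profile with a simple analytical fat-tailed one.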

Kevin Trenberth made the comment that the “missing” heat that had gone into the deep ocean could return; it’s “conservation of energy”. Not possible owing to the 2nd law (and the results of mixing), unless the temperature of the heat sink substantially rises. Another case of climate scientists using the 1st law and not the 2nd law.

Trenberth is a climate scientist, not a climate scientists.

Maybe you could take an art course where they use a lot of fine brushes.

[Response: The circulation time for the deep ocean is on the order of hundreds to thousands of years. Change there is very slow – which makes the changes seen so far quite surprising. At any new (warmer) equilibrium, there will be a significant increase of OHC over what there was before. The damping of the rate of surface warming or the warming in the pipeline isn’t anything to do with deep ocean heat coming back out. I have no idea where this idea originated, but it is not accurate. – gavin]

I have been wondering what Trenberth really meant by the statement.

A process where heat literally goes into the deep ocean and returns to warm doesn’t appear plausible, but something along the same lines is possible. It’s possible that the overall net heat transfer between the deep ocean and the upper ocean varies in direction. It’s normal that the heat flux is down in some regions and up in others. The relative size of these fluxes may vary, and certainly does vary to some extent.

It’s not even necessary that the net flux changes sign. For significant effects it’s enough that the downward flux varies in size. If it has been exceptionally large over some period that may lead to a significant reduction of the flux later. This would speed up the warming during the latter period.

Pekka said, “It’s not even necessary that the net flux changes sign. For significant effects it’s enough that the downward flux varies in size. If it has been exceptionally large over some period that may lead to a significant reduction of the flux later. This would speed up the warming during the latter period.”

:-)

The temperature differences eventually equilibrate, and since the forcing function is still there (remember that CO2 has a long adjustment time), the continually produced excess heat will have nowhere to go. And that is when the temperature would start to catch up.

So I agree that the heat coming back is the wrong interpretation. It will remain mixed.

“Kevin Trenberth made the comment that the “missing” heat that had gone into the deep ocean could return. its “conservation of energy”. Not possible owing to the 2nd law (and the results of mixing), unless the temperature of the heat sink substantially rises. Another case of climate scientists using the 1st law, and not the 2nd law”

Judy – Could you quote his statement exactly, including the context, or better yet, link to the source so we can read the context? I’m skeptical that Trenberth was making an unrealistic claim, but I would want to see his exact point. As to whether heat that had gone into the deep ocean can return, of course it can, eventually, but the direction of heat flow depends on temperature gradients, and the inevitably slow time course would reflect the vast heat capacity of the deep ocean. It wouldn’t require the temperature of the deep ocean heat sink to rise, but could happen if the temperature of the upper ocean and the surface falls.

It’s one of the reasons why the long term surface warming from CO2 forcing would fail to subside rapidly from a later reduction in atmospheric CO2. Susan Solomon et al made this point in a PNAS paper a few years ago.

http://www.dailycamera.com/boulder-county-news/ci_18932226

That the heat is buried in the ocean, and not lost into space, is troublesome, Trenberth said, since the heat energy isn’t likely to stay in the ocean forever, perhaps releasing back into the atmosphere during a strong El Nino, when sea surface temperatures in the tropical Pacific are warmer than average.

“It can come back quite fast,” he said. “The energy is not lost, and it can come back to haunt us, so to speak, in the future.”

Dr. C., my understanding is that the theory that the extra heat disappeared into the deep oceans assumes a very slow laminar flow that precludes significant mixing. In theory this seems possible, but not terribly plausible. It seems like this is a real key issue that should be possible to model without the complications from turbulence. Do you know of any such efforts?

Fred, Pekka – See here for Trenberth’s statement.

Pekka – I don’t think anybody knows what he was talking about!

Once it’s measured in the deep ocean, found and so no longer missing, it’s not coming back. It’s here among us – apparently for a very long time.

I think he meant that what caused the deep ocean warming could stop causing it and start heating up the things that I get to play with on Wood for Trees, ’cause from what I can see, nothing changed at Wood for Trees when they found warming below 700 meters.

So, either Trenberth failed undergraduate thermodynamics miserably, or he never studied thermodynamics and had no idea of it!

Thanks, Pat. Trenberth’s comments are consistent with my understanding of his position, and with climate data. The one additional point he made that went beyond that general understanding was that a reversal of the heat flux could also occur fairly rapidly, but transiently, during an El Nino – a phenomenon in which the ocean loses heat to the atmosphere. As I understand this, most of the ocean heat loss occurs from the upper mixed layer, but flux out of the deep ocean would presumably contribute to the ability of the mixed layer to release heat upward.

Fred, “during an El Nino – a phenomenon in which the ocean loses heat to the atmosphere.” Wouldn’t the ocean just be losing more radiant heat to the atmosphere? La Niña is an increased convection (westerlies) event; radiant impact is greater when there is less convection. Warmer SSTs during an El Niño would indicate less heat loss from the ocean.

Dallas – The El Nino response is complicated, because it varies both with region and with time. During certain phases, however, heat moves to the surface from below and is released into the atmosphere, with a consequent net heat loss from the ocean. Increased atmospheric heating, however, can elicit transient positive feedbacks that lead to heat transfer back into the ocean. Because these phenomena are not synchronous in every region, the global pattern is not always easy to interpret. Nevertheless, the First Law is relevant in that as long as an El Nino is not a forced response to a radiative imbalance imposed at the top of the atmosphere, heat gain at the surface and atmosphere must signify heat loss from somewhere else. There have been suggestions that ENSO phenomena may partially reflect anthropogenic forcing, but outside of this possibility, total energy must be conserved.

Fred – during an El Nino, can heat from below 3000 meters be brought to the surface? How about below 700 meters, from below 2000 meters. I’ve never seen “deep ocean” defined.

During El Niño the GMT seems to get hotter; during La Niña it seems to get colder. That is what I meant by something I get to play with on Wood for Trees.

JCH – During an El Nino, as long as energy is conserved, heat added to the surface and atmosphere must come from the ocean. The immediate source will be the upper layers, but I think Trenberth’s point is that their ability to release heat to the surface will be reinforced if some of their heat loss is replaced by heat from deeper levels. The net result would therefore be a contribution to surface and atmospheric warming from the deep ocean that is mostly indirect via the upper mixed layer rather than a bulk convective transfer of large amounts of heat upward over thousands of meters.

The larger point, I think, and one I tried to make earlier, is that heat transfer out of the deep ocean will be triggered when the upper ocean starts to lose heat. Ordinarily, this would be associated with global cooling (e.g., during a sustained reduction in atmospheric CO2), but in the case of El Nino, one can argue that the upper ocean heat loss is transiently associated with surface warming – a phenomenon that will ultimately reverse itself within one or a few years.

Hi Everybody –

Non-equilibrium thermodynamics is a fundamental description of the Earth system and can be captured in a simple 1st order difference calculation.

dS/dt = Ein − Eout, where dS/dt is the rate of change of energy storage in the system and Ein and Eout are the average energy fluxes (per second) into and out of the system over a period.
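That first-order balance can be stepped forward numerically. A minimal zero-dimensional sketch, with an illustrative effective emissivity and mixed-layer heat capacity of my own choosing (not values from the comment), relaxing toward the steady state where Ein = Eout:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0        # solar constant, W m^-2
ALBEDO = 0.3      # planetary albedo (assumed)
EPS = 0.61        # effective emissivity, an illustrative tuning
C = 2.0e8         # J m^-2 K^-1, rough ocean mixed-layer heat capacity
DT = 86400.0      # one-day time step, s

T = 250.0                     # deliberately cold initial state, K
for _ in range(365 * 200):    # integrate ~200 years of daily steps
    e_in = S * (1.0 - ALBEDO) / 4.0   # absorbed solar flux
    e_out = EPS * SIGMA * T**4        # outgoing longwave flux
    T += DT * (e_in - e_out) / C      # dS/dt = Ein - Eout, Euler step
print(round(T, 1))  # settles near the steady state where Ein = Eout
```

The state relaxes to roughly 288 K regardless of the cold start, which is the point of the first-order difference formulation: storage adjusts until the fluxes balance.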

Within the simple global overview are of course myriad – and powerful – processes through which energy cascades in the deterministically chaotic system that is the fundamental mode of operation of Earth’s climate.

ENSO is a sub-system that is itself deterministically chaotic. The intensity and frequency of ENSO events has varied over at least the 11 millennia that we know of – http://s1114.photobucket.com/albums/k538/Chief_Hydrologist/?action=view&current=ENSO11000.gif

We are used to thinking of the oceans as layer of warm water over cold water separated by the thermocline – the depth at which the rate of decrease of temperature with increase of depth is the largest. In terms of energy dynamics – this seems relatively arbitrary. The oceans heat as a whole and cool as a whole – but within this there are hydrodynamical and atmospheric processes that influence both local and average rates of warming or cooling. ENSO is a key process involving upwelling in the eastern Pacific in a La Niña and – when the trade winds falter – the flow eastward of a pool of warm water that had been piled up against Australia and Indonesia.

The cold surface of the central Pacific in a La Niña loses less heat than the warm surface in an El Niño – remembering the net direction of energy flux. There are in addition cloud feedbacks in ENSO that again change the planetary energy dynamic.

There are 2 lessons in this. First – that energy flux is complex and dynamic, and that a maximum entropy principle tells us little about specific dissipation pathways. The specific and complex pathways cannot be neglected in simplifying assumptions without catastrophic loss of verisimilitude.

Secondly – that a La Niña cools the planet and an El Niño warms the planet – suggesting both a contribution to warming between 1977 and 1998 and a cooling influence for 20 to 40 years from 1998. Unless we can get this from an equation of maximum entropy – we are as far from the truth as ever.

Cheers

Robert I Ellison

Spencer and Pielke Sr. discuss Trenberth’s missing energy, with numerous cites to Climate Etc.’s “Where’s the missing heat?” – e.g. Spencer 2010, A Response to Kevin Trenberth.

Pielke Sr. observes:

Trenberth could not understand thermodynamics – how the hell could he respond!

It’s been rather warm (mid 60s F) and very clear due to a high pressure system being stuck over my area (N. CA) the last six weeks. This has been very advantageous for my little PV system. Lots of kWh produced – record output, actually. I was wondering if any of the climate models include changes to biological systems? I bring this up as my water buckets for the horses and wildlife (deer, gophers, moles, etc.) are teeming with life currently – lots of various colored algae. For them to be growing, aren’t they using a bit of energy? And in my case, more heat, more growth.

The variational principle that the integral of the scalar product of a flux and a potential gradient be stationary is satisfied by their direct proportionality, with solutions then those of the Laplacian=0. This is limited to a linear region near equilibrium. The rate of total entropy production is the volume integral of the scalar product of an energy flux density and a temperature gradient. This was all spelled out by Onsager in 1931 and he was duly awarded for his work. In the linear region, entropy production is a maximum only when boundary temperatures are fixed, implying maximum energy transport. When boundary fluxes are fixed, steady-state dissipation and the temperature differential are minima. To apply such an analysis to the troposphere requires that tropospheric dissipation be proportional to the square of its temperature differential – not too realistic.

Beyond the linear region, entropy production remains given by the same scalar product but the assumption of local proportionality no longer holds. For the case of electric dissipation, the expression W=J(V1-V2) gives us an exact solution under virtually all steady-state conditions. One can show that an equally ‘trivial’ expression holds for thermal dissipation, W=J1(T1-T2)/T1, provided one assumes that the total dissipation of a system equals that of the sum of its individual parts (W=<J). Rigorously, as Onsager quite explicitly states, his solutions are not valid when forces are velocity dependent, i.e. magnetic and Coriolis, unless of even order in velocity. (A system's dissipation in a given external field equals that of its mirror image.)

pdq
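The thermal-dissipation expression above, W = J(T1 – T2)/T1, can be checked against the entropy production of the same steady heat flow: the Gouy-Stodola relation says the work forgone equals T2 times the entropy production, and the two expressions agree identically. A small sketch with arbitrary numbers:

```python
# Sketch relating the commenter's thermal-dissipation expression
# W = J*(T1 - T2)/T1 to the entropy production of the same steady
# heat flow, sigma = J*(1/T2 - 1/T1). The Gouy-Stodola relation
# says the lost work equals T2 * sigma. Numbers are arbitrary.
J  = 100.0    # steady heat flux, W (assumed)
T1 = 300.0    # hot reservoir temperature, K
T2 = 280.0    # cold reservoir temperature, K

W     = J * (T1 - T2) / T1         # Carnot work forgone, W
sigma = J * (1.0 / T2 - 1.0 / T1)  # entropy production rate, W/K

print(W, T2 * sigma)   # the two agree identically
```

Algebraically T2·σ = J(1 − T2/T1) = J(T1 − T2)/T1, so the agreement is exact, not numerical coincidence.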

Ah, the Onsager reciprocity relationship!

A fruitful area of research may be the sinking of ice melt water as it is heated and the diffusion of dissolved salts giving rise to an apparent density maximum at 2°C which exists at high pressures according to the equation of state.

This more than anything else controls world climate via the deep ocean currents. The reciprocal part is the downward diffusion of salinity at the tropics. Heats of mixing etc?

Mydogsgotnonose | January 11, 2012 at 8:55 am

“A fruitful area of research may be…”

I tend to agree with much of what you post. But a “fruitful area of research” would be how many commercial products are available based solely on the “back radiation” idea to gain 33K. I would have guessed that profit motive being what it is that someone somewhere would be selling coats, blankets, cookware, housing wrap etc to gain 33K. So far I have found none. But I think a nice blanket might be a nice invention.

God has already given us cloud covers.

I believe commercial smelting ovens use the properties of CO2 to create high temperatures. If engineers did not understand the process they couldn’t control it.

CO2 lasers assume strong interactions with infrared.

Is this not what you want to hear?

“”mkelly | January 11, 2012 at 9:49 am |

I tend to agree with much of what you post. But a “fruitful area of research” would be how many commercial products are available based solely on the “back radiation” idea to gain 33K.””

I love commerce. I got shares in Darwin, Australia on issue. Great place, 33K every day. Bloody hot but. Best I can say it has ugly mornings, unbearable days, chaotic evenings and restless nights. Still, as you are limited by the ecosystem to the function of drinking beer, life can be very merry indeed, for the current inhabitants.

Send me a personal email if you’d like to buy into paradise.

WebHubTelescope | January 11, 2012 at 12:46 pm |

Both of your examples (laser and smelting) require work input. That is not so in the “back radiation” that causes an increase from 255 K to 288 K.

Gases dissipate heat and our lives depend on this natural occurrence.

Now mkelly is requiring a perpetual motion machine. Nice use of the raising-of-the-bar fallacious argument.

Question: Is the maximum of entropy production that is possible within the system limited to the difference in entropy between the entropy of the incoming shortwave radiation and the more or less equal wattage of outgoing longwave radiation?

That is a good question. I would think that maximum entropy would be limited by each thermal boundary layer. The oceans can lose an order of magnitude or more heat than the atmosphere can accept at the surface mixing layer. The latent portion of the lower atmosphere can lose more heat more rapidly than the drier portion of the atmosphere can accept.

Above the tropopause, CO2 is limited by its spectrum, so it is a space blanket with a lot of holes in it without significant spectral broadening. Most of the broadening should be below the tropopause, so the change in local emissivity would be a major impact.

The difference in entropy between incoming and outgoing radiation is significant. Incoming radiation occupies a narrower, more peaked frequency spectrum while outgoing is much broader with gaps, and thus carries higher entropy. To balance out the energy gaps, the earth emitter slightly increases its temperature, thus producing more energetic photons to radiate. This is basic statistical mechanics of Bose-Einstein particles, aka bosons.

Sorry “between the entropy of the incoming shortwave radiation and THE ENTROPY of the more or less equal wattage of outgoing longwave radiation”
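One way to put rough numbers on that difference: for blackbody radiation the entropy flux per unit energy flux is (4/3)/T, so per watt the outgoing terrestrial radiation at ~255 K carries roughly 22 times the entropy of sunlight emitted at ~5772 K. A sketch – the temperatures are round assumed values, and geometric dilution of sunlight is ignored:

```python
# Rough numbers for the entropy contrast between incoming solar and
# outgoing terrestrial radiation. For blackbody radiation the entropy
# flux per unit energy flux is (4/3)/T, so low-temperature outgoing
# radiation carries far more entropy per joule than sunlight.
T_SUN   = 5772.0   # solar emission temperature, K (assumed)
T_EARTH = 255.0    # effective terrestrial emission temperature, K (assumed)

s_in  = (4.0 / 3.0) / T_SUN     # entropy per joule, incoming
s_out = (4.0 / 3.0) / T_EARTH   # entropy per joule, outgoing

ratio = s_out / s_in            # reduces to T_SUN / T_EARTH
net_per_watt = s_out - s_in     # entropy produced per watt of throughput
print(round(ratio, 1), round(net_per_watt, 5))
```

Multiplying net_per_watt by a ~240 W/m² throughput gives an entropy production on the order of 1 W/m²/K, which is the scale discussed in the MEP literature.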

Why not go the whole hog and claim that the CO2 molecules line themselves up in the atmosphere to form a Fabry-Perot Etalon, effectively a dichroic mirror, with the stored energy a standing wave!

This is what an Aussie climate scientist suggested, seriously, to me as the explanation of ‘back radiation’! Ludicrous.

But seriously, the entropy change of a system as defined by statistical thermodynamics is δq/T, where δq is the infinitesimal heat involved in reversible work at temperature T. Remember T is a measure of average molecular kinetic energy.

So, by this definition radiation has no entropy until it is converted to heat! Therefore we have SW energy in, converted to heat, increasing disorder as it moves upwards mainly convectively, then converted back to radiation at the upper atmosphere.

The key parameter is the ‘back radiation’, the product of gas temperature and emissivity ~0.2 for clear sky, but this is not an energy source; instead it’s a measure of resistance to IR transmission. Climate science wrongly thinks that because it has IR emission lines it is the heating of the GHGs; wrong, it’s just emissivity [see the works of Hoyt C. Hottel at MIT in the 1950s].

Under a cloud it rises because emissivity gets near 1.0. There are few IR emission lines here because of much lower optical path length.

This radiation is ‘Prevost Exchange Energy’ which can do no thermodynamic work. In effect it couples the IR density of states at the Earth’s surface and the immediate atmosphere and if the atmosphere cools, less Exchange Energy causes the rate of heat energy in the solid being converted to radiation to increase. It’s a very subtle effect that no-one has researched apparently, and might prove a use for all those ‘back radiation’ measurements made by climate science but hitherto used wrongly!

Noseless Dog said, “So, by this definition radiation has no entropy until it is converted to heat!” Thermal mass? The thermosphere has high temperature and little energy. Temperature can be substituted for energy in the laws of thermodynamics, but only if the real energy plays along. The Aussie’s standing wave would be similar to a capacitor charging: the voltage doubling is great, but the energy has to be available for work to be done.

Nonose’s comments are not very clear, but if he/she is suggesting that back radiation is due merely to redirected energy rather than atmospheric heating via GHG absorption of infrared radiation, then that misconception is an example of what is known as the “surface budget fallacy”. In fact, for a given increase in CO2, the large proportion of back radiation increase comes from the CO2-mediated atmospheric heating (a result of reduced OLR and consequent radiative imbalance) and a much smaller fraction from the redirection of energy downward; this can be calculated from the radiative transfer equations, and measurements of downwelling longwave radiation are consistent with this effect.

The surface budget fallacy was prevalent in the first half of the twentieth century but was finally refuted by Manabe in the 1960s. Details and examples can be found in Chapter 6 of Pierrehumbert’s “Principles of Planetary Climate”.
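The 255 K vs 288 K numbers being argued over can be illustrated with the standard textbook single-layer grey-atmosphere toy model – a sketch, not anyone's full radiative-transfer calculation. A layer transparent to solar absorbs a fraction eps of surface longwave and emits eps·σ·Ta⁴ both up and down; the energy balance gives Ts = Te·(2/(2 − eps))^(1/4):

```python
# Minimal single-layer "grey" atmosphere sketch of how an absorbing/
# emitting layer raises the surface temperature above the 255 K
# effective emission temperature. The layer is transparent to solar,
# absorbs a fraction eps of surface longwave, and emits both up and
# down; layer balance gives Ta^4 = Ts^4 / 2, and surface balance then
# gives Ts = T_E * (2 / (2 - eps))**0.25. All a textbook toy model.
T_E = 255.0   # effective emission temperature, K (assumed)

def surface_temp(eps):
    """Surface temperature for layer longwave emissivity eps."""
    return T_E * (2.0 / (2.0 - eps)) ** 0.25

print(round(surface_temp(0.78), 1))  # eps ~0.78 lands near 288 K
print(round(surface_temp(1.0), 1))   # fully opaque layer: 255 * 2**0.25
```

The point of the sketch is only that no work input is required: redistributing longwave opacity changes the steady-state surface temperature under a fixed solar input.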

Mr. Molton says: “…this can be calculated from the radiative transfer equations…”

What emissivity do you assign to CO2 in these heat transfer equations?

That’s a coincidence. Mkelly was asking about GHG applications, and now Hottel’s name is brought up. Read up on his work with water vapor and CO2 in furnaces.

Yes, this is a great technical thread. I think we would all benefit here from working on some specific examples, at least, us plebes. So, with respect to feedback:

What does the MEP have to say about the combination of water vapor and lapse rate feedbacks?

My initial answer, please correct me etc., is that the “constraints” on the MEP in systems exhibiting significant thermal activity in all 3 radiative, convective, and conductive process types, include constraints imposed by the quantum structure of the individual molecules in the atmosphere. So we’re back to the spectral line databases.

So, the Earth will act to maximize entropy, given the constraints imposed by individual orbital energies in gas molecules.

Back to square one?

BillC said, “So, the Earth will act to maximize entropy, given the constraints imposed by individual orbital energies in gas molecules.

Back to square one?”

Kinda, but at least we have a square peg for a square hole this time. My biggest issue has been abuse of the second law, but also misapplication of the zeroth law: http://en.wikipedia.org/wiki/Zeroth_law_of_thermodynamics

When people assume that energy is fungible, it can turn into fudge-able.

Work is not fungible. The Arrhenius equation dT=5.25ln(Cf/Co) assumes that the temperature change is only dependent on the concentration change. The increase in concentration raises the average radiant layer of the CO2 “forcing”. The potential energy decreases with altitude so the impact of CO2 would decrease with altitude. In order for the maximum CO2 impact, the spectrum of CO2 at the average radiant layer would have to broaden, but broadening is itself a negative impact on CO2 forcing felt at the surface. That increases the impact or the latent, conductive/convective and atmospheric window cooling mechanisms.

So maximum entropy is a more realistic approach, but not exactly a simple approach, imnsho.
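For reference, the simplified logarithmic expression discussed above can be evaluated directly. Note that the widely used coefficient (5.35, from Myhre et al. 1998) gives a radiative forcing in W/m², not a temperature; converting it to a no-feedback temperature change requires a Planck response, taken here as ~3.2 W/m²/K. Both numbers are assumptions of this sketch, not claims from the thread:

```python
# Sketch of the simplified logarithmic CO2 forcing expression.
# The 5.35 coefficient (Myhre et al. 1998) yields a radiative
# FORCING in W/m^2; dividing by an assumed no-feedback Planck
# response of ~3.2 W/m^2/K converts it to a temperature change.
import math

def forcing(c_final, c_initial):
    """Radiative forcing in W/m^2 for a CO2 change c_initial -> c_final."""
    return 5.35 * math.log(c_final / c_initial)

PLANCK = 3.2  # no-feedback response, W/m^2/K (assumed)

dF = forcing(560.0, 280.0)   # a doubling of CO2
dT = dF / PLANCK             # no-feedback warming, K
print(round(dF, 2), round(dT, 2))
```

With these assumptions a doubling gives ΔF ≈ 3.7 W/m² and ΔT ≈ 1.2 K before any feedbacks, which is where the whole quantification argument begins.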

Cap’n:

“The Arrhenius equation dT=5.25ln(Cf/Co) assumes that the temperature change is only dependent on the concentration change.”

I thought the modern database/GCM code version of this was a bit more sophisticated?

“The increase in concentration raises the average radiant layer of the CO2 “forcing”. ”

Yes, this is the explanation commonly invoked by Pierrehumbert et al.

“The potential energy decreases with altitude so the impact of CO2 would decrease with altitude. ”

What potential energy?

” In order for the maximum CO2 impact, the spectrum of CO2 at the average radiant layer would have to broaden, but broadening is itself a negative impact on CO2 forcing felt at the surface.”

Not sure what you mean by this. Won’t the width of the absorption spectrum have a sort of a “lapse rate” where the broadest absorption will be near the surface?

“That increases the impact or the latent, conductive/convective and atmospheric window cooling mechanisms. ”

Assuming you mean “impact of”, I draw a parallel with a statement like “all atmospheric heat loss mechanisms will operate faster on a hotter planet”. Not much to argue with there (aside from clouds and water vapor feedback!) but the argument is all in the quantification of the effects, no?

If in sum you are saying that applying the MEP will help this quantification, I agree, see my reply to P.E. below about “directing” the parameterization of variables in GCMs other than the line by line or block radiative transfer codes.

BillC, “I thought the modern database/GCM code version of this was a bit more sophisticated?” It is supposed to be, but not sophisticated enough to deal with the Antarctic or southern upper troposphere.

“What potential energy?” The potential energy of the parcel of air being warmed. The potential energy and temperature of the air being warmed are required to know the amount of impact it would have. Hence the tropospheric hot spot that is not as apparent as estimated.

“Not sure what you mean by this. Won’t the width of the absorption spectrum have a sort of a “lapse rate” where the broadest absorption will be near the surface?” Yes it would; mixed with water vapor the total spectrum is near saturation. As the average radiant layer rises, the layer of saturation thickens; if DWLR is to be a real energy, it has to follow the real laws. In the Arctic, the DWLR impact does increase water vapor feedback, which is initially low enough to impact surface temperature – the GHE gameplan. In the Antarctic, water vapor does not respond due to the much lower temperature range; CO2 is saturated with itself, minimal broadening, minimal impact. In the tropics, latent and sensible cooling vary below the average radiant layer, minimizing the impact.

“but the argument is all in the quantification of the effects, no?” Exactly!

http://wattsupwiththat.com/2012/01/10/the-climate-science-peer-pressure-cooker/ I would expect many more papers stating the same thing, that the impact is overestimated. I want to know why; I already knew it was.

The key to quantification I believe is thermal diffusion, thermal dissipation or thermal conductivity, whichever term you prefer, near the radiant boundary layer and the ocean mixing layer.

Dude Capn nice find! WUWT I don’t read, but they do sometimes get the scoop!

I don’t really care what Pat Michaels said, but the fact that the new paper says “Our analysis also leads to a relatively low and tightly-constrained estimate of Transient Climate Response of 1.3–1.8°C, and relatively low projections of 21st-century warming under the Representative Concentration Pathways” IN THE ABSTRACT is key.

Something bothered me about this last night, and now that it’s morning, I know what it is. At best, this approach can remove a degree of freedom. Did we have an extra one all along, or is this going to end up overconstraining the system?

P.E. – I think the place this could help the most is in the parameterization of sub-grid scale dynamical processes in GCMs. AFAIK, in some sense these parameterizations have lots of DOFs, because they are not specifically solving the equations of motion (Navier–Stokes), even numerically. I don’t see its application much to the radiative transfer part of the models.

Pekka made a comment above that the second law is built into a lot of constituent pieces, such as the adiabatic lapse rate. That’s fine, but doesn’t give us anything new. This may also be of some help on the micro scale as you say, but without having anything to say about the macro scale picture, I don’t see this as having much impact. Somebody needs to run with this and show what it has to do with the overall energy flow.

I think “bettering the models” using this approach to direct sub-grid scale parameterizations, has the potential to improve our understanding of most feedbacks – S-B, water vapor, lapse rate, clouds, ice etc.

Heh heh, the S-B feedback, formerly known as Stefan-Boltzmann, now renamed in honor of Spencer and Braswell.

Two points from a non-climate scientist who has done a lot of chemical thermodynamics:

1. N. Atlantic OHC: http://bobtisdale.files.wordpress.com/2011/12/figure-101.png

This should not happen unless GHG-AGW <<1990's natural heating, now cooling.

[It's actually associated with the forthcoming Arctic freezing in its 70 year cycle, over by 2020 according to Russian contacts when it'll be as cold as 1900.]

2. How can there be 100% thermalisation of absorbed IR when the Law of Equipartition of Energy means there is almost instantaneous random emission of an already thermally-excited molecule?

[Remember, the CO2 in a PET bottle experiment measures warming from two other aspects – absorption or IR by scattered IR, the constrained pressurisation of CO2 which has a higher CTE than air. Nahle has picked up on this Cp change, as climate science should have done but did not – it explains an awful lot of the physics.]

Sorry absorption of scattered IR by the walls of the PET [or glass] bottle.

Has anyone done IR absorption measurements in a NaCl [not IR absorbing] tube?

Organic chemists have been taking IR measurements of millions of synthetic compounds between KBr discs for many decades [frequently as a “Nujol mull”]. As an undergrad, I recall learning that the C=O stretch of carbon dioxide was often one of the significant background impurities of the spectrum. The chemical literature, and the Sigma-Aldrich reference data base, probably hold a wealth of historical [inadvertent] IR measurements of carbon dioxide from around the world, under ambient lab conditions. Knocking it into some usable shape might be an insurmountable task though.

This is a great post, JC. Since my school days I’ve been wondering about how the “earth system” exchanges entropy with the wider universe.

Here is a quote about the second law from the textbook that I used in my thermo classes at Ga Tech (Sonntag and Van Wylen were the authors). Basically they say the second law is evidence of a Creator.

‘… the authors see the Second Law of Thermodynamics as man′s description of the prior and continuing work of a Creator, who also holds the answer to the future destiny of man and the universe.′

provocative words!! I remember being struck by it when I was first studying the textbook.

verifying link here;

http://creation.com/sonntag-van-wylen-2nd-law-evidence-for-creator

That was also in a hydraulic engineering textbook I had for a college course (Morris and Wiggert), they also noted that it shows evolution is a load of crap (paraphrasing).

See Granville Sewell’s mathematical development: A second look at the second law where he develops the second law equations required for an open system.

And Sewell’s discussion on the Second Law

I have not seen any sound rebuttal to either Sewell’s math or his tautology.

What is the entropy diagram for photosynthesis?

BillC

Better yet, consider the reduction in entropy from stochastic processes with distributions of elements to a self-reproducing organism relying on photosynthesis for energy. This massive change of entropy over an Origin Of Life (OOL) scenario is where Sewell’s equations for the second law in an open system need to be applied.

“The proposed principle of MEP states that, if there are sufficient degrees of freedom within the system, it will adopt a steady state at which entropy production by irreversible processes is maximized. ”

That would not happen per se, would it? Take for instance wet wood. After it is burned, everything complex and ordered is transformed into simple and disordered gases, heat and a minor residue, ash. That is why wood burns, whereas the gases that result from that fire would not magically turn into wood, cooling the environment. Hence the burning, even of wet wood, is an entropy-producing process.

Still, it takes considerable effort to burn wet wood. I may not understand what the man says, but doesn’t that refute his idea of maximization? Or does my example have “sufficient degrees of freedom within the system”?

O, I see my mistake now. Indeed it is what is called “freedom within the system”.

See MattStat’s comment above:

https://judithcurry.com/2012/01/10/nonequilibrium-thermodynamics-and-maximum-entropy-production-in-the-earth-system/#comment-157883

The above was meant as a 2nd reply to David L. Hagen:

https://judithcurry.com/2012/01/10/nonequilibrium-thermodynamics-and-maximum-entropy-production-in-the-earth-system/#comment-158055

It is great to watch the scientists debate, just to show that the debate is not based on ignorance. I know nothing about MEP but articles like the following suggest that it too is seriously unsettled.

http://en.wikipedia.org/wiki/Non-equilibrium_thermodynamics

Excellent article on MEP and its application to understanding both past and present climate. We really do need to understand MEP better, which, along with spatio-temporal chaos, are the driving forces that create our world.

However, I wonder if the Earth was ever even near TE, even billions of years ago? Since very early in the solar system genesis, our planet has always had our moon and sister planets as companions and their tidal effects will have always kept things on Earth well stirred.

MEP is a step in the right direction, as is spatio-temporal chaos, but I think the cool part will be relativistic heat conduction. Every time I see the 1997/8 El Niño temperature profile I think of thermal resonance and the wave nature of heat. I am pushing my luck just yakking about the impact of conductivity changes at thermal boundaries. Breaking out imaginary temperatures would be loony bin stuff :)

Cap’n. You are insane to bring up relativistic heat conduction. This is a solution to a non-problem. Yes, diffusion can show infinite propagation speeds, but the width of this is a delta and so can be integrated out. Also, resonances are pressure waves and can be acoustically modeled. More stuff pulled out of nether regions.

Web I said it was loony :) It still interests me though. When I was playing with the Kimoto equation, that started such a row at Lucia’s, it got me thinking about RHC and self organizing criticality. Maximum Entropy production defines a state the system is seeking, right? What happens when it finds it? It would change the source, sink or a little of both so it looks for a new state. That should be fairly predictable. Some of the climate shifts don’t seem to be that predictable and never will be if they are chaotic. With two approaches, MEP and something else, there may be more that is predictable.

Anywho, the Kimoto equation needs some kind of validation and it looks like MEP may benefit from it, but there is still some weirdness or chaotic influence I don’t think it will find around 185K. If that something is thermal/non-thermal flux interaction, RHC may be the better approach in the end to fill in some gaps, or I am just nutz :)

Dr. Curry – What a great site where the layman and the scientist can converse, and reputation and stature must be checked at the door. So much to learn for layman….and scientist. Great post!

Judith,

This thermodynamic lesson misses many areas that were not taken into consideration:

No mention whatsoever of water loss.

Different velocities missed.

90% of the planet being water and the differences.

Angles of solar energy from the sun.

Planetary tilting.

Centrifugal force energy.

Differentiations of gases in heat exchange.

Planetary slowdown.

HOW DID THEY GET THIS SINGLE CALCULATION ON AN ORB THAT HAS MANY DIFFERENT PHYSICAL PARAMETERS???

If this single calculation used averaging, then the planet it describes is a cylinder and NOT an orb.

Just try reapplying that calculation back on an orb and it is totally different from the original data taken.

Entropy is not original sin, chaps! I believe some of the more, shall we say, esoteric argument in this thread was discussed somewhat more entertainingly in this 1956 essay: “The energy of the sun was stored, converted, and utilized directly on a planet-wide scale. All Earth turned off its burning coal, its fissioning uranium, and flipped the switch that connected all of it to a small station, one mile in diameter, circling the Earth at half the distance of the Moon. All Earth ran by invisible beams of sunpower.”

Although the title seems to indicate that MEP is something associated with non-equilibrium thermodynamics, it is fair to say that currently accepted formulations of non-equilibrium thermodynamics (TIP, EIT…) have nothing to do with the MEP hypothesis.

The author acknowledges in that article that:

However, in his book http://books.google.com/books?id=YRjfuEP_QycC&pg=PA42 Kleidon provides a more accurate review of the real status of MAXENT and MEP. In that book he also presents a .

There is a serious problem, however: MAXENT has been systematically shown to be wrong by thermodynamicians (e.g. by members of the Prigogine school such as Radu Balescu).

http://en.wikipedia.org/wiki/Maximum_entropy_thermodynamics#Criticisms

I find many other interesting points in the Kleidon paper:

(i) He cites

, when this is not a principle but a proven theorem. It cannot be naive ignorance, because in another part Kleidon cites the Kondepudi and Prigogine book, where the theorem is proven. Kondepudi and Prigogine also present an extension of minimum entropy production to nonlinear regimes, but Kleidon does not cite this well-known result.

(ii) He says that systems in thermodynamic equilibrium are characterized by S = k ln W. This is only true for microcanonical equilibrium, for which each microstate has the same probability, but the equation is not valid for other kinds of equilibrium. The application of microcanonical results outside of the microcanonical regime has been a characteristic of the MAXENT school which has received strong criticism in the literature.

(iii)

. First, in nonequilibrium thermodynamics dQ is an exact differential, not an inexact one as in classical thermodynamics. Second, that is only valid for closed systems; for open systems it is dQ/T + dmatterS. Third, that is not the definition of entropy but the definition of entropy flow deS = dQ/T (for closed systems), and the identity deS = dS is only valid when diS = 0. Fourth, dQ is not , but the flow of heat. The change in heat content, in Kondepudi and Prigogine, includes the production of heat term. See Kondepudi and Prigogine for details.

For a modern definition of heat and comparison with Kondepudi and Prigogine see http://vixra.org/pdf/1111.0024v1.pdf

(iv) Equations (15) to (21) are standard TIP equations. The comments made about (23) are plainly wrong. The production term in (2) is always non-negative. dG in (23) is only negative for closed systems at constant pressure and temperature. A simple derivation shows that if G is being considered a thermodynamic potential then dH in (23) cannot be the

but represents the flow of heat with the surroundings.

There are more issues, but this post is getting too long…

Good to have your insight around.

I have spent time studying the proposed principle of maximum rate of entropy production. My impression is that it is a fad. The very acronym MEP advertises to me that it is the province of lazy writers and slick thinkers.

Juan Ramón González seems to me to be a serious expert and I would more or less echo his post, though I am not a serious expert like him. I find his post helpful to myself.

As I interpret it within my limited competence, my reading of experts is that there cannot exist a valid and reliable general principle of far-from-equilibrium thermodynamics based simply on the rate of entropy production, as is the proposed principle of maximum rate of entropy production. Special cases require appropriate special modes of analysis. One has to deal with the diverse special cases on their respective merits.

The main question goes beyond whether I am an expert or not. The same textbook by Kondepudi and Prigogine, cited by Kleidon in his own paper, has a section titled “17.2 The Theorem of Minimum Entropy Production”. His calling this theorem the “minimum entropy production principle” indicates either that he does not know basic nonequilibrium thermodynamics or that he intends to over-emphasize the hypothetical MEP principle associated with MAXENT.

Virtually any textbook in thermodynamics, Kondepudi and Prigogine as well, explains that dG ≤ 0 holds only for constant N, p, and T.

Kondepudi and Prigogine devote a page to explaining why it is dQ in nonequilibrium thermodynamics instead of δQ as in classical equilibrium thermodynamics.

Etcetera.

Why does Kleidon cite a well-known textbook but then ignore most of what it says? I leave this to readers as an exercise.

You are right about far-from-equilibrium thermodynamics. Indeed, outside the linear regime, the production of entropy remains non-negative, but this production only covers the average behaviour, not the fluctuations. Near equilibrium fluctuations are small, but near bifurcation points fluctuations are abnormally large and you cannot approximate the instantaneous rate of production \tilde{\sigma} by the averaged value \sigma used in equations (1) and (2) of his paper.

Slightly off track, B.H. Lavenda (pages 64-65 of his TIP 1978) says that Prigogine’s proof does not use properly independent variables. This is at the boundaries of my level of understanding. Would you comment on this, and give references and links to criticism, discussion, or development? Judith Curry has my email address.

Lavenda starts by criticizing a statement made by Nicolis about the production being always a minimum, although finally, on page 65, he remarks that the production is a minimum for stationary regimes near equilibrium.

Any presentation of the Prigogine theorem that I know emphasizes that its validity is restricted to linear nonequilibrium thermodynamics where the Onsager relations hold.
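Within that linear domain of validity, the theorem is easy to demonstrate numerically: for two conductors in series between fixed reservoirs, with fluxes linear in the thermodynamic force Δ(1/T) (the Onsager form J = L·Δ(1/T)), scanning the intermediate temperature shows entropy production is minimized exactly at the steady state where the fluxes balance. A sketch with arbitrary coefficients:

```python
# Numerical sketch of Prigogine's minimum-entropy-production theorem in
# its linear domain of validity: two conductors in series between fixed
# reservoirs T1 and T2, with fluxes linear in the force delta(1/T)
# (Onsager form J = L * d(1/T)). Scanning the intermediate temperature
# shows entropy production is minimized where the fluxes balance, i.e.
# at the steady state. All coefficients and temperatures are arbitrary.
L1, L2 = 1.0e6, 1.0e6     # linear transport coefficients (assumed)
T1, T2 = 300.0, 280.0     # fixed boundary temperatures, K

def production(T):
    """Total entropy production for intermediate temperature T."""
    x1 = 1.0 / T - 1.0 / T1   # force across conductor 1
    x2 = 1.0 / T2 - 1.0 / T   # force across conductor 2
    return L1 * x1 * x1 + L2 * x2 * x2

# Scan a fine grid of intermediate temperatures for the minimum.
grid = [280.0 + 0.001 * i for i in range(20001)]  # 280..300 K
T_min = min(grid, key=production)

# Steady state: L1*x1 = L2*x2, so 1/T is the mean of 1/T1 and 1/T2.
T_ss = 2.0 / (1.0 / T1 + 1.0 / T2)   # harmonic mean, ~289.66 K
print(round(T_min, 2), round(T_ss, 2))
```

With equal coefficients the minimum sits at the harmonic mean of the boundary temperatures; the fixed boundary temperatures are exactly the kind of constraint the discussion above flags as essential.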

Thank you Juan Ramón González Álvarez for your kind response.

The objection of Lavenda is to Prigogine’s proposed proof for the case of stationary regimes near equilibrium. Lavenda on page 62 writes that “the principle of minimum entropy production is a by-product of Onsager’s variational principle.” Lavenda writes that Prigogine did not [as he claimed to do, comment inserted by present writer] prove a theorem for a non-equilibrium stationary state, that is to say, for a state finitely far from thermodynamic equilibrium. Instead, according to my (possibly mistaken) reading of Lavenda, Prigogine proved a theorem for infinitesimally small displacements from thermodynamic equilibrium. As I read Lavenda, Prigogine derived only the principle of least dissipation of energy applied to infinitesimally small deviations from thermodynamic equilibrium. These are subtle matters, and call for careful thought and careful statements, especially for non-experts such as me. What I have written just here is compatible with your statement “Any presentation of the Prigogine theorem that I know emphasizes that its validity is restricted to linear nonequilibrium thermodynamics where the Onsager relations hold.”

As far as I understand, the Onsager relations require that gradients are small, i.e. that the deviations from equilibrium are small in small volumes, not that the whole system is near thermodynamic equilibrium. Based on the limited reading that I have done, the controversy is related to this distinction. (Full linearity is reached only at the limit, where all deviations go to zero, but I don’t think that this should be interpreted to imply that large-scale smooth deviations from equilibrium should be excluded.)

The Wikipedia article on non-equilibrium thermodynamics gives a good impression, but only a real expert might perhaps tell whether it gives a balanced view of various opinions (and the expert should be one that accepts the value of views that differ from his own). The only thing that I dare to conclude with some confidence is that there remain open questions even on the basics of irreversible thermodynamics.

That sounds reasonable for a minimum entropy production regime at steady state. At maximum entropy, deviations would give less entropy, which makes them less likely ensemble states. Production is a time-derivative rate.

Remember that steady state does not imply equilibrium.

This is perhaps a naive reading, but it could explain the seeming contradiction between max entropy and min entropy production.

Pekka,

Onsager linearity assumes a local and fixed linearity between fluxes and potential gradients which prevails throughout the system. His proof of maximum entropy production is also only with respect to flux variations within a prescribed temperature distribution, a constraint infrequently noted. Even with absolute linearity a given, however, when fluxes are conservative, there remains an issue of total energy dissipation exceeding the associated internal flux when system temperature differences become commensurate with their absolute values, unless an additional condition is introduced that energy once dissipated is no longer available for subsequent dissipation within the system. I have yet to discover a proof, or even mention thereof, but my resources are rather limited.
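The linear flux–force structure described above can be sketched numerically (a minimal illustration; the coefficient matrix `L` and forces `X` below are made-up numbers, not taken from any source): in the linear regime the fluxes are J = LX with a symmetric coefficient matrix (Onsager reciprocity), and the entropy production is the positive quadratic form σ = Σᵢ JᵢXᵢ.

```python
# Linear (Onsager) regime sketch: fluxes J = L @ X with a symmetric
# coefficient matrix L; entropy production sigma = sum_i J_i * X_i >= 0
# when L is positive definite. All numbers are illustrative only.
L = [[2.0, 0.5],
     [0.5, 1.0]]          # reciprocity: L[0][1] == L[1][0]
X = [0.3, -0.7]           # thermodynamic forces (e.g. small gradients)

J = [sum(L[i][j] * X[j] for j in range(2)) for i in range(2)]
sigma = sum(Ji * Xi for Ji, Xi in zip(J, X))

assert L[0][1] == L[1][0]   # Onsager reciprocal relation
assert sigma > 0            # irreversible processes produce entropy
```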

“He says that systems in thermodynamic equilibrium are characterized by S=k lnW. This is only true for microcanonical equilibrium, for which each microstate have the same probability, but the equation is not valid for other kind of equilibrium.”

This is an opportunity for me to enhance my understanding of entropy as a statistical mechanical concept. Could you cite examples of equilibria in which microstates have different probabilities? How serious an error would result from assuming equal probabilities when applied to climate variables? If we were simply looking at the behavior of atmospheric gases, would the errors be large?

Drop an amount of ideal dye into a tank of water. The dye will disperse to a uniform concentration, giving a constant value of W over the XYZ coordinates of the volume. This is a Maximum Entropy condition according to the p*ln(p) definition. Microstates of different probabilities would occur if the dye was charged in an electric field or had differential buoyancy with gravity. That adds a constraint to MaxEnt, modifying the probability result.

How it gets there is a Maximum Entropy Production problem. One could just as soon use a master diffusion equation to try to solve the dynamical time evolution, but the production advocates suggest that there is an easier way, akin to plain old max entropy alone.
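The equilibrium end-state of the dye example can be checked with a small sketch (the cell count and the comparison profile are arbitrary): with no constraint beyond normalization, the uniform concentration maximizes −Σ p ln p, and the maximum equals ln(number of cells).

```python
import math
import random

def shannon_entropy(p):
    """-sum p ln p over the nonzero entries of a distribution."""
    return -sum(x * math.log(x) for x in p if x > 0)

cells = 100                       # grid cells in the tank (arbitrary)
uniform = [1.0 / cells] * cells   # fully dispersed dye

# Any other normalized concentration profile has lower entropy:
random.seed(0)
raw = [random.random() for _ in range(cells)]
other = [x / sum(raw) for x in raw]

assert shannon_entropy(uniform) > shannon_entropy(other)
# Unconstrained maximum is ln(cells):
assert abs(shannon_entropy(uniform) - math.log(cells)) < 1e-12
```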

Fred,

For a system in equilibrium the relative probabilities of microstates are proportional to exp(-E/kT). In the microcanonical ensemble all microstates have the same energy and are therefore equally probable in equilibrium. In the canonical ensemble the energy varies, and therefore the probabilities vary as well.
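A minimal sketch of this distinction (the energy levels below are made-up illustrative values): canonical microstates are weighted by exp(−E/kT)/Z and so have unequal probabilities, while equal energies, the microcanonical case, give equal probabilities.

```python
import math

def boltzmann_probabilities(energies, kT):
    """Canonical-ensemble probabilities p_j = exp(-E_j/kT) / Z."""
    weights = [math.exp(-E / kT) for E in energies]
    Z = sum(weights)                  # partition function
    return [w / Z for w in weights]

# Illustrative (made-up) microstate energies, in units of kT:
p = boltzmann_probabilities([0.0, 1.0, 2.0, 3.0], kT=1.0)

# Probabilities differ microstate to microstate (canonical case) ...
assert p[0] > p[1] > p[2] > p[3]
# ... but are equal when all energies are equal (microcanonical case).
q = boltzmann_probabilities([1.0] * 4, kT=1.0)
assert all(abs(x - 0.25) < 1e-12 for x in q)
```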

Web said, “Microstates of different probabilities would occur if the dye was charged in an electric field or had differential buoyancy with gravity.”

Is this a light bulb moment?

Perhaps for dim bulbs.

I might add that this can be used to solve the atmospheric lapse rate expression to first order by invoking an average gravity head as the constraint.
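One hedged way to read the “gravity head as the constraint” idea is the following sketch (not the commenter’s actual derivation; the heights, target mean, and bisection bracket are arbitrary): maximizing −Σ p ln p over heights subject to a fixed mean height yields p(z) ∝ exp(−λz), the isothermal barometric form, rather than a full lapse-rate profile.

```python
import math

def maxent_heights(z, mean_z):
    """MaxEnt distribution over heights z with fixed mean: p_i ~ exp(-lam*z_i).
    The multiplier lam is found by bisection on the mean-height constraint."""
    def mean_for(lam):
        w = [math.exp(-lam * zi) for zi in z]
        Z = sum(w)
        return sum(zi * wi for zi, wi in zip(z, w)) / Z
    lo, hi = 1e-9, 50.0           # bracket for lam (mean decreases with lam)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mean_for(mid) > mean_z:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(-lam * zi) for zi in z]
    Z = sum(w)
    return [wi / Z for wi in w]

def entropy(p):
    return -sum(x * math.log(x) for x in p if x > 0)

z = list(range(11))               # heights 0..10 (arbitrary units)
p = maxent_heights(z, mean_z=3.0)

# Constraint satisfied, and entropy beats another feasible distribution
# (two equal point masses at z=2 and z=4, which also has mean 3.0):
assert abs(sum(zi * pi for zi, pi in zip(z, p)) - 3.0) < 1e-6
two_point = [0.0] * 11
two_point[2] = 0.5
two_point[4] = 0.5
assert entropy(p) > entropy(two_point)
```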

For the attitude, you will still not be receptive, but what the heck.

The 4C boundary layer, i.e., the maximum density of salt water, drives the ocean circulation. It is not a MaxEnt problem. The temperatures, viscosity and possible geomagnetic potential in the Antarctic atmosphere make it not a MaxEnt problem. Do you agree?

Concerning the lapse rate, it must be taken into account that it is a property of a stationary non-equilibrium atmosphere, and any valid derivation must be consistent with that.

To Pekka – Thanks. I wasn’t challenging the statement that assuming a microcanonical ensemble and equal probabilities of microstates introduces errors. I was interested in the extent of those errors – e.g., the variability of E for a given T as applied to climate states. For a given macroscopic volume of gas, how much deviation from equal probabilities is likely?

This is just for my own interest, because I think the earlier comments on the MEP principle were sufficient to raise doubts about its ability to determine real-world behavior. My interest here relates more to interpreting macroscopic data on the basis of probabilities, which for me has great intuitive appeal.

Someone has suggested that the main ocean thermocline shape is a Maximum Entropy configuration.

So you will have to argue with them.

Fred,

In statistical mechanics the number of particles and degrees of freedom is usually huge even in the smallest macroscopic volumes. Therefore the relative deviations from the average are extremely small. Basically it’s expected that the microcanonical, canonical and grand canonical ensembles have essentially identical macroscopic properties, when the energy of the microcanonical ensemble is chosen to be the same as the average energy of the canonical ensemble, and the number of particles in the canonical and microcanonical ensembles is chosen to agree with the expectation value of the grand canonical ensemble.

The ensemble is in many ways similar to a set of separate volumes in identical circumstances, e.g. small neighboring volumes of gas, but formally the concept of ensemble is different. There’s a lot of literature on the relationship of ensembles to states of the same volume at different times, or to the separate volumes at a specific time.

From the short paper of Juan Ramón González we can also read that even nonequilibrium thermodynamics is based on the assumption that it’s possible to consider small volumes in local thermal equilibrium.

By all the above I want to say that the variability between states of canonical (or grand canonical) ensemble is not closely related to the differences between small volumes with even slightly different macroscopic properties.

Web, I didn’t think I was being argumentative. I just noted your tone was not particularly receptive. From what you didn’t say, I assume you think the Antarctic atmospheric part of the question might have a different answer. I mentioned quite some time ago that I firmly believe the Arrhenius CO2 equation falls apart at lower temperatures. I mentioned before the convergence of the 4C boundary and the change in the conductive properties of the atmosphere at lower temperatures; -20C is the peak. There is a lot happening in the Antarctic. MEP is a step in the right direction, but there are some oddities I don’t know how it can handle.

Cap’n, With all due respect, I still think your thought process is borderline insane. Let me parse your sentences to show how you conflate principles into a confusing mix of nonsense.

So you are talking about atmosphere and not the ocean thermocline now.

How can a radiation transfer principle “fall apart” at lower temperatures?

The 4C boundary has to do with liquid water, not gases. And as we all know, the conductive properties of gases at atmospheric pressure are minimal in comparison to the convective properties. So I can’t see how this liquid thermocline boundary can have any impact on the twice-removed conductive properties of the atmosphere.

I sense that these seem to all be random thoughts that are not pieced together in any intelligent way.

The only reason I am trying to help you is that I have an interest in communicating scientific research in more intuitive ways. The last book I wrote is chock-full of applications of the Maximum Entropy Principle to various environmental and natural processes. My partial list of MaxEnt derivations is described in this Google Spreadsheet: http://bit.ly/wZIwnY

This list continues to grow and an interesting one that I am working on is deriving wave energy spectra, which comes out very cleanly from a simple first order energy model and maximum entropy dispersion of the energy content in a wave. That illustrates my goal — to try to convey what many would think are complicated concepts using some rather simple mathematical concepts relating to disorder in our environment.

Cheers and I seriously hope that you can try to create some order out of all those seemingly random thoughts that are racing through your mind.

A simple example would be a closed system at thermal equilibrium with a heat bath. The probability of a microstate j is given by exp(-E_j/kT)/Z, where E_j is the energy associated to a microstate and Z a ‘normalization’ constant.

Consider an atmospheric element of volume at equilibrium with its surroundings. The different microstates have different probabilities. If you were to assume that any accessible microstate is equally probable, then the element of volume would have the same probability of being in a microstate with energy and composition equal to that measured as of being in a hypothetical microstate containing the energy and matter of the whole atmosphere!

Pekka Pirilä: My paper states that (i) the TIP formulation of nonequilibrium thermodynamics assumes local equilibrium (not microcanonical) and (ii) this assumption does not work for large gradients or fast processes. The EIT formulation of non-equilibrium thermodynamics does not assume local equilibrium and can study those processes. My paper cites relevant books on both formulations.

So using Jaynes formulation for Maximum Entropy, he defines entropy as the expected value of the (-) log of probability considered over all states. In this case, the E_j range from 0 upward, so the final result is E_mean/T, which is the thermodynamic result for the ensemble. Then, working backwards from E_mean, one can get back the probability distribution by applying variational techniques to find the maximum entropy result.

Perhaps that is the mathematical contrivance that Jaynes discovered. The MaxEnt principle seems to reduce to the classical statistical mechanics and thermodynamics in a very convenient way, and scientists and engineers just find this way of thinking practical and useful.
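The “E_mean/T” bookkeeping in the comment above can be checked directly (the energy levels and multiplier below are illustrative): for p_j = exp(−λE_j)/Z the maximized entropy satisfies S = λ⟨E⟩ + ln Z exactly, which with λ = 1/kT is the familiar thermodynamic form.

```python
import math

# Boltzmann distribution for some illustrative (made-up) energy levels,
# with lam playing the role of 1/kT.
energies = [0.0, 0.5, 1.3, 2.0, 4.1]
lam = 1.7

w = [math.exp(-lam * E) for E in energies]
Z = sum(w)                    # partition function
p = [wi / Z for wi in w]

E_mean = sum(E * pi for E, pi in zip(energies, p))
S = -sum(pi * math.log(pi) for pi in p)

# The maximized entropy obeys S = lam*<E> + ln Z exactly
# (substitute ln p_j = -lam*E_j - ln Z into -sum p ln p to see why).
assert abs(S - (lam * E_mean + math.log(Z))) < 1e-12
```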

WebHubTelescope:

The canonical ensemble has been known in physics since Gibbs’ epoch. And that thermodynamic entropy is a maximum at equilibrium has been known since approximately Clausius’ epoch.

Nor was Jaynes the one who defined entropy as “the expected value of the (-) log of probability considered over all states.” All of this was also done by Gibbs, and it is known as the Gibbs entropy.

Maybe you are confounding Jaynes’ theory with the standard and well-tested classical thermodynamics and equilibrium statistical mechanics. I do not know if this is the case.

To be clear, what Jaynes does in his theory is to assume that entropy is always a maximum, also at non-equilibrium states, which is not true.

Moreover, Jaynes’ entropy is an informational entropy, which is different from the physical entropy used in thermodynamics and statistical mechanics.

Your claims about the popularity of Jaynes’ theory were already corrected.

It is nice to see that where MEP is concerned, different schools of thought are recognized and respected. The same cannot be said for the climate debate, and this is a good measure of the politicization of the science. It is all that many skeptics are asking for.

If you read the Kleidon book linked above, you will discover that MEP and MAXENT are rejected by the immense majority of scientists.

I would say that there are two schools: one accepting MEP (about half a dozen individuals such as Kleidon) and another rejecting MEP (the rest, including Nobel laureates in thermodynamics such as Onsager and Prigogine, and worldwide NESM experts such as Balescu).

I interpret what you are saying is that MaxEnt is rejected by mathematical physicists, but not by applied physicists and engineers. The latter use it all the time for such applications as Maximum Entropy Spectral Estimation.

As I quoted elsewhere in this thread, the formal justification is perhaps lacking, but it nevertheless works in practice.

WebHubTelescope: Therefore, when you read Kleidon’s book stating that such ideas are rejected by the immense majority of scientists, do you believe that most scientists in the world are mathematical physicists? Wow!

The engineers that I know (including chemical engineers) use the thermodynamics theories developed by physicists, chemists…

An academic search of “Maximum Entropy Spectral Estimation” returns about 800 results; the immense majority are works about information theory, without distinguishing ‘equilibrium’ from ‘nonequilibrium’ regimes. An academic search of “Minimum Entropy Production” returns more than 2600 results; almost all are works in chemical physics, physics, thermodynamics, meteorology, applied physics, engineering…, and 100% deal with nonequilibrium regimes.

So is this an issue of whether Maximum Entropy is useful for solving problems, versus whether it is not useful for describing physical behaviors?

This is a quote from a reference book with a section on applying Maximum Entropy priors, “Statistical decision theory and Bayesian analysis” by James O. Berger, 1985:

This is really about reasoning under uncertainty, or with limited information, which is what much of science, and one of the main themes of Climate Etc, is all about. We have physical models of the environment, yet these models are not complete and can have aleatory uncertainty. The principle of Maximum Entropy allows us to fill in some of the gaps.

So I suggest that it is useful both for modeling (filling in the unknowns) and for solving problems (estimating the unknowns).

WebHubTelescope: About your citation of Berger from an old textbook on statistics: most textbooks on physics, chemistry, biology, and engineering do not use Jaynes’ ideas but the physical, chemical, biological… theories developed and tested in labs over centuries.

The citation is ambiguous because maximum entropy methods were not invented by Jaynes; they have been used in physics since Clausius. Nobody doubts that entropy at equilibrium is a maximum, and techniques to exploit this fact are well-known and used in ordinary textbooks on equilibrium statistical mechanics, for instance.

Jaynes’ main idea is that entropy would also be a maximum outside equilibrium. It is this idea which has been rejected. And even when his theory is cured with the kind of ad hoc corrections denounced by Balescu (one of the fathers of the first formulation of non-equilibrium statistical mechanics), there is no way to obtain the evolution equations for the non-conserved variables, making the theory of Jaynes et al. essentially useless for many problems of physical, chemical, or biological interest.

There are a few URL links, and links-to-links, here.

Some applications of the approach seem to be open to question.

I can move all the links over to here if the hostess prefers.

I must be really missing something, because this whole discussion of MEP leaves me mystified – and I’ve taught thermodynamics. Viewing the Earth system as a whole, the entropy production problem in steady state is defined simply by the entropy difference between outgoing radiation (at the effective radiating temperature 255 K) and incoming radiation (at an effective solar surface temperature around 5000 K). At steady state nothing else is changing, only incoming radiation and outgoing radiation, right?

So what is there to maximize? The number is unchanging and determined by the first law energy balance (at steady state). I.e. given any process that converts a quantity of heat dQ from temperature T1 to a lower temperature T2, the entropy change dS = dQ(1/T2 – 1/T1). So the overall entropy change per unit time for the planet has to be constrained to be dS/dt = dQ/dt (1/255K – 1/5000K). How can it be different? If the temperature change goes through a series of steps rather than dropping in one jump, the final total change is still given by the change from initial to final temperature.
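Putting illustrative numbers into that formula gives a quick sanity check (240 W/m² is a commonly used global-mean absorbed solar flux, assumed here; the temperatures are the ones in the comment):

```python
# Back-of-envelope planetary entropy production rate per unit area,
# dS/dt = (dQ/dt) * (1/T_out - 1/T_in). Numbers are illustrative.
Q = 240.0      # absorbed = emitted flux, W/m^2 (assumed global mean)
T_in = 5000.0  # effective solar emission temperature, K
T_out = 255.0  # effective terrestrial emission temperature, K

sigma = Q * (1.0 / T_out - 1.0 / T_in)
print(round(sigma, 3), "W m^-2 K^-1")   # roughly 0.9 W m^-2 K^-1
```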

Now if the entropy maximizing principle is intended to apply after the point of absorption by some surface system (i.e. T1 is Earth surface temperature, not solar surface temperature), then what is it that constrains MEP from forcing Earth surface temperature higher and higher and higher? Doesn’t that form of MEP imply an Earth surface temperature as hot as the Sun’s?

Not that I expect a coherent answer here, but I just wanted to express how this whole idea makes no sense to me…

A good example application is here:

http://88.167.97.19/albums/files/TMTisFree/Documents/Climate/Thermodynamic_optimization_of_global_circulation_and_climate_Constructal_Climate.pdf

dan, thanks for spotting an online copy of this paper

Author, I was hoping someone would have tried to answer your question by now. I was thinking about the same issue, but not going all the way back to the sun. 1367 W m-2 for 394 K would be my T1, and space at 3 K my T2, for maximum global entropy. I am thinking of a multilayer model, though, to try and incorporate all of the thermo boundary layers.

Then half, allowing for shape, would make the Hemispherical T1=331K which I think should be used for minimum entropy to give a maximum global average surface temperature of 331K for perfect insulation. Jim Hansen would disagree since we would not be boiling oceans off very soon :)

Individual processes and boundary conditions would limit within those maximum/minimum bounds.

Very nice link Dan. Duke University, power plant models, constructal theory, real work, real entropy, real temperatures, real limits: this is getting to be real interesting :)

I don’t see how the MEP principle is applicable to the regulation of albedo. It is regulated somehow, after all, basically by the fraction of snow & cloud cover over the surface, which is known to fluctuate in a narrow range. However, a simple black body would surely produce entropy at a higher rate for the same incoming radiation flux than this globe, covered with its haphazardly distributed white patches.

Therefore the question is: if MEP holds, why is the Earth not pitch black?

The proposed principle of maximum rate of entropy production should have nothing specific to do with Jaynes’ principle of maximum entropy. One is proposed to be a physical principle, the other a rule for constructing probability distributions not necessarily related to physics. If Jaynes’ principle is not giving right answers to physical problems, that is surely due to the wrong use of the principle, not to any alleged invalidity for its proper uses. I think the case against the proposed physical principle of maximum rate of entropy production does not depend on a case against Jaynes’ principle. The case against the proposed physical principle of maximum rate of entropy production is properly stated in strictly physical terms, with no reference to Jaynes’ principle.

I agree about the conflation between the production principle and the original maximum entropy principle, which as Christopher says is only a way of constructing probability distributions. This is a partial list of probability distribution behaviors that I have characterized using Jaynes’ originally conceived maximum entropy principle.

Reservoir Size

Reserve Growth

Earthquake Size

City Size

Species Diversity

Crystal Growth

Human Transport

Project Completion

Volatile Investments

Hyperbolic Discounting

TCP/IP Latency

Train Statistics

Wind Energy

Signal Fading

Global Oil Discovery

Bathtub-shaped Reliability Curve

Rainfall

GPS Acquisition

Popcorn Popping

Dispersive Transport in Semiconductors

Fokker-Planck

Porous Transport

Heat Conduction

Labor Productivity

Chemical Reaction – CO2 Residence

1/f Noise

Classical Reliability

So with all due respect to Juan Ramon Gonzalez and his expertise, I am still puzzled by his assertion that MAXENT is “rejected by the immense majority of scientists”. I can side with him on the maximum entropy production principle, but only insofar as it is still a weakly-defined concept. My own interpretation is that many of the physical processes that I have characterized follow from rate concepts; and these rates are maximally dispersed in the steady state, leading to the empirically measured probability distributions.

Maybe, if you were to read what was actually said, you would discover that the assertion was made by Kleidon himself, who acknowledges how MAXENT has been rejected by the majority of scientists:

http://books.google.com/books?id=YRjfuEP_QycC&pg=PA42

The reasons why MAXENT is rejected are well-known:

http://en.wikipedia.org/wiki/Maximum_entropy_thermodynamics#Criticisms

Not to nitpick, but please tell me of any physical events that do not have to do with physics?

That chapter you link to was written by Roderick Dewar, not by Kleidon, who appears to be co-editor of the volume.

Moreover, that is a common tactic, to say that an approach is rejected by a majority of scientists, as it puts the contrary theory in more of a novel, even revolutionary light. In this case, I think Dewar is exaggerating.

I really don’t understand how Jaynes’ Maximum Entropy principle can be rejected outright, though. Once maximum entropy is reached, it seems that the extra entropy produced is minimal. A maximum is defined by the first derivative being zero, so the production rate at maximum entropy would be at a minimum.

Yes, you are right, the chapter was written by Dewar. However, Kleidon is editor and in his own chapter he cites the chapter by Dewar in the same volume.

You think that Dewar is exaggerating, but he is being sincere. You do not accept the reasons why Jaynes’ ideas are rejected, but technical reasons were given.

I know of a few issues with the Jaynes formulation. For one, the continuous form for probability is problematic, and there is a representation dependence for a specific result. Another is that many believe it doesn’t handle fat tails well, hence the research on Rényi entropy.

Do those match your concerns? I tried looking up “non-transitive evolution law” but these are mainly circular refs back to the Wikipedia article. It sounds like this concern implies that ordering of causal actions or events is important.

My personal research involves rethinking of the whole CTRW formulations of random walk. I think they are much too complicated and are much easier to explain in terms of disorder in the parameters, and thus in simplifying the stochastic equations. This is probably closer to the ideas you are espousing with respect to non-equilibrium dynamics.

The “continuous form for probability” is not a real problem once one acknowledges that nature is discrete. The problems are the assumption of maximum entropy outside equilibrium, the neglect of relative components to the ‘thermodynamic’ branch f^c used in the distribution, confusion between physical entropy and informational entropy, lack of evolution law for non-conserved quantities, and more.

The concern by Balescu about the lack of a transitive evolution law is as follows. In Jaynes’ theory one starts at t_0 by maximizing entropy subject to certain ad hoc constraints. Now, applying the evolution of the microstates gives a constant entropy, which does not agree with the second law of thermodynamics for irreversible processes. Then Jaynes et al. try to solve this by repeating the maximization process at posterior times t_1 < t_2 to force an increase in entropy in agreement with observations. This gives a serious problem, since the subsequent maximization processes are quite arbitrary: the one-step evolution from t_0 to t_2 does not yield the same results as the evolution from t_0 to t_1 followed by an evolution from t_1 to t_2.

MEP sceptics would consult this:

http://iopscience.iop.org/1751-8121/40/31/N01

Thanks for elaborating on the issues. I am not defending the derivation of Maximum Entropy — note that I already pointed out Gian-Carlo Rota’s description of the issues much earlier in this comment thread

https://judithcurry.com/2012/01/10/nonequilibrium-thermodynamics-and-maximum-entropy-production-in-the-earth-system/#comment-157925

The major issue he sees as the “mathematical axiomatization of thermodynamics”.

Rota also describes the issues between Shannon (information) entropy and Boltzmann entropy, with the skin-deep surface similarities not enough to keep scientists from using Boltzmann entropy exclusively for statistical mechanics and thermodynamics.

I also said upthread that I couldn’t follow Dewar’s work as it appeared to require some circular reasoning. Thanks for the link by Grinstein and Linsker, which I will read carefully.

There is a continuing discussion of this topic (deeper mathematical meaning of entropy in the context of dynamics and quantum mechanics) at the Azimuth Project blog, with a new post yesterday:

http://johncarlosbaez.wordpress.com/2012/01/13/extremal-principles-in-classical-statistical-and-quantum-mechanics

May I recommend a 2008 discussion of some of these questions, by Walter T. Grandy, Jr., ‘Entropy and the Time Evolution of Macroscopic Systems’, Oxford University Press, Oxford UK, ISBN 978-019-954617-6?

typo 978-0-19-954617-6

All of you make this way too difficult.

The Temperature of Earth has cycled in a narrow range for Ten Thousand Years. Earth has a set point for Temperature. In the past Ten Thousand Years, if Temperature of Earth got one or two degrees above the set point, it always cooled. If Temperature of Earth got one or two degrees below the set point it always warmed. Of all the things that can be used to control Earth’s Temperature, the only one with a set point is Ice and Water. When Earth is warm, it melts Arctic Sea Ice and Massive Snowfall advances ice and increases Albedo. When Earth is cool, Arctic Sea Ice Freezes and reduced Snowfall allows the Sun to melt ice and Albedo Decreases and Earth Warms. This is the only forcing that Earth has that has a set point based on Temperature that is powerful enough and quick enough to have kept the Temperature of Earth Regulated in the bounds of the past Ten Thousand Years.

HAP, so varying thermal inertia and hysteresis associated with related system processes in near equilibrium produces pseudo-cyclic fluctuations on varying timescales? Never heard of such a thing :)

When the Arctic is liquid, Earth is cooling

When the Arctic is ice, Earth is warming

This is the Thermostat of Earth

This is in spite of orbit cycles, tilt cycles, CO2, Solar Cycles and whatever else you can drive temperature with.

Christopher Game: This is what Lavenda says on page 65:

Lavenda is not saying that the theorem is not valid, but that it only applies to linear nonequilibrium regimes. As I said, and you quote, “Any presentation of the Prigogine theorem that I know emphasizes that its validity is restricted to linear nonequilibrium thermodynamics where the Onsager relations hold.”

Precisely, the section “17.2 The theorem of minimum entropy production” in Kondepudi and Prigogine book is found in the chapter “17 Nonequilibrium stationary states and their stability: linear regime”.

What is the problem?

WebHubTelescope: You again cite another book which is not about thermodynamics or statistical mechanics. And the author only cites image recovery as an application of maximum [information] entropy. I already referred to information theory before.

You have extensively quoted this author in this thread about thermodynamics. On the same page that you quoted, the author says:

“I do not know any thermodynamics.” On a first reading of the rest of that chapter, it seems that the author confounds different entropies: Boltzmann, Gibbs, Shannon, physical, informational… Maybe this explains why he does not understand why Shannon entropy is ignored in most applications of statistical mechanics and thermodynamics.

But at least he confirms what I have said in this thread: Jaynes ideas, MAXENT, MEP and similar ideas plays virtually no role in science (physics, chemistry, biology, geology…) and associated engineerings.

Sorry, I do not find any “deeper mathematical meaning of entropy” in the Azimuth blog.

IMO, all of thermodynamics and statistical mechanics comes about from applying combinatorics via the mathematical shortcut known as Stirling’s approximation. This turns a factorial representation into a logarithm, and from that, all of the different entropies fall out. Maybe this is not representative of a physical process, but it is representative of the statistics that can describe variations of a physical process, or the variations in the parameters of a physical process.
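The combinatorics-to-entropy claim can at least be illustrated numerically (the probabilities below are chosen arbitrarily): via Stirling’s approximation, (1/N)·ln of the multinomial count of microstates approaches −Σ p ln p as N grows.

```python
import math

def log_multinomial(counts):
    """ln( N! / (n1! n2! ...) ), computed exactly via lgamma."""
    N = sum(counts)
    return math.lgamma(N + 1) - sum(math.lgamma(n + 1) for n in counts)

def shannon(p):
    return -sum(x * math.log(x) for x in p if x > 0)

p = [0.5, 0.3, 0.2]               # arbitrary illustrative probabilities
for N in (100, 10000):
    counts = [int(round(N * x)) for x in p]
    per_particle = log_multinomial(counts) / sum(counts)
    # Stirling: (1/N) ln W  ->  -sum p ln p  as N grows
    print(N, round(per_particle, 4), round(shannon(p), 4))

assert abs(log_multinomial([5000, 3000, 2000]) / 10000 - shannon(p)) < 0.01
```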

If you want to say that maximum entropy “plays virtually no role in science (physics, chemistry, biology, geology…) and associated engineerings”, that is your choice and you can use your own methods. However, I am mystified by all the progress that scientists and engineers have made by using maximum entropy techniques for problem solving. Granted, some of these may just be Lagrangian variational techniques known long before Jaynes started applying them with a fresh perspective, but we still have all the evidence of their popular use.

Here are a few examples:

Geologists use maximum entropy all the time in exploring for natural resources. I have examples of using it for modeling oil reservoir sizes.

Biologists use maximum entropy all the time for modeling ecological diversity. I have examples of using it for modeling relative species abundance.

As an engineer who understands physics and chemistry, I use MaxEnt for modeling electrical transport in disordered semiconductors and in modeling crystal growth.

The question is how do we recast all these solutions that I and many other people have branded as MaxEnt techniques into your vision of a science and engineering theory?

There is always a dichotomy between the purity of an approach versus the practical applications of an applied model. We need a little help here in getting beyond this dichotomy. That was partly the theme of the lecture by Gian-Carlo Rota. He said that the issue is one of unifying the mathematics of probabilities with that of the physics.

I would be careful to say that statistical mechanics relies essentially on combinatorics, while equilibrium macroscopic thermodynamics does not. Utterly non-equilibrium thermodynamics must, I think, rely also on combinatorics, but the scope of it is still a research program.

I think that the account given by Solomon Kullback (Information Theory and Statistics, 1958) is often not used when it would help considerably. It makes more sense of the customary “Shannon” concept.

WebHubTelescope:

It is not true that “all of thermodynamics and statistical mechanics comes about from applying combinatorics via the mathematical shortcut known as Stirling’s approximation.”

Combinatorics and Stirling approximations play no fundamental role in nonequilibrium thermodynamics and nonequilibrium statistical mechanics. The reasons are well-known: equilibrium is essentially statistical, nonequilibrium is essentially dynamical. This is another reason why the ‘statistical’ ideas of Jaynes et al. have often failed in the science and engineering of nonequilibrium.

Combinatorics and Stirling approximations play an important role in the equilibrium statistical mechanics of large systems (macro scale). For small systems (nano scale), however, Stirling is not used.

Combinatorics and Stirling approximations play little or no role in classical thermodynamics. You can give an entire course in equilibrium thermodynamics without even mentioning them, although they can be mentioned in some appendix dealing with the microscopic foundations of thermodynamics.

It is not true that “all of the different entropies fall out”. Boltzmann, Gibbs, and Shannon are different entropies: the former two are physical and are found in textbooks of physics and chemistry; the latter is informational and is found in textbooks dealing with information theory. Some people confound them, like the guy you cited, but he acknowledged “I do not know any thermodynamics.”

Your words, “If you want to say that maximum entropy ‘plays virtually no role in science (physics, chemistry, biology, geology…) and associated engineerings’ that is your choice and you can use your own methods”, are a gross misinterpretation of what I said.

First, my exact words were:

Second, by “Jaynes and similar ideas” I already explained to you that I mean his theory that entropy must be a maximum at nonequilibrium.

Third, I already explained in this thread that maximum entropy methods have been known in the science of equilibrium since Clausius and were applied by Gibbs et al. to equilibrium statistical mechanics. Jaynes does not have a monopoly on these methods. Jaynes merely believes that he can extend Gibbs’ methods outside equilibrium, and it is this belief which has been criticized by scientists.

Either you deliberately pretend to conflate MAXENT and Jaynes’ ideas with any maximum entropy method used in science (e.g. with standard methods in equilibrium statistical mechanics), or you are deliberately ignoring what the MAXENT literature itself says about the lack of popularity and use of Jaynes’ ideas.

Two different textbooks acknowledging that MAXENT and Shannon entropy are almost ignored by scientists and engineers were given here. I repeat the links:

http://books.google.com/books?id=YRjfuEP_QycC&pg=PA42

http://books.google.es/books?id=eaJyGXguokIC&pg=PA71#v=onepage&q&f=false

Your response was:

However, both textbooks are by authors who support Jaynes/Shannon ideas; the textbooks are not by people working in rival theories; therefore, you cannot accuse them of being biased against Jaynes/Shannon ideas.

I am a scientist, whereas you are not; but if you want to believe that in your own universe MAXENT and Shannon entropy are used each day in scientific labs (or in chemical engineering labs), or if you want to believe that the ideas that you endorse give “deeper mathematical meaning”, that is all ok for me.

Well, Juan Ramon Gonzalez claims I don’t know science. I know that science is a long, tough slog, but I am not ready to withdraw my doctoral dissertation quite yet.

You really should consider participating in the http://azimuthproject.org blog discussion. This topic is worthy of discussion over there.

Remark to Juan Ramón González Álvarez. I find your admirable article linked above, entitled ‘Non-redundant and natural variables definition of heat valid for open systems’, to be very valuable and helpful. Haase is not in my local university library and it will take some time for me to get a copy. Also, for library reasons, it will take me some time to get a copy of Balescu 1997. (Balescu 1975 does not mention Jaynes.)

By Balescu 1975 I suppose that you mean “Equilibrium and nonequilibrium statistical mechanics”. I do not remember now whether he mentions Jaynes, but he probably does not even mention him, because his ideas are not needed (or are wrong) for developing statistical mechanics, kinetic theory, and thermodynamics.

Balescu 1997 only mentions Jaynes in a final chapter where he discusses grand theories of irreversibility. He reports theories that work and are used each day by scientists and engineers, he discusses some recent advances made by Prigogine and coworkers, and finally he reports Jaynes’ maximum entropy theory as a theory with problems that has not given any new result.

Thank you for your reply.

I think that irreversibility is explained epistemically.

Irreversibility is reported in macroscopic thermodynamic accounts but not in purely mechanical accounts (no statistics, only Newton’s laws and the like, describing every detail for every “particle”). The purely mechanical accounts, as I read them, have perfect knowledge of all the details of all the “particles”. The Laplace idea is that they can give perfect predictions of every detail, supposing they are given perfect initial data and exact dynamical laws such as Newton’s. Macroscopic thermodynamic accounts have only statistical or smoothed information, which is by definition less informative than the complete and perfect data of the purely mechanical accounts. Consequently, macroscopic thermodynamic accounts cannot give perfect predictions of every detail. As one attempts to predict further into the future, or to retrodict further into the past, from imperfect initial (time zero) data, the lack of detailed information in the data produces more and more accumulated error of detailed pre-(retro-)diction.

The entropy, suitably defined, tells how far short of perfect detail is the macroscopic account. The result is that a macroscopic prediction of a macroscopic thermodynamic account shows increasing entropy as the time interval of prediction increases. This is another way of saying that the prediction of microscopic details gets worse as the interval of prediction increases, and this is due to ignorance expressed in the macroscopic account, not due to irreversibility of the basic laws such as Newton’s. The irreversibility works both ways in time (reckoned from time zero): errors of retrodiction of microscopic detail are just as necessary as are errors of prediction of microscopic detail.

The ignorance is not a dynamical factor, it is only an epistemic commentary. An attempt to use an epistemic commentary as if it were a dynamical factor is of course ridiculous. As I understand Balescu, he is pointing to examples of attempts to use epistemic commentary as if it were a dynamic factor, and of course I agree with Balescu (as I read it here) that such attempts are ridiculous. Balescu (as I read it here) is saying that Jaynes’ epistemic principle has not produced new results in statistical mechanics (which of course has a great dynamical content, in addition to the epistemic step in which the molecular detail is eventually translated into macroscopic quantities for the macroscopic thermodynamic account); I would not dispute that statement of Balescu.

Balescu 1975 gives a definition of irreversibility, on page 420. Irreversibility in that definition requires particle interactions, such as collisions. With perfect data and exact dynamical laws, the collisions would not have their unpredictability. Imperfect data about the initial trajectories of a collision leads to greater imperfection of the predicted final trajectories even when the dynamical laws of the collision are exactly known. The epistemic commentary describes this as loss of information, or increase of entropy, due to the collision. No one has to read the epistemic commentary if he doesn’t find it interesting. (Of course the idea of perfect data is different for quantum mechanics, but that is regarded as a physical factor, not as an epistemic comment.)

Second law and entropy in thermodynamics are not only statements about information; they are about what happens to macroscopic physical properties of the system being considered.

The difficulties in putting together fully deterministic dynamic equations and the principles of statistical mechanics have long troubled theoreticians. What is ergodicity? How do we resolve the black hole information paradox? The list of problems is very long.

The difficulty is in a sense exemplified also by an experience from my time as a student. We had a mathematics professor who decided that he should give a special course on the mathematical foundations of statistical mechanics. A thin 180-page textbook, Mathematical Foundations of Statistical Mechanics by A. I. Khinchin, was selected for the course. Being a mathematician, our professor got stuck very early in the book on the problem of measuring phase space volumes and on the Liouville theorem. The whole one-semester course was spent on these issues, i.e. on background for the first 20 pages of the book.

This story is directly connected with the paradox that every single state is equally likely in a microcanonical ensemble. Every state by itself has zero entropy, and according to the ergodic theorem even states that represent extremely unlikely macroscopic configurations will occur when given enough time (all gas molecules in a room located in one half, while the other half is empty, as an example). This paradox is not a paradox for the information theory application, but it can be considered such for thermodynamics.

Response to the post of Pekka Pirilä of January 17, 2012 at 4:01 am.

Pekka Pirilä writes: “Second law and entropy in thermodynamics are not only statements about information; they are about what happens to macroscopic physical properties of the system being considered.”

Christopher agrees. Macroscopic physical properties are properties of the system of interest stated in a particular way, namely in terms of macroscopic variables for the system of interest. The choice to describe the system in terms of macroscopic variables is an epistemic choice, a choice of how to frame a description. An alternate epistemic choice might be to describe the system in terms of microscopic details about every “particle”. With suitable definitions, there is a natural correspondence between the definition of entropy in terms of macroscopic variables and the microscopic one in terms of information about details of “particles”. Pekka Pirilä’s comment just quoted above is another way of saying this.

As Murray Gell-Mann describes in his book The Quark and the Jaguar, information theoretic approaches are clever ways to reduce complexity, and thus a shortcut to physics problem solving.

Actually, I have a post coming on Gell-Mann’s Plectics.

WHT,

I think that one can find at least three classes of problems where MAXENT-type methods work.

The first class consists of problems where the law of large numbers determines the results almost totally and all details of interaction have little effect. In these problems MAXENT is valid, but the problems, such as the barometric formula, are usually rather easy to solve in many different ways.

The second class consists of problems where making some generally true additional assumptions to support the MAXENT principle gives the right results. Here the role of the MAXENT principle is less clear, because the role of the other assumptions is also essential.

The third class consists of problems that can, with care, be tweaked into a form where MAXENT gives good results, not necessarily exact results, but good enough for many uses. Using MAXENT in these cases is, however, more art than science, because the tweaking of the setup is not based on solid knowledge, but rather on experience and intuition. I do believe that the cases where MAXENT has been of practical value belong to this third class.
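Pekka’s first class can be made concrete with the barometric formula he mentions: maximizing entropy over height subject to a fixed mean gravitational potential energy yields the exponential density profile. A minimal Python sketch (the isothermal 288 K temperature and the mean molecular mass are illustrative assumptions, not anything from the thread):

```python
import math

# MaxEnt over height h with fixed mean potential energy <m*g*h> gives the
# exponential p(h) ~ exp(-m*g*h / (k_B*T)), i.e. the barometric formula.
k_B = 1.380649e-23   # Boltzmann constant, J/K
m_air = 4.81e-26     # mean mass of an "air" molecule, kg (~29 g/mol)
g = 9.81             # gravitational acceleration, m/s^2
T = 288.0            # assumed isothermal temperature, K

def density_ratio(h):
    """n(h)/n(0) for an isothermal atmosphere at height h (meters)."""
    return math.exp(-m_air * g * h / (k_B * T))

# The Lagrange multiplier sets the scale height at which density falls to 1/e.
scale_height = k_B * T / (m_air * g)
print(f"scale height ~ {scale_height / 1000:.1f} km")
print(f"density ratio at 5.5 km: {density_ratio(5500):.3f}")
```

The same exponential drops out of plain Gibbs statistics, which is exactly why this class of problem is “easy to solve in many different ways”.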

Pekka said, “I do believe that the cases where MAXENT has been of practical value belong to this third class.”

Exactly, engineers accept imperfection and modify equations into useful rules of thumb. I can modify the Kimoko equations and they become a useful tool. The scientist’s job is to prove why. :)

The issue is not whether MaxEnt is useful or not; the issue, as Juan asserts, is that it has not led to the discovery of any new physics and that the general technique is mostly a warmed-over variational approach that scientists have used for years.

Frankly, I don’t care if it is not revolutionary and that it has unfairly usurped some other techniques, I have used the MaxEnt principle in many situations as I have described further up in this thread:

https://judithcurry.com/2012/01/10/nonequilibrium-thermodynamics-and-maximum-entropy-production-in-the-earth-system/#comment-158741

Now, these couple dozen cases are all documented in my work, many help explain some rather anomalous behaviors, and I reference MaxEnt at least once in each argument. Would someone like Juan want to offer up an explanation where I went terribly wrong in building up ideas based on my readings of Jaynes, Gell-Mann, and others ???

With respect, I don’t think it is part of Jaynes’ probability-theory maximum entropy principle that “entropy must be a maximum at nonequilibrium”. It may be that some writers have put such an interpretation on Jaynes’ principle, but I think that such an interpretation does not rightly read Jaynes’ principle, which in itself has no physical content at all. I see a direct reading of Jaynes’ principle into a physical question like the proposed principle of maximum rate of entropy production as a travesty of physical reasoning and a travesty of probabilistic reasoning.

At first I thought the proposed physical principle of maximum rate of entropy production seemed like a good idea, but that got not the slightest support from my view of Jaynes’ principle. On reading more I grew to think that the proposed physical principle of maximum rate of entropy production could not be generally valid. That did not affect my view that Jaynes’ principle has its valid place in probability theory.

I see no reason why Jaynes’ probabilistic principle should not be used to prove that the proposed physical principle of maximum rate of entropy production cannot be generally valid in physics.

There is a logical link between physical entropy and information-theoretic entropy, but they are of course of different natures. Planck was a strict macroscopic entropy man (no statistics) till he was persuaded to change his mind by his discovery of his law of the thermal radiative spectrum, which was a turning point from classical physics as he had previously known it. But that physical link does not mean that the probabilistic principle has a simple and direct application in support of any proposed physical principle of maximum rate of entropy production.

People like to criticize and passionately reject the Jaynes principle because they passionately dislike his logical approach to probability theory. Impassioned attacks like that were also made on Harold Jeffreys’ views on probability theory. But both Jaynes and Jeffreys were respectable physicists, and they found their common views on probability theory compatible with their respectable physics. I agree with them on this.

In a nutshell, I think that the question of the proposed physical principle of maximum rate of entropy production has nothing essential to do with Jaynes’ principle. I reject the proposed physical principle of maximum rate of entropy production and accept Jaynes’ principle of probability theory.

Christopher Game, I definitely respect your pragmatic view.

Perhaps many scientists resent the audacity that Jaynes had in referring to Probability Theory as the “Logic of Science”?

The reasons why some or many physicists reject the Jaynes approach to probability theory have to do with their philosophical outlook, best not discussed further here.

Sorry, but when I wrote “his theory that entropy must be a maximum at nonequilibrium” I was referring to entropy, not to entropy production.

What I have said is correct. Jaynes writes that

and has tried to present his theory as “the extension of Gibbs’ formalism to irreversible processes”, but this theory has failed for the reasons stated before.

One reason why I’m enjoying this thread is that it has helped me deepen my understanding of a topic that I know at only a very general level. I’ll try to continue this process by asking a few questions. Let me start with a statement by Juan Ramon Gonzalez – “equilibrium is essentially statistical, nonequilibrium is essentially dynamical. This is another reason which the ‘statistical’ ideas by Jaynes et al have failed often in the science and engineering of nonequilibrium.” I’ve read some of Jaynes – I don’t know his work well enough to know exactly what is being referred to here, but let me continue to explore the possibility that probability theory can be applied in some way to the dynamics of non-equilibrium entropy production. Corrections to any misconceptions will be welcome.

Imagine a large empty box into which a demon inserts a huge number of energized molecules into an upper right hand corner. Based on probabilistic considerations, the molecules will assume various microstates that eventuate over a time interval (based on their energies) in a macrostate encompassing a range that is highly probable because the total number of individual microstates adjusted for their relative probabilities greatly exceeds all combinations outside of that range. That macrostate, and fluctuations that maintain that range at almost all times will constitute an equilibrium state.

Question 1: Given the initial conditions, can we not predict the rate at which equilibrium will be approached, based on probabilistic considerations? Even if the asymptotic portion of the change is a problem, the earlier phases should be better determined, should they not?

Assume now that as the molecules are moving away from the corner toward more probable macrostates (i.e., characterized by a larger number of microstates), the demon sucks molecules out of the box at random locations and replaces them with a new batch of molecules in the upper right hand corner. The process described earlier will be repeated, with these molecules spreading out toward more probable macrostates. Imagine now that this phenomenon is continuously repeated, so that molecules are always being removed from the entire box contents and being replaced in the upper right hand corner. At some point, I assume a steady state will ensue in which the rate at which the well distributed molecules are removed is balanced by the rate at which new molecules are moving toward an equilibrium distribution.

Question 2: Are these rates not a characteristic of a process that can be described in probabilistic terms? Would these rates not be determined by the state of the system, so that rates outside of a particular range would be highly improbable?

Obviously, the above very general questions relate to the maximum entropy production principle, but I’m not suggesting that the MEP principle is proved by this logic. In addition, all the comments in this thread as well as my own reading leave me with much doubt that the MEP principle (which must address more than simple gas-filled boxes) can yield enough predictive information to be of practical value – that seems to be unlikely. Rather, my questions relate to whether MEP production can have a theoretical basis in probability.
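Fred’s box can be sketched numerically: start every “molecule” in one corner cell of a coarse grid, let them random-walk, and watch the Shannon entropy of the cell-occupancy histogram climb toward its maximum, log(number of cells). The grid size, step rule, and particle count below are arbitrary toy choices, not anything from the thread:

```python
import math
import random

random.seed(42)

L, N, STEPS = 8, 2000, 400
pos = [(L - 1, L - 1)] * N   # the demon inserts all molecules in one corner

def occupancy_entropy(positions):
    """Shannon entropy of the coarse-grained cell-occupancy histogram."""
    counts = {}
    for p in positions:
        counts[p] = counts.get(p, 0) + 1
    return -sum((c / N) * math.log(c / N) for c in counts.values())

def step(x):
    """One random step along an axis, clamped at the box walls."""
    return min(max(x + random.choice((-1, 0, 1)), 0), L - 1)

S0 = occupancy_entropy(pos)
for _ in range(STEPS):
    pos = [(step(x), step(y)) for x, y in pos]
S_final = occupancy_entropy(pos)

# Entropy rises from 0 toward the maximum log(L*L) ~ 4.16
print(S0, S_final, math.log(L * L))
```

In this toy model the early-phase approach rate (Question 1) is indeed well determined by the walk statistics; only the asymptotic fluctuations around the maximum are noisy.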

Interesting. I think the issue is how well the statistical method compares the brakes. Different processes would have different inertia. How do the “brakes” compare? The overshoots/undershoots with respect to each other are the most important dynamical consideration. That is the main nonlinear issue in my mind.

Web said, “My strong assertion based on applying these techniques is that many of the power law observations attributed to critical phenomena are in actuality disorder in the physical space.”

In many cases I can agree with that; it is one of the reasons I used the tides and currents example. Note there that you approximate a larger base width for larger waves. Very true, statistically. For rogue waves, though, the base narrows to increase the height. So the NOAA radio predicts significant wave height, the average of the highest 1/3 of the waves, with waves greater than twice the significant average being possible.

Now, let’s say I use a similar method to predict the potential warming due to CO2. I use a method that predicts the maximum impact. I am now predicting the rogue waves. They can happen, but they are twice the average of the significant mean. That is valuable information, but more valuable is the probability of that perfect wave. That is why I say that the Arrhenius CO2 equation is misrepresented. Once you allow for real-world conditions, the disorder, the tropopause instead of the assumption that the TOA would have been the tropopause, you get closer to the reality. Arrhenius attempted to predict perfection.

So climate science is in some ways mixing apples and oranges, much like linear no-threshold models tend to do; perfection should be compared with perfection, not averages, and definitely not smoothed averages.

Fred, you are on my wavelength. I use maxent for just that reason, to create a maximal spread of rates corresponding to the physical characteristics of the system (mean rate, etc). This is a very easy way to model the physical behavior of dispersion within a number of different contexts. I especially like to use it in steady state situations where I can average out different initial conditions as you describe with the box example. This can generate distributions of growth values, determined from the integration of rates. I always think back to my high school math problem solving days with this approach — the math and probabilities are easily in reach for a plodder like myself.

Even though I think I use this approach correctly, I am positive Juan Carlos Ramirez would not approve.

I meant Juan Ramon Gonzalez, sorry.

But it doesn’t consider the brakes. Laplace’s tidal equations do a good job. They don’t do a great job. To fine-tune the tidal predictions you have to consider harmonics. Even considering harmonics, variations in wind direction and velocity can change the actual tide by a significant amount.

The oceans absorb energy in the day and discharge a portion of that energy at night. The rates of absorption and discharge vary with the seasons, radiant conditions of the atmosphere, convective circulation patterns and several other smaller but not necessarily negligible factors.

So any method of determining the rate of diffusion or dissipation can only be an approximation in a dynamic system. Forget that and you end up with possible maximum values without any indication of how much variation from those values is possible, or even should be expected.

Comparing two methods, MEP with a simple linear method for example, would give you a better idea of the changes in the rates. Any method can be useful, but I doubt any one will be the best in all situations.

Cap’n, You are crazy again. I am working on a model of wave energy spectra and the envelope is entirely explained by dispersion of rates given a mean energy of a wave crest. This is like one of those high school math problems: from the height of a wave, derived from the potential energy, one can determine how long it takes to drop. The frequency is the reciprocal of time and that together with the maxent pdf generates the power spectral density.

I am in the midst of writing this up and found some really good coastal data taken from sensor buoys to fit the model to. It’s a quick derivation, but I don’t know if it has been done before. Start small and build your way up.

Web said, “I am working on a model of wave energy spectra and the envelope is entirely explained by dispersion of rates given a mean energy of a wave crest. ”

Then you have a good example. A tidal Crest leads the current slack by 1.5 hours down here, but can vary by an hour.

Tarpon fishing, you want to know the slack tide. It is harder to predict than the crest/valley of the tide. So your energy spectrum would be great, unless you want to know when the flow changes.

OK, here is as short a Maximum Entropy derivation as I can give for the wave energy spectrum of a steady-state wave in deep water.

First we make a maximum entropy estimation of the energy of a one-dimensional propagating wave driven by a prevailing wind direction. The mean energy of the wave is related to the wave height by the square of the height, H. This makes sense because a taller wave needs a broader base to support that height, leading to a triangular shape. Since the area of such a scaled triangle goes as H^2, the MaxEnt cumulative property is:

P(H) = exp(-a*H^2)

where a is related to the mean energy of an ensemble of waves.

This is enough and we can stop here if we want, since this relationship is empirically observed from measurements of ocean wave heights over a time period. However, we can proceed further and try to derive the dispersion results of wave frequency, which is another very common measure.

Consider that, from the energy stored in a wave, the time, t, it will take to drop is related to the height by a Newtonian relation

t^2 ~ H

and since t goes as 1/f, then we can create a new PDF from the height cumulative as follows:

p(f)*df = dP(H)/dH * dH/df * df

where

H ~ 1/f^2

dH/df ~ -1/f^3

then

p(f) ~ 1/f^5 * exp(-c/f^4)

which is just the Pierson-Moskowitz wave spectrum that oceanographers have observed for years.
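The change-of-variables chain above can be checked by brute force: draw heights from the stated MaxEnt cumulative P(H > h) = exp(-a*h^2), map each to a frequency via f ~ 1/sqrt(H), and compare the histogram peak against the analytic mode of p(f) ~ f^-5 * exp(-a/f^4), which sits at f = (4a/5)^(1/4). A Monte Carlo sketch (a = 1 and unit proportionality constants are assumptions for illustration):

```python
import math
import random

random.seed(1)

A = 1.0        # parameter tied to the mean wave energy
N = 200_000

# Inverse-transform sampling of H from P(H > h) = exp(-A*h^2),
# then the Newtonian map H ~ 1/f^2, i.e. f = 1/sqrt(H).
freqs = []
for _ in range(N):
    h = math.sqrt(-math.log(random.random()) / A)
    freqs.append(1.0 / math.sqrt(h))

# Locate the histogram mode and compare with the analytic peak.
BINS, LO, HI = 200, 0.5, 3.0
counts = [0] * BINS
for f in freqs:
    if LO <= f < HI:
        counts[int((f - LO) / (HI - LO) * BINS)] += 1
mode = LO + (counts.index(max(counts)) + 0.5) * (HI - LO) / BINS
f_peak = (4 * A / 5) ** 0.25   # setting d/df of log p(f) to zero

print(mode, f_peak)   # the sampled mode should land close to f_peak ~ 0.946
```

The agreement of the sampled mode with the analytic peak is a sanity check on the derivation, not a validation of the physics; for that, the buoy data below is the real test.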

Now, I am interested in using these ideas for actual potential applications, so I went to a coastal measuring station to evaluate some real data. The following is from the first two places I looked, measuring stations located off the coast of San Diego. I picked the first day of this year, 1/1/2012, and this is the averaged wave spectrum for the entire day:

If you want to play around with the data yourself, here is a link to the interactive page :

http://cdip.ucsd.edu/?nav=historic&sub=data&units=metric&tz=UTC&pub=public&map_stati=1,2,3&stn=167&stream=p1&xyrmo=201201&xitem=product25

I don’t know what more I can add. This is practical Maximum Entropy principle analysis. If you want to attack it, go ahead; I will continue to consider using this approach for any statistical phenomena I come across in the future.

WHT,

What makes your calculation an application of MaxEnt?

I think that many people might derive the same formulas having in mind nothing about entropy at all. To me it appears rather to be typical physics-based semiquantitative reasoning with little connection to entropy.

In your description you mention maximum entropy a couple of times, but give no hint of its relevance to the reasoning.

Pekka, it might be because you can’t see the nose in front of your face.

The first relation I showed is a Maximum Entropy prior given an uncertainty in wave energy. Like I said, the physicists’ problem is that they don’t like the fact that the terminology has been hijacked and used without their permission by the maximum entropy people.

So would you prefer to call

P(H) = exp(-a*H^2)

a Gibbs estimate?

Fine with me, but that is just different terminology for the same first-order approximation.

Here we come back to the question: does MaxEnt produce something scientifically significant? When it’s applied at a simple enough level like this one, it doesn’t, as there’s nothing new to discover. Whether its results are scientifically relevant for more difficult problems is a different question, not answered by this kind of example.

Approaches that are rules of thumb rather than theories may be very useful, but they become theories only if they can be formulated precisely for non-trivial problems and if they then give correct non-trivial answers. I haven’t seen examples of that, and Juan Ramon Gonzalez appears to believe that such examples do not exist.

Pekka, point me to a first-principles derivation of that wave energy formula I derived. I don’t think you can, because it is driven by complete disorder in the mixing and dispersion of waves.

I have hypothesized about this in the past. Physicists are always looking for something new, because that is a laudable scientific goal. But the everyday occurrence is the mundane, what I refer to as “garden variety” disorder. Unfortunately, characterizing this disorder does not lead to Nobel prizes, because, as has been said, this is all statistical mush and it doesn’t get at the heart of the physical mechanisms.

Well, I really don’t necessarily care about that. What I care about is the natural world and characterizing all the uncertainty that exists in that world. That’s why I have an interest in this blog, because I think it has a similar objective.

I don’t claim that there are any first-principles derivations, only that similar formulas have been presented without reference to entropy, not necessarily for this specific application, but for many similar ones. The rationale behind those may be quite similar, and I don’t imply that they would be any better. My point was rather that none of these, including MaxEnt, has been formulated rigorously for the particular applications. Such approaches represent physical knowledge, but not in its precise form, rather in the form of semiquantitative explanation of phenomena.

That’s all fine, but not yet science. To make science out of it requires quite a lot more. Perhaps it can be done, perhaps not.

The range for what we consider science has suddenly narrowed. Math is not a science because it is abstract and does not discover new physical principles by itself. And statistics must be just heuristics.

The fact that this discussion has evolved into a philosophy of Science indicates we are at a standstill.

For example, I spent some time incorporating maximum entropy uncertainties into Fokker-Planck transport equations to characterize electrical transport. This seemed at least somewhat scientific to me, but I didn’t realize that I had crossed the boundary into an unscientific realm.

Pekka Pirilä is right; nothing of MAXENT was used in WebHubTelescope’s derivation. That result follows from classical thermodynamics plus Gibbs’ statistical mechanics methods.

Aha. Now let me change the parameters so there is a mean energy and a very tight variance about that energy. Or let me ask what happens if the mean is not well defined.

The latter would allow one to model the fat-tail long wave dynamics.

The exponential PDF no longer fits into either of these solutions, meaning that we don’t invoke straight Gibbs but something more general in the probability realm.

I believe that’s where Max Entropy and other approaches, including Christian Beck’s superstatistics allow for more flexibility in solving problems.

My strong assertion, based on applying these techniques, is that many of the power law observations attributed to critical phenomena are in actuality disorder in the physical space. Physicists want to believe it is some emergent phenomenon, potentially related to some undiscovered phase, and are taught that that is what power laws are associated with. But the dreary fact is that most power laws are due to plain vanilla aleatory uncertainty, and these methods are useful to root this behavior out.

Now I suppose we will hear about the problems with superstatistics?

WebHubTelescope, you are the one who told us that P(H) = exp(-a*H^2), where H^2 is the area, was a MAXENT result…

Pekka was suspicious, and with good reason: using plain Gibbs P(E) = exp(-beta*E) and using your “energy of the wave is related to the wave height by the square of the height, H”, i.e. E = E(H^2), we obtain P(H) = exp(-beta*E) = exp(-a*H^2) as a pure Gibbs result.

Apart from being a trivial problem (as Pekka pointed out), you are not using Jaynes’ MAXENT but Gibbs’ statistical mechanics.

Now, you completely change your argument to

and to

The first is wrong, since the mean value of any observable is always defined as ⟨O⟩ = Tr{Ô ρ}, unless you have discovered some system that does not follow quantum theory, which would be a revolution.

The second is another non-issue. The exponential form is obtained from the equations of motion when one ignores the inhomogeneous term, works in the Markovian approximation, and sets the dissipative part to zero. When we relax those, we obtain generalizations, evidently. Long tails, power laws, non-Markovian corrections: all of that is well known to scientists.

It seems that you have still not digested Balescu’s criticism of MAXENT. The theory fails, and when it is cured ad hoc, MAXENT does not give any new result that was not derived before using the scientific theories developed by physicists, chemists…

Juan, Web’s attempt to force-fit MAXENT is not that unrealistic. Its commonality with other physical processes that have acquired other common names is not unexpected. Nature is pretty simple. What separates the applications is the degree of inertia. Gases diffusing have less inertia than fluids, etc. Heat transfer from different boundary layers has differing rates and different inertia.

Classic equations have to be modified for different applications, because the real world has exceptions. Those exceptions are commonly due to changing rates of change in a process, system inertia.

While finding just the right name and giving the right person the credit is a wonderful goal, the end goal is solving the problem. Let’s call MAXENT WebENT and look at where it fails, then we have a clue of what needs to be adjusted to develop the Curry theory of dynamic non-linear non-equilibrium, pseudo-chaotic thermodynamic systems :)

Juan,

Stepping back for a moment, MaxEnt or Gibbs, I really have no preference either way. I only use the ideas that are available and that help me solve problems. The fact that you said my solution to the wave energy problem is trivial, I am happy with. That was how I was educated in my graduate physics classes: to come up with the simplest, most concise derivation one can. I guess I succeeded in that.

I still do have issues with the other things you stated. Fat-tail processes result in PDFs such as the Cauchy and Lévy that do not generate a mean value. That is what I was getting at when I said that mean values are not defined. In reality, the tails get truncated so that a mean value does exist in practice.
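A quick numerical illustration of that point (assumed seed and truncation level, purely for demonstration): batch means of standard Cauchy draws never tighten up, while truncating the tails restores a stable mean.

```python
import numpy as np

# The sample mean of Cauchy draws does not settle down as n grows, because
# the Cauchy distribution has no mean. Truncating the tails restores a
# well-defined mean, as noted above.
rng = np.random.default_rng(0)
draws = rng.standard_cauchy(1_000_000)

# Spread of batch means: for a distribution with a finite mean this shrinks
# with batch size; for the Cauchy it stays the same order of magnitude,
# since the mean of n iid Cauchy variables is again standard Cauchy.
batch_means = draws.reshape(100, 10_000).mean(axis=1)
truncated = np.clip(draws, -100, 100)           # crude tail truncation
trunc_batch_means = truncated.reshape(100, 10_000).mean(axis=1)

cauchy_spread = batch_means.std()
trunc_spread = trunc_batch_means.std()
```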

Another example that is very pertinent to the climate change debate is the mean adjustment time of atmospheric CO2, which is either hundreds of years or thousands of years depending on how the tails of the impulse response function are truncated. The tails are obviously from a diffusional response, as it goes as 1/sqrt(time).
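A toy calculation makes that truncation sensitivity explicit. Assuming, purely for illustration, a diffusional response shape r(t) = 1/sqrt(1+t) (not a fitted carbon-cycle model), the "mean adjustment time" grows like sqrt(T) with the truncation time T and never converges:

```python
import numpy as np

# If the airborne fraction after a CO2 pulse decays diffusively,
# r(t) ~ 1/sqrt(t) at long times, then integral(r dt) grows like sqrt(T)
# with the truncation time T. So "hundreds vs thousands of years" is set
# entirely by where the tail is cut.
def adjustment_time(T, n=1_000_000):
    t = np.linspace(0.0, T, n)
    r = 1.0 / np.sqrt(1.0 + t)      # assumed diffusional impulse response
    # trapezoid rule; analytically this is 2*(sqrt(1+T) - 1)
    return float(((r[:-1] + r[1:]) / 2 * np.diff(t)).sum())

tau_short = adjustment_time(400.0)      # truncated at centuries
tau_long = adjustment_time(40_000.0)    # truncated at tens of millennia
```

With these assumed numbers the "mean" grows roughly tenfold when the cutoff moves from centuries to tens of millennia, which is the whole point.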

And you say this:

The Markovian approximation is purely a statistical conjecture, so it looks like you have invoked a circular argument here. By making the Markovian assumption of a memory-less process, you are making an inference with the least amount of bias (i.e. ignorance of higher-order statistical moments), something that Jaynes is criticized for.

I routinely interchange my invocation of MaxEnt Principle with a Markov approximation because they really point to the same thing — an intuitive probability concept for solving problems. Now I can invoke MaxEnt, Gibbs, or Markov depending on what aspect of the data I am ignorant about.

That is really what we are arguing about, the probability and statistics forming the aleatory uncertainty in observed behaviors. Isn’t it?

Cap’n raises some interesting points which fit under the heading of aleatory uncertainty.

How do we model the diffusion of a trace gas such as CO2 to sequestering sites when we know that the diffusion coefficient can vary by a wide margin considering the varying ocean vs land pathways? The answer lies in modeling a prior in the proper way, and MaxEnt can help with this.

Same thing with heat sequestering from the excess radiative forcing from GHGs. How do we model thermal diffusion when we know that the diffusion coefficient can also vary quite a bit? Is this important for considering the heat in the pipeline?

This is getting away from the way that Axel Kleidon applies maximum entropy principles, but it is the way I have consistently thought about the problem: rooting out the aleatory uncertainty.

Thanks for that Fred, you said:

“I assume a steady state will ensue in which the rate at which the well distributed molecules are removed is balanced by the rate at which new molecules are moving toward an equilibrium distribution”.

The conception of your hypothesis removes gravity.

So, “I assume a steady state will ensue in which the rate at which the well distributed molecules are removed is balanced by the rate at which new molecules are moving toward an equilibrium distribution”, would need to quantify the rate at which the new molecules are balanced, relative to the different fluctuations now in the range created when the first equilibrium was created by MEP.

I surmise, the entropy created during the MEP process of the first theoretical equilibrium process forces potential gravitational energy to the range of its central gravitational point in the field. It must follow, that matter within this range will be driven to a different MEP state of equilibrium each time the process occurs.

I’m not fully sure your principle would be applicable in a three-dimensional universe.

One of the classic examples of maxent is deriving atmospheric pressure with altitude in the simplest approximation. This brings in gravity by incrementally calculating the weight of the gravity head. I have seen this derivation described in many places, with the caveat of assuming either adiabatic or non-adiabatic conditions.
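That incremental derivation is easy to reproduce. The sketch below (round numbers for an assumed isothermal atmosphere) builds the pressure profile by subtracting the weight of each thin gas layer, and recovers the closed-form barometric law p(z) = p0 exp(-Mgz/RT):

```python
import numpy as np

# Step upward through the atmosphere, removing the weight of each thin
# layer of gas (the "gravity head"), then compare against the closed-form
# isothermal barometric formula. All values are assumed round numbers.
M = 0.029       # kg/mol, mean molar mass of air
g = 9.81        # m/s^2
R = 8.314       # J/(mol K)
T = 288.0       # K, isothermal atmosphere (an idealization)
p0 = 101_325.0  # Pa, surface pressure

dz = 1.0                        # layer thickness, m
z = np.arange(0.0, 10_000.0, dz)
p = np.empty_like(z)
p[0] = p0
for i in range(1, len(z)):
    rho = p[i - 1] * M / (R * T)     # ideal gas law for the layer density
    p[i] = p[i - 1] - rho * g * dz   # hydrostatic step: dp = -rho g dz

p_closed = p0 * np.exp(-M * g * z / (R * T))
```

The two profiles agree to well under a percent at 10 km, which is the sense in which the barometric formula "brings in gravity" without needing any entropy argument at all.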

The barometric formula had been previously obtained from the ideal gas law plus the ‘electrochemical’ potential of classical thermodynamics.

This confirms again Balescu’s criticism of MAXENT: it “has not led to any new concrete result”…

Web, as Juan mentions, the electrochemical issue is where I am really stuck at the poles. That 184K boundary I have mentioned, approximately 65 Wm-2, is in the thermal/non-thermal range. That can fluctuate with the geomagnetic field, gravity (more likely, since Venus has roughly the same boundary) and/or atmospheric composition. Ozone depletion or rebuilding appears to change with temperature excursions into the 184K range; note the Arctic now and the tropopause during the bigger El Niños. That is why I use the surface/tropopause emissivity instead of the TOA to estimate the CO2 forcing impact. Being able to measure changes in the maximum local emissivity variation would be nice, but it looks like it would need to be calculated or modeled because of the small changes.

With the multilayer bucky model, these impacts would be easier to home in on to determine relative magnitudes at varying initial conditions.

Cap’n, I am glad to see a forthcoming post on Gell-Mann’s concept of Plectics. Pay attention to the arguments for simplicity and you might be able to build up a cohesive mathematical argument. As it stands, I don’t think anybody can figure out what you are going on about, and it almost looks like you are taking us for a ride to nowhere.

Ultimately, the only way that you will be able to make headway in convincing anyone on your complex thought pattern is to adopt some math formalism. It doesn’t have to be perfect, just enough that someone could follow along or test the ideas on their own.

It may be that you are way above the rest of us in intellectual capacity, but your flashy style doesn’t help us any.

Multilateral bucky model ???? Give me a break.

Web said, “Ultimately, the only way that you will be able to make headway in convincing anyone on your complex thought pattern is to adopt some math formalism. ”

The math formalism for the processes is known. What is not known is the limits of the math. Circular reasoning to a point, but the diffusion equations reach limits of accuracy as they approach boundary conditions: temperature, pressure, viscosity, etc. When a process is nearing a limit, near equilibrium, instability is possible with respect to a related process. The radiant impact of CO2 with respect to the conductive impact of CO2, for example. An accurate calculation of the impact of CO2 is not easy, as it interacts with water vapor, with itself, and with the source temperature plus the sink temperature. I do not see a reliable approach to solve that problem to the accuracy required.

This is where the model enters the picture. Using the known math to determine the rates and expected impacts of each, variations from the expected provide the information. The model learns based on observation. No different than adjusting diffusion rates in a lab for dopants that lag or outpace the calculated dispersion.

That is the reason I modified the Kimoto equation. You use the fungibility of energy to perform band by band energy transfer calculations.

http://redneckphysics.blogspot.com/2011/11/learning-equation-kimoto-modified.html

with the Bucky sphere model,

http://redneckphysics.blogspot.com/2011/11/learning-equation-kimoto-modified.html

Together, the two provide a way to locate the significant unknowns, either in the limits of the equations/assumptions or in unsuspected things like the stupid 184K boundary, which I can’t tell if it is real, happenstance, instrumentation error, or a combination.

As far as I know there is no known mathematical solution for the combination of non-linear dynamic interrelations in the Earth climate system. This is just a way to find out if there may be one :)

We are not discussing if probability theory applies or not to non-equilibrium. Of course, it applies.

What we are emphasizing here is that Jaynes’ MAXENT theory and similar developments such as MEP are ill-founded and/or fail.

Question 1: No. Because the rate at which equilibrium will be approached is given by the relaxation function/operator, which is at least second order in the coupling constant in the interaction Liouvillian, which itself is a function of the interaction term in the Hamiltonian.

Question 2: No. If the system is in a stationary state, then the free motion term is not zero, the dynamics is not trivial, and not all the microstates are equally probable. To obtain the rates you must solve the dynamical equation, with the constraint that the dissipation term must equal the free term.

Even I understood that.

Juan said:

Markus then said:

Yet, elsewhere in this thread Markus said:

Something does not square with those two statements, and I have a feeling that Markus’s understanding is at the level of

“Yes, that sentence parses, and the grammar is correct, so I understand that it is a statement of some sort”. But beyond that he hasn’t a clue, and his support for the Unified Climate Theory is at the most superficial level. I can only conclude that Markus is but an ignorant cheerleader for a bone-headed theory. Rah. Rah. Rah.

Interesting that you would probably insist the Climategate emails were taken out of context, yet you have fully conflated my separate posts into one.

Something does not square with those two inferences.

“Yes, that sentence parses, and the grammar is correct, so I understand that it is a statement of some sort”

Excepting, he knows that the predictive relevance of my syntax is false.

WebHubTelescope:

You affirm that you have no preference for MAXENT, but you have taken a Gibbs distribution and renamed it MAXENT. Earlier you said that Jaynes defined entropy as “the expected value of the (-) log of probability”, but this was done by Boltzmann and Gibbs much earlier. You also told us that the barometric formula relating atmospheric pressure to height was a typical MAXENT result, when this is a trivial result derived in classical thermodynamics by using the ‘electrochemical’ potential and the gas law. You told us that MAXENT theory is very popular and used every day by scientists and engineers, but the MAXENT people themselves report that this theory is rejected by the majority of the scientific community and not used.

As remarked before, the mean value of an observable is always defined and is given by the formula written above. If you think that you have discovered some system (atmospheric CO2?) for which the mean values are “not well defined”, then receive my congratulations, because you have discovered a system that, for instance, violates one of the basic postulates of quantum mechanics. A Nobel Prize must be waiting for you… :-)

The Markovian approximation is neither “statistical” nor a “conjecture”. The Markovian approximation (also called the Markovian limit or Markovian regime) is a purely dynamical approximation of the dissipative part of the equations of motion. The analytical proof of this approximation is too long to reproduce here, but it can be checked in the literature, and numerical checks are also available for computational applications. The fundamental idea is that the non-Markovian terms decay within a time scale of the order of t_corr (which for typical macroscopic systems can be so short that it is experimentally inaccessible with current technology); therefore, for any time t >> t_corr, you can set the non-Markovian corrections to zero, obtaining a purely Markovian equation, such as the equations of hydrodynamics, of chemical kinetics, or Fourier’s law…

I would emphasize, once again, that this discussion is not about the utility of “probability and statistics”, which is beyond doubt. What we are emphasizing here is that Jaynes’ MAXENT theory and similar developments as MEP are ill-founded and/or fail.

Thanks for pointing out how approximations are useful.

The atmospheric CO2 adjustment time is a real problem. Based on the current models, its mean value does diverge. I don’t see this as any Nobel-prize revelation either, just a consequence of how the marginal probability distributions are set up.

The practical fact is that mean values of observables can diverge.

Here is an example: what is the maximum entropy distribution for a random variable when we only know the MEDIAN value?

Is this physical? I don’t know. Is it practical? I tend to think so.

WHT,

You can have an infinite number of strongly different answers to that kind of question by making a change of variable.

Which of them is the correct one?

You cannot know the “correct” variable without a deeper theory. No statistical principle can tell the answer.

That is what I am getting at: it depends on how one marginalizes the parameters. Look at how I solved that “trivial” wave energy formulation.

Then consider a random walk problem. Does one place an uncertainty on the time constant, which is the way current physics does it? Or does one put the uncertainty on the diffusion coefficient, which may be more realistic? Note that the two have a reciprocal relationship. This is the interesting part of modeling disordered behaviors, and that is what I try to characterize through my work.
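To make the "where to put the uncertainty" point concrete with a toy calculation (my own illustration, assumed numbers): placing a maximum-entropy (exponential) prior on the *rate* k of a simple relaxation exp(-k*t) turns the exponential decay into a fat-tailed hyperbolic one, whereas smearing the reciprocal time constant instead would produce a different marginal.

```python
import numpy as np

# Relaxation exp(-k*t) with an uncertain rate k. An exponential prior on k
# with mean kbar, p(k) = exp(-k/kbar)/kbar, marginalizes analytically to
#   integral_0^inf p(k) exp(-k*t) dk = 1/(1 + kbar*t),
# a power-law-tailed decay. Verified here by direct numerical integration.
kbar = 1.0
t = np.linspace(0.0, 50.0, 200)
k = np.linspace(1e-9, 60.0, 200_000)        # truncate prior far in its tail
prior = np.exp(-k / kbar) / kbar

def trapezoid(y, x):
    return float(((y[:-1] + y[1:]) / 2 * np.diff(x)).sum())

numeric = np.array([trapezoid(prior * np.exp(-k * ti), k) for ti in t])
closed = 1.0 / (1.0 + kbar * t)
```

The choice of which variable carries the prior changes the tail behavior of the marginal, which is exactly why the "correct" variable matters.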

Nothing grandiose, but that’s what I do.

We have more or less agreed already earlier that practical use of MaxEnt involves additional assumptions. I think that we agree also on the observation that these additional assumptions are not based on solid theory but rather on experience and practices that could be called rules of thumb.

Juan Ramon Gonzales (JRG) has shown interest in solid physical theory, and the kind of use of MaxEnt that I discuss above is of little value for that. If and when the methods are widely valuable in practice, it’s possible to study why that is the case and what the limits of applicability are. That kind of study may lead to scientifically interesting results, but it requires serious work that leads to publishable results.

This thread was based on the paper of Kleidon. JRG has presented serious critique of that paper, and so have I. Our critiques are formulated quite differently, but I think they are related, as both are based on the observation that the approach as formulated depends in an essential way on questionable additional assumptions. I see no reason to expect that Kleidon’s approach could produce any useful insight into the understanding of atmospheric processes. Where it gives right results, these results are almost certainly already known; where it differs, it’s very likely in error.

I think this is wrong. I predict that somebody applying maximum entropy will eventually use it to infer missing data that no one will be able to measure.

The reason no one can measure these parameters is that the disorder interfering with the observables or the model will make a complete simulation intractable. That is why Gell-Mann’s plectic argument has some authority.

I know it is again simplistic but I used this to show the CO2 impulse response.

Now I want to apply it to the thermal response.

Both of these issues have as a description a “missing fraction” of some expected result. I contend that the missing pieces are best resolved as maximum entropy estimators.

For CO2 I adopt a maximum entropy spread in diffusion coefficients for adjustment sequestering, and for thermal response, I assume a maximum entropy spread in thermal diffusion coefficient. This will allow us to estimate how much of the heat is going to the unobservable locations.

I am sure somebody has done this analysis before but the hardest part is finding the correct citation. I will try to generate a thermal example over the weekend.

The problem is that you have to assume. That may lead to good results and may sometimes give even better results than other known methods, but as long as the scientific basis is severely lacking, you’ll never know.

Having success in many cases may give reason to believe that results are good in other similar enough cases, but extrapolating further from the earlier successes rapidly adds to the uncertainty, as always in fields that are more art than science. (Extrapolating may be questionable in science too, but the limits are typically known better in science.)

In a perfect world I suppose you can get away without assuming, but I come from a background of experimental semiconductor research. In that discipline, everything is about characterization, be it electrical or spectroscopic or other measurements. You always have to fill in the holes (no pun intended) because you are never counting individual particles. You didn’t know the density of defects or contaminants every time.

The modeling was done as part of the characterization — i.e., what parametric model fit the curves best.

I really don’t see anything foreign to what I have always done; it’s just a bigger system and trying to find pieces that I can chew on.

In any case, with the thermal example, I will try an extrapolation.

I think you are being supportive and I might be naive on certain issues, but that is not necessarily bad.

WebHubTelescope:

Divergences in observables have been known in science since Poincaré’s work in classical (non-relativistic) mechanics, or even before, and methods to cure them are available. Divergences have nothing to do with MAXENT.

I cannot say if your question is physical, practical, or nonsensical, because it is rather ill-defined by the usual scientific standards. Which median value do we know? The median value of the variable? Of the distribution? What is the variable? The IBEX index is not one of the variables describing the state of a chemical system in a vessel; therefore, whether or not you know the value of this index today is irrelevant when studying the system. Indeed, this index is not even a physicochemical variable and does not appear in the theories of physics, chemistry, biology, geology… Is the random variable you allude to the strength of a magnetic field? If the system is an ideal gas, its physical state is not given by such a variable, making it again useless, and so on…

What entropy? Informational entropy? Physical entropy? And if it is the latter, what kind of physical entropy? Thermodynamic entropy? If the answer is affirmative, what formulation are you using? Entropy in EIT is not the same as entropy in TIP, for instance.

Why do you claim that entropy is a maximum? If the system is at equilibrium (x = x_eq), then its entropy is a maximum, as classical thermodynamics predicts; but if the system is found in some non-equilibrium regime, its entropy is not a maximum.

If the system is at equilibrium, what kind of equilibrium? Is the system open or closed? Closed or isolated? If it is isolated, each ‘microstate’ associated to the variable x is equally probable. If the system is closed, then this is not true…

Why do you assume that the system’s state is described by a distribution? Jaynes worked with the classical concept of probability, but we scientists usually work in a more general framework. When I wrote above the generalized formula for the average value of an observable, I used \rho to describe the state of the system, but this is not a distribution: \rho(x) ≠ P(x). It is only under certain approximations, or in special cases, that a probability distribution P(x) can be derived from \rho and used to reproduce the properties of the system…

These kinds of questions may be irrelevant to you, but they are basic for us scientists.

I have no idea what you mean by this statement. Divergence differs from dispersion.

I mentioned earlier that I would bring up a thermal example. I have this documented already, but let me put a new spin on it. What I will do is solve the heat equation (aka Fourier’s law) with initial conditions and boundary conditions for a simple experiment. And then I will add two dimensions of Maximum Entropy priors.

The situation is measuring the temperature of a buried sensor situated at some distance below the surface after an impulse of thermal energy is applied. The physics solution to this problem is the heat kernel function, which is the impulse response or Green’s function for that variation of the master equation. This is pure diffusion with no convection involved (heat is not sensitive to gravitational or electrical fields, so no convection). However, the diffusion coefficient involved in the solution is not known to any degree of precision. The earthen material that the heat is diffusing through is heterogeneously disordered, and all we can really guess is that it has a mean value for the diffusion coefficient. By inferring through the maximum entropy principle, we can say that the diffusion coefficient has a PDF that is exponentially distributed with a mean value D.

We then work the original heat equation solution with this smeared version of D, and the kernel simplifies to an exp(-x/sqrt(Dt)) solution. But we also don’t know the value of x that well and have uncertainty in its value. If we give a Maximum Entropy uncertainty to that value, then the solution simplifies to

where x0 is a smeared value for x.

This is a valid approximation to the solution of this particular problem and this figure is a fit to experimental data. There are two parameters to the model, an asymptotic value that is used to extrapolate a steady state value based on the initial thermal impulse and the smearing value which generates the red line. The slightly noisy blue line is the data, and one can note the good agreement.

That is an example of how you apply the Maximum Entropy principle to what is essentially a non-equilibrium problem. I think that such a technique will be useful for a climate model where the thermal impulse due to the forcing function will likely diffuse into the ocean (mainly) and into the earth and freshwater lakes in a highly dispersed fashion. We don’t know the diffusion coefficient nor the spatial positions well, so we use the MaxEnt priors, which reflect this ignorance (in Jaynes’ definition of ignorance).
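The smearing step described above can be checked numerically. Under the stated assumption of an exponentially distributed diffusion coefficient with mean Dbar, the Gaussian heat kernel marginalizes in closed form to exp(-x/sqrt(Dbar*t)) / (2*sqrt(Dbar*t)); the sketch below (toy parameter values) verifies this by direct integration:

```python
import numpy as np

# Smear the standard 1-D heat kernel
#   G(x, t; D) = exp(-x^2 / (4 D t)) / sqrt(4 pi D t)
# over an exponential (maximum-entropy) prior on the diffusion coefficient,
#   p(D) = exp(-D / Dbar) / Dbar.
# Closed form of the marginal kernel (a standard Laplace-type integral):
#   exp(-x / sqrt(Dbar * t)) / (2 * sqrt(Dbar * t)).
Dbar, t, x = 1.0, 2.0, 1.5

# Substitute D = u^2 so the integrand is smooth at the origin.
u = np.linspace(1e-6, 12.0, 400_000)
integrand = (np.exp(-u**2 / Dbar) / Dbar) \
    * (np.exp(-x**2 / (4 * u**2 * t)) / np.sqrt(4 * np.pi * u**2 * t)) \
    * 2 * u                                   # Jacobian dD = 2 u du
smeared = float(((integrand[:-1] + integrand[1:]) / 2 * np.diff(u)).sum())

closed = np.exp(-x / np.sqrt(Dbar * t)) / (2 * np.sqrt(Dbar * t))
```

This is the origin of the exp(-x/sqrt(Dt)) shape debated later in the thread: it is not the bare heat kernel, but the bare kernel averaged over an exponential spread of D.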

WebHubTelescope,

After you have ignored the previous post (without answering any of the dozen questions asked of you), you now claim that you want to solve the Fourier law of heat conduction, which you also call the heat equation. You do not write any equation. I will. Fourier’s law is

q = – kappa nabla T

where kappa is the thermal conductivity and q the heat flux.

The heat equation is

@T/@t = (kappa/C) @^2T/@x^2 = lambda @^2T/@x^2

where C is the volumetric heat capacity and @ denotes a partial derivative.

Then you jump to talking about some “diffusion coefficient” D, but the diffusion coefficient D appears in Fick’s law of diffusion, not in Fourier’s law nor in the heat equation. Using a bit of imagination, it seems that you name “diffusion coefficient”, and denote by D, the coefficient lambda. In fact, the thermal diffusion coefficient (sometimes denoted D’) is something completely different from kappa and lambda.

The main point here is that Fick’s law of diffusion, Fourier’s law of conduction, and similar laws are only valid in situations where the local equilibrium approximation holds.

Since Fourier’s law is more fundamental than the heat equation, I will continue with the former. Using standard statistical mechanics methods we can obtain both Fourier’s law and the value of the coefficient kappa. As is well known, the value of kappa is computed by using the local equilibrium PDF (often named the Maxwellian PDF), which, of course, has an exponential shape. But once again this is not a MAXENT result, but the standard equilibrium theory known much earlier…

The standard theory shows that kappa depends on the values of local parameters such as the density of particles n(x). If you do not know the value of those local parameters you could try to obtain an average value n by using the Maxwellian PDF.

Of course, you do not need to compute the value of any transport coefficient (Fick’s, Fourier’s…); you can merely leave it as a free parameter to be fit to the data a posteriori. Precisely in this way the experimental values of many transport coefficients have been obtained by scientists and engineers.

Evidently none of this is an application of Jaynes’ “Maximum Entropy principle to what is essentially a non-equilibrium problem”, but an application of the standard theory developed much earlier by Boltzmann, Maxwell…

There are other issues in your post. For example, you allude to the heat kernel associated to the heat equation, but the heat kernel associated to the equation is proportional to e^{-x^2/Dt}, whereas you give e^{-x/sqrt{Dt}}.

Now I understand where you are confused with my approach. Yes, the e^{-x^2/Dt} is the well-known kernel solution I referred to earlier. I probably should have explicitly written that part out. However, the e^{-x/sqrt{Dt}} solution comes about after I applied the Maximum Entropy uncertainty to D.

This essentially describes a model where many parallel paths of diffusion occur, corresponding to a heterogeneous environment. Essentially, I am trying to model a very mushy, disordered behavior. We can’t control nature; we can only attempt to model what it gives us.

It looks like we are in complete agreement, as you were able to infer the heat kernel I was basing my premise on.

I have got the impression that the words “ill-founded and/or fail” are not understood in the same way by all participants. It may even be that a major part of the disagreement is due to that, and to the related speaking past each other.

Examples of “ill-founded” were given: for example, Jaynes’ MAXENT gives an evolution law which violates transitivity and gives different predictions for the same system and process. Or, for instance, the paper criticizing MEP shows how the supposed ‘derivations’ ignore nonlinearities, and once those are reintroduced the derivation collapses.

Examples of “fail” are also on the menu… Since MAXENT ignores the relative component of f^c, it cannot study the stability of systems far from equilibrium, giving wrong predictions. Since Jaynes’ MAXENT theory lacks the equations for the non-conserved variables, it cannot make any prediction about the dynamics of those variables, and the theory fails to explain phenomena.

In my criticism of the Kleidon paper, I showed how his attempt to equate a minus-G term with an entropy production term fails miserably when pressure is not constant and the system is not closed. Both requirements were omitted by Kleidon, but are needed for G to be a thermodynamic potential.

To be clearer, someone claiming that dG < 0 for a system for which dG > 0 would be an easy-to-understand example of the concept “fails”.

So, as a summary, MaxEnt both fails as a method, and succeeds in the way it copies the conventional approach.

That sounds like a tie, and so I won’t update my usage of MaxEnt method. I will just have to apologize to all the science grammar police.

WebHubTelescope:

If you totally ignore Jaynes’ own words and redefine MAXENT to be anything you want, then it can be as satisfactory as you want.

Pekka.

Your thoughts are critical, but your reasoning is biased.

You are of the genre that will be able to conceive the new paradigm of Unified Climate Theory, whereas, those of Josh’s genre, who cannot think critically, never overcome their bias.

I try to be open to new ideas, but there’s a lot of physics and related science that I do consider so strongly justified that it’s usually better to disregard the possibility that that will change.

There’s also a lot of such solid background in climate science. Thick books like that of Pierrehumbert are based on such knowledge, but reading such books with understanding reveals that they don’t even try to provide anything close to a full picture of the actual state of the Earth climate. Textbook theories help in understanding what’s behind observations, but based on them alone the climate could have been much more different from the present, at any time in millions of years of history, than we know to be true, or than anybody has projected for the future.

All the “theories” that I have seen proposed as alternatives to the kind of theory that Pierrehumbert presents are most certainly wrong, but there remain many possibilities for improving the understanding without attempting to invent some revolutionary new theory. Unfortunately, real improvements are difficult to achieve, and one has to first understand all the basics before any hope of success can be considered possible.

markus –

Please tell me how you’ve reached your ability to reason critically without bias. Is it simply a function of your superior intelligence, or is there some specific technique that you use?

More a matter that my bias points towards truth. It is the bias of skepticism that gives me truthful knowledge.

Pekka, I only know of basic understandings; I am not a scientist.

You say: “There’s also a lot of such solid background in climate science. Thick books like that of Pierrehumbert are based on such knowledge, but reading such books with understanding reveals that they don’t even try to provide anything close to a full picture of the actual state of the Earth climate”.

And in that statement is reason enough why I don’t read the bible. As with the debunked CO2 climate theory, they based their faith on a false premise.

Do you believe a man was raised from the dead?

Markus,

I wrote what I wrote because I have noticed that you present ideas that I’m 100% certain are seriously wrong.

The Unified Climate Theory is total crap, as are several earlier “theories” that have been discussed on this site. I would not be surprised if Judith’s conclusion is that there’s no need to give more weight to this latest case of hopeless attempts to discredit theories that have given no reason for being discredited.

I think that the new “theories” like UCT are supported by two groups of people: those who just like to oppose the present and don’t care the least about evidence, and those who want to give weight to empirical evidence but don’t realize how huge the total empirical support for the basic physical theories is, and how well and unambiguously physicists can apply it to the basic understanding of climate processes.

There simply are no problems in need of new theories in the basics; the need is in the ability to use any theory in accurate understanding of a very complex system. The system is complex because it’s large, consists of very many processes, is influenced by complex topography, etc., etc. This complexity is something that cannot be removed by any theory. All new theories make their attacks against those features of the present understanding which are well known, and they make those attacks by making explicitly and provably wrong assumptions, or strong assumptions that lack all real empirical or theoretical support; they all are just hoaxes.

My main point briefly:

It’s important to be open to new ideas and it’s important to remember that even rather well confirmed “facts” may turn out to be in error or incomplete in an unexpected way. In this sense one should never feel fully certain.

Taking any specific new idea, it’s often 100% certain that it is wrong, because it contradicts in an obvious way knowledge based on empirical observations, directly or indirectly but still without doubt. The Unified Climate Theory is wrong in this way. I’m sure the right arguments are given on WUWT as well, although I haven’t looked at them carefully enough to tell which message presents them best.

Thank you for your thoughts Pekka. I will not bore you by waxing lyrically of my own philosophical perspectives. But,

“Taking any specific new idea, it’s often 100% certain that it is wrong, because it contradicts in an obvious way knowledge based on empirical observations, directly or indirectly but still without doubt”.

Without rhetoric, give me the empirical evidence relied upon, to support the AGW theory. No proxies please. I’ve searched without success.

“Unfortunately real improvements are difficult to achieve and one has to first understand all the basics, before any hope of success can be considered possible”.

Pekka, I would not throw my hat into the ring unless I knew I was on a winner. My profession is disciplined that way. It is my appeal to the authority of my own reasoning. Unlike many here, coached in the scientific method, I have no blinkers attached; I am free to the musings of an open mind. I am able to make my own mistakes and not be embarrassed by them.

The basic premise of N&K is this:

Kinetic Energy is (forced) employed by Potential Energy until mass re-radiates the employed kinetic energy to space.

So Enhanced Energy is the kinetic energy plus the potential energy of mass. The mechanism of conjoining energies causes heating; the mechanism of decoupling causes cooling.

That is my own very, very short hypothesis. The long one would perplex you, no offence intended.

But I am MAD, so none of it is truth. Right?

I know perfectly well that it appears useless to argue with your views on the net. There isn’t the slightest hope that either one of us would admit openly that he has changed his mind in the least, but I always have a tiny hope that something gets through and starts to have its effect.

I’ve not written anything about AGW theory, and neither is Pierrehumbert’s book about AGW theory. We both discuss physics and its application in understanding the atmosphere. The empirical evidence that I refer to is all the empirical evidence that establishes the validity of our understanding of physics. That is the huge and powerful body of evidence whose value is dismissed by all those who promote theories that contradict well-known physics.

Yes, that is indeed our view of the situation, that of a very complex system. Yet, the reason statistical mechanics and the coarse-grained thermodynamic concept got invented was to convert complexity into a simpler concept. This is part of Murray Gell-Mann’s Holy Grail of plectics, and part of the rationale for his starting the Santa Fe Institute. Too many people think that Gell-Mann and company intended to study complexity for complexity’s sake, and don’t realize that Gell-Mann’s real interest lies in extracting simplicity from the complex.

Consider that besides statistical mechanics and maximum entropy, physicists also rely on concepts of symmetry and group theory to model the essence of a behavior. Yet, should we expect someone like Juan Ramon Gonzalez to cry foul at the use of symmetry or group theory to model a system, just because it is not real physics? Nothing of the sort, and we should try whatever technique is in our arsenal to divide and conquer the beast. Sorry, but that was just the way I was educated by my physics professors and thesis advisors.

Pekka also mentions the role of complex topography in the system. Coincidentally, that is something I have studied as well through maximum entropy principles. I have actually gone through number crunching the terrain profile of the entire USA down to the 100 meter post level and analyzed that in terms of maximum entropy at different scales. The superstatistics of the distribution is quite interesting.
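
The superstatistics idea can be illustrated with synthetic numbers. The sketch below is entirely made up for illustration (the means, sizes, and seed are mine, not the actual USA elevation data set): within each "region" the relief follows an exponential density, the maximum-entropy choice given only a mean, and letting that mean itself vary across regions produces a pooled distribution with a markedly fatter tail than a single exponential of the same overall mean.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for windowed terrain relief (illustrative only).
# Each "region" gets its own mean relief m; within a region, relief is
# drawn from an exponential density, the MaxEnt choice given only that mean.
m = rng.exponential(scale=100.0, size=1000)          # regional mean reliefs
relief = rng.exponential(scale=np.repeat(m, 1000))   # pooled superstatistical sample

# One exponential with the same overall mean, for comparison:
single = rng.exponential(scale=relief.mean(), size=relief.size)

# The superstatistical mixture has a visibly heavier tail:
q = 0.999
print(np.quantile(relief, q), np.quantile(single, q))
```

The point of the comparison is that scale-mixing of maximum-entropy densities, not the individual exponentials themselves, is what generates the interesting tail statistics.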

And incidentally, I just discovered that this approach is justified by invoking the Giry monad from category theory, which you can read more about from visiting the Azimuth blog, which is continuing a parallel discussion of the topic of thermodynamics, but at a much deeper and more fundamental level than we are having here:

http://johncarlosbaez.wordpress.com/2012/01/19/classical-mechanics-versus-thermodynamics-part-1/

Again this all goes into the hopper of scientific analysis in the hope that we can put this all together and reduce the complexity of the climate system. Some people may consider this hopeless but obviously I and many others do not.

WHT,

The number of components in a system is certainly not enough for making it complex in the sense I mean. A small volume of gas has a huge number of molecules and is actually simple to analyse just because of that, when the boundary conditions are favorable.

Adding larger scale structure makes the issue more difficult, but it may happen that the larger scale structure is such that a new statistical simplicity emerges. If that happens, it’s likely that some kind of MaxEnt principle can be developed to give valid results, but the order of inference goes from other knowledge to MaxEnt.

When there are larger scale structures to the point they occur in the Earth system with a small number of continents and oceans, there’s no hope that any statistical approach can give a full and accurate description of all important phenomena.

In the Earth system we have all scales from subatomic to global. Some subsets of these structures may follow some scaling laws (power laws with some exponents), but that cannot be a full description of the system and cannot provide full results. Thus we are left with the situation that some rules of thumb apply somewhere, and some others somewhere else. These can be identified empirically, and perhaps also from results of more complete models, but does this observation really help us toward a deeper understanding of the system? I don’t know of any evidence that it does.

“The empirical evidence that I refer to is all the empirical evidence that establishes the validity of our understanding of physics. That is the huge and powerful body of evidence whose value is dismissed by all those who promote theories that contradict well-known physics”.

Dear Pekka, I am glad that you do not give up hope that something will eventually get through. We are at equilibrium in that, but I should try to make you understand: AGW does not validly employ all that evidence that proves relativity.

The theoretical problem with current climate physics is that it does in fact breach the first law – the creation of energy. An unenclosed GH system cannot create energy, any more than a closed system can.

Get the basic principles right first Pekka and truer results will follow.

The roof of our atmospheric greenhouse starts at the surface of the earth and ends at the TOA, unlike AGW, which claims GHGs act like a roof over us. So, adding CO2 does change the composition of the Whole of Atmosphere GH, but as the greenhouse works from top to bottom and from bottom to top, this change also affects the amount of incoming energy into the system, rendering the composition of the Whole of Atmosphere GH basically irrelevant.

Pekka, we are very likely on the cusp of enlightenment. Imagine that the GH is in fact the force of pressure on mass, not the composition of the atmosphere.

N&K haven’t, I repeat, haven’t envisaged a theory and then worked the science. No, they worked the science and then envisaged a theory. They have made a discovery; AGW proponents, by your own reckoning, have only shuffled the deck.

Pekka, Einstein would be alarmed you put little faith in relativity, and, I would think, be a little bit miffed that his future fellows were overwhelmed by the understanding of it.

“When there are larger scale structures to the point they occur in the Earth system with a small number of continents and oceans, there’s no hope that any statistical approach can give a full and accurate description of all important phenomena”.

Pekka, I do agree that it will break down at the most coarse-grained level — as you say the handful of oceans and continents do create that last bit of determinism that cannot be modeled stochastically.

So there is bad news and there is good news. The bad news is that the probabilistic view breaks down, but the good news is that there are not many oceans to deal with.

Here is an old but interesting article on modeling the ocean’s thermocline at a gross level.

http://www-pord.ucsd.edu/~rsalmon/salmon.1982b.pdf

I think this was published after Paltridge’s work but he does not reference Paltridge.

Capt. Dallas:

Nature is very, very complex, and this complexity has little to do with your “degree of inertia”. There are many issues in your posts that I am going to ignore, such as your thinking that gases are different from fluids, when any student knows that a gas is just a kind of fluid…

Maybe you are happy if tomorrow someone claims that Newton’s laws of gravitation were invented by Jaynes in the 20th century and are a MAXENT result (or your “MAXENT WebENT”), but I would react when reading such nonsense; especially if such nonsense is used to overemphasize an ill-defined theory (MAXENT) that has given nothing that was not known before in the physics and chemistry of nonequilibrium.

Juan, “degree of inertia” is only one of the complexities, especially as related to the pseudo-chaotic behavior of non-linear systems. I, by the way, do not think gases are different from fluids; gases are fluids. In general, though, the properties of gases are different enough from liquids that engineers use a less exact terminology. Terminology is one of the largest barriers to climate science progress, IMHO.

MAXENT (WebENT) could be a useful tool in a combined-method approach to finding out IF a reasonable solution is practical. So far, I am convinced that no single approach is up to the task. Personally, I see constructal theory and/or Maximum Entropy Production as more useful, but I am open to Web’s approach, provided he realizes the limitations of that approach. Inertia is but one of those limitations.

Capt Dallas:

“Degree of inertia” is not a scientific concept in any fundamental science that I know. It must be a pseudo-scientific concept used by some camp to try to lend itself importance. I know that the term is used by economists and in marketing.

I cannot know what you think, but I am replying to what you write, and you wrote that “gases diffusing have less inertia than fluids”. It seemed to me that you thought that gases and fluids are two different things.

If you really think that gases are fluids (as you affirm NOW), then what you said was “fluids diffusing have less inertia than fluids” and I can object to this phrase as well.

Juan, I am sorry that my command of the language is not up to your standards. Engineers do have a nasty habit of thinking of matter in phase states: solid, liquid, and gas. All can be fluids in the strict meaning of the term. If I remember correctly, an object in motion tends to remain in motion until persuaded to stop or change the rate and/or direction of that motion. It is a lot easier to persuade a less massive object to change its direction and rate of motion than a more massive object. When I use the term gas, I am implying it would be less massive than a liquid, which some engineers refer to as “fluids” inappropriately, when the context of the sentence should indicate there is a not-so-subtle difference in the properties of the “fluids” being considered.

If that is cleared up to a sufficient degree, why would I use “degree of inertia” instead of precisely defining the term? Laziness. I would assume that, in a discussion of methods of modeling a non-linear, non-equilibrium thermodynamic system, the varying viscosity and mass of fluids in motion with different “braking distances” might contribute to what some might call chaotic or unpredictable changes in rates of energy transfer. A somewhat complex problem, it appears to me. One that may require an outside-of-the-box approach.

Juan –

I have to say that you write very well, and you seem very logical and rational in your thinking in general. But then again, you do write a statement like that above – which amounts to an obvious self-contradiction (and you continue with that contradiction below where you again discuss what you say that you’re going to ignore). It’s a very minor point w/r/t the scientific debate – except that it speaks to what might potentially be a larger characterization of your reasoning: That it is influenced by biases – as reflected in an interest in somehow proving that Cap’n knows less than what “any student” might know (which seems quite dubious) – as opposed to simply objective scientific analysis.

The Google led me to the original Paltridge applications, and other papers.

Very interesting applications, IMO.

WebHubTelescope:

you often claim you are familiar with scientific topics, but you continue writing completely distorted views of well-known scientific issues.

Neither statistical mechanics nor “the coarse-grained thermodynamic concept” got invented “to convert complexity into a more simple concept”. Statistical mechanics was developed to search for a link between the microscopic world and the macroscopic world, so that one could, for instance, compute macroscopic parameters from molecular structure. Statistical mechanics is more complex than hydrodynamics or thermodynamics because it considers atomic-molecular structure.

You have named Gell-Mann often but his contributions to statistical mechanics or to thermodynamics are easy to count: zero. Gell-Mann invented the term plectics and defines it as

But this is not a revolutionary concept; it is what statistical mechanics has been doing for a century! As a consequence, only Gell-Mann and perhaps a couple of other people use the term plectics, with the immense majority of scientists and engineers ignoring it.

Do you really mean to compare MAXENT to symmetry or group theory? Wow! Open a textbook on physics or chemistry and you will find applications of symmetry or group theory, while the same textbooks will not even mention MAXENT.

You have been told and shown the technical reasons why MAXENT is rejected, but you seem to believe in some kind of conspiracy theory by physicists :-)

About the Azimuth blog, sorry, but there is absolutely no “deeper and more fundamental level” discussion therein, just a lot of confusion and some nonsense by people whose contribution to thermodynamics or statistical mechanics is zero.

Read again what you just wrote, Juan, as you are starting to contradict yourself. I am not hung up on the purity of a subject as much as you seem to be. I am simply trying to provide some motivation and spirit to the discussion. The motivation is always to try to make the problems tractable. Working out the microscopic motions of all the particles is obviously impossible, so macroscopic stat mech was invented. And this made the solution simpler and, by implication, solvable. How can anyone argue with that?

Wow. I don’t know that I would draw the conclusion that they were spouting nonsense. I pick up all sorts of interesting nuggets from there.

At least I feel myself in good company. I always thought the Azimuth guys like Baez, Corfield, etc are way too smart for what they are trying to do, as they keep finding interesting diversions. I would pay big money to have their knowledge of math.

I’ve added Azimuth to my blogroll; I find it very interesting and unique in the blogosphere.

Gosh, what a great blog, I’d not seen it before! How beautiful is that Hamilton-Maxwell comparison? Very grateful to you for pointing this blog out.

WHT,

the goal of statistical mechanics is not merely to simplify the equations of mechanics. If tomorrow you could solve the Hamilton equations for an N-body system (N ~ 10^23) using a powerful supercomputer, this would not help us to understand why a fluid behaves diffusively, still less to compute a diffusion coefficient. As stated before, statistical mechanics provides a link between microscopic equations and macroscopic equations. Moreover, the techniques developed in statistical mechanics have found novel applications to systems as simple as one degree of freedom.

I already stated my opinion about the discussion on ‘thermodynamics’ in the Azimuth blog. If you are happy discussing such stuff in the company of mathematicians and philosophers with zero contributions in the topic, that is fine with me. In the past I already corrected some similar mistakes of one of them, whom I know rather well, in his own blog. You consider him “too smart”, but the fact is that even in his own field of ‘expertise’ (not thermodynamics, as said) he has been dubbed a crackpot in public by well-known physicists working in the same field as him.

The Azimuth people seem to appreciate having readers contribute to the discussion and make corrections when necessary. I believe that they are interested in exploring the mathematics to see how it may apply.

Well those look like unreferenced assertions to me.

Baez provided a good explanation for the territory they are traversing when he wrote recently:

I think they know exactly the boundaries of what they are trying to accomplish. Mathematically exploring the physics is one area where a collaborative effort works. No need for a lab and concrete experiments.

WebHubTelescope,

You say “Well those look like unreferenced assertions to me.” That must be right for outsiders unaware of the flame wars. People in the field know that he has been considered a crackpot by several physicists, both in public and in private.

I only mentioned a couple of names and can’t imagine them as crackpots. They actually moderate the groups, so they have to deal with bizarre notions all the time. Since they have deep math backgrounds, they may use abstract notions, but that’s no reason to make a blanket statement.

Any physicist (or mathematical physicist) who “keeps realizing more and more that our little planet is in deep trouble!” due to changes in CO2 is a crackpot by definition. Sorry, cannot resist.

Most natural scientists are observant about their environment. Something about the way they were educated and inspired about nature.

Sorry couldn’t resist.

Most natural scientists might be observant about their environment, but are apparently less clueful about the physics that drives that environment. There must be something about the way they were educated and inspired that makes them believe in MaxEnt (or was it “MinEnt” a while ago, in chemistry?), or in any other simplified construction that would govern the nonlinear dynamics and stationary state of an open system of coupled reservoirs of Earth fluids…

WHT, you say you “can’t imagine”… but it was not about imagination. You claim that they “have deep math backgrounds”, but some people who have won a Fields medal think otherwise. In any case that was not the point, because it was not about math but about physics.

Al Tekhasski,

if by “MinEnt”, you really mean the minimum entropy production theorem, the comparison to MaxEnt is not fair.

Maybe some scientists initially proposed the theorem as a generic nonequilibrium evolution criterion, when the theory of nonequilibrium thermodynamics was being developed and still in its infancy, but subsequent analysis restricted its use to the linear regime (Prigogine himself published results showing why one should expect the lack of a general ‘potential’ in far-from-equilibrium regimes). Today the theorem is acknowledged as one of the main results derived in linear non-equilibrium thermodynamics.
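
For readers following along, the linear-regime theorem being referred to can be stated in a few lines. This is the standard textbook sketch in Onsager notation, not taken from the comment itself: with constant symmetric coefficients L_ij and the force X_1 held fixed by the boundary conditions,

```latex
% entropy production in the linear regime, constant symmetric L_{ij}:
\sigma = L_{11} X_1^2 + 2 L_{12} X_1 X_2 + L_{22} X_2^2 \;\ge\; 0
% minimize over the free force X_2 (X_1 fixed by the constraints):
\frac{\partial \sigma}{\partial X_2} = 2\,(L_{21} X_1 + L_{22} X_2) = 2 J_2 = 0
% so the minimum of entropy production coincides with the stationary
% state J_2 = 0; the argument fails as soon as the L_{ij} depend on the
% forces, which is why the theorem is confined to the linear regime.
```

The last comment is exactly the restriction being discussed above: nothing in this derivation survives once the coefficients become force-dependent.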

The situation here is not very different from Einstein initially obtaining wrong field equations of general relativity, when he and others were developing the theory and it was still in its infancy. Later, the initial field equations were amended with the famous trace term.

Neither situation can be compared to that of MaxEnt. MaxEnt has been proved wrong both on a foundational basis and from the perspective of applications, although a very tiny community of about half a dozen people ignores the physical and mathematical facts and takes MaxEnt as gospel.

The Fields medal winner Terence Tao has encouraged what Baez is trying to do at least a few times

http://terrytao.wordpress.com/2009/09/30/two-quick-updates/

http://terrytao.wordpress.com/2009/08/09/what-do-mathematicians-need-to-know-about-blogging/

So what I suggest you do is wander over to the Azimuth blog, the N-category blog, and Tao’s blog, and proclaim that they should all stay away from discussing physics. I am sure that they would appreciate getting some feedback.

Here is another Fields winner, David Mumford, who wrote a book published last year called “Pattern Theory: The Stochastic Analysis of Real-World Signals”, which I have been studying for its applications of entropy to understanding stochastic phenomena. This is what David Corfield says about Mumford:

http://math.ucr.edu/home/baez/corfield/2006/02/david-mumford.html

I do like to read what these guys have to say because the applied math stimulates thought. Your mileage appears to vary from mine.

Juan,

From what I found about 30 years ago, philosophical musings about dissipative structures are no more than wishful thinking about the ordinary, well-known (in certain circles, of course) linear theory of hydrodynamical stability by a newcomer from chemistry. I don’t know of any application of this principle other than to ordinary Rayleigh-Benard convection, where the same result just has a fancy interpretation and new (at the time) buzzwords. It is probably quite a bit better than MaxEnt, so the comparison might be unfair, I agree. But it is still in the same ballpark of brutal [in]applicability to the real far-from-equilibrium, non-stationary situation in Earth climate dynamics. The same goes for other philosophical reincarnations such as “constructal theory”…

Cheers,

– Al Tekhasski

WebHubTelescope,

I emphasized the word physics in bold face… because I am referring to research in this field. But you have ignored this and present us, as support for your beliefs/credo, two links to Terence Tao’s blog. In the first link he acknowledges a Baez article in a Notices journal about blogging, and the second is about the same: blogging. Thanks for the laugh.

You make a suggestion about blogs. This is a difference between you and me. You want everyone to share your beliefs/credo. This explains why you are so angry when most scientists and engineers ignore MaxEnt and similar flawed stuff. You even presented conspiracy theories here:

Reading the “Philosophy of Real Mathematics” article that you linked was very boring.

Al Tekhasski:

What you write has nothing to do with minimum entropy production, which is often quoted as one of the main results of linear non-equilibrium thermodynamics.

Many examples of dissipative structures are known and studied. Turing structures (a stationary spatial dissipative structure) are observed in the chlorite-iodide-malonic acid reaction in an acidic aqueous solution. They can be studied using the Brusselator model of the Brussels school, which, of course, goes beyond hydrodynamic theory.
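
The Brussels-school model mentioned above (the Brusselator) is simple enough to sketch in a few lines. The parameter values and initial condition below are illustrative choices of mine; the point is only that for B > 1 + A², the fixed point (A, B/A) is unstable and the concentrations settle onto a limit cycle:

```python
import numpy as np

def brusselator(T=50.0, dt=0.001, A=1.0, B=3.0):
    # Brusselator rate equations, integrated with a plain RK4 stepper.
    # For B > 1 + A**2 the fixed point (A, B/A) is unstable and the
    # concentrations settle onto a limit cycle (sustained oscillations).
    def f(s):
        x, y = s
        return np.array([A + x * x * y - (B + 1.0) * x,
                         B * x - x * x * y])
    n = int(T / dt)
    traj = np.empty((n, 2))
    s = np.array([1.1, 3.0])   # a small kick away from the fixed point (1, 3)
    for i in range(n):
        k1 = f(s)
        k2 = f(s + 0.5 * dt * k1)
        k3 = f(s + 0.5 * dt * k2)
        k4 = f(s + dt * k3)
        s = s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        traj[i] = s
    return traj

x = brusselator()[:, 0]
late = x[len(x) // 2:]         # discard the transient
print(late.min(), late.max())  # a wide gap shows a sustained oscillation
```

Adding diffusion terms and a spatial grid to these same rate equations is what produces the stationary Turing patterns referred to in the comment; the sketch above covers only the well-mixed oscillatory case.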

Juan,

You were the one who brought up criticisms of the Azimuth people from Fields Medal winners, who would have to be mathematicians. Your whole argument seems to revolve around the purity of a discipline, whether it be math or physics, and then you defer to mathematicians to resolve the issue? And then you start whining about a link I referenced being boring.

At this point, the arguments have become rhetorical.

WHT, I wrote what you still do not understand

And that is why it has turned into a rhetorical debate, just as I said.

I think the reference to a mathematical technique is just shorthand for describing the modeling of some physical behavior. Your opinion obviously differs.

Incidentally, I found what I think is a cool application of max entropy uncertainty propagation to characterizing behaviors in my semiconductor technology specialty. If interested, stay tuned …

WebHubTelescope,

After seven paragraphs showing why MAXENT plays no role in heat flow and the Fourier equations, I made only a minor comment, in the eighth paragraph, about your heat kernel. Let us study this specific issue a bit more…

The derivation of the heat equation is easy: one starts from the energy balance law, substitutes the Fourier law, assumes a homogeneous medium (so that kappa and C are independent of position), and obtains the heat equation written below, where lambda == (kappa/C) is independent of position. Now that you have confirmed that you call lambda D, what you are doing seems to be the following.

You start from the heat equation [in my post of above, powers of 2 are lacking in some partial derivatives]

∂T/∂t = D ∂²T/∂x²

which is only valid for homogeneous systems (D is constant), but you decide that it can also be applied to non-homogeneous systems with non-constant D(x). In your own words, “The earthen material that the heat is diffusing through is heterogeneously disordered”.

After this you decide that you need to obtain an average value for D(x), what you call a “mean value” or “smeared version” of D(x), before solving the heat equation (ignoring that D in the heat equation is already independent of position); then you claim that your ‘kernel’ e^{-x/sqrt{Dt}}, with a smeared D independent of position, is a solution of the “original heat equation”.

However, your ‘kernel’ is not a solution of the heat equation. The solution to the heat equation is the Gaussian heat kernel e^{-x^2/(4Dt)} (up to normalization), where D is constant.

There are more issues with your ‘kernel’ but I think that is enough.
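
For reference, the two textbook items under dispute can be written out explicitly (standard sign convention; C here denotes the volumetric heat capacity):

```latex
% 1-D energy balance plus Fourier's law q = -\kappa\,\partial T/\partial x,
% homogeneous medium (\kappa and C independent of position):
C\,\frac{\partial T}{\partial t} = -\frac{\partial q}{\partial x}
  = \kappa\,\frac{\partial^2 T}{\partial x^2}
\quad\Longrightarrow\quad
\frac{\partial T}{\partial t} = \lambda\,\frac{\partial^2 T}{\partial x^2},
\qquad \lambda \equiv \frac{\kappa}{C} .

% fundamental solution (heat kernel) for constant \lambda = D:
G(x,t) = \frac{1}{\sqrt{4\pi D t}}\; e^{-x^2/(4 D t)} .
```

With these two lines in view, the disagreement below is only about what happens when D is treated as uncertain rather than constant.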

Juan, it is a superposition of solutions, which is a perfectly acceptable way of doing propagation of uncertainty.

This superposition takes a maximally non-committal view of the thermal diffusion coefficient, assuming all we know is its mean.

This is very intuitive to understand, and I wrote about a practical application elsewhere. It has to do with an insulated home where there may be multiple pathways for heat to escape. If you know an average thermal resistivity but, due to variable insulation construction, have to guess at the variance, this is one way to do it. The least informative choice is the maximum entropy one.

You may not like it, but the proof is in how useful it is from an applied physics or engineering perspective. In that case, you may not like it because it is trivial. Well, I like trivial.
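
Whatever one calls the procedure, the superposition described here is a well-defined integral that can be checked numerically. The sketch below is my own construction for illustration: average the standard Gaussian heat kernel over an exponential density for D (the maximum-entropy density given only a mean D̄), and compare with the closed-form result, a two-sided exponential in x of the kind quoted earlier in the thread.

```python
import numpy as np

def smeared_kernel(x, Dbar=1.0, t=1.0, n=200001):
    # Average the Gaussian heat kernel over D ~ Exponential(mean = Dbar),
    # the maximum-entropy density for D when only its mean is known.
    # The substitution D = u**2 removes the 1/sqrt(D) endpoint singularity;
    # the integral is then done with a plain trapezoid rule.
    u = np.linspace(1e-9, 10.0 * np.sqrt(Dbar), n)
    f = (2.0 * np.exp(-u * u / Dbar - x * x / (4.0 * u * u * t))
         / (Dbar * np.sqrt(4.0 * np.pi * t)))
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(u)))

def laplace_kernel(x, Dbar=1.0, t=1.0):
    # Closed form of the same average: a two-sided exponential in x.
    return np.exp(-abs(x) / np.sqrt(Dbar * t)) / (2.0 * np.sqrt(Dbar * t))

for xv in (0.0, 0.5, 1.0, 2.0):
    print(xv, smeared_kernel(xv), laplace_kernel(xv))
```

Note what this does and does not show: the exponential-in-x profile really is the marginal of Gaussian kernels over an exponential spread of D, but it is a superposition of solutions, not itself a solution of the constant-D heat equation, which is consistent with both sides of the exchange above.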


I already explained to you how the heat conduction coefficient is derived using ordinary statistical mechanics and how we can obtain an average value using the Maxwellian distribution. Evidently, none of this has anything to do with MAXENT, although you continue renaming standard equilibrium theory as MAXENT in a rather unfair way, like when you renamed the well-known Gibbs distribution as a Jaynes MAXENT distribution!

Moreover, you have ignored every technical question about the heat equation and its kernel. You can invent any equation that you want and give any solution that you want, nonsensical or not, but please do not continue blaming scientists for ignoring such ‘brilliant’ ideas.

Finally, I only want to add that I made a sign mistake in a post above: the coefficient lambda in the heat equation is related to the Fourier coefficient by lambda = kappa/C. Also, the Fourier law given above ignores the presence of fields and needs to be generalized for such cases.

I agree with you that Fourier’s Law is a component of the heat equation and the two are not the same thing. Sorry for the confusion in implying that they were the same (when I parenthetically stated “aka”), and I want to state plainly that Fourier’s law is more fundamental and is used to derive the heat equation. That said, this misstatement had nothing to do with the rest of my analysis, and I stand by what I was trying to do with respect to uncertainty propagation.

I put together a blog post on this topic, that has some implications for transient AGW analysis.

Carry on, and you can continue to go postal on what I am trying to do.

Believe me, I really don’t mind.

The issue was not whether the Fourier law and the heat equation are the same or not. Evidently they are not. The real issues are the limitations of both laws, when they apply and when they do not, what the kernel of the heat equation is, and why there is no trace of MaxEnt in your musings.

Evidently, you continue to ignore such issues.

I added some more information to my thermal diffusion model here. The results are very complementary to a 1D box diffusion model that James Hansen published in 1985.

Ramble on, Juan. You can call it what you want, I don’t really care any more. I am satisfied that I am on an interesting track.

You have missed another opportunity to correct the nonsense that you said… but I found this page

https://docs.google.com/document/pub?id=1TbosA_JLgwcjj6SfOqmvQu4oEgkWNQOw7XgvxvUZoJE

where you are presented as

Fine!

You claim now that your nonsensical claims about MaxEnt and the heat equation are related to an old box model published in Science. Of course that paper does not say the nonsense that you say about the heat equation, nor does it mention MaxEnt.

Currently climate scientists use two-box models as the simplest models for interpreting data, though even those simple models are more complex than your MaxEnt ruminations.

Scientists also know the limitations of the two-box model and how to develop corrections to it using science. Of course MaxEnt ruminations are absent in all this.

I took a look at your crank blog and I can see that in your “wave-energy-spectrum” entry you continue calling the Gibbs result P = exp(-bE), where the energy E = E(A), with A the area, a MaxEnt result, despite the fact that at least two posters in this thread corrected you.

The fact that you try to rewrite the history of physics to support your silly ideas will be added to the presentation given in the above Google doc.

I will not waste more time with a crank.

Juan,

You have been what we call punk’d. That web page that you link to is something that I wrote recently. This is hysterical. I am making fun of myself for getting into the math, and you take it as evidence for your argument.

Thanks for the laugh and you made my day.