Week in review – science edition

by Judith Curry

A few things that caught my eye this past week.

Effects of atmospheric CO2 enrichment on net photosynthesis and dark respiration rates [link]

Identifying key driving processes of major recent heat waves [link]

Reassessing Southern Ocean air-sea CO2 flux estimates [link]

How and why will planetary-scale waves in the atmosphere change in response to global warming? link.springer.com/article/10.100

Detected global agricultural greening from satellite [link]

New paper by Scafetta and Wilson addressing the discrepancies in total solar irradiance during 1980-2018. mdpi.com/2072-4292/11/2

New paper evaluating UKESM1 climate model agupubs.onlinelibrary.wiley.com/doi/abs/10.102 High ECS (5.4K) and poor fit to historic observations (see below). Does this mean its ECS is too high? Are the runs of the future using this model going to be too warm?

Will plants help make the planet wetter or drier in a changing climate? | j.mp/36yx5dU 

Marine ice cliff instability mitigated by slow removal of ice shelves [link]

Recent increases in drought frequency cause observed multi-year drought legacies in the tree rings of semi-arid forests [link]

Sensitivity of projected long-term CO2 emissions across the Shared Socioeconomic Pathways [link]

Not all carbon dioxide emission scenarios are equally likely: a subjective expert assessment [link]

The blue holes are revered among divers for their deep, clear waters. They are also important keepers of the scientific record. [link]

Ancient air challenges prominent explanation for a shift in glacial cycles nature.com/articles/d4158

New elevation data triple estimates of global vulnerability to sea level rise and coastal flooding [link]  Rud Istvan’s takedown at WUWT [link]

Western Mediterranean SSTs were warmer than today when CO2 levels were 200-230 ppm (50-110K yrs ago, last glacial) and ~260 ppm (5K-10K yrs ago), when the Sahara had lakes, trees, humans. Abrupt climate shifts (°C/century) occurred without CO2 changes. sciencedirect.com/science/articl

How climate change could shift California’s Santa Ana winds [link]

Oceans vented CO2 during the last deglaciation [link]

Variations in the Intensity and Spatial Extent of Tropical Cyclone Precipitation. Geophysical Research Letters. agupubs.onlinelibrary.wiley.com/doi/pdf/10.102

Intense hurricane activity over the past 1500 years at South Andros Island, The Bahamas [link]

A 2-Million-Year-Old Ice Core from Antarctica Reveals Ancient Climate Clues e360.yale.edu/digest/a-2-mil

Reframing Antarctica’s meltwater pond dangers to ice shelves and sea level [link]

“Land plant evolution decreased, rather than increased, weathering rates” doi.org/10.1130/G46776

Goodwin et al (2019): “Conventionally, definitions of climate feedback & climate sensitivity […] do not include carbon cycle feedbacks. [We provide] a new framework to incorporate carbon feedback into the definitions of climate feedback and sensitivity.” doi.org/10.1029/2019GL

Enhanced ENSO variability in recent decades [link]

“The Southern Ocean is getting greener because the amount of marine plants (#phytoplankton) has been increasing in the last 21 years. These changes appear to be happening faster during the winter.” doi.org/10.1029/2019GL

Global warming displaces the East-Asian subtropical monsoon southward as enhanced equatorial warming causes Hadley contraction in June-July. [link]

Glacial rivers absorb carbon faster than rainforests [link]

How life blossomed after the dinosaurs died [link]

Uncertainty in the evolution of climate feedback traced to the strength of the Atlantic Meridional Overturning Circulation [link]

A new 800-year reconstruction of W. Arctic Ocean sea ice coverage indicates sea ice has been “relatively stable” since 1800, the W. Arctic was ice free for 2-5 months per year during the 1500s-1700s vs. <1 month today, and temps were warmer in the 1930s. agupubs.onlinelibrary.wiley.com/doi/abs/10.102

Recent #droughts in India have been less severe but more detrimental!  Long-term (1870-2018) drought reconstruction in context of surface water security in India sciencedirect.com/science/articl

The role of atmospheric nuclear explosions on the stagnation of warming in the mid 20th century [link]

Policy, impacts & technologies

Be cautious with the precautionary principle [link]

A new study examines how people’s opinions on climate change affect real estate prices:  denier vs believer neighborhoods [link]

Healthy wetlands could stave off rising seas [link]

Roger Pielke Jr: Everything you hear about billion dollar disasters is wrong [link]

Roger Pielke Jr:  The impact of flooding depends more on societal change than on climate change [link]

Roger Pielke Jr:  Democrat climate policies are ambitious but fail the reality test [link]

Roger Pielke Jr:  The surprising good news on the economic cost of disasters [link]

Roger Pielke Jr:  The world is not going to halve carbon emissions by 2030, so now what? [link]

Schellenberger: Why everything they say about California fires – including that climate matters most – is wrong [link]

Wildfires and wildland development [link]

Wildfires are causing California to become a net CO2 emitter, owing to poor timber management [link]

This is a staggering turn of events in California: “Only about 163,000 acres have burned this year, a fraction of the 632,000 or so scorched in the same period last year.” Did the PCG blackout help?  bloomberg.com/news/articles/

The radical reforms necessary to prepare California’s power system for the 21st century [link]

The smoldering state [link]

Read to the end to catch Adam Sobel’s and Park Williams’ explanations of causes of the CA fires. nytimes.com/2019/10/28/us/

Parts of California are too wildfire prone to insure [link]

Our nitrogen footprint: We’ve changed a life-giving nutrient into a deadly pollutant. How can we change it back? ensia.com/features/nitro

A carbon-neutral land sector by 2040?  Here’s how to get there [link]

Tibet’s rivers will determine Asia’s future [link]

India’s CO2 emissions are poised to slow sharply in 2019 [link]

Geopolitical gains and losses after the energy transition [link]

Valuing the Flood Risk Reduction Benefits of Florida’s Mangroves [link]

Since the 1950s, Florida authorities have spent $1.3 billion periodically bringing in sand. Despite a huge effort, nearly half the state’s 825 miles of beaches are now considered “critically eroded”. [link]

Robert Stavins: 50 years of policy evolution under the Clean Air Act [link]

The Department of Energy’s ‘super grid’ proposal for renewables [link]

“Energy and the military: Convergence of security, economic, and environmental decision-making.” [link]

A lawsuit was filed at the Court of Justice seeking to stop the European Union from counting wood as a renewable energy source. [link]

Promoting innovation for low carbon technologies [link]

The secret plan for decarbonization: how demand flexibility can save our grid [link]

Energy Futures Initiative new report on carbon renewal technologies [link]

Understanding the water-energy-food nexus in a warming climate j.mp/2PukUIT

Mike Hulme:  Is it too late to stop dangerous climate change? [link]

Energy efficiency can cut US energy use and greenhouse gas emissions in half by 2050 [link]

1.5 million packages a day: the internet brings chaos to New York City streets [link]

Complex system models are hard to understand fully because: 1) they have many component parts, many local variables & many system states i2insights.org/2017/03/07/com

About science & scientists

Looking in the right places to identify ‘unknown unknowns’ [link]

An entertaining spat among the climate alarmists: Michael Mann actually has the more defensible position on this one [link]

Dagfinn Reiersol:  Assessing the worst case scenario [link]

Research – who needs it? [link]

High suicide rates: elite colleges reconsidered [link]

NASA’s next 50 years [link]

Academics have internalised the culture of censorship on campus. [link]

Ross Gelbspan’s takedown of Naomi Oreskes [link]

Long read but worth it:  We must confront climate change with reason rather than emotion. [link]

“Bayesian Estimation with Informative Priors is Indistinguishable from Data Falsification.” With uninformative priors, Bayesian estimation produces essentially the same results as null hypothesis significance tests. New paper. ow.ly/J4PT50wYxP3

Experiments in nurturing classroom curiosity [link]

Academic travel culture is not only bad for the planet, it is also bad for the diversity and equity of research. [link]

The truth is not in the middle:  Journalistic norms of climate change bloggers [link]

Peter Gluckman:  Science in a global perspective – challenges ahead [link]

Shellenberger: Channelling the Malthusian Roots of Climate Extremism [link]

Math is racist according to Seattle Public Schools [link]

“I argue here that greater emphasis needs to be placed on the relationship between the psychological biases of scientists, the validity of scientific research, and the advancement of scientific progress.” [link]

198 responses to “Week in review – science edition”

  1. Thanks for another great overview and a feast of science and reason.

    Here’s an interesting paper on atmospheric physics that you missed:

    https://globalwarmingsolved.com/2013/11/summary-the-physics-of-the-earths-atmosphere-papers-1-3/

    • Amateurs buy into junk papers and think any document uploaded to the Internet can overthrow over 150 years of climate science.

      Science doesn’t work that way. Your link isn’t even peer reviewed — it’s just tossed up by…someone…who knows who, who can’t hack it in the peer reviewed literature.

      Learn about standards.

      • “… Wildland fires (in rangelands as well as forests) that burned 140 million acres annually in pre-industrial days, dropped to 30 million acres a year in the 1930s and to between 2 and 5 million by the 1960s. In the 1990s, however, as fuels continued to build up and suppression became ever more difficult, this trend began to reverse, with wildfires burning 8 million federal acres in 2000. …”

        Source: U.S. Department of the Interior, U.S. Department of Agriculture, et al, “Restoring Fire-Adapted Ecosystems on Federal Lands: A Cohesive Fuel Treatment Strategy,” page 44, April, 2002.

        https://web.archive.org/web/20041213044819/https://www.nrdc.org/land/forests/pfires.asp

        BTW, there are more trees in the US now than 100 years ago.

    • Is it only denayas who study radiosonde balloon data? Miskolczi and now the Connollys study actual data of atmospheric dynamics, and are ritually stoned as blasphemers. The Connollys discover a previously unknown density discontinuity above the tropopause and are punished for it.

Is atmospheric climate science like string theory in that it can only ever be theoretical? No. String theory is at least beautiful. CO2 warmist alarmism is crassly opportunist and viciously ugly.

      • These types don’t “discover” major anything. They make mountains out of molehills. They make mountains out of nothing.

If they really made a significant finding of anything major, the big journals would be thrilled to publish them. It’s telling that none did — maybe the authors didn’t even aim for them. That tells you more.

What I read of the Connollys was so pedantic it doesn’t even belong in freshman science classes. Such as, they make a major deal out of rearranging the ideal gas law!

      • DC
        “Mountains … molehills … pedantic …”
Such generalities suggest you have only ever read second or third hand reports of the Connollys’ work. Something more direct and specific is needed.

The general criticism by the Connollys, Miskolczi and others is that it is wrong to imagine that all heat movement and exchange in the atmosphere is radiative only. The radiative-only view is not a serious, rational position. Gravity exists, pressure exists, convection and diffusion also. The universe is more than 300,000 years old and is no longer in the radiation-dominated epoch; matter dominates now.

        To the Connollys and Miskolczi, Nikolov and Zeller etc., you can also add Albert Einstein who in 1917 pointed out that emission – radiation is not important to gas thermodynamics:

        During absorption and emission of radiation there is also present a transfer of momentum to the molecules. This means that just the interaction of radiation and molecules leads to a velocity distribution of the latter. This must surely be the same as the velocity distribution which molecules acquire as the result of their mutual interaction by collisions, that is, it must coincide with the Maxwell distribution. We must require that the mean kinetic energy which a molecule
per degree of freedom acquires in a Planck radiation field of temperature T be

        kT / 2

        this must be valid regardless of the nature of the molecules and independent of frequencies which the molecules absorb and emit.

“Regardless of the nature of the molecules and independent of the frequencies at which molecules absorb and emit.”

        http://inspirehep.net/record/858448/files/eng.pdf

      • phil salmon wrote:
Such generalities suggest you have only ever read second or third hand reports of the Connollys’ work. Something more direct and specific is needed.

        Nope, I read that paper.

        The general criticism by the Connollys and Miskolczi and others is that it is wrong to imagine that all heat movement and exchange in the atmosphere is radiative only.

        No one thinks that:

        https://scied.ucar.edu/radiation-budget-diagram-earth-atmosphere

Really happy to have found someone credible who is willing to be “environmentally incorrect” about the causes of climate change.

  3. See: Ancient Air challenges prominent explanation for a shift in glacial cycles.

Could it not be more likely that an elevated plate under the present Bering Sea delayed the previous early return of warm Pacific water into the Arctic Ocean? This is based on the time it takes sea levels to rise during interglacial periods.

    Carl Frohaug

Schmidt’s post on the CMIP6 models’ ECS is actually pretty good. The higher ECSs are incompatible with the historical record of the last 150 years and also with paleoclimatology. That means they must have undone some of the fortuitous cancellation of errors in their earlier models. In this situation, improving one of the many subgrid models can make the results worse.

    • Yes. It’s a nice change for him.

    • dpy6629 wrote:
      “The higher ECS’s are incompatible with the historical record of the last 150 years and also with paleoclimatology.”

      We can’t calculate ECS from the recent historical record because we don’t know aerosol forcing over that time. (A cooling factor)

      Also, ECS refers to *E*QUILIBRIUM climate sensitivity. That’s a few centuries in the future, again ruling out a value from observed climate change.

      Past warming episodes started from a different place — different initial conditions. So they are only useful up to a point, but not further.

    • Well perhaps I should have talked instead about TCR which is clearly incompatible with the historical record. However, the statement about ECS pretty much echoes Schmidt, so perhaps you should take it up with him.

      • How is today’s TCR (what’s your best value?) “clearly incompatible with the historical record?”

        And how can you calculate anything without knowing the history of aerosol emissions? (And knowing them by latitude, since latitude determines how much sunlight they will redirect.)

      • So does your concern about uncertainty in 20th century aerosol concentration also spill over to a concern about attribution of 20th century climate change to AGW?

There are uncertainties in estimating TCR from the historical record. Lewis and Curry did a good job of quantifying them using IPCC data. As I recall, their TCR estimate was less than half of that in these new models. The model values are outside the 95% confidence interval.
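
For context, the energy-budget method behind Lewis & Curry-style TCR estimates is essentially a single ratio. Here is a minimal sketch, with round illustrative numbers rather than their published inputs:

```python
# Minimal energy-budget sketch of transient climate response (TCR),
# the kind of calculation behind Lewis & Curry-style estimates:
#   TCR ~ F_2x * dT / dF
# All numbers below are illustrative assumptions, not published inputs.
F2X = 3.7   # W/m^2, forcing from a doubling of CO2 (assumed)
dT  = 0.8   # K, warming between the base and final periods (assumed)
dF  = 2.2   # W/m^2, change in total forcing over the same span (assumed)

tcr = F2X * dT / dF
print(round(tcr, 2))  # ~1.3 K, well under the TCR of the warmest CMIP6 models
```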

      • curryja wrote:
        So does your concern about uncertainty in 20th century aerosol concentration also spill over to a concern about attribution of 20th century climate change to AGW?

        No. I don’t understand why it would.

      • curryja wrote:
        So does your concern about uncertainty in 20th century aerosol concentration also spill over to a concern about attribution of 20th century climate change to AGW?

After thinking about it, I think I now know what you’re asking.

I think it’s that, since warming has been below what CO2e alone predicts, how do we know the difference is due to cooling aerosols rather than too much warming being attributed to CO2e?

        My response is, what natural factors are causing modern cooling or modern warming? Are there any known or any proven?

      • …blaming all of the recent warming on carbon dioxide emissions is incorrect, in my opinion. Solar indirect effects and multi-decadal oscillations of large scale ocean circulations have been effectively ignored in interpreting the causes of the recent warming.

        (from a recent post by dr curry which answered questions from a reporter)

      • The absence of evidence is not evidence of absence.

      • But it’s not evidence for, either.

      • David Appell: The absence of evidence is not evidence of absence.

Depends on how competently the evidence has been searched for. For most scientists, the absence of an effect of the luminiferous aether in the Michelson-Morley experiment is taken as evidence that the stuff isn’t there. There are a lot more examples, such as the absence of evidence that coffee drinking increases leukemia risk, or that megadoses of vitamin C reduce leukemia risk.

      • People looked all over for evidence of the ether. They didn’t find it.

        And since some studies do suggest Vit C reduces leukemia risk, that was found via evidence.

        The point is, anyone can always say that X causes climate change but we haven’t looked for it. But such a statement doesn’t say much. For example, Nir Shaviv has made claims about indirect solar effects, but there doesn’t seem to be much of any followup research. And it would exist if there were something to his claims. Scientists don’t suppress evidence or suppress investigation just to hang on to an existing theory. There are always younger (esp) scientists who are willing to consider and investigate new ideas if the evidence is there.

        Meanwhile, the evidence that atmospheric CO2 increases the greenhouse effect is so overwhelming that it’s solid and never going away, which is what I suspect some of you want.

      • David Appell: People looked all over for evidence of the ether. They didn’t find it.

        Yes. Absence of evidence can be evidence of absence.

      • How can any attribution of 20th Century warming be complete without the consideration of past global warming/cooling? As I documented in the previous post there are well over 150 studies suggesting some warming/cooling in some locations during the purported MWP/LIA anomalies. In both cases, 20 years ago it was more a case of absence of evidence rather than evidence of absence. That number of 150+ studies will only continue to grow in the future as it has in recent years.

Also, the issue of the effect of AMO/AMV on uncertainty needs to be addressed. Contrary to what has been stated elsewhere, this variability has been found to be up to 80 to 100 years (Wei 2012, Moore 2017, Knudsen 2014, Gray 2004) rather than 60 years.

Just correcting one of your lead-ins. It wasn’t Ross Gelbspan’s takedown of Ms Oreskes. It is Russell Cook’s. He created the website to look at the articles Mr Gelbspan and people associated with him have published. Spoiler alert: he wasn’t impressed.
    Ms Oreskes meets the criteria because she pushes her conspiracy theory as fact, despite having no evidence, and then ignores inconvenient truths. And yes, she deserves to be called out for her lying.

    • Thanks Chris, and I just found this myself in a google search of my name and “oreskes” in an effort to see if I was making any headway on getting this story out. Yes, “Ross Gelbspan” doesn’t expose Naomi Oreskes’ narrative wipeouts, Gelbspan is part of the problem along with Oreskes. When it comes to the overall smear of skeptic climate scientists as being ‘paid shills of the fossil fuel industry,’ my shorthand term for this is the “Gore-Oreskes-Gelbspan” accusation.

  6. The super grid article is just a desktop study that doesn’t deal with any of the technicalities of integrating a DC system as a substantial part of a functioning grid. Just saying you put a long distance DC line from A to B and it can move all this power doesn’t cut it. As people like Planning Engineer can eloquently explain, the devil is in the detail.

    • At WUWT, Rud Istvan has a post, in a multi-part series, where he describes how the spinning inertia of huge generators is important to frequency stabilization. If your electricity comes mostly from wind and solar, or if it’s wired in from far away, you need big expensive flywheel/generator gizmos called ‘synchronous condensers’. This is an arcane issue that politicians and renewable energy advocates probably have no clue about.

      https://wattsupwiththat.com/2019/11/09/pathway-2045-2/

Grid inertia is an AC frequency stability problem. Instability means blackouts even if the grid is on average adequately supplied with backup generation. Enormously heavy rotating generator masses (each hundreds of tons) in a conventional fossil-fuel-supplied grid provide (per Newton’s First and Second Laws) grid inertia. Neither wind nor solar supplies any grid inertia, by definition. Grid inertia is essential to stabilize grid frequency.

Grid inertia is an old and well known EE problem in conventional grids where demand is remote from supply. It is solved by putting in local costly ‘synchronous condensers’, essentially large undriven generators which just supply/absorb rotating kinetic AC energy when needed. …

      • Canman
Rud isn’t correct in his article. Synchronous condensers are mainly used for phase correction and voltage control. The inertia they provide is only a secondary effect – it wasn’t necessary for them at all in the days of neighbourhood thermal power stations. For modern installations to support windfarms, they often install a flywheel because the inertia of a condenser generator isn’t that much.
        Wind turbines can provide inertia if they are directly coupled to the grid, but those generators are too expensive. The ones they build are cheap and nasty.

      • Chris,

I think Rud’s point is that a grid in an area without a lot of big spinning generators is going to need a lot of expensive equipment to provide this spinning inertia. The quote above continues after the ellipsis:

… Below (per CtM request) is one of 6 that supply reactive current/frequency control to metropolitan Tokyo, since all its generating stations are remote. Tokyo has NO renewable electricity supply as SoCalEd envisions in its point 1. And Tokyo is a LOT smaller than SoCal.

SoCalEd would have to install hundreds of these Toshiba 200MVAR (multi hundred million dollar each) synchronous condensers to meet some minimum grid reliability spec given its 2045 goals. Their paper did not explain the known AC grid inertia engineering problem (complex mathematics where i [square root of minus one] computes frequency in the AC complex math plane of (a+bi)(c+di)).

        I suspect the “multi hundred million dollar each” might be over-embellished? Searching with Google, the total Synchronous Condenser market is about $600 million — example:

        https://www.globenewswire.com/news-release/2019/09/02/1909598/0/en/Global-Synchronous-Condenser-Market-Analysis-to-2024-High-Maintenance-Equipment-Cost-and-Availability-of-Low-Cost-Alternatives-Hinder-Growth.html

        The global synchronous condenser market is projected to reach USD 606 million by 2024 from an estimated USD 549 million in 2019, at a CAGR of 2% during the forecast period.

Yes – I know the article as I commented there, but the machines don’t supply frequency control, so his major premise is wrong. And a 200MVA generator would supply next to no inertia in the context of their grid. I don’t know the physical dimensions of their generators, but a 600MW 50Hz coal-fired unit is about 3000MWs. The sync condensers are probably 400-500MWs. The general rule of thumb is that you need the inertia to be 3-4 times the size of the grid and distributed throughout it.
At the EPRI Generation Workshop last year, Siemens were making a big push on supplying units to stabilize voltages on windfarms and gave an interesting presentation. Mainly installed at grid points close to the windfarm switchyard. As well as voltage correction, they probably smooth the input signal for the VSDs and filter the stray harmonics, but the maths lost me early on.
        The need for the units depends on how secure the grid is. I suspect north western Europe is going to need a lot. So will Texas and California, especially as they decommission thermal units. SA already dispatches wind off or goes to negative pricing for them to keep GTs on.

      • Canman. I know about Rud’s article and have commented there. The synchronous condensers in Tokyo aren’t used for frequency control so the central premise is wrong. Their inertia contribution is negligible in terms of the grid there. There is no dimensional data I can find but they are probably about 100MWs inertia. A 600MW 50Hz thermal unit is about 3000MWs. The usual rule of thumb is the inertia of the grid should be 3-4 times the actual load. Otherwise unit trips can cause voltage and frequency collapse because the protection can’t work fast enough – RoCoF problems. SA knows all about this. California probably will soon and Texas is at risk.
        At the EPRI Generation Workshop last year Siemens gave a presentation on the units they are installing at the grid points for windfarms. They are used for voltage control and smoothing as well as contributing inertia. That is why they are now building them with flywheels.

      • Chris, I must say this topic is very confusing. It’s leaving me with more questions than answers. Is grid inertia as big of an issue as Rud says? If Tokyo’s synchronous condensers aren’t providing all its inertia, do they also have a bunch of big flywheels? Rud’s multi hundred million each price does look high.

      • Canman
I will try to answer in a way that should give the clarity you seek. I am not sure of your understanding, so it may be telling you stuff you already know, for which I apologise in advance. I have also simplified it to keep it short, so there are a lot of qualifications and subtleties I left out. I also notice both of my similar comments have been posted. The first one disappeared so I assumed it had gone into the ether, hence the second, which has slightly different numbers but is more correct.
Because AC is a sine wave, the current can be out of sync with the voltage. This phase offset makes the system inefficient and increases losses. Power lines cause current to lead voltage; with transformers or motors, current lags. As the load changes, the phase offset changes. It didn’t matter when you had neighbourhood power plants, as the generators could alter their phase angle to minimise the offset.
As power stations moved further away from load centres, it got harder to get the balance right. It could be corrected by large capacitors in switchyards, but they used to be expensive. The grid operators then recommissioned the generators of old power stations that were near load centres to do the correction. They were used as synchronous motors (or condensers). They used a fair bit of power to do it (wasting energy) but they worked. As high-voltage capacitor banks became cheaper, they took over the role. However, they need to be switched in and out, whereas a synchronous condenser has a very wide operating curve and is rapidly adjustable. Grid operators have to be very conservative so they stay with what works.
Grids need inertia to limit the rate of change when there is an imbalance between generation and load – which is nearly all the time. And when a generator or power line that is a reasonable percentage of the load, like 3-5%, trips, it gets fairly serious. The frequency has to be kept within tight limits and when things go wrong, they happen quickly. Circuit breakers take 60-100 milliseconds to operate at best, so the rate of frequency change has to be slow enough to allow them to function. It wasn’t a problem when you had a thermal or hydro grid, as you had plenty of inertia that came automatically with the generation. However, as thermal plant gets replaced by asynchronous stuff like wind or solar (or DC lines), the grid loses inertia, making changes a lot more rapid. That is when the grid operators had to look at restoring inertia, and the synchronous condensers came back. Not much inertia, and they consume power, but a lot better than the none that asynchronous generation offers. They add flywheels, which consumes more power but gives a lot more inertia.
        So yes, if you want a solar or wind dominated grid, then you need to have the inertia created by rotating plant like synchronous condensers. Adds a lot to the cost of operation but they work. If you have a grid that doesn’t have power travelling long distances, then you don’t need them.
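
To see why the 3-4 times rule of thumb matters, here is a minimal swing-equation sketch of the rate of change of frequency (RoCoF) after a unit trip. The load, stored-energy and contingency figures are illustrative assumptions, not numbers from the comments above.

```python
# Minimal swing-equation sketch: rate of change of frequency (RoCoF)
# immediately after a sudden loss of generation. Illustrative numbers only.

def rocof_hz_per_s(f0_hz, lost_mw, kinetic_energy_mws):
    """RoCoF ~ f0 * dP / (2 * Ek), where Ek is the stored rotating kinetic
    energy of all online machines in MW·s (inertia constant H times rated MVA)."""
    return f0_hz * lost_mw / (2.0 * kinetic_energy_mws)

load_mw = 10_000                    # assumed system load
kinetic_energy_mws = 3.5 * load_mw  # the "3-4 times the load" rule of thumb
lost_mw = 500                       # assumed contingency: one large unit trips

print(round(rocof_hz_per_s(50.0, lost_mw, kinetic_energy_mws), 2))  # ~0.36 Hz/s
```

Halve the stored energy, say by retiring thermal plant without adding synchronous condensers or flywheels, and the initial RoCoF doubles, which is why protection settings become the binding constraint.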

Dark respiration rates decreasing with rising CO2 is interesting. So not only does photosynthesis double; the release of CO2 at night halves as CO2 concentration doubles.

  8. “Detected global agricultural greening from satellite data”

How can this be? The increase in CO2 is only 1 part in 10,000. It’s a trace gas! It can’t have any macroscopic effects!

    • The trace gas has the macroscopic effect of allowing photosynthesis and life on earth. Which is why you Malthusians hate it so much. It breeds plebs.

      • You missed my point. How can it allow increased photosynthesis but at the same time be too small to augment the greenhouse effect?

    • You’re right but it does seem unlikely that the equilibrium effect should be some multiple of the direct effect of 1 degree per doubling.

It seems the link between temperature trends and CO2 suggests a positive feedback amplifying the direct effect, but I find the Curry/Lewis estimate of around 1.5 degrees, based on that link, to be reasonable.

The importance of math skills in the real world confers what she calls an “unearned privilege” on those who are good at it.
    The only earned privilege may be strong muscles (called “bullying” in Seattle schools). The future looks bleak.

  10. Effects of atmospheric CO2 enrichment on net photosynthesis and dark respiration rates [link]

    Better late than never!

  11. Ancient air challenges prominent explanation for a shift in glacial cycles https://nature.com/articles/d41586-019-03199-8

    This excellent study by Y Yan et al gives a valuable snapshot of CO2 levels back about 2 million years by meticulous and painstaking analysis. This result casts doubt on the hypothesis of CO2 causation (not really a hypothesis – more an evidence-free statement of religious dogma) and indicates the explanation for the MPT (MPR) lies elsewhere.

Ralph Ellis’s dust hypothesis has some evidence to support it and would be more persuasive if it were not mixed with unnecessary denial of the primacy of the Milankovitch cycles. Every one of the 30 or so interglacials in the Pleistocene has occurred 6500 years after an obliquity peak, showing ocean heating by obliquity forcing with the expected thermal lag (the time it takes to shift ocean temperatures). What are the chances of all 30 of those coincidences? Here are the post-MPR interglacials plotted against 6500-year-lagged obliquity (thanks to Javier):

The best way to describe the alternation of glacials and interglacials is as “flicker” between two states that represent chaotic attractors. What if climate were slowly descending from a warmer, non-glaciated state before the Quaternary to a state of deep permanent glaciation in the future – something similar to the Saharan-Andean glaciation 450 Mya or the Sturtian and Marinoan glaciations 640-800 Mya? During the slow transition to the deep glacial state, flickering between warm and cold attractors will occur while the trajectory of the climate in its phase space feels a similar pull from both attractors. While the pull of both attractors is very similar, the system is finely balanced and a very weak forcing such as obliquity is sufficient to tip the system periodically between attractors. This state of affairs is well known and is called a periodically forced nonlinear oscillator. Look up references, for example, to periodically forced versions of the Belousov-Zhabotinsky spontaneously oscillatory reaction.

However, eventually the flicker stops. Before stopping it will slow down as it starts to feel the cold attractor more strongly than the warm. This has been the case ever since the MPR. Post-MPR, obliquity alone is not enough to precipitate an interglacial. Now all the ducks need to line up – all three Milankovitch oscillations need to peak together: obliquity, precession (and the modulation of precession) and eccentricity. (All these are really just different faces of the same thing.) When all the Milankovitch cycles peak together – roughly every 100,000 years – only then is the forcing enough to start a new interglacial.

Looking at this trend of ever-slower flickering between two attractors, what comes next? If the long-term secular trend of cooling continues, then eventually the flickering stops and we enter permanent deep glaciation. However, Javier has previously shown data indicating that the Pleistocene was coldest 2-3 glacial maxima ago, not now. But I’m not sure that the running mean is so important – it might be the increasing amplitude of glacial-interglacial fluctuations that matters. Plus, if the cycle is starting to slow down – the last glacial being >100,000 years – that also could signal an impending regime change.

    Flickering as an early warning signal

    Dakos V, van Nes EH, Scheffer M. Flickering as an early warning signal. Theoretical Ecology. 2013 Aug 1;6(3):309-17.

    Abstract

Most work on generic early warning signals for critical transitions focuses on indicators of the phenomenon of critical slowing down that precedes a range of catastrophic bifurcation points. However, in highly stochastic environments, systems will tend to shift to alternative basins of attraction already far from such bifurcation points. In fact, strong perturbations (noise) may cause the system to “flicker” between the basins of attraction of the system’s alternative states. As a result, under such noisy conditions, critical slowing down is not relevant, and one would expect its related generic leading indicators to fail, signaling an impending transition. Here, we systematically explore how flickering may be detected and interpreted as a signal of an emerging alternative attractor. We show that—although the two mechanisms differ—flickering may often be reflected in rising variance, lag-1 autocorrelation and skewness in ways that resemble the effects of critical slowing down. In particular, we demonstrate how the probability distribution of a flickering system can be used to map potential alternative attractors and their resilience. Thus, while flickering systems differ in many ways from the classical image of critical transitions, changes in their dynamics may carry valuable information about upcoming major changes.
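
To make the flickering idea concrete, here is a minimal sketch of a noisy, weakly forced double-well system that hops between its two attractors well before any bifurcation. The potential, forcing and noise parameters are arbitrary assumptions for illustration, not anything from the paper.

```python
import numpy as np

# Minimal sketch of "flickering": an overdamped state variable in a double-well
# potential V(x) = x**4/4 - x**2/2 with weak periodic forcing and strong noise.
# Euler-Maruyama integration; all parameters are illustrative.
rng = np.random.default_rng(0)
dt, n = 0.01, 200_000
A, period, noise = 0.10, 500.0, 0.35   # weak forcing, strong noise

x = np.empty(n)
x[0] = -1.0                            # start in the "cold" well
for i in range(1, n):
    t = i * dt
    drift = -(x[i - 1]**3 - x[i - 1]) + A * np.sin(2 * np.pi * t / period)
    x[i] = x[i - 1] + drift * dt + noise * np.sqrt(dt) * rng.standard_normal()

# With enough noise the system spends time in both wells ("flicker") even
# though the forcing alone is far too weak to tip it deterministically.
print(round((x > 0).mean(), 2))        # fraction of time in the "warm" well
```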

    • Ancient air challenges prominent explanation for a shift in glacial cycles https://nature.com/articles/d41586-019-03199-8 This paper triggered links to below.
Late in an earlier thread alankwelch says, “—- I still believe that is a dangerous technique and I think ‘Extrapolate and be damned’ should apply in most situations.” His piece was about ‘curve fitting’. The subject directly impacts the paper mentioned above.

The root cause in the paper may be directly (or indirectly) connected to how obliquity was determined. From the days of J.N. Stockwell there are now several equations for the change of obliquity. The common point about them is that when extrapolated into the first millennium BCE they all depart from the ancient measurements of the earth’s tilt. To add to the confusion, the curve fitting was based on a polynomial for secular changes only, and ignored any other inputs. In fact, others have shown that superimposed on the secular signal is a transient one, a step impulse with exponential decay. From the evidence it appears that the exponential signal/forcing is stronger than the secular planetary signal. A second factor in the formulae is the empirical part, 23.xx deg, which is itself an enigma: why it is so and not something else (from the evidence it was something else prior to ~2345 BCE).

This conflict was noted in this paper: http://adsabs.harvard.edu/full/1979A%26A….73..129W Title: The obliquity of the ecliptic. Author: Wittmann, A. See p. 3, first para. So when it comes to Milankovitch, while the logic is correct, basing a hypothesis on extrapolation of an incorrect curve fit is going to lead to problems and ‘Challenges’.

      • Some say, he bid the angels turn askance
        The poles of earth twice ten degrees or more
        From the sun’s axle; they with labour push’d
        Oblique the centric globe: some say, the sun
        Was bid turn reins from th’equinoctial road
        Like distant breadth to Taurus with the seven
        Atlantic Sisters, and the Spartan Twins,
        Up to the Tropic Crab; thence down amain
        By Leo, and the Virgin, and the Scales
        As deep as Capricorn, to bring in change
        Of seasons to each clime.

        Milton, Paradise Lost

      • “—- A long time ago, when I was angry and rose up from my dwelling and arranged for the Flood, I rose up from my dwelling, and the control of heaven and earth was undone. The very heavens I made to tremble, the positions of the stars of heaven changed, and I did not return them to their places. Even Erkalla quaked; the furrow’s yield diminished, and forever after (?) it was hard to extract (a yield). Even the control of heaven and earth was undone …”
        A Possible Babylonian Precursor to the Theory of ecpyrōsis by Marinus Anthony van der Sluijs https://www.mythopedia.info/vanderSluijs-CC9-2.pdf
        Or
        “— Now this has the form of a myth, but really signifies a declination of the bodies moving in the heavens around the earth, and a great conflagration of things upon the earth, which recurs after long intervals; —” Plato ‘Timaeus’ .
        Now I am sure that of all that had been said on this through the ages, no one had figured out what Plato really meant.

  12. I have enjoyed reading the articles about wildfires, wetlands, floodings, coastal ecosystems here and other, similar material elsewhere recently because they gave a balanced perspective that is usually missing in MSM coverage. Rather than being AGW obsessed, all the authors have tried to be circumspect and discuss other variables at play.

California has 39 million more people than it did 150 years ago. The urban/wilderness interface is exponentially greater than during the gold rush. There are millions of miles of electric lines now. Forest management practices have been irresponsible for decades. Non-native species of trees/natural cover have been introduced. Drought and dry winds are a natural part of the state’s history. But the CO2 Corps can only think about one thing.

Populations in coastal communities on the East Coast have likewise exploded over the last 150 years, with the inexorable hardening of the geomorphology, the compaction of soils under concrete, the massive abstraction of groundwater, the destruction of wetlands and the littering of the floodplain with buildings. But writings in the MSM about destruction from hurricanes, flooding and SLR focus only on the usual suspect.

    When pieces such as these actually try to look at the subjects with their inherent complexity, it’s not only refreshing but a lot more informative.

  13. “New elevation data triple estimates of global vulnerability to sea level rise and coastal flooding”

    Also this

    https://tambonthongchai.com/2019/11/03/lessweknow/

  14. “sea ice has been “relatively stable” since 1800, the W. Arctic was ice free for 2-5 months per year during the 1500s-1700s vs. <1 month today, and temps were warmer in the 1930s"

    There's probably more to Arctic sea ice variability than atmospheric phenomena. The geology of the Arctic, for example.

    https://tambonthongchai.com/2019/11/07/precipitous-decline-in-arctic-sea-ice-volume/

  15. Ireneusz Palmowski

    Arctic air will attack again in the Midwest.

Attempting to let millions of years of hot air out of old ice, Yan wants to “emphasize the need for a complete, undisturbed time series of greenhouse-gas concentrations that can be put into context with the climate cycles at that time. Let us hope that the planned new ice cores will provide that,” i.e., to better understand that previous periods of global warming were not preceded by (and therefore were not caused by) increases of atmospheric CO2.

These are not normal ice strata. There is no known place where a vertical deep ice core sample goes back that far. This layer of ice strata is the result of a unique ‘flow and fold’ phenomenon, I think, and requires very careful analysis to extract meaningful data about the climate that far in the past.

    • Wagathon wrote:
      “Attempting to let millions of years of hot air out of old ice”

      In a million years, the air’s temperature doesn’t come into equilibrium with the ice’s temperature?

      (It does)

  17. For those who follow the Paris Accord negotiations:

    Sale of indulgences dominates Madrid climate summit (my latest article)
    https://www.cfact.org/2019/11/07/sale-of-indulgences-dominates-madrid-climate-summit/

    The beginning: In Madrid the negotiators will be trying hard to finalize the Paris Accord emission trading scheme. The non-binding Paris Accord targets may have big bucks value for some developing countries and this has led to a paralyzing controversy.
    Emission trading means any country that does better than their target can sell the difference as indulgences, called carbon credits.

    This is potentially a huge market. The airlines are already promising to offset their enormous, jet propelled emissions, and most developed countries are not on track to hit their targets, so there are a lot of potential buyers.

    Countries like China and India have a lot to sell, despite their coal mania, because their targets are based on emissions per GDP, not emissions per se. Industrialization increases emissions but it increases productivity even more, a lot more. Booming Brazil also has a pot full of indulgences to sell, as may others.

    This big pile of old credits is the primary sticking point. The new trading scheme, called the Sustainable Development Mechanism (SDM), replaces the old Clean Development Mechanism (CDM) under the Kyoto Protocol, which expires next year. China and India have banked huge amounts of CDM carbon credits, thanks to their rapid industrialization. So much that some fear there is danger of a glut making credits worthless. China and the other big holders of CDM credits do not see it that way.

    There is also the feeling that the old CDM credits are mostly bogus. A 2016 EU-commissioned report found that just 2% of CDM projects were likely to ensure additional emissions reductions. Claiming emission reductions for projects that were going ahead anyway is cheating in many people’s eyes.

    There is also the issue called “double counting.” This is when Country A sells credits to Country B, but does not subtract the credit from its reduction claim. In effect both A and B are then counting the reduction the credit is supposedly based on. (Socialism is the land of complex rules.) Preventing this sort of double counting would require that every country open their emission books to outside scrutiny, which China especially does not want.

    For these reasons, among others, there is a widespread movement to not allow CDM credits to be brought forward into the SDM pot. China, India et al disagree strongly. They want to cash in on those old credits. With possibly billions of dollars at stake, this is by far the biggest issue on the Madrid summit table. After all, this whole show is about money, not climate.

    There is more in the article. This is just the beginning of my Madrid coverage.

    David

  18. “The crisis in the German wind energy industry is worsening. According to the ‘Süddeutsche Zeitung’, hard cuts at the largest German manufacturer Enercon will cost 3000 jobs.”

    According to Enercon chief executive Hans-Dieter Kettwig, “politicians have pulled the plug on wind energy and we have no battery.”

    They built up to 700 turbines a year in the last few years and only 65 this year! Enercon is market leader in Germany (17,000 out of 29,000 total).

    https://notrickszone.com/2019/11/10/germany-pulls-plug-on-wind-energy-wind-industry-in-severe-crisis-wind-giant-enercon-to-lay-off-3000/

    https://m.focus.de/finanzen/news/die-politik-hat-uns-den-stecker-gezogen-windkraft-riese-enercon-streicht-3000-jobs_id_11328472.html?fbclid=IwAR3J3Godo-AzcM-TiXffz8APemVXguPiwh-a0L9Tqi3fxGZ7OE8N2UAaPqo

  19. The Seattle Public Schools Ethnic Studies Advisory Committee (ESAC) has determined that math is subjective and racist. In a draft for its Math Ethnic Studies framework, the ESAC writes that Western mathematics is “used to disenfranchise people and communities of color.”

    https://www.worldtribune.com/math-is-racist-according-to-seattle-public-schools/

    If the USA allows this Talebanic / Khmer Rouge extremism to prevail at Universities, then the USA is in free fall back to the Stone Age.

Your link to “A 2-million-year-old ice core…” is to some group at Yale that breathlessly reports high correlations between temperature and CO2. The 1-million-year-old ice core also found high correlations. But the CO2 lagged the temperature by 600 (±400) years, so unless time reversal was in operation then, CO2 could not have caused the temperature increase. Conversely, the temperature increase could cause, or help cause, the CO2 increase, since the oceans heating up would give up CO2 to the atmosphere (Henry’s law).

    • I am completely baffled why…contrarians…think that CO2 today must follow temperature increases today.

      We’re pumping CO2 into the atmosphere extremely fast, regardless of the temperature. It’s an open spigot. In this case CO2 leads.

I’ve even seen Willie Soon on YouTube confused about this.

      • That graph doesn’t show anything of the sort, even if you read it under a microscope (which seems necessary).

What controls CO2’s rate of increase is, mostly, how fast we emit it & ENSOs.

      • None are so blind as those who will not see.

      • What controls CO2’s rate of increase is, mostly, how fast we emit it…

        Not so:

The only changes in the long-term growth rate of atmospheric CO2 are coincident with the two well known step rises in temperature circa 1980 & 2000. (and that’s all)…

      • Looks like two matching trends.

        So where do you think all the CO2 we’re emitting is going, if not into the atmosphere and ocean and biosphere?

      • Looks like two matching trends.

        Trends flat til about ’78, then a step rise, flat again til about ’03, then another step rise, then flat again. (david, can you read a graph?)

        The WFTs graph shows a decent fit with hadsst3sh (unlike ed’s with hadcrut4gl)

      • That graph has far too much noise to make sense of, especially for the conclusion you want. Let’s see the analytics.

      • So where do you think all the CO2 we’re emitting is going, if not into the atmosphere and ocean and biosphere?

Why is it a stretch to think that near 100% is sinking out when we often see it sinking at near 60% (with an airborne fraction near 40%)? Why is yours even a relevant question? Is 100% that much more than 60%? (i’ve been answering the same dumb question for years now)…
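
For context, the airborne fraction being argued about is just a ratio of the observed atmospheric increase to total emissions. A minimal sketch with approximate round numbers (the conversion of about 2.12 GtC per ppm is standard; the emission and growth figures are illustrative, not any particular year’s data):

```python
# Back-of-envelope airborne fraction: the share of emitted CO2 that shows up
# as an increase in atmospheric concentration. Round, illustrative numbers.
GTC_PER_PPM = 2.12               # ~2.12 GtC of atmospheric carbon per ppm CO2

emissions_gtc_per_yr = 10.0      # assumed fossil-fuel plus land-use emissions
atm_growth_ppm_per_yr = 2.4      # assumed observed CO2 growth rate

airborne_fraction = atm_growth_ppm_per_yr * GTC_PER_PPM / emissions_gtc_per_yr
print(round(airborne_fraction, 2))   # ~0.5: roughly half stays airborne
```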

      • And again, where has all the CO2 we’ve emitted gone to, if not into the atmosphere and ocean and biosphere?

        (A question Ed Berry won’t answer.)

      • I see two upward trends. No one expects the match to be one-to-one — there is a lot of noise in the climate system.

Why is it a stretch to think that near 100% is sinking out when we often see it sinking at near 60% (with an airborne fraction near 40%)

        Where is the evidence for that?

      • Let’s see the analytics.

And what analytics do human emissions have? (cumulative emissions graphs?) One thing is for sure, one doesn’t get a change in the atmospheric CO2 growth rate without a corresponding change in temperature. Simply doesn’t happen…

      • Where is the evidence for that?

        No need for evidence. Basic logic will tell you that if nature can handle 60%, then it can handle 100%. (whether it is or not is a different question entirely)…

      • Trends flat til about ’78, then a step rise, flat again til about ’03, then another step rise, then flat again.

        That graph has far too much noise to make sense of, especially for the conclusion you want.

        Really?

      • afonzarelli wrote:
        No need for evidence.

        Then you’re not doing science. Good bye.

David, are you really that stupid? Why do you need evidence to tell you that if 60% of aco2 can be removed then 100% could also be removed? BTW, there is no such thing as Good bye at a blog. i’ll hound you and your junk science as long as Dr Curry will allow me to. (so, hello my friend)…

      • Folks, it appears that i’ve just succeeded in getting rid of David Appell(!)

        (i mean, how cool is that?… 👍)

      • Fonz

        Now I know the purpose of your comments to me the other day. Hmmmm, sorry, but I think my theory holds up. There is a certain joy in having him here , though. 😋

Yeah, Ceresco, i left the door open to the possibility that you were right. But take, for example, my exchange with him here. It has all the looks of David having just cut and run after being skunked (and, at the end, throwing me a cheap shot on the way out the door). It certainly confirms your thesis again in this instance, but then you never know. i was earnestly hoping he wouldn’t give up so easily here. i’ve still got a lot to learn about the temp/carbon growth relationship. It’d be nice for David to hit me with his best shot.

        There is a certain joy in having him here , though. 😋

        Agreed. i do hope he sticks around for a while at Climate, etc — he was recently banned at his go to blog @ Spencer’s. (he’s got a great combo of high energy & moxy)…

  21. Ireneusz Palmowski

    There will be heavy snowfall in Scotland at night.

A new report from the GWPF, “Cold Water” by David Whitehouse, puts recent measurements of ocean heat content in a Holocene historical context. In short, the current rising trend in OHC might well be just business as usual in the ocean; work is needed to explain why it is anomalous in the context of continually changing OHC (climate change is the null hypothesis).

    https://www.thegwpf.org/content/uploads/2019/11/Cold-Water-Whitehouse.pdf

    Rosenthal and his team reviewed proxy records of intermediate water temperatures from sediment cores and corals in the equatorial Pacific and north eastern Atlantic Ocean, spanning 10,000 years beyond the instrumental record. These records suggest that intermediate waters were 1.5–2C warmer during the Holocene Thermal Maximum than in the last century. Intermediate water masses cooled by 0.9C from the Medieval Climate Anomaly to the Little Ice Age. These changes are significantly larger than the temperature anomalies documented in the instrumental record. One concludes that what is happening to the oceans today is not unusual.

    Meanwhile North Atlantic Ocean data is contradictory to the warming story:

    https://notrickszone.com/2019/11/07/the-region-from-50-70s-has-cooled-since-the-1980s-as-north-atlantic-ssts-have-cooled-1c-since-2004/

    • Words from an organization paid to deny AGW. Let’s see Whitehouse publish in the peer reviewed literature.

      • David

Firstly, I agree that people like Whitehouse should publish in the peer-reviewed realm. I also think that those who claim that temperature records have been ‘doctored’ should similarly provide evidence that it has been done, let alone done maliciously.

As regards your Exxon link via The Guardian, I am bemused as to the thrust of this controversy. The belief that CO2 was warming the atmosphere was prevalent as early as the 1970s Stockholm conferences on environment and climate change.

“The Wolfson Foundation also stepped in at this time (1970’s) with a series of grants that included one for the construction of the building that now bears Lamb’s name……CRU was saved, surviving throughout Lamb’s directorship mostly on private money, much of it associated, directly or indirectly, with the oil industry.” (CRU was partially funded by Shell)

By the early 80’s that belief was really mainstream:
“The Rockefeller grant was the most exciting for Lamb because Wigley had 3 articles on CO2/climate published in Nature in the early 1980’s”

Not to forget that back in the 1930’s G.S. Callendar had also worked out the theoretical impacts of CO2; work which Hansen then appeared to incorporate in GISS.

        So why this claim that Exxon kept secret something that was not actually a secret?

        tonyb.

    • DA
      He who pays the piper calls the tune.
      This holds for the goose and the gander, for climate research from all persuasions.
The issue is the null hypothesis – climate is always changing. The CO2 warming hypothesis either gives inadequate attention to the null hypothesis (this is just natural warming) or ignores it entirely, in so doing making the implicit and highly unscientific assumption / null hypothesis of Edenic stasis, that is, that in “normal natural” conditions climate never changes. Of course scientists don’t argue explicitly for stasis and acknowledge ice ages etc., but this doesn’t change the fact that normative stasis is implicit in much alarmist narrative when the possibility of an observed climate change being natural is ignored.

      • He who pays the piper calls the tune.

        Then why did Exxon scientists come to the same AGW conclusion in the late 1970s?

        Their projection to now has been spot on.

        https://www.theguardian.com/environment/climate-consensus-97-per-cent/2018/sep/19/shell-and-exxons-secret-1980s-climate-change-warnings#img-2

        Exxon scientist J.F. Black, memo of June 6, 1978:

        “What is considered the best presently available climate model for treating the Greenhouse Effect predicts that a doubling of the CO2 concentration in the atmosphere would produce a mean temperature increase of about 2 C to 3 C over most of the Earth. The model also predicts that the temperature increase near the poles may be two to three times this value.”

        – J.F. Black, Products Research Division, Exxon Research and Engineering Co.

      • David
How do we know that an ECS of 2-3 C is “spot on”?
        It’s a nice political compromise; some are arguing 5-6 C, others 0-1 C.
        In a big corporation lots of reports are written and most never see the light of day. I know, since I work in one (yes, you can call me Dr Evil if you like; we don’t do oil and gas, we make scientific instruments – perhaps just as bad :-).

      • Curious George

        DA, may I remind you that the Guardian is not peer reviewed?

  23. I sometimes think that the skeptic and luke-warmer communities are their own worst enemies. Why is (almost) everyone ignoring these inconvenient observational facts?

    https://astroclimateconnection.blogspot.com/2019/11/red-pill-4-here-is-what-real-world.html

    • more barking at the moon

      • Steven,

        This is a genuine question. What is wrong with the evidence that I [and others] have presented on this topic? Please show me the error of my ways! Do you have another viable explanation for the observations that I outline below?

        I have proposed a simple resonance “model” where I look at the most prominent cycles that arise when there is constructive interference between spring tides and seasonal (solar-driven) atmospheric pressure variations. The most prominent cycles that emerge from this “model” are the 9.3-year (semi-draconic eclipse cycle) and the 3.8-year (seasonal spring-tidal cycle) cycles.

        Next, I go out and look at the summer-time (DJF) anomaly of the peak latitude of the Sub-Tropical High-Pressure Ridge over eastern Australia (on time scales between ~ 2 and 30 years) and find that the only significant (p > 0.99) cycles are those at 9.3 and 3.8 years. Not only that, the 9.3-year cycle in the anomaly of the peak latitude is in phase with the 9.3-year Draconic lunar tidal cycles between 1860 and 2010.

        Is it any wonder that I propose that the observed anomalies in the peak-latitude of the sub-tropical ridge are being affected by the lunisolar tides?

        The data shows that the observed cycles in the peak-latitude anomaly of the sub-tropical ridge have an amplitude that is <= 1.0 degree in latitude.
        Next, I show that a simple 1-degree shift in the latitude of the peak of the sub-tropical high-pressure ridge (above 3,000 m) would naturally produce a change in Earth's rotation rate of ~ -7.7 μsec. This compares favorably with the -12.57 μsec change in the length-of-day (LOD) that is associated with the effect of 18.6-year lunar tides upon the earth’s rotation. In other words, it is entirely reasonable that atmospheric lunisolar tides could explain what is observed.
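
        For anyone who wants to check the order of magnitude of the angular-momentum argument above, here is a minimal sketch in Python. The mass of ridge air (above ~3,000 m) taking part in the 1-degree shift is not given in the comment, so m_air below is a purely hypothetical round number; everything else is standard angular-momentum bookkeeping, not the commenter's actual calculation.

```python
import math

# Minimal sketch of the angular-momentum bookkeeping behind the LOD estimate above.
# m_air is a HYPOTHETICAL round number for the mass of ridge air (above ~3,000 m)
# taking part in the 1-degree poleward shift; it is not given in the comment.
LOD = 86400.0            # s, length of day
I_EARTH = 8.0e37         # kg m^2, Earth's moment of inertia (approximate)
R_EARTH = 6.37e6         # m, Earth's mean radius
m_air = 1.0e16           # kg, hypothetical shifted air mass
lat1, lat2 = 30.0, 31.0  # degrees, a 1-degree poleward shift of the ridge peak

# Co-rotating air moved poleward sits closer to the spin axis (r = R cos(lat)),
# so the atmosphere loses axial angular momentum, the solid Earth spins up
# slightly, and the length of day decreases.
d_inertia = m_air * R_EARTH**2 * (
    math.cos(math.radians(lat2))**2 - math.cos(math.radians(lat1))**2
)
d_lod = LOD * d_inertia / I_EARTH  # seconds

print(f"Change in LOD ~ {d_lod * 1e6:.1f} microseconds")  # of order -7 microseconds
```

        With that hypothetical air mass the sketch gives a few microseconds of LOD change with the same sign as the figure quoted above, which is all it is meant to show.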

        In addition, I find that the excess rainfall events in the Australian state of Victoria exhibit distinct cycles at both 4 years and 18.6 years. [N.B. The 4-year cycle is simply a manifestation of the 4 + 4 + 3 + 4 + 4 = 19-year Metonic lunar cycle that aligns the lunar atmospheric tides with the seasons.] The meteorological underpinnings of this result are supported by:

        B. Timbal and W. Drosdowsky, The relationship between the decline of Southeastern Australian rainfall and the strengthening of the subtropical ridge, Int. J. Climatol. 33: 1021–1034 (2013)
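
        A small side check on the Metonic figure in the bracketed note above, using standard astronomical values only (nothing here comes from the comment itself):

```python
# Side check on the Metonic figure quoted above: 235 synodic (lunar) months is very
# nearly 19 tropical years, which is why a 19-year cycle realigns lunar phases with
# the seasons (and why a ~4-year sub-cycle shows up as 4 + 4 + 3 + 4 + 4 = 19).
SYNODIC_MONTH = 29.530589  # days
TROPICAL_YEAR = 365.24219  # days

print(235 * SYNODIC_MONTH)   # ~6939.69 days
print(19 * TROPICAL_YEAR)    # ~6939.60 days
print(sum([4, 4, 3, 4, 4]))  # 19
```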

        I recognize that a lot of people think that the lunisolar tides are too weak to have any noticeable effect upon the Earth's climate. However, papers by Sidorenkov and Li et al. have shown that cyclical changes in lunar tidal forcing produce atmospheric tides with periods of 27.3 days and 13.6 days. Li and his associates have detected these atmospheric tides in the tropical troposphere at heights above the 700 hPa isobaric surface (~ 3000m).

        In addition, Krahenbuhl et al. have shown that the 27.3-day lunar atmospheric tides can influence the short-term midlatitude general circulation pattern by deforming the high latitude Rossby long-waves. They also find that these tidal effects have their greatest influence in the upper troposphere of both hemispheres.

        Finally, you have the papers by myself and Nikolay Sidorenkov that clearly show that the lunar atmospheric tides have an influence on regional climate variables on inter-annual to decadal time scales.

        Why are you so quick to dismiss this important observational evidence?

      • Ireneusz Palmowski

        Maybe it is worth thinking about the influence of the Moon on geomagnetic changes? Gravity of the Moon must affect Earth’s liquid core.

      • At a minimum, the 18.6 year lunar cycle is very well documented in Sea Level Rise papers, alongside the 60 year cycle.

      • Ireneusz Palmowski – You do know that the long-term Perigean New/Full moon tidal cycle that best aligns with the seasons is 59.75 years. It is really a 239-year cycle that, with a slippage of 4.0 years, is locked into the 243-year Transit Cycle of Venus.

        59.75 ___________________________59.75 years
        59.75 + 59.75 = ___________________119.50 years
        59.75 + 59.75 + 59.75 = _____________179.25 years
        59.75 + 59.75 + 59.75 + 59.75 =_______239.00 years
        59.75 + 59.75 + 59.75 + 59.75 + 4.0____243.00 years
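
        A minimal sketch checking only the arithmetic of the multiples listed above; it says nothing about whether the 59.75-year cycle itself is physically meaningful:

```python
# Arithmetic check only: the multiples of the claimed 59.75-year cycle listed above.
base = 59.75  # years
for n in range(1, 5):
    print(f"{n} x {base} = {n * base:.2f} years")
print(f"4 x {base} + 4.0 = {4 * base + 4.0:.2f} years (compare: 243-year Venus transit cycle)")
```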

      • They are still looking for unicorns… In the end they find that with the full moon the weather patterns change…globally! :D

      • Frankclimate, No, it’s scientists trying to explain the real world using evidence and logic while they are being flippantly dismissed by those who are scientifically challenged.

        Wilson, I.R.G. Are the Strongest Lunar Perigean Spring Tides Commensurate with the Transit Cycle of Venus?, Pattern Recogn. Phys., 2, 75-93

        https://www.researchgate.net/publication/285568721_Are_the_Strongest_Lunar_Perigean_Spring_Tides_Commensurate_with_the_Transit_Cycle_of_Venus

        Can you fault the scientific reasoning of this peer-reviewed paper? NO! I thought not!

      • Ian, you want a free review? Submit your code and data to a good journal.
        Me? I charge. Nobody owes you an education or refutation.

      • Steven Mosher,
        Almost everything that I have posted is already in peer-reviewed journals. I have posted the links for these published papers multiple times here and all you have done is either disparaged them or ignored them.
        I have never said one thing negative about your research or posts here. I will leave it at that.

        Your education will start soon as an international group publishes a paper that supports some of the main claims of my research.

        Ian, you want a free review? Submit your code and data to a good journal.
        Me? I charge. Nobody owes you an education or refutation.

        That’s good, mosh… Just keep your mouth shut. (that’ll make you even more irrelevant than you already are)

      • Ian, in the 19th century there appeared to be a pattern where the first half of each decade was wet and the latter half droughty, looking at the work done by SEA.

        Timbal et al. reckoned the STR intensity was a permanent fixture, but they were wrong: blocking highs sprang up out of nowhere in the austral winter of 2017. Their failure to announce that the climate has changed is a missed opportunity.

        After the no show by El Nino, do you support the prospect of a strong La Nina late 2020?

  24. Ireneusz Palmowski

    Arctic air reaches the Gulf of Mexico over Texas and Louisiana.

  25. The blue holes are revered among divers for their deep, clear waters. They are also important keepers of the scientific record. [link]

    The blue hole Caribbean palaeo proxy of hurricanes, going back a few millennia, is valuable information that contributes some factual rationality to the debate on hurricanes, extreme weather events and climate change.

    The research finds:

    “Much of the last 1,500 years has been much more active [in hurricanes] than anything we’ve seen in the last hundred.”

    I suspect that this factual rationality will be ignored entirely by the MSM, and public figures like Andrew Cuomo will continue to make statements that hurricanes are new and human caused.

  26. An example of IPCC SR15 contradicting the Climate Emergency

    Proponents of the climate emergency scare often cite last year’s IPCC SR15 report as their scientific basis, but it is no such thing. The widely proclaimed 12 year deadline is just for holding warming to 1.5 degrees, which the IPCC says is almost impossible. They also say that exceeding that warming is in no way catastrophic. The difference between the impact of 1.5 degrees of total warming (just 0.5 degrees of new warming) and 2.0 degrees is tiny. Thus the IPCC report actually contradicts the unfounded claim of a climate emergency.

    Here is an example from the SR15 Summary for Policy Makers:

    “B.1.2. Temperature extremes on land are projected to warm more than GMST (high confidence): extreme hot days in mid-latitudes warm by up to about 3°C at global warming of 1.5°C and about 4°C at 2°C, and extreme cold nights in high latitudes warm by up to about 4.5°C at 1.5°C and about 6°C at 2°C (high confidence).”

    Extreme hot days, which are uncommon to begin with, warm by up to about just one degree going from 1.5 degrees to 2.0 degrees of total warming. This is certainly not an emergency. It is probably not even detectable due to natural variability.

    Note too that extreme cold nights warm even more, which is arguably a good thing. Given that extreme cold is reportedly more dangerous than extreme heat, going to 2.0 degrees might even be net beneficial. Richard Tol’s integrated assessment model actually says this.

    The proponents of the scary emergency need to be called out on this contradiction. No science supports the climate emergency.

    David

  27. In the real world the Δ 33 oC difference does not exist.

  28. Ireneusz Palmowski

    A snowstorm develops over the Great Lakes.

  29. “Identifying key driving processes of major recent heat waves”

    The key driver is short-term solar variability; these heatwaves would not otherwise have occurred. The July 2010 and July 2015 heat anomalies were on my solar-based forecasts. I can show hundreds of hindcasts of what drives such weekly-to-monthly scale anomalies. The 2012 US summer heat was low solar giving negative NAO/AO conditions, which was wet and cool in the UK, the reverse of the positive NAO in summer 2018 when the UK had drought. Here’s what drives the greater heatwave events like 1976, 2003, 2006, and 2017-2018:

    https://www.linkedin.com/pulse/major-heat-cold-waves-driven-key-heliocentric-alignments-ulric-lyons/

    • Ireneusz Palmowski

      The stronger the magnetic field, the larger the magnetosphere. Some 20,000 times stronger than Earth’s magnetic field, Jupiter’s magnetic field creates a magnetosphere so large it begins to avert the solar wind almost 3 million kilometers before it reaches Jupiter. The magnetosphere extends so far past Jupiter it sweeps the solar wind as far as the orbit of Saturn.
      The magnetosphere of Saturn is the cavity created in the flow of the solar wind by the planet’s internally generated magnetic field. Discovered in 1979 by the Pioneer 11 spacecraft, Saturn’s magnetosphere is the second largest of any planet in the Solar System after Jupiter. The magnetopause, the boundary between Saturn’s magnetosphere and the solar wind, is located at a distance of about 20 Saturn radii from the planet’s center, while its magnetotail stretches hundreds of Saturn radii behind it.

  30. In the real world the Δ 33 oC difference does not exist.
    The satellite-measured Tearth.mean = 288 K is actually the Earth’s without-atmosphere Effective Temperature, Te.earth = 288 K.

  31. “An entertaining spat among the climate alarmists..”
    Interesting that the list of climate crisis action priorities keeps growing, and pushing actual fossil fuel alternatives further down the to-do-next list.
    I always wondered what would happen once everyone realized 100% wind/solar wasn’t realistic. The answer appears to be “climate justice” which is just socialism rebranded, Nord Stream 2, and nuclear powered China.
    50 years from now, today’s climate alarmists will be embarrassing enough to be politely forgotten and the “climate heroes” will be the folks who pioneered battery improvements for electric cars, and developments in new nuclear technologies.
    Both are likely to come from outside the developed world thanks to European and American universities’ focus on producing more activists than scientists.

  32. A Planet-Without-Atmosphere Effective Temperature Formula, the Te formula based on radiative equilibrium and on the Stefan-Boltzmann Law:
    Te = [ (1-a) S / 4 σ ]¹∕ ⁴
    which is in common use right now, is actually an incomplete Te formula, and that is why it gives us very confusing results.

  33. Comparison of results planet Te (Tsat.mean) measured by satellites, and the planet Te calculated by Complete Formula:

    Te = [ Φ (1-a) S (β*N*cp)¹∕ ⁴ /4σ ]¹∕ ⁴ (1)

    Planet or Moon…..Te.satellites measured…..Te.incomplete formula…..Te.complete formula
    Mercury ………….340 K ……………437,30 K …………346,11 K
    Earth ……………..288 K ……………255 K ……………288,36 K
    Moon ……………..220 Κ ……………271 Κ ……………221,74 Κ
    Mars ………………210 K ……………211,52 K …………215,23 K
    To be honest with you, at the beginning I was surprised myself by these results.
    You see I was searching for a mathematical approach…
    As you can see Te.complete.earth = 288,36 K.
    That is why I say in the real world the Δ 33 oC difference does not exist.

  34. Comparison of results planet Te (Tsat.mean) measured by satellites, and the planet Te calculated by Complete Formula:

    …………………Te = [ Φ (1-a) S (β*N*cp)¹∕ ⁴ /4σ ]¹∕ ⁴……….. (1)

    Planet or…….. Te.satellites……… Te.incomplete…….Te.complete
    Moon ………….measured …………..formula ………….formula
    Mercury ………….340 K ……………437,30 K …………346,11 K
    Earth ……………..288 K ……………255 K ……………288,36 K
    Moon ……………..220 Κ ……………271 Κ ……………221,74 Κ
    Mars ………………210 K ……………211,52 K …………215,23 K

    To be honest with you, at the beginning I was surprised myself by these results.
    You see I was searching for a mathematical approach…
    As you can see Te.complete.earth = 288,36 K.
    That is why I say in the real world the Δ 33 oC difference does not exist.

  35. Shellenberger’s article on fires in California is excellent.

    It’s obvious that climate activists are so desperate to maximise political gains from media attention on fires that they themselves are starting fires 🔥.

    “I don’t think the president is wrong about the need to better manage,” said Keeley. “I don’t know if you want to call it ‘mismanaged’ but they’ve been managed in a way that has allowed the fire problem to get worse.”

    When you see the colossal political gains that accrue to the left from fires 🔥, it is obvious that no incentive exists at all for the state to do anything to reduce fires – on the contrary, it is clear that they have acted in their own self-interest by aiding the increase in fires, both by inaction in preventing the buildup of combustible woodland material, and also by quietly encouraging student activist hotheads to go out into the woods and start fires. In view of the deaths from fires, especially in 2018, it is highly likely that several uber-activist progressive families are hiding perpetrators of homicide.

  36. The Planet-Without-Atmosphere Effective Temperature Complete Formula derives from the incomplete Te formula, which is based on radiative equilibrium and on the Stefan-Boltzmann Law:

    Te = [ (1-a) S / 4 σ ]¹∕ ⁴

    This formula is in common use right now, but it is an incomplete Te formula and that is why it gives us very confusing results.

    The Complete Formula is also based on radiative equilibrium and on the Stefan-Boltzmann Law. It is obtained by adding to the incomplete Te formula the new parameters Φ, N, cp and the constant β:

    ……………….Te = [ Φ (1-a) S (β*N*cp)¹∕ ⁴ /4σ ]¹∕ ⁴ …………….(1)

    (…….)¹∕ ⁴ is the fourth root
    S = So(1/R²), where R is the average distance from the sun in AU (astronomical units)

    S – is the solar flux W/m²
    So = 1.362 W/m² (So is the Solar constant)
    Planet’s albedo: a

    Φ – is the dimensionless solar irradiation spherical surface accepting factor
    Accepted by a Hemisphere with radius r sunlight is S*Φ*π*r²(1-a), where Φ = 0,47 for smooth surface planets, like Earth, Moon, Mercury and Mars…
    (β*N*cp)¹∕ ⁴ is a dimensionless Rotating Planet Surface Solar Irradiation Absorbing-Re-emitting Developing Factor

    β = 150 days*gr*oC/rotation*cal – is a Rotating Planet Surface Solar Irradiation Absorbing-Re-emitting Universal Law constant

    N rotations/day, is planet’s sidereal rotation period
    cp – is the planet surface specific heat

    cp.earth = 1 cal/gr*oC, it is because Earth has a vast ocean. Generally speaking almost the whole Earth’s surface is wet. We can call Earth a Planet Ocean.
    cp = 0,19 cal/gr*oC, for dry soil rocky planets, like Moon and Mercury.
    Mars has an iron oxide Fe2O3 surface, cp.mars = 0,18 cal/gr*oC
    σ = 5,67*10⁻⁸ W/m²K⁴, the Stefan-Boltzmann constant

    This Universal Formula (1) is the instrument for calculating a Planet-Without-Atmosphere Effective Temperature. The results we get from these calculations are almost identical with those measured by satellites.
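
    For readers who want to verify the worked calculations that follow, here is a minimal sketch of both formulas exactly as quoted in this thread, using the commenter’s own parameter values; reproducing the quoted numbers only checks the arithmetic, not the physical interpretation.

```python
# Minimal sketch of both formulas as quoted in this thread, with the commenter's
# own parameter values. Reproducing the quoted numbers only checks the arithmetic;
# it does not validate the physical interpretation.
SIGMA = 5.67e-8  # W m^-2 K^-4, Stefan-Boltzmann constant
S0 = 1362.0      # W m^-2, solar constant at 1 AU (written "1.362" above)
BETA = 150.0     # the commenter's constant, days*gr*oC/(rotation*cal)
PHI = 0.47       # the commenter's solar irradiation accepting factor

def te_incomplete(S, a):
    """Conventional effective temperature: Te = [(1-a) S / (4 sigma)]^(1/4)."""
    return ((1 - a) * S / (4 * SIGMA)) ** 0.25

def te_complete(S, a, N, cp):
    """The commenter's formula (1): Te = [Phi (1-a) S (beta*N*cp)^(1/4) / (4 sigma)]^(1/4)."""
    return (PHI * (1 - a) * S * (BETA * N * cp) ** 0.25 / (4 * SIGMA)) ** 0.25

# albedo a, rotations per day N, surface cp (cal/gr*oC), mean distance R (AU), as quoted
bodies = {
    "Mercury": dict(a=0.088, N=1 / 58.646, cp=0.19, R=0.387),
    "Earth":   dict(a=0.30,  N=1.0,        cp=1.0,  R=1.0),
    "Moon":    dict(a=0.136, N=1 / 29.5,   cp=0.19, R=1.0),
    "Mars":    dict(a=0.25,  N=1.0,        cp=0.18, R=1.5),
}

for name, b in bodies.items():
    S = S0 / b["R"] ** 2
    print(f"{name:8s} incomplete: {te_incomplete(S, b['a']):6.1f} K   "
          f"complete: {te_complete(S, b['a'], b['N'], b['cp']):6.1f} K")
# Prints ~437/346 K (Mercury), ~255/288 K (Earth), ~268/222 K (Moon), ~212/215 K (Mars);
# the Moon's "incomplete" value depends on the albedo assumed (the comment quotes 271 K).
```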

    1. Earth’s-Without-Atmosphere Effective Temperature Calculation:

    So = 1.362 W/m² (So is the Solar constant)
    Earth’s albedo: aearth = 0,30
    Earth is a rocky planet, Earth’s surface solar irradiation accepting factor Φearth = 0,47
    (Accepted by a Smooth Hemisphere with radius r sunlight is S*Φ*π*r²(1-a), where Φ = 0,47)
    β = 150 days*gr*oC/rotation*cal – is a Rotating Planet Surface Solar Irradiation Absorbing-Re-emitting Universal Law constant
    N = 1 rotation per day, is Earth’s sidereal rotation period
    cp.earth = 1 cal/gr*oC, it is because Earth has a vast ocean. Generally speaking almost the whole Earth’s surface is wet. We can call Earth a Planet Ocean.
    σ = 5,67*10⁻⁸ W/m²K⁴, the Stefan-Boltzmann constant
    Earth’s-Without-Atmosphere Effective Temperature Complete Formula Te.earth is

    Te.earth = [ Φ (1-a) So (β*N*cp)¹∕ ⁴ /4σ ]¹∕ ⁴
    Τe.earth = [ 0,47(1-0,30)1.362 W/m²(150 days*gr*oC/rotation*cal *1rotations/day*1 cal/gr*oC)¹∕ ⁴ /4*5,67*10⁻⁸ W/m²K⁴ ]¹∕ ⁴ =

    Τe.earth = [ 0,47(1-0,30)1.362 W/m²(150*1*1)¹∕ ⁴ /4*5,67*10⁻⁸ W/m²K⁴ ]¹∕ ⁴ =

    Te.earth = 288,36 Κ
    And we compare it with the

    Tsat.mean.earth = 288 K, measured by satellites.
    Those two temperatures, the calculated one, and the measured by satellites are almost identical.

    2. Moon’s Effective Temperature Calculation:

    So = 1.362 W/m² (So is the Solar constant)
    Moon’s albedo: amoon = 0,136

    Moon’s sidereal rotation period is 27,3216 days. But Moon is Earth’s satellite, so the lunar day is 29,5 days

    Moon is a rocky planet, Moon’s surface solar irradiation accepting factor
    Φmoon = 0,47
    (Accepted by a Smooth Hemisphere with radius r sunlight is S* Φ*π*r²*(1-a), where Φ = 0,47)

    cp.moon = 0,19cal/gr oC, moon’s surface is considered as a dry soil
    β = 150 days*gr*oC/rotation*cal – it is a Rotating Planet Surface Solar Irradiation Absorbing-Re-emitting Universal Law constant

    N = 1/29,5 rotations per day
    σ = 5,67*10⁻⁸ W/m²K⁴, the Stefan-Boltzmann constant

    Moon’s Effective Temperature Complete Formula Te.moon:

    Te.moon = [ Φ (1-a) So (β*N*cp)¹∕ ⁴ /4σ ]¹∕ ⁴

    Te.moon = { 0,47 (1-0,136) 1.362 W/m² [150* (1/29,5)*0,19]¹∕ ⁴ /4*5,67*10⁻⁸ W/m²K⁴ }¹∕ ⁴ =
    Te.moon = 221,74 Κ

    The newly calculated Moon’s Effective Temperature differs only by 0,8% from that measured by satellites!

    Tsat.mean.moon = 220 K, measured by satellites.

    • Fascinating, Christos. But surely the Earth is a watery planet, not a rocky one. Is this a big difference?

      • Thank you David.

        Yes there is a big difference.

        Earth’s Effective Temperature Complete Formula Te.earth:

        Te.earth = [ Φ (1-a) So (β*N*cp)¹∕ ⁴ /4σ ]¹∕ ⁴

        Τe.earth = [ 0,47(1-0,30)1.362 W/m²(150 days*gr*oC/rotation*cal *1rotations/day*1 cal/gr*oC)¹∕ ⁴ /4*5,67*10⁻⁸ W/m²K⁴ ]¹∕ ⁴ =

        Τe.earth = [ 0,47(1-0,30)1.362 W/m²(150*1*1)¹∕ ⁴ /4*5,67*10⁻⁸ W/m²K⁴ ]¹∕ ⁴ =
        Te.earth = 288,36 Κ

        Moon’s Effective Temperature Complete Formula Te.moon:

        Te.moon = [ Φ (1-a) So (β*N*cp)¹∕ ⁴ /4σ ]¹∕ ⁴

        Te.moon = { 0,47 (1-0,136) 1.362 W/m² [150* (1/29,5)*0,19]¹∕ ⁴ /4*5,67*10⁻⁸ W/m²K⁴ }¹∕ ⁴ =
        Te.moon = 221,74 Κ

        cp.earth = 1 cal/gr*oC
        cp.moon = 0,19 cal/gr*oC
        The cp.earth is 5,263 times bigger.

        If Earth was not a Planet ocean, but a rocky planet, then:

        Te.earth = 288,36 Κ * [(0,19)¹∕ ⁴ ]¹∕ ⁴ =

        Te.earth = 288,36 Κ * 0,9014 = 259,93 K

        If the Earth were a rocky planet, the Te.earth would be
        Te.earth = 259,93 K ≈ 260 K
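
        A one-line arithmetic check of that scaling, assuming formula (1) as written (cp enters as cp raised to the 1/16 power):

```python
# Arithmetic check of the "rocky Earth" counterfactual: in formula (1), cp enters as
# ((cp)^(1/4))^(1/4) = cp^(1/16), so swapping cp = 1 for cp = 0.19 rescales Te.
factor = 0.19 ** (1 / 16)         # ~0.9014
print(round(288.36 * factor, 2))  # ~259.93 K, the value quoted above
```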

        Thank you David

  37. 3. Mars’ Effective Temperature Calculation:

    Te.mars
    (1/R²) = (1/1,5²) = 1/2,25, so Mars receives 2,25 times less solar irradiation intensity than Earth does

    Mars’ albedo: amars = 0,25
    N = 1 rotation per day; planet Mars completes one rotation around its axis in about 24 hours

    Mars is a rocky planet, Mars’ surface solar irradiation accepting factor: Φmars = 0,47

    cp.mars = 0,18 cal/gr oC; iron oxide is prevalent on Mars’ surface

    β = 150 days*gr*oC/rotation*cal – it is a Rotating Planet Surface Solar Irradiation Absorbing-Re-emitting Universal Law constant

    σ = 5,67*10⁻⁸ W/m²K⁴, the Stefan-Boltzmann constant
    So = 1.362 W/m² the Solar constant

    Mar’s Effective Temperature Complete Formula is:

    ……………Te.mars = [ Φ (1-a) So (1/R²) (β*N*cp)¹∕ ⁴ /4σ ]¹∕ ⁴

    Planet Mars’ Effective Temperature
    Te.mars is:
    Te.mars = [ 0,47 (1-0,25) 1.362 W/m²*(1/2,25)*(150*1*0,18)¹∕ ⁴ /4*5,67*10⁻⁸ W/m²K⁴ ]¹∕ ⁴ =
    Te.mars = 215,23 K

    The calculated Mars’ effective temperature Te.mars = 215,23 K is only 2,4% higher than that measured by satellites

    Tsat.mean.mars = 210 K !

    4. Mercury’s Effective Temperature Calculation:

    Te.mercury

    N = 1/58,646 rotations per day; planet Mercury completes one rotation around its axis in 58,646 days

    Mercury’s average distance from the Sun is R = 0,387 AU. The solar irradiation on Mercury is (1/R²) = (1AU/0,387AU)² = 2,584² = 6,6769 times stronger than that on Earth.
    Mercury’s albedo is: amercury = 0,088
    Mercury is a rocky planet, Mercury’s surface solar irradiation accepting factor: Φmercury = 0,47

    Cp.mercury = 0,19 cal/gr oC, Mercury’s surface is considered as a dry soil

    β = 150 days*gr*oC/rotation*cal – it is a Rotating Planet Surface Solar Irradiation Absorbing-Re-emitting Universal Law constant

    σ = 5,67*10⁻⁸ W/m²K⁴, the Stefan-Boltzmann constant
    So = 1.362 W/m² the Solar constant

    Mercury’s Effective Temperature Complete Formula is:

    ……..Te.mercury = [ Φ (1-a) So (1/R²) (β*N*cp)¹∕ ⁴ /4σ ]¹∕ ⁴

    Planet Mercury’s effective temperature
    Te.mercury is:

    Te.mercury = { 0,47(1-0,088) 1.362 W/m²*6,6769*[150* (1/58,646)*0,19]¹∕ ⁴ /4*5,67*10⁻⁸ W/m²K⁴ }¹∕ ⁴ =
    Te.mercury = 346,11 K

    The calculated Mercury’s Effective Temperature Te.mercury = 346,11 K is only 1,80% higher than that measured by satellites

    Tsat.mean.mercury = 340 K !

    We have collected the results calculated with the Effective Temperature Complete Formula.
    Comparison of results planet Te (Tsat.mean) measured by satellites, and the planet Te calculated with Complete Formula:

    …………………Te = [ Φ (1-a) S (β*N*cp)¹∕ ⁴ /4σ ]¹∕ ⁴……….. (1)

    Planet or…….. Te.satellites……… Te.incomplete…….Te.complete
    Moon ………….measured …………..formula ………….formula
    Mercury ………….340 K ……………437,30 K …………346,11 K
    Earth ……………..288 K ……………255 K ……………288,36 K
    Moon ……………..220 Κ ……………271 Κ ……………221,74 Κ
    Mars ………………210 K ……………211,52 K …………215,23 K

    To be honest with you, at the beginning I was surprised myself by these results.
    You see I was searching for a mathematical approach…
    As you can see Te.complete.earth = 288,36 K.

    That is why I say in the real world the Δ 33 oC difference does not exist.

    • Impressive result.
      Are there any parameters in this calculation that could be considered “tunable” in the derivation of these closely agreeing temperatures with and without an atmosphere; such as,

      cp – the planet surface specific heat
      Φ – the planet’s solar surface irradiation accepting factor

      Could the effect of an atmosphere be “hiding” in these parameters?

      • Thank you phil salmon.

        You are the first to notice the Effective Temperature Complete Formula calculations.
        It is a Planet-Without-Atmosphere Effective Temperature Complete Formula.

        So, we can confirm now with great confidence, that a Planet or Moon Without-Atmosphere Effective Temperature Complete Formula, according to the Stefan-Boltzmann Law, is:

        Te.planet = [ Φ (1-a) So (1/R²) (β*N*cp)¹∕ ⁴ /4σ ]¹∕ ⁴ (1)

        We have collected the results now:

        Comparison of results planet Te (Tsat.mean) measured by satellites, and the planet Te calculated by Complete Formula:

        …………………Te = [ Φ (1-a) S (β*N*cp)¹∕ ⁴ /4σ ]¹∕ ⁴……….. (1)

        Planet or…….. Te.satellites……… Te.incomplete…….Te.complete
        Moon ………….measured …………..formula ………….formula
        Mercury ………….340 K ……………437,30 K …………346,11 K
        Earth ……………..288 K ……………255 K ……………288,36 K
        Moon ……………..220 Κ ……………271 Κ ……………221,74 Κ
        Mars ………………210 K ……………211,52 K …………215,23 K

        To be honest with you, at the beginning I was surprised myself by these results.
        You see I was searching for a mathematical approach…

        As you can see Te.complete.earth = 288,36 K.

        That is why I say in the real world the Δ 33 oC difference does not exist.

        These data, those calculated with the Planet-Without-Atmosphere Effective Temperature Complete Formula and those measured by satellites, are almost the same, very much alike.

        They are almost identical, within limits, which makes us conclude that the Planet-Without-Atmosphere Effective Temperature Complete Formula

        ……….Te.planet = [ Φ (1-a) So (1/R²) (β*N*cp)¹∕ ⁴ /4σ ]¹∕ ⁴ (1)

        can calculate a planet’s mean temperature.

        It is a situation that happens once in a lifetime in science. The evidence existed and had been measured, but it remained isolated information so far.
        It was not obvious that one could combine this evidence in order to calculate the planet’s temperature.

        A planet-without-atmosphere effective temperature calculating formula

        …………………. Te = [ (1-a) S / 4 σ ]¹∕ ⁴

        is incomplete because it is based only on two parameters:

        1. On the average solar flux S W/m² on the top of a planet’s atmosphere and
        2. The planet’s average albedo a.

        Those two parameters are not enough to calculate a planet’s effective temperature. A planet is a celestial body with more major features to consider when calculating its effective temperature.

        The planet-without-atmosphere effective temperature calculating formula has to include all the planet’s major properties and all the characteristic parameters.

        3. The sidereal rotation period N rotations/day

        4. The thermal property of the surface (the specific heat cp)

        5. The planet surface solar irradiation accepting factor Φ (the spherical surface’s primer quality).
        For Mercury, Moon, Earth and Mars without atmosphere Φ = 0,47.

        Earth is considered without atmosphere because Earth’s atmosphere is very thin and it does not affect Earth’s Effective Temperature.

        Altogether these parameters are combined in a Planet-Without-Atmosphere Effective Temperature Complete Formula:

        ……….Te.planet = [ Φ (1-a) So (1/R²) (β*N*cp)¹∕ ⁴ /4σ ]¹∕ ⁴ (1)

        A Planet-Without-Atmosphere Effective Temperature Complete Formula produces very reasonable results:

        Te.earth = 288,36 K, calculated with the Complete Formula, which is identical with the
        Tsat.mean.earth = 288 K, measured by satellites.

        Te.moon = 221,74 K, calculated with the Complete Formula, which is almost the same with the
        Tsat.mean.moon = 220 K, measured by satellites.

        A Planet-Without-Atmosphere Effective Temperature Complete Formula gives us planet effective temperature values very close to the satellite-measured planet mean temperatures (the satellite-measured planet effective temperatures).

        Thus we have to conclude here that the satellites measured planet mean temperatures should be considered as the satellite measured Planet Effective Temperatures.

        It is a Stefan-Boltzmann Law Triumph!

        And as for NASA, all these new discoveries were possible only due to NASA satellites planet temperatures precise measurements!

        Dear phil salmon, thank you again for your reply.

        Christos Vournas

  38. Surely, this new study should be easy to debunk based on expanding land development and growing investments into properties located in at-risk locations due to misguided hazard insurance policies. https://news.yahoo.com/hurricanes-scale-katrina-harvey-now-200100164.html

  39. Interesting article on the precautionary principle as it applies to Japan’s curtailment of nuclear electricity generation in the wake of the Fukushima incident. It’s a reminder that an “abundance of caution” isn’t without its own costs and risks.

    • Nuclear power is the safest way to generate electricity by a wide margin and always has been.

      The health impacts of nuclear power station accidents, emissions, and all other life cycle health impacts are minuscule compared with the health impacts of other generating technologies.

      See these articles on the true impact of nuclear accidents and what the response should be. The Editorial is here:

      Thomas, P.; May, J. Coping after a big nuclear accident. Process Safety and Environmental Protection 2017, 112, 1-3. https://www.sciencedirect.com/science/article/pii/S0957582017303166

      The main articles are here:
      https://www.sciencedirect.com/science/article/pii/S095758201730352X

      They are open access.

      • Peter repeats the same one-eyed links every time. Try the official literature instead. No one credible disputes linear no-threshold dosing for instance. The safety record of nuclear is not disputed – it is appalling and the impact on populations in the literature is compelling.

        What we have instead is a handful of zealots on blogs dragging their knuckles and thumping their chests about how safe nuclear meltdowns are. The irony lost on them is that nuclear can be made meltdown-resistant.

        https://www.scientificamerican.com/article/how-safe-are-old-nuclear-reactors-lessons-from-fukushima/

      • Ellison is repetitive, a bore and a troll. He quotes junk from Scientific American and clearly hasn’t read or understood the peer-reviewed papers I linked.

        In comments elsewhere he demonstrates he hasn’t the faintest clue how learning rates are calculated or what they mean. He also continually misrepresents what the Lang (2017) paper says, by implying I said the costs of nuclear could be reduced by 90%. The paper and I have never said anything like that. His continual misrepresentations even after being corrected repeatedly demonstrate he is a troll, and ignorant of the facts on energy matters.

      • I have quoted MIT, a recent review from the NRC on linear no threshold and dozens of peer-reviewed journal articles last time this particular zealot repeated his rubbish.

        “The benefits for the global economy and human wellbeing could have been substantial: clean, safe, reliable power supply, 4.2 to 9.5 million deaths and 69 to 174 Gt CO2 emissions avoided, and nuclear providing up to 66% of the world’s power at around 10% of its current cost.” Peter Lang

        So if we got rid of the greenies nuclear power cost drops to 10% of current costs. It’s monumental nonsense and then he lies about it.

      • Repetitious disinformation. I never said “if we got rid of the greenies nuclear power cost drops to 10% of current costs.” It’s monumental nonsense and then he lies about it. As usual, it’s a total distortion of what the paper says and shows, and of what I said.

        I never said that going forward from here we could get costs down to 10% of current cost. That would take 7 capacity doublings from now – i.e., increasing the cumulative global capacity of construction starts from 500 GW to 64,000 GW. What I said was that if the disruption had not occurred, so that nuclear learning rates had continued at the pre-1967 rates and nuclear capacity had increased at the actual rate it did from 1968 to 2015, then the cost of nuclear power would be around 10% of current costs, and around 5% if the pre-1967 learning rate and the accelerating deployment rate had continued (see Table 3 in Lang 2017 https://www.mdpi.com/1996-1073/10/12/2169/htm).
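
        For what it’s worth, the doubling arithmetic in the paragraph above can be checked with a short sketch; the learning rates themselves come from Lang (2017), not from this sketch:

```python
import math

# Sketch of the doubling arithmetic in the paragraph above (illustration only;
# the actual learning rates are taken from Lang 2017, not derived here).
start_gw, end_gw = 500.0, 64000.0
doublings = math.log2(end_gw / start_gw)
print(f"doublings = {doublings:.0f}")  # 7

# If overnight capital cost fell to 10% of today's over those doublings, the implied
# constant per-doubling cost ratio and learning rate would be:
cost_ratio = 0.10 ** (1 / doublings)
print(f"implied learning rate ~ {1 - cost_ratio:.0%} per doubling")  # ~28%
```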

        Read the paper, study the figures and tables and learn how learning rates are calculated. And stop your continual misrepresentations and disinformation and lying.

      • RIE
        No one credible disputes linear no-threshold dosing for instance.

        LNT is politically mandated pseudoscience. There is a threshold below which IR is harmless or beneficial. In a few seconds on Google Scholar it is possible to find highly robust and repeatable biological data that show that low dose ionizing radiation REDUCES cancer rates and INCREASES lifespan in mice.

        https://journals.sagepub.com/doi/full/10.2203/dose-response.06-115.Sakai

        https://ecee.colorado.edu/~ecen5009/Resources/Radiation/Ina2004.pdf

        https://www.karger.com/Article/Abstract/22024

        https://www.researchgate.net/profile/Kazuko_Fujita/publication/228727346_Suppression_of_carcinogenic_process_in_mice_by_chronic_low_dose_rate_gamma-irradiation/links/09e415064d862cfda7000000/Suppression-of-carcinogenic-process-in-mice-by-chronic-low-dose-rate-gamma-irradiation.pdf

        I did my PhD in radiation biology.
        So I know the science of radiation carcinogenesis.
        I also know that you know essentially nothing on this subject and your comments on the subject are pure 100% politically motivated bluster.
        And if that’s what they are on this subject, it’s not hard to guess that this might characterise most or all of your comments here, on all subjects except possibly water system ecology.

        Both LNT and CAGW are politically motivated pseudoscience and represent rational and intellectual, technical and political failures of what you might call “western society” (USA and other regimes that are superficially democratic but ruled by a hard left-progressive deep state). Both will in due course have catastrophic consequences for those societies.

      • RIE
        The safety record of nuclear is not disputed – it is appalling and the impact on populations in the literature is compelling.

        This is also not true.
        The actual deaths and injury from nuclear incidents is low compared to heavy industry in general.
        However journalists, activists and political scientists like to create virtual deaths from radioactivity releases by just combining very ambitious theoretical rates of cancer causation from the linear no-threshold (LNT) hypothesis of carcinogenesis, with (also partly computer modelled) radiation releases. This allows them to create out of nothing as many deaths as they think they can get away with. None of this is real.

        That is why the LNT is so cherished by the anti-nuclear establishment. It has weak scientific grounding in the complex field of epistemology, but is soundly refuted by animal studies of radiation carcinogenesis. Its value is political, not scientific, allowing virtual deaths to be created from nuclear incidents.

        In this regard they have won. The general public believe that hundreds of thousands if not millions have died from Chernobyl, when the true number is in the hundreds or at most low thousands. Likewise for Three Mile Island, where the actual health consequences were essentially zero.

        In Chernobyl anti-nuclear sentiment was responsible for more deaths than radioactivity. Many villagers were evacuated from regions affected by ever lower radioactivity levels which eventually were so low that they overlapped with background. This evacuation of more and more people was in response to political hectoring from the international antinuclear establishment. Forced relocation from villages where these generally old people had lived their whole lives in Ukraine and Belarus led to substantial shortening of their life-span. (I attended the 1996 10-year anniversary conference on the Chernobyl accident in Minsk where these data were presented). Unnecessary relocation of villagers killed more people than the radiation itself.

        Just like with CO2 global warming, the consequences of the wrecking ball hysterical response to hypothetical warming catastrophe will be much worse than any harmful effects of climate change itself, if these even exist at all. Welcome to an irrational post-scientific society.

      • It has weak scientific grounding in the complex field of epistemology,

        epidemiology, not “epistemology”

      • Yeah – like I said no one credible. Any official agency disagrees. Why do zealots take on these lost crusader causes? CAGW and nuclear meltdowns?

        e.g. https://www.arpansa.gov.au/sites/default/files/rhsac_-_position_statement_on_the_use_of_the_lnt_1_may_2017.pdf

        Advanced nuclear designs can be made safer and cheaper with 21st century technology and materials. One of the problems at Fukushima was hydrogen – generated from melted zirconium fuel cladding reacting with superheated steam – explosions. With General Atomics silicon carbide fuel cladding now commercially available this is a thing of the past.

        http://www.ga.com/general-atomics-awarded-doe-funding-to-pursue-advanced-reactor-research

        Phil’s lost none of his call-out proclivities or sense of humor. I obviously know something. Nuclear is one of the things I have followed over the years. It was all the rage when I was a kid. But overall the accident record is appalling, spewing novel radionuclides into air and ocean. The effects are subtle and far-reaching if remote. The effect on people and environments at large is an unknown.

      • Robert
        You’re good at winding people up and I guess I make it too easy. My bad – must do better. The agencies setting radiation safety standards no doubt think they are taking a position of caution and conservativeness, but combining this with the inevitable uncertainties of biology and epidemiology leads inexorably to an overly alarmist and reactionary position. It must seem very reasonable politically to fudge the risk down to zero, but this leads to all those virtual victims of radiation releases, where hundreds of millions exposed to minuscule doses with supposedly tiny risks are, by a mathematical flourish, raised into an army of the dead to fight against nuclear technology.

        I agree that emerging nuclear technologies are much more promising than the generation 1-2 of all the current fleet. There’s a paradox there – if the nuclear industry had not been so besieged by the antinuclear lobby then the advanced designs might already be generating power instead of still on the drawing board after 70 years. But we have to play the ball as it lies and let’s hope that some are given a chance to prove their worth.

      • Phil Salmon,

        Do you have any comments or points to discuss to add to the seven comments I added starting with this one: https://judithcurry.com/2019/11/09/week-in-review-science-edition-112/#comment-902814

      • Empirical data clearly shows the LNT hypothesis is wrong below the threshold. Radiation is beneficial below the threshold. The indirect consequences of the regulatory authorities accepting “the LNT hypothesis as a conservative model for estimating radiation risk” are that their acceptance has contributed to the world missing out on :

        The benefits for the global economy and human well-being could have been substantial: clean, safe, reliable power supply, 4.2 to 9.5 million deaths … avoided, and nuclear providing up to 66% of the world’s power at around 10% of its current cost.

  40. Ireneusz Palmowski

    The polar vortex pattern is formed in the upper stratosphere depending on the influx of ozone. A strong current in the stratosphere is created by the temperature difference.

    The distribution of ozone during the solar minimum over the polar circle is highly asymmetrical.

    The frosty front moves East.

  41. Ireneusz Palmowski

    Look at the geomagnetic field in the north and the stratospheric polar vortex.


    When the magnetic field of solar wind weakens, the polar vortex pattern depends on the geomagnetic field.

  42. The Pacific has again not done anything last month. Curiouser.

    And what are all these pretty pictures with vague assertions about solar winds and geomagnetism attached? Normally you would need some sort of math in a journal article.

  43. Ireneusz Palmowski

    NOVEMBER 11, 2019
    The National Weather Service predicts that more than 250 new cold records could be tied or set during the first half of the week.

  44. Why is Minnesota one of the coldest places in a warming earth?
    My local story: https://www.mprnews.org/story/2019/11/07/why-is-minnesota-one-of-the-coldest-places-in-a-warming-earth
    Paul Huttner does an okay job. However, we aren’t talking about long time frames.

  45. Each science week in review you put up at least one paper that is a long hard slog. This time it’s this: New paper evaluating UKESM1 climate model.

    There may be more. I have not worked through the whole list yet.

  46. Ireneusz Palmowski

    A positive temperature anomaly in Iceland during the winter of 1708/1709 indicates a breakdown of the polar vortex. Was it the fault of CO2 growth?

    https://en.wikipedia.org/wiki/Great_Frost_of_1709

    • Ireneusz Palmowski

      It would be interesting to find out what was going on in the eastern USA during this time. Is there any record of weather events in the USA that goes back as far as 1709?

  47. Ireneusz Palmowski

    The temperature on the coast of the Gulf of Mexico in the US drops below 0 C.

  48. Ireneusz Palmowski

    Another cold front from Canada reaches North Dakota.

  49. fyi: The continuation of the Lippie et al paper about the Pacific deep warming long-term behavior (it was described here: https://judithcurry.com/2019/01/14/ocean-heat-content-surprises/)
    was just released: https://tos.org/oceanography/article/atlantic-warming-since-the-little-ice-age. It’s about the Atlantic and its out-of-equilibrium state in 1750 (the zero line of most GCMs). A nice read!

  50. I have made my 2019-2020 cold season Arctic Oscillation forecast public so I will post it here too. I expected the North Atlantic Oscillation to remain generally more positive with the blocking effects of the new Northeast Pacific warm blob, as in Jan-Feb 2014, and had warned of flood risks to the UK during the negative forecast periods.

    The dates indicate changes in vectors of the Arctic Oscillation values. So where I have for example positive from Oct 2nd, it would start shifting toward the positive from then even if still in negative values at the time.
    – – and + + for stronger negative and positive signals respectively.

    09.17 – – (verified)
    10.02 + (verified)
    10.16 – (verified)
    10.27 + + (verified + not ++)
    11.05 – (verified)
    11.15 + +
    12.12 – –
    12.22 +
    01.06 +
    01.27 –
    02.05 – –
    02.16 + (weak)
    02.20 –
    02.28 + +
    03.31 – (weak)

    Major snowstorms typically occur when the index is turning towards the positive after being strongly negative.

  51. Following are abstracts from four papers on the LNT hypothesis.

    But first, see Slide 29 here: http://efn-usa.org/component/k2/item/693-why-fear-of-radiation-is-wrong-personally-scientifically-environmentally-wade-allison-uk-1-news

    “Chernobyl early firefighters – Not LNT

    • Above 4000 mSv 27/42 died from acute radiation sickness (ARS) in 2/3 weeks
    • Below 2,000 mSv zero of 195 died
    • Acute threshold about 2000 mSv (ARS)”

    • Luckey (2006), Radiation Hormesis: The Good, the Bad, and the Ugly
      https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2477686/

      “Abstract
      Three aspects of hormesis with low doses of ionizing radiation are presented: the good, the bad, and the ugly. The good is acceptance by France, Japan, and China of the thousands of studies showing stimulation and/or benefit, with no harm, from low dose irradiation. This includes thousands of people who live in good health with high background radiation. The bad is the nonacceptance of radiation hormesis by the U. S. and most other governments; their linear no threshold (LNT) concept promulgates fear of all radiation and produces laws which have no basis in mammalian physiology. The LNT concept leads to poor health, unreasonable medicine and oppressed industries. The ugly is decades of deception by medical and radiation committees which refuse to consider valid evidence of radiation hormesis in cancer, other diseases, and health. Specific examples are provided for the good, the bad, and the ugly in radiation hormesis.”

    • Vaiserman et al (2018), Health Impacts of Low-Dose Ionizing Radiation: Current Scientific Debates and Regulatory Issues
      https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6149023/

      “Abstract
      Health impacts of low-dose ionizing radiation are significant in important fields such as X-ray imaging, radiation therapy, nuclear power, and others. However, all existing and potential applications are currently challenged by public concerns and regulatory restrictions. We aimed to assess the validity of the linear no-threshold (LNT) model of radiation damage, which is the basis of current regulation, and to assess the justification for this regulation. We have conducted an extensive search in PubMed. Special attention has been given to papers cited in comprehensive reviews of the United States (2006) and French (2005) Academies of Sciences and in the United Nations Scientific Committee on Atomic Radiation 2016 report. Epidemiological data provide essentially no evidence for detrimental health effects below 100 mSv, and several studies suggest beneficial (hormetic) effects. Equally significant, many studies with in vitro and in animal models demonstrate that several mechanisms initiated by low-dose radiation have beneficial effects. Overall, although probably not yet proven to be untrue, LNT has certainly not been proven to be true. At this point, taking into account the high price tag (in both economic and human terms) borne by the LNT-inspired regulation, there is little doubt that the present regulatory burden should be reduced.”

    • Doss (2018), Are We Approaching the End of the Linear No-Threshold Era?
      http://jnm.snmjournals.org/content/59/12/1786.long

      Abstract

      “The linear no-threshold (LNT) model for radiation-induced cancer was adopted by national and international advisory bodies in the 1950s and has guided radiation protection policies worldwide since then. The resulting strict regulations have increased the compliance costs for the various uses of radiation, including nuclear medicine. The concerns about low levels of radiation due to the absence of a threshold have also resulted in adverse consequences. Justification of the LNT model was based on the concept that low levels of radiation increase mutations and that increased mutations imply increased cancers. This concept may not be valid. Low-dose radiation boosts defenses such as antioxidants and DNA repair enzymes. The boosted defenses would reduce the endogenous DNA damage that would have occurred in the subsequent period, and so the result would be reduced DNA damage and mutations. Whereas mutations are necessary for causing cancer, they are not sufficient since the immune system eliminates cancer cells or keeps them under control. The immune system plays an extremely important role in preventing cancer, as indicated by the substantially increased cancer risk in immune-suppressed patients. Hence, since low-dose radiation enhances the immune system, it would reduce cancers, resulting in a phenomenon known as radiation hormesis. There is considerable evidence for radiation hormesis and against the LNT model, including studies of atomic bomb survivors, background radiation, environmental radiation, cancer patients, medical radiation, and occupational exposures. Though Commentary 27 published by the National Council on Radiation Protection and Measurements concluded that recent epidemiologic studies broadly support the LNT model, a critical examination of the studies has shown that they do not. Another deficiency of Commentary 27 is that it did not consider the vast available evidence for radiation hormesis. Other advisory body reports that have supported the LNT model have similar deficiencies. Advisory bodies are urged to critically evaluate the evidence supporting both sides and arrive at an objective conclusion on the validity of the LNT model. Considering the strength of the evidence against the LNT model and the weakness of the evidence for it, the present analysis indicates that advisory bodies would be compelled to reject the LNT model. Hence, we may be approaching the end of the LNT model era.”

    • Calabrese, E.J. (2017). The threshold vs LNT showdown: Dose rate findings exposed flaws in the LNT model part 2. How a mistake led BEIR I to adopt LNT
      https://www.sciencedirect.com/science/article/pii/S0013935116309343?via%3Dihub

      Abstract

      “This paper reveals that nearly 25 years after the National Academy of Sciences (NAS), Biological Effects of Ionizing Radiation (BEIR) I Committee (1972) used Russell’s dose-rate data to support the adoption of the linear-no-threshold (LNT) dose response model for genetic and cancer risk assessment, Russell acknowledged a significant under-reporting of the mutation rate of the historical control group. This error, which was unknown to BEIR I, had profound implications, leading it to incorrectly adopt the LNT model, which was a decision that profoundly changed the course of risk assessment for radiation and chemicals to the present.”

    • Peter Lang: Following are abstracts from four papers on the LNT hypothesis.

      Thank you for the links. And for quoting the abstracts.

  52. However, what is relevant for assessing the relative safety of nuclear power is how it ranks against other electricity generation technologies on the basis of life cycle deaths per TWh. On this basis, nuclear power is the safest way to generate electricity by a wide margin, and has been since the first nuclear power station began sending power to the grid in 1954.

    Technology Deaths per TWh
    Coal, India 99
    Coal, China 90
    Coal, World 60
    Coal, USA 15
    Natural gas 4
    Hydro 1.4
    Solar 0.44
    Wind 0.15
    Nuclear 0.04
    Nuclear 0.09

    Source: https://www.nextbigfuture.com/2012/06/deaths-by-energy-source-in-forbes.html

    Other sources, see Note [VIII] here: https://www.mdpi.com/1996-1073/10/12/2169/htm#B23-energies-10-02169

    Note that wind and solar are not comparable with the dispatchable technologies. Their LCA deaths per TWh would be significantly higher when the necessary grid and storage needed to make them dispatchable are included.

    • Grim reading this.
      Yet I still would go for more caution in looking at those figures.
      From my own experience (I spent 55 years, mostly in fossil power generation), in the first decades I would describe conditions as deplorable to abysmal in terms of risk to health. Asbestos was also a major hazard, yet I would say the greater killer was tobacco, which was also a site hazard. But the latter does not attract compensation of any sort, so it doesn’t show.
      From early days I was deeply involved in all related matters – technical engineering, financial, and the health of subordinates. The long arm of the law has considerable reach when it comes to warranted engineers. It is all in the past now; only memories are left, since all the sites are today brownfield sites. We draw a thick line on the balance sheet.
      That is where the picture changes with nuclear. From my perspective, and it is a limited one, end of generation is not end of life. Following Fukushima lately there seems to be no end to the misfortune.
      Shakespeare comes to mind: ” the evil lives after them, the good is interred with their bones”.

  53. melitamegalithic,

    Thank you for your feedback. I was also involved in the electricity generation industry for a substantial part of my career, including nuclear power spanning about 20 years.

    The deaths per TWh figures quoted here are from fairly recent published studies. They change only very slowly. The EPA attributes about 15,000 to 35,000 deaths per year in the USA to air pollution from coal-fired power stations. The ranking of the deaths/TWh of the technologies by authoritative studies has been virtually unchanged since the late 1970s.

    The deaths from the Chernobyl and Fukushima accidents are trivial compared with accidents in the other energy chains.


    Figure: Risks of severe accidents in the different energy chains in the EU. The original source of this chart is no longer available; the original data are in Figures 7 and 8 here.

    These studies on the impact of major nuclear power accidents are well worth reading:

    Thomas, P.; May, J. Coping after a big nuclear accident. Process Safety and Environmental Protection 2017, 112, 1-3. https://www.sciencedirect.com/science/article/pii/S0957582017303166

    Thomas, P. Quantitative guidance on how best to respond to a big nuclear accident. Process Safety and Environmental Protection 2017, 112, 4-15. https://www.sciencedirect.com/science/article/pii/S0957582017302665

    Waddington, I.; Thomas, P.; Taylor, R.; Vaughan, G. J-value assessment of relocation measures following the nuclear power plant accidents at Chernobyl and Fukushima Daiichi. Process Safety and Environmental Protection 2017, 112, 16-49. https://www.sciencedirect.com/science/article/pii/S0957582017300782

    Waddington, I.; Thomas, P.; Taylor, R.; Vaughan, G. J-value assessment of remediation measures following the Chernobyl and Fukushima Daiichi nuclear power plant accidents. Process Safety and Environmental Protection 2017, 112, 16-49. https://www.sciencedirect.com/science/article/pii/S0957582017302173

    These are four of 10 papers. Thomas and May (2017) is the overview of them. Excerpt:

    “In both cases, the authorities’ principal response for protecting the public was to move large numbers of people away from the surrounding area. A total of 335,000 were relocated after Chernobyl, never to return. Meanwhile after the accident at Fukushima Daiichi, 111,000 people were required to leave areas declared as restricted and an additional 49,000 joined the exodus voluntarily; about 85,000 had not returned to their homes by 2015. Were these sensible policy reactions? Was there an alternative? How should we respond to a big nuclear accident in the future? These were the questions behind the multi-university NREFS research study – Management of Risk Issues: Environmental, Financial and Safety – carried out for the Engineering and Physical Sciences Research Council as part of the UK-India Civil Nuclear Power Collaboration (NREFS, 2015). This Special Issue carries the 10 closing papers from that project.”

    • Peter Lang
      Thank you for the reply, and the info provided. Whichever way one looks at it, it is all good to know. And I do not in any way contest the figures since I am convinced the effects of fossil fuel burning on health are far reaching and insidious. The price of our way of life.
      There is a common factor that exacerbates all of them: the shortcomings of the “design and manage” factor. My experience of QA at the design and manufacture stage – from many reputable sources – is terrible. We all make mistakes, but that is where quality assurance should work – it does not. To some it’s OK to cut corners – as long as you don’t stay around long enough.
      In ‘manage’, it is another minefield. Some of the worst incidents (I limit here to my own experience) were ‘managed’. It is worse if ‘managed from a distance’, where tech merges with politics. Fossil and nuclear are very different.
      The above are all unplanned costs that are over and above what may be anticipated. I have no confidence that humanity has improved; on the contrary, technology is being taken very much for granted – a new religion.

      • melitamegalithic

        Thank you for your reply and additional points. I agree that “there are shortcomings of the ‘design and manage’ factor” and with QA. But this applies to all industries and everything we do – hospitals, medical, pharmaceutical, driving a car, etc. I suggest we need to recognise there are downside issues of technology but these are enormously outweighed by the benefits. Consider where humanity would be now if we had not developed fossil fuels and electricity systems.

        “I have no confidence that humanity has improved.” I think humanity has improved enormously since the start of the Industrial Revolution. Consider how life expectancy has increased over this time as one key measure of progress.

        However, I’d suggest we could have been much better off now if not for the disruption to the development and deployment of nuclear power that began around 1967. That disruption has cost us enormously. Consider where we could be now if the earlier learning rates and deployment rates had continued. We’d have much safer and much cheaper nuclear power by now. Most of the world’s electricity could be generated by nuclear power at perhaps 10% of current cost, saving around half a million lives per year (ref. Table 3 in Lang, 2017). Also, we could be producing much of our transport fuels from cheap electricity and sea water.

        As an example of what could have been achieved by now consider what was achieved in another industry that also has a high public concern about accidents – the aviation industry. Below I copy Note [XII] from Lang (2017) https://www.mdpi.com/1996-1073/10/12/2169/htm#B23-energies-10-02169

        “Some readers may question the credibility of the projections of OCC in 2015. This is a counterfactual analysis of what the consequences would have been if the pre-disruption learning and deployment rates had continued. There is no apparent physical or technical reason why these rates could not have persisted. Actual learning rates may have been faster or slower than the pre-disruption rates depending on various socio-economic factors. It is beyond the scope of this paper to speculate on what global economic conditions, electricity demand, public opinion, politics, policy, regulatory responses and a multitude of other influencing factors may or may not have occurred over the past half century if the root causes of the disruption had not occurred. However, consider the following. A defensible assumption is that if the high level of public support for nuclear power that existed in the 1950s and early 1960s [12,27,28] had continued, the early learning rates may have continued and, therefore, the accelerating global deployment rate may have continued. With cheaper electricity, global electricity consumption may have been higher, thus causing faster development and deployment. In that case, we could have greatly improved designs by now—small, flexible and more advanced than anything we might envisage, with better safety, performance and cost effectiveness.

        Rapid learning rates persisted since the 1960s for other technologies and industries, where public support remained high. The aviation industry provides an example of technology and safety improvements, and cost reductions, achieved over the same period in another complex system with high public concern about safety. From 1960 to 2013, US aviation passenger-miles increased by a factor of 19 [46], while aviation passenger safety (reduction in fatalities per passenger-mile) increased by a factor of 1051 [47], a learning rate of 87% for passenger safety. The learning rate for the cost of US commercial airline passenger travel during this period was 27% [46,48]. Similarly, the learning rate for solar PV (with persistent strong public support, favourable regulatory environments and high financial incentives) has remained high at 10 to 47% according to Rubin et al. (Figure 8) [13]. Cherp et al. [49] compare energy transitions of wind, solar and nuclear power in Germany and Japan since the 1970s and find their progress depends on the level of public support, political goals and policies of each country.”
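
        To make the learning-rate arithmetic above concrete, here is a minimal sketch of how a learning rate can be backed out of an improvement factor and a growth in output. It is illustrative only: the function name is mine, treating the 19-fold growth as a simple count of doublings is an assumption, and Lang (2017) works with cumulative quantities, so the figure this prints (roughly 80%) differs somewhat from the 87% quoted.

        import math

        def learning_rate(improvement_factor, output_growth_factor):
            """Fractional improvement per doubling of output (illustrative definition)."""
            doublings = math.log2(output_growth_factor)
            progress_ratio = (1.0 / improvement_factor) ** (1.0 / doublings)
            return 1.0 - progress_ratio

        # Aviation passenger safety, 1960-2013, using the figures quoted above:
        # fatalities per passenger-mile fell ~1051-fold while passenger-miles grew ~19-fold.
        print(f"Implied safety learning rate: {learning_rate(1051, 19):.0%}")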

      • Peter Lang
        Thank you for the very informative reply.
        I have only a small comment to add here. I get the impression, reading your paper, that each of us is looking at a different face of the same coin. However, please note that my interest in nuclear was from a point somewhat removed, though with some concern.
        I refer to your paper (Figs. 1 and 2), with the reversal point, I think, between 1970 and 1972. Sometime in 1972 I shared my office with a semi-retired engineer from a maker, helping to reinstate a turbo-unit. Expressing my fear of nuclear in a situation where management was done by committing every cardinal sin in the book, I was told that nuclear makers no longer dealt with governments due to their high risks and their lack of due attention to proper management, as they had found. Then followed an article in a technical periodical reporting that some makers of nuclear plant for overseas skimped on instrumentation and protection to cut costs.
        I can imagine why the reversal in costs around 1972. It would also be interesting to know the project cost overruns in earlier plant builds.
        I would agree with you in a near-ideal situation. Technology has improved greatly, and with it the quality of life – as you say, thanks to the brains of a few – but human mentality in general, seen through my experience, has not.

      • Repost in correct place.

        melitamegalithic,

        Thank you again. I have been involved in and following the nuclear debate since 1980.

        You said: “I refer to your paper (Figs. 1 and 2), with the reversal point, I think, between 1970 and 1972.” The learning rate reversal in the USA was at 32 GW cumulative global capacity of construction starts, which occurred in 1967. It occurred in 1968 in CA, FR, DE and later in other countries (see Table 1 https://www.mdpi.com/1996-1073/10/12/2169/htm ).

        You said: “Expressing my fear of nuclear in a situation where management was done by committing every cardinal sin in the book, I was told that nuclear makers no longer dealt with governments due to their high risks and their lack of due attention to proper management, as they had found. Then followed an article in a technical periodical reporting that some makers of nuclear plant for overseas skimped on instrumentation and protection to cut costs.”

        There is an enormous amount of material from authoritative sources on the causes of the disruption and what went wrong. Here are a few:

        Wyatt, A. The nuclear challenge : understanding the debate. Canadian Nuclear Association: Toronto, 1978

        Daubert, V.; Moran, S.E. Origins, Goals, and Tactics of the U.S. Anti-Nuclear Protest Movement.; Rand Corporation: Santa Monica, CA, 1985. https://www.rand.org/content/dam/rand/pubs/notes/2005/N2192.pdf

        Cohen, B. Costs of Nuclear Power Plants – What Went Wrong? In The Nuclear Energy Option; Plenum Press: New York, 1990. http://www.phyast.pitt.edu/~blc/book/chapter9.html

        Grover, J. The struggle for power : the real issues behind the uranium and nuclear argument, viewed both internationally and nationally. E. J. Dwyer (Australia): Surry Hills, N.S.W, 1980. https://nla.gov.au/nla.cat-vn1564742

        An online summary by Rafe Champion of Part 9 of John Grover’s book (PART NINE – THE ANTI-ENERGY MOVEMENT), http://www.the-rathouse.com/2011/Grover-Power.html

        In answer to a question on another blog site about the causes of the disruption I replied as follows:

        “To properly identify the causes would require a sophisticated root-cause analysis. That is not within the scope of the paper. Below is a simple root-cause analysis.

        Root-cause and causative factors of nuclear power cost escalation since the late 1960s

        What were the root cause(s) and causative factors of the disruption of nuclear power learning rates and the cost escalation thereafter?

        Root-cause:

        1. The anti-nuclear power protest movement’s scaremongering (see Daubert and Moran, 1985, ‘Origins, Goals, and Tactics of the U.S. Anti-Nuclear Protest Movement’, see reference and URL above )

        2. Failure of policy analysts, politicians, regulatory bodies, industry bodies (e.g. WHO, IEA, OECD, NEA, IAEA, DOE, EIA, and equivalents in other countries) to recognise the root-cause and counteract the risk perception factors by educating the public that nuclear power, although not totally risk free, is actually the safest way to generate electricity.

        Some causative factors:

        3. acceptance of the anti-nuclear propaganda by media and public
        4. increasing concerns and fear of nuclear power – accidents, nuclear weapons proliferation, nuclear waste, decommissioning, and health impacts of radiation and radioactivity
        5. politicians have to react to the public’s fears with legislation and regulation
        6. regulatory bodies are set up to apply the laws and regulations
        7. anti-nuclear bodies and concerned citizens use the laws and regulations to disrupt projects and operating power plants
        8. regulatory bodies become overly zealous because of concern about the likely public and media outrage if any accidents occur
        9. responses to accidents are not proportionate to the actual health consequences and risks, and are not weighed against the health consequences and risks of other technologies
        10. construction times and costs increase
        11. utilities and vendors respond by increasing the size and complexity of nuclear power plants
        12. financial and commercial risk for utilities and investors increases
        13. orders are cancelled, and the rate of new orders slows
        14. the learning rate turns negative
        15. the deployment rate stalls
        16. the rate of development slows

        At the top I suggested two root-causes. It was probably impossible to prevent the first; the second is perhaps what should, in retrospect, have been tackled at the time, and throughout the five decades since the disruption, to counteract the first.
        I’d welcome constructive contributions and rational discussion on the root-cause, especially from those with expertise in root-cause analysis.”

    • LNT (the linear no-threshold hypothesis of radiation carcinogenesis) is politically mandated pseudoscience. There is a threshold below which ionizing radiation is harmless or beneficial. In a few seconds on Google Scholar it is possible to find highly robust and repeatable biological data showing that low-dose ionizing radiation REDUCES cancer rates and INCREASES lifespan in mice.

      https://journals.sagepub.com/doi/full/10.2203/dose-response.06-115.Sakai
      https://ecee.colorado.edu/~ecen5009/Resources/Radiation/Ina2004.pdf
      https://www.karger.com/Article/Abstract/22024
      https://www.researchgate.net/profile/Kazuko_Fujita/publication/228727346_Suppression_of_carcinogenic_process_in_mice_by_chronic_low_dose_rate_gamma-irradiation/links/09e415064d862cfda7000000/Suppression-of-carcinogenic-process-in-mice-by-chronic-low-dose-rate-gamma-irradiation.pdf

      The actual deaths and injuries from nuclear incidents are low compared with heavy industry in general. However, journalists, activists and political scientists like to create virtual deaths from radioactivity releases by combining very ambitious theoretical rates of cancer causation from the linear no-threshold (LNT) hypothesis of carcinogenesis with (also partly computer-modelled) radiation releases, and multiplying by large populations. This allows them to create out of nothing as many deaths as they think they can get away with. None of this is real.

      That is why the LNT is so cherished by the anti-nuclear establishment. It has weak scientific grounding in the complex field of epidemiology, and is soundly refuted by animal studies of radiation carcinogenesis. Its value is political, not scientific, allowing virtual deaths to be created from nuclear incidents. The animal data refuting LNT are not contested; they are simply ignored.

      In this regard they have won. The general public believes that hundreds of thousands, if not millions, have died from Chernobyl, when the true number is in the hundreds or, at most, the low thousands. Likewise for Three Mile Island, where the actual health consequences were essentially zero.

      In Chernobyl, anti-nuclear sentiment was responsible for more deaths than radioactivity. Many villagers were evacuated from regions affected by ever lower radioactivity levels which eventually were so low that they overlapped with background. This evacuation of more and more people was in response to political hectoring from the international antinuclear establishment. Forced relocation from villages where these generally old people had lived their whole lives in Ukraine and Belarus led to substantial shortening of their life-span. (I attended the 1996 10-year anniversary conference on the Chernobyl accident in Minsk where these data were presented). Unnecessary relocation of villagers killed more people than the radiation itself.

      The agencies setting radiation safety standards no doubt think they are taking a position of caution and conservativeness in acquiescing to LNT. But combining this conservativeness with the inevitable uncertainties of biology and epidemiology leads inexorably to an overly alarmist and reactionary position and the logically inevitable shut down of the nuclear industry. It must seem very reasonable politically to fudge the risk down toward zero, but this is how all those virtual victims of radiation releases are created: hundreds of millions of people exposed to minuscule doses with supposedly tiny risks are, by a mathematical wave of a magic wand, raised – like the Night King’s army of the dead – to fight against nuclear technology.
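
      To see the two assumptions side by side, here is a minimal sketch contrasting the LNT assumption (excess risk strictly proportional to dose) with a threshold/hormesis-shaped response of the kind described above. The function names, the slope, the 100 mSv threshold and the size of the hormetic dip are all illustrative assumptions, not values taken from any of the linked studies.

      def lnt_excess_risk(dose_msv, slope_per_msv=1.0):
          # LNT: excess risk proportional to dose, all the way down to zero dose
          return slope_per_msv * dose_msv

      def threshold_excess_risk(dose_msv, threshold_msv=100.0, slope_per_msv=1.0, hormetic_dip=5.0):
          # Linear in dose above the threshold; below it, a crude parabolic "hormetic"
          # dip (maximum benefit near half the threshold), returning to zero at zero dose
          if dose_msv >= threshold_msv:
              return slope_per_msv * (dose_msv - threshold_msv)
          x = dose_msv / threshold_msv
          return -4.0 * hormetic_dip * x * (1.0 - x)

      for dose in (0, 20, 50, 100, 200):
          print(dose, lnt_excess_risk(dose), threshold_excess_risk(dose))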

      • Phil Salmon,

        Thank you for your reply. You have much more knowledge of this subject than I do. Despite that, I’d like to add a few points, and welcome your response to them:

        1. There is a threshold below which ionizing radiation is harmless or beneficial.

        True. I’ve seen the charts. They show that the risk of death is linear in dose above the threshold; it drops rapidly to zero near the threshold, and below the threshold the response is substantially beneficial. The peak benefit is at about ½ to ⅔ of the threshold dose, returning to zero benefit at zero dose.

        [Phil, Can you give me links to those charts, as I haven’t found them on a quick search.]

        2. In a few seconds on Google Scholar it is possible to find highly robust and repeatable biological data that show that low dose ionizing radiation REDUCES cancer rates and INCREASES lifespan in mice.

        There is also a great deal of empirical data demonstrating that low-dose radiation at the very highest levels of background radiation, and above, is beneficial for humans, and that very high levels of low-dose-rate radiation are not harmful. Here are a few sources I can find quickly (they refer to authoritative sources).

        What can we learn from Kerala? https://bravenewclimate.com/2015/01/24/what-can-we-learn-from-kerala/

        Wade Allison To Change The Nuclear Future https://www.nuclear4life.com/
        “Scientific evidence shows that radiation safety criteria are about 1000 times too cautious and endanger lives.”

        Why radiation is safe & all nations should embrace nuclear technology – Professor Wade Allison https://www.youtube.com/watch?v=YZ6aL3wv4v0
        Note the high radiation levels and the mostly minimal consequences for the Brazilian children who found a container of radioactive material (medical waste) in a dump and plastered it over their bodies, their friends’ bodies and their kitchen.

        Wade Allison Radiation and Reason https://www.youtube.com/watch?v=l300XZlT-s4

        3. The general public believe that hundreds of thousands if not millions have died from Chernobyl, when the true number is in the hundreds or at most low thousands.

        From memory, I understand the best recent data is that about 70 people are known to have died as a result of the Chernobyl accident and radioactivity from the accident:

        • 2 Chernobyl workers died as a result of the explosion (when steam pressure blew the cap of the reactor)
        • 1 fire fighter died of a heart attack
        • 28 fire fighters died as a result of acute radiation sickness within 30 days of the accident
        • About 40 people in the nearby contaminated zone have died of diseases known to be attributable to radioactive contamination from the accident.
        • 4000 people were diagnosed with thyroid cancer that was attributable to radioactive contamination from the accident. Of these only about 6 or 8 died.

        Best estimate is a few tens to 200 other people may have died from diseases known to be attributable to radiation or radioactive contamination from the accident.

        [The above figures are from memory. It would take me a while to check them and provide links to the authoritative reports.]

        It is now recognised that the early projections of 4,000 to 9,000 deaths were a gross overestimate.

        [The estimated 0.04 and 0.09 deaths per TWh from nuclear I gave in an earlier comment (here https://www.youtube.com/watch?v=l300XZlT-s4) are overestimates because they are based on the early projections of 4000 and 9000 deaths from Chernobyl, plus all other deaths in the nuclear energy chain from mining to decommissioning and waste disposal.]

        4. In Chernobyl, anti-nuclear sentiment was responsible for more deaths than radioactivity.

        Correct – by a huge margin. Read the authoritative studies linked in this comment https://judithcurry.com/2019/11/09/week-in-review-science-edition-112/#comment-902826, and the other six related studies that I did not link.

        5. The agencies setting radiation safety standards no doubt think they are taking a position of caution and conservativeness in acquiescing to LNT. But combining this conservativeness with the inevitable uncertainties of biology and epidemiology leads inexorably to an overly alarmist and reactionary position and the logically inevitable shut down of the nuclear industry.

        I agree. And that’s been the result. An enormous set-back for humanity worldwide.

        I see a similar situation ahead with the climate change alarmist movement. I expect its consequences may be similar to those of the anti-nuclear power protest movement.

      • Correction to this point:

        “• 4000 people were diagnosed with thyroid cancer that was attributable to radioactive contamination from the accident. Of these only about 6 or 8 died.”

        The correct numbers are:

        “At Chernobyl, an iodine-deficient region, 6000 children contracted thyroid cancer but just 15 died”.
        The others were diagnosed in time, and treated successfully.

        Source: Radiation and Reason – Fukushima and After
        Slide 19 “How many will die from radiation cancer at Fukushima?” https://issuu.com/johna.shanahan/docs/111003_fccj_allison_final?e=15279492%2F11147698

      • Peter
        [Phil, Can you give me links to those charts, as I haven’t found them on a quick search.]

        There’s no single study that shows the threshold. It’s a general conclusion from the totality of published research, including the large amount of animal research done in the US national laboratories (Davis, Utah, Lawrence Livermore, Argonne, Brookhaven, etc.) during the “golden age” of scientific nuclear research and development in the ’50s–’70s, before the whole atomic program died of left-cancer during the ’80s–’90s. All the animal studies consistently showed a threshold below which there was no radiation carcinogenesis.

        However, one of the human epidemiological studies was particularly compelling and showed a clear threshold: the shipyard worker study. Epidemiology is a difficult field where the challenge is to compare groups of people with very different exposures to the test agent, but with everything else as similar as possible. Factors such as socioeconomic status have an especially massive effect on lifespan and disease incidence, dwarfing risk factors such as radiation. The shipyard study found an especially elegant solution to this challenge and is one of the best examples of such a cohort study. What the study found is why so few have heard of it.

        This study followed up the health of thousands of workers for many years, in several shipyards, some nuclear and the others conventional. Thus all the socioeconomic and age-structure variables were exceptionally well controlled between the two groups. The only thing different was exposure to ionising radiation, which was carefully measured by personal dosimetry.

        In a nutshell, the study found a cancer excess only in the highest dose group, something like 50 mSv and above per year. All lower dose groups had no cancer excess. This was perhaps one of the most clear demonstrations of the threshold of radiation carcinogenic risk.

        https://www.ncbi.nlm.nih.gov/m/pubmed/17690532/

        Although there was no cancer excess below 50 mSv / yr, elaborate statistical contortions have been made to rescue a “risk” down to zero to keep LNT alive.

      • Phil Salmon,

        Thank you for your reply.

        I understand that “[t]here’s no single study that shows the threshold”. However, I have seen charts showing a linear risk response above a threshold; then, as dose decreases, the response drops rapidly to zero at the threshold and continues below zero (i.e. beneficial) as dose decreases further, to a maximum beneficial response at about ½ to ⅔ of the threshold dose.

        On another matter, Emeritus Professor Wade Allison has a mass of material on the impacts of radiation on this web site: http://www.radiationandreason.com/

        Book in pdf – ‘Radiation and Reason: The impact of science on a culture of fear’ http://www.radiationandreason.com/download/ipmftt

        The chart below (image not reproduced here) shows that the allowable radiation levels for the public are about a factor of 1000 below the safe levels.

        Graphical comparison of typical monthly radiation doses for therapy, ALARA* and AHARS**.

        Conclusion: Safety is unjustifiably strict by a factor of about 1000

        * ALARA = As Low As Reasonably Achievable

        ** AHARS = As High As Reasonably Safe

      • Peter
        Your mention of hormesis helped me suddenly realise the reason for a strange aspect of the statistical analysis of the shipyard study. I had thought it curious that, to get radiation-disease risk factors, they compared the highest dose group (>50 mSv) with an intermediate dose group (5-9.9 mSv), not the lowest dose group (<5mSv) as one would expect. This struck me as odd:

        An internal comparison of workers with ≥50.0 mSv exposures to workers with exposures of 5.0–9.9 mSv indicated relative risks for leukemia of 2.41 (95% CI: 0.5, 23.8), for LHC, 2.94 (95% CI: 1.0, 12.0), for lung cancer, 1.26 (95% CI: 0.9, 1.9) and for mesothelioma, 1.61 (95% CI: 0.4, 9.7) for the higher exposure group. Except for LHC, these risks are not significant.

        But allowing for hormesis makes the motivation clear. Hormesis would make the cancer incidence in the 5–9.9 mSv group lower, not higher, than in the lowest dose group. So this gave a bigger difference relative to the >50 mSv group than the – more appropriate – comparison with the lowest dose group would have.

        So they used hormesis to inflate the risk factors and get one or two of them over the line of statistical significance!

    • Correction: “deaths per year” not “deaths per TWh” in this sentence:

      “EPA attributes about 15,000 to 35,000 deaths per TWh in the USA to air pollution from coal-fired power stations.”
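
      The distinction matters numerically. As a rough illustration only (the ~1,200 TWh/yr of US coal generation assumed below is a round mid-2010s figure, not taken from the comment), dividing the per-year figure by annual generation gives the corresponding per-TWh rate:

      us_coal_twh_per_year = 1_200.0            # assumed round figure, illustration only
      for deaths_per_year in (15_000, 35_000):  # EPA-attributed range quoted above
          print(f"{deaths_per_year} deaths/yr ~ {deaths_per_year / us_coal_twh_per_year:.1f} deaths/TWh")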

    • Thomas (2017), Quantitative guidance on how best to respond to a big nuclear accident https://www.sciencedirect.com/science/article/pii/S0957582017302410

      Highlights

      • Relocation should be used sparingly if at all after a major nuclear accident.

      • 3 diverse methods support this: J-value, optimal economic control and PACE-COCO2.

      • Remediation will be cost-effective after a major nuclear accident.

      • The J-value method has been validated against pan-national data.

      • J-value and loss of life expectancy are aids to communicating radiation risk.

      • The downside health risk has limits even after the worst nuclear accidents.

      • Demystifying the effects of big nuclear accidents should improve decision making.

      Abstract
      A review is made of the quantitative methods used in the NREFS project (Management of Nuclear Risks: Environmental, Financial and Safety) set up to consider how best to respond to a big nuclear accident. Those methods were: the Judgement- or J-value, optimal economic control and a combination of the computer codes PACE and COCO2 produced at Public Health England. The NREFS results show that the life expectancy lost through radiation exposure after a big nuclear accident can be kept small by the adoption of sensible countermeasures, while the downside risk is less severe than is widely perceived even in their absence. Nearly three quarters of the 116,000 members of the public relocated after the Chernobyl accident would have lost less than 9 months’ life expectancy per person if they had remained in place, and only 6% would have lost more than 3 years of life expectancy. Neither figure is insignificant, but both are comparable with life expectancy differences resulting from the different day-to-day risks associated with living in different parts of the UK. It is clear in hindsight that too many people were relocated after both the Chernobyl and the Fukushima Daiichi accidents. Remediation methods can often be cost-effective, but relocation of large numbers following a big nuclear accident brings its own risks to health and well-being and should be used sparingly, a message coming from all three of the quantitative methods. There is a need to understand and hence demystify the effects of big nuclear accidents so that decision makers are not pressurised into instituting draconian measures after the accident that may do more harm than good.
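
      For readers unfamiliar with the J-value mentioned in the highlights and abstract, here is a minimal, first-order sketch of the idea as it is commonly summarised: a measure is judged worthwhile when its cost does not exceed the maximum spend implied by the Life Quality Index (LQI ≈ G^q · X) for the life expectancy it secures. The function name, the parameter q = 0.2 and all the numbers are illustrative assumptions, not values from the NREFS papers.

      def j_value(cost_per_person_per_year, gdp_per_person, life_expectancy_years,
                  life_expectancy_gain_years, q=0.2):
          # First-order maximum reasonable annual spend per person to secure the gain,
          # from requiring the Life Quality Index G^q * X not to fall
          max_spend = (gdp_per_person / q) * (life_expectancy_gain_years / life_expectancy_years)
          return cost_per_person_per_year / max_spend   # J <= 1: justified; J > 1: not justified

      # Hypothetical relocation measure: 20,000 currency units per person per year to avoid
      # losing 0.75 years of life expectancy (G = 30,000/yr, X = 40 discounted years)
      print(f"J = {j_value(20_000, 30_000, 40, 0.75):.1f}")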

    • Waddington et al, (2017), J-value assessment of remediation measures following the Chernobyl and Fukushima Daiichi nuclear power plant accidents

      Highlights

      • J-value shows remedial measures after Chernobyl and Fukushima Daiichi to be worthwhile.

      • Remediation proportionate to the risk should be instituted.

      • Remediation rather than relocation should be highest on the agenda of decision makers responding to a big nuclear accident.

      • The results will be of strong interest and value to public, politicians and regulators alike.

      • The results promote the optimal application of resources to protect the real interests of the people living nearby.

      Abstract
      Actions set in train shortly after the accidents at Chernobyl (1986), and Fukushima Daiichi (2011) had the aim of reducing the more immediate health effects on people living near the plants, with population relocation being especially prominent. The important topic of relocation is the subject of a companion paper, and this article will concentrate on other measures, such as soil treatment and urban decontamination, that have been put in place to reduce the radiation risks in the medium and long term to people living and farming in areas subject to some degree of radioactive contamination. The J-value method of risk assessment has been used to judge the cost-effectiveness of a range of agricultural and urban remediation actions. Many remedial measures instituted after the Chernobyl and Fukushima Daiichi accidents have been found to be highly cost-effective.

    • The other papers in the Special Issue – Coping with a big nuclear accident; Closing papers from the NREFS project – can be found here:
      https://www.sciencedirect.com/science/article/pii/S095758201730352X

      The links for the above two papers are:
      https://www.sciencedirect.com/science/article/pii/S0957582017300782?via%3Dihub
      https://www.sciencedirect.com/science/article/pii/S0957582017302173

  54. Ireneusz Palmowski

    Winter began on November 13, 2019 in the northeast US.

  55. A Planet-Without-Atmosphere Effective Temperature Calculating Formula – the Te formula based on radiative equilibrium and the Stefan-Boltzmann Law, and in common use right now:

    Te = [ (1-a) S / (4σ) ]^(1/4)

    is actually an incomplete Te formula, and that is why it gives us very confusing results.

    This planet-without-atmosphere effective temperature calculating formula is incomplete because it is based on only two parameters:

    1. The average solar flux S (W/m²) at the top of the planet’s atmosphere, and
    2. The planet’s average albedo a.

    Those two parameters are not enough to calculate a planet’s effective temperature. A planet is a celestial body with more major features to consider when calculating its effective temperature.
    The planet-without-atmosphere effective temperature calculating formula has to include all the planet’s major properties and characteristic parameters:
    3. The sidereal rotation period N (rotations/day)
    4. The thermal property of the surface (the specific heat cp)
    5. The planet surface solar irradiation accepting factor Φ (the spherical surface’s primer quality).
    For Mercury, Moon, Earth and Mars without atmosphere, Φ = 0.47.

    Earth is considered without atmosphere because Earth’s atmosphere is very thin and it does not affect Earth’s Effective Temperature.
    Altogether these parameters are combined in a Planet-Without-Atmosphere Effective Temperature Complete Formula:

    Te.planet = [ Φ (1-a) So (1/R²) (β N cp)^(1/4) / (4σ) ]^(1/4)   (1)

    The Planet-Without-Atmosphere Effective Temperature Complete Formula produces very reasonable results:
    Te.earth = 288.36 K, calculated with the Complete Formula, which is essentially identical to
    Tsat.mean.earth = 288 K, measured by satellites.

    Te.moon = 221.74 K, calculated with the Complete Formula, which is almost the same as
    Tsat.mean.moon = 220 K, measured by satellites.

    The Complete Formula gives planet effective temperature values very close to the satellite-measured planet mean temperatures (the satellite-measured planet effective temperatures).

    Thus we have to conclude that the satellite-measured planet mean temperatures should be considered as the satellite-measured planet effective temperatures.

    We have collected the results:

    Comparison of the planet Te calculated with the Incomplete Formula, the planet Te calculated with the Complete Formula, and the planet Te (Tsat.mean) measured by satellites:

    Planet/Moon    Te (Incomplete)    Te (Complete)    Te (satellite)
    Mercury        437.30 K           346.11 K         340 K
    Earth          255 K              288.36 K         288 K
    Moon           271 K              221.74 K         220 K
    Mars           211.52 K           215.23 K         210 K

    As you can see, Te.complete.earth = 288.36 K.

    That is why I say that in the real world the Δ33 °C difference does not exist.

    Te.planet = [ Φ (1-a) So (1/R²) (β N cp)^(1/4) / (4σ) ]^(1/4)   (1)

    http://www.cristos-vournas.com
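
    For reference, the standard formula quoted at the top of this comment can be checked in a few lines. This is a minimal sketch using conventional round values (S ≈ 1361 W/m², Earth albedo ≈ 0.30, lunar albedo ≈ 0.11); it evaluates only the textbook expression and does not attempt the “complete formula”, since the β parameter is not defined in the comment.

    SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

    def effective_temperature(solar_flux, albedo):
        # Te = [ (1 - a) * S / (4 * sigma) ]^(1/4)
        return ((1.0 - albedo) * solar_flux / (4.0 * SIGMA)) ** 0.25

    print(f"Earth: {effective_temperature(1361.0, 0.30):.0f} K")   # ~255 K
    print(f"Moon:  {effective_temperature(1361.0, 0.11):.0f} K")   # ~270 K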

  56. Bob Fernley-Jones, in a post at Jonova, discusses BoM adjustments. The graph there (not reproduced here) illustrates how cleverly the klimatariat cooled the past to warm the present.

    See how they centred the rotation over two of the coldest decades.

  57. A Planet-Without-Atmosphere Effective Temperature Calculating Formula – the Te formula based on radiative equilibrium and the Stefan-Boltzmann Law, and in common use right now:

    Te = [ (1-a) S / (4σ) ]^(1/4)

    is actually an incomplete Te formula, and that is why it gives us very confusing results.

    Comparison of the planet Te calculated with the Incomplete Formula, the planet Te calculated with the Complete Formula, and the planet Te (Tsat.mean) measured by satellites:

    Planet/Moon    Te (Incomplete)    Te (Complete)    Te (satellite)
    Mercury        437.30 K           346.11 K         340 K
    Earth          255 K              288.36 K         288 K
    Moon           271 K              221.74 K         220 K
    Mars           211.52 K           215.23 K         210 K

    The Planet-Without-Atmosphere Effective Temperature Complete Formula:

    Te.planet = [ Φ (1-a) So (1/R²) (β N cp)^(1/4) / (4σ) ]^(1/4)   (1)

    is not a product of adjustments. It is based on a newly discovered Rotating Planet Surface Solar Irradiation Absorbing-Re-emitting Universal Law.

    It is obvious now that the incomplete planet-without-atmosphere effective temperature formula:

    Te = [ (1-a) S / (4σ) ]^(1/4)

    should not be in use anymore.

    http://www.cristos-vournas.com

  58. What Climate Science Tells Us About Temperature Trends – Ronald Bailey

    “Overall, these noncondensing greenhouse gases account for about 25 percent of the Earth’s greenhouse effect. They sustain temperatures that initiate water vapor and cloud feedbacks that generate the remaining 75 percent of the current greenhouse effect. Lacis and his colleagues suggest that if all atmospheric carbon dioxide were somehow removed most of the water vapor would freeze out and the Earth would plunge into an icebound state.”

    What I see in the above is only the 3-to-1 ratio that goes into the suggested water vapor impact. That's what we found before, so why not assume it works like that? So much so that, without CO2, there is no water vapor – a double-down. If you suggest an Earth without water vapor, that's a strong relationship.

    Fun fact: Lake Superior sees its highest evaporation rates in fall. Why is the Arctic warming? Water vapor. To be fair, Lacis is talking about a long-term steady state, I suppose. But we can come out of a glacial period, and that's tough. Water vapor, not being the thermostat, can return with a small nudge as we switch into an interglacial period.

    So in conclusion, this 3-to-1 ratio is something I would have done. It's simple math. And it ends up in the enduring Charney numbers, whose lower limit says: but if I am wrong.

  59. Harvard-Yale Football Game protest.

    An indication of just how pedestrian, undiscerning and unenlightened our most esteemed institutions have become, both for playing football and for failing to understand that “climate change” is a fabricated scare tactic to invoke political power.
