by Nicola Scafetta
My new paper demonstrates that realistic emission scenarios, climate sensitivity values, and scenarios of natural climate variability produce more realistic, non-alarming projections of 21st-century climate.
I would like to thank Judith Curry for inviting me to write a short blog post on my just published paper:
Nicola Scafetta. Impacts and risks of “realistic” global warming projections for the 21st century. Geoscience Frontiers 15(2), 101774, 2024. https://doi.org/10.1016/j.gsf.2023.101774
The paper is open access and therefore freely accessible to all.
I believe the work is significant because it addresses a central issue of general interest: how much warming can we expect in the 21st century? This is a serious question that scientists must answer to truly assist policymakers. Is today’s climate alarmism founded on real science, or is it simply an extrapolated view based on flawed arguments?
Answering such a question defines the steps that must be taken to address any expected threats associated with possible future climatic changes. However, the uncertainties are so great that no consensus can be reached. Some argue that we are on the verge of a massive climatic disaster if net-zero emission policies are not imposed quickly, while others argue that nothing will happen. Technically, anyone can present arguments in support of his or her belief because of the large uncertainties surrounding these climate change issues.
I’ve opted to address the issue by highlighting recent research efforts to reduce uncertainties in order to obtain more “realistic” climate estimates for the twenty-first century. This might then be used to better analyze the actual impacts and hazards of climate change, with the hope that people will be able to agree on the best remedies.
I have identified four sources of uncertainty:
- Which shared socioeconomic pathway (SSP) scenario for the twenty-first century is most plausible? According to recent scientific literature, it is the SSP2-4.5 scenario, a moderate and pragmatic scenario in which CO2 emission rates remain near present levels until 2050 and then decline, but do not reach net zero by 2100. Unfortunately, most of the climate alarmism is based on unrealistic scenarios like SSP5-8.5 and SSP3-7.0, which result in overestimation of future projected warming and greater alarm.
- How sensitive is the climate to CO2 increases? According to recent scientific research, the Equilibrium Climate Sensitivity (ECS) should be between 1 and 3 °C. Unfortunately, the IPCC AR6 relied heavily on global climate models (GCMs) with ECS values ranging between 2.5 and 4 °C (the likely range), which overestimate future projected warming.
- Can we rely on the warming presented by surface temperature records to calibrate and/or validate which models to use for climate projections? Addressing this point is critical because recent literature has suggested that surface temperature records may be significantly influenced by non-climatic warm biases (e.g. contamination from urban heat islands, among others), and because satellite-based lower-troposphere temperature records (e.g. UAH-MSU v6 and NOAA-STAR v5) show a warming rate that is 30% lower than recent surface temperature records (as also shown by the IPCC AR6). The concern is that the models expect the troposphere to warm faster than the surface, not more slowly. As a result, the warming rate of surface temperature records should be questioned. In this case, all CMIP6 GCMs are running “too hot,” indicating a very low actual value of ECS (1-2 °C) and implying that future climate change would be more moderate than projected by the IPCC in all cases.
- The fourth question is whether the GCMs accurately reflect natural climate variability. The issue is significant since a vast body of research indicates that the CMIP6 GCMs are incapable of reproducing natural climate variability because they ignore multiple well-known climatic cycles at all time scales. There is a quasi-millennial climate oscillation with a likely solar origin that characterizes the entire Holocene and is responsible for the well-documented Roman and Medieval Warm Periods, which models are unable to reproduce (as timidly acknowledged by the IPCC AR6 figure 3.2). Other natural oscillations have also been detected, such as the quasi-60-year oscillation seen in the Atlantic Multidecadal Oscillation signal, as well as many other oscillations classified as solar/astronomically driven in previous studies. While GCMs suggest that over 100% of the observed warming is manmade, these oscillations could have contributed significantly to the warming recorded in the twentieth century. Introducing cyclical natural variability implies low ECS values (1-2 °C) and suggests that the GCMs grossly underestimate the solar impact on climate.
Using the information discussed above, “realistic” climate change projections must be created using SSP2-4.5 together with: (1) only models with a low ECS (less than 3 °C); (2) models rescaled to the lower warming rate of the lower-troposphere temperature records (a toy numerical sketch of this rescaling follows the post); or (3) semi-empirical models of natural climate variability. In all three cases, the projected warming for the twenty-first century is congruent with the IPCC’s projected warming under the net-zero scenario SSP1-2.6. This is clearly demonstrated in the graphical abstract of my paper, which is displayed below:

Because future climate change is expected to be modest enough that any potential related hazards can be addressed efficiently through effective, low-cost adaptation strategies, the 2.0 °C Paris Agreement warming target for the twenty-first century can likely be met even under the feasible and moderate SSP2-4.5 emission scenario, without implementing rapid, extremely expensive, and likely technologically impossible net-zero decarbonization policies.
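As a toy numerical illustration of the rescaling step (2) above, here is a minimal Python sketch; the trends used are placeholders for illustration, not values from the paper:

```python
import numpy as np

# Hypothetical GCM warming path for 2000-2100 (placeholder numbers, not
# values from the paper): 0.25 K/decade relative to the year 2000.
years = np.arange(2000, 2101)
gcm_anomaly = 0.025 * (years - 2000)

# Step (2): rescale the projection by the ratio of the observed
# lower-troposphere trend to the modeled trend (~30% lower, per the post).
modeled_trend = 0.025                 # K/yr, hypothetical
observed_trend = 0.7 * modeled_trend  # 30% lower, hypothetical
rescaled = gcm_anomaly * (observed_trend / modeled_trend)

print(f"2100 warming: GCM {gcm_anomaly[-1]:.2f} K, rescaled {rescaled[-1]:.2f} K")
```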
Happy New Year 2024 to all!

I don’t see how Big Climate is going to make any money if your views are adopted. I’m afraid it’s a non-starter.
How much change would a place like Edinburgh (UK) have to cope with before they have to adapt or migrate? They already don their flip-flops, shorts and singlets at 15°C.
Sara, I grew up on Tyneside, where winter temps of -9/10C were common with an occasional -15C and once -22C. July average was 15.5C. I’ve crossed the Pakistani-Iran desert mid-year with 40C and virtually no humidity, and lived in NE Thailand with saturated air and near 40C. I now live in Brisbane, where we’ve recently had 36-37C and where we once had 40C for about three weeks in a row (I bought a one-room air-conditioner then). So modest temperature increases in Edinburgh are unlikely to require major adaptations.
Populations can adapt. In Melbourne (Australia), where I live, we have relatively changeable weather. In February 2009, our top-temperature record was broken (around 46 °C) on a difficult day when over 100 people died in fires in the surrounding countryside. Later, we found that the death toll from heat stress was much greater. It affects the very old, the very young and people with pre-existing ailments. Smoky air is also a problem, even when the fire is miles away.
To the north-west of Melbourne, the city of Mildura, with a very similar ethnic makeup to Melbourne, regularly endures such temperatures with much lower mortality, indicating the population may have adapted.
There are of course, commonsense measures we can take to deal with heat. If you feel stressed, be careful even if the temperature does not appear to have reached heroic levels. Drink plenty of water (don’t add too much whisky!), use the cool times of the day to exercise etc. (On the day I mentioned, dog owners were advised to bring their animals indoors. Cats are too smart to let a hot day kill them! There were stories of birds falling out of the sky; but there was smoke and fire and the humidity was extremely low.)
“Addressing this point is critical because recent literature has suggested that surface temperature records may be significantly influenced by non-climatic warm biases (e.g. contamination from urban heat islands, among others), and because satellite-based lower troposphere temperature records (e.g. UAH-MSU v6 and NOAA-STAR v5) show a warming rate that is 30% lower than recent surface temperature records (as shown also by the IPCC AR6).”
How about questioning the very meaning of conventional statistical practices? In tracking global warming, we rely on averaged temperatures, a measure of central tendency. However, the intended purpose of averaging is to reduce variance and obtain a more representative figure for an entire sample. Paradoxically, when it comes to averaging temperatures, it seems to do the opposite: it increases variance. This stems from the incorrect assumption that the maximum and minimum recorded temperatures for a particular day represent the extreme deviations for an anomaly. In places like Denver, where temperature swings can be dramatic, a day with a recorded high of 43°F and a recorded low of 8°F yields the same average (25.5°F) as a ‘normal’ winter day with a high of 37°F and a low of 14°F. Both scenarios are treated equivalently in climate science, rendering the average temperature seemingly meaningless. The process involves assigning one central tendency for a day and then averaging it with another central tendency for the next day, continuing until a monthly average is obtained. These central tendencies are essentially combinations. The more averaging that occurs, the more variance increases, and the true climate signal becomes obscured in the noise of the averaging process. This is all under the assumption that these are accurate temperature recordings, a premise that Anthony Watts’ work challenges. This says quite a lot considering the precision of these recorded numbers is down to tenths or sometimes hundredths of a degree. The ‘global average temperature’ is fraud; complete BS.
Perhaps Judy can delete this comment, if she’s able to
Walter,
“…satellite-based lower troposphere temperature records (e.g. UAH-MSU v6 and NOAA-STAR v5) show a warming rate that is 30% lower than recent surface temperature records (as shown also by the IPCC AR6).”
It really depends on what level of the atmosphere you look at. It is well known that the warming becomes less the higher you go and actually inverts to cooling in the lower stratosphere. This is because the greenhouse effect of water vapor is essentially gone (99% of it is in the troposphere) and CO2 column opacity decreases (75% is in the troposphere), both allowing more direct loss of LWIR energy to outer space. I would suggest looking at the heating/cooling trends as a function of altitude at:
https://images.remss.com/msu/msu_time_series.html
And note that the lowest layer (TLT) at 0.215 K/decade is consistent with surface temperature measurements, not 30% lower.
ganon,
Thank you for the reply. I’ll take a look.
That’s a very odd view of climate science. Average temperatures may show that Sydney is 2 or 3 degrees warmer than Melbourne (where I live).
Very early in my studies of statistics in climate science (i.e. at undergraduate level) we needed to know that the difference in climate between the two places could not be characterised in any but the roughest way.
I simply do not understand what you could possibly mean by ‘global average temperature’ is fraud. If I observe that it lies somewhere between 1,000,000º and zero, who would be defrauded? What if I tighten the range by an order of magnitude? No difference you say.
Paul,
Thanks for the reply. The fraud I’m referring to is the GAT index. The climate departs from the critical assumption of the Central Limit Theorem: these measurements are non-identical and non-independent in nature. The ‘2 or 3 degrees warmer’ figure you cite derives from measurements at individual weather stations, where once again these measurements are non-identical and non-independent in nature; I explained that to ganon down below and with my Denver example. However, an even more extreme departure from the CLT arises when climate scientists attempt to calculate what they call “the average temperature of Australia.” Sydney is known for its coastal location; its influence from the Pacific Ocean has a moderating effect on the city’s climate, making it milder. Melbourne, on the other hand, is situated farther south and has a more variable climate with influences from the Southern Ocean and the neighboring mountains.
Paul,
Here are some 3-day heatwave average temperatures for Brisbane (start 1887, end 2021), Sydney (1859), Melbourne (1856) and Adelaide (1887), defined as the hottest 3 consecutive Tmax days in any year. The levels are quite steady except for a decrease over time in Brisbane. The average heatwave in °C is, heading south then west,
Brisbane 34.0, Sydney 32.6, Melbourne 36.4 and Adelaide 39.2.
Geoff S
http://www.geoffstuff.com/asixheatwave2022.xlsx
“The more averaging that occurs, the more variance increases,”
Averaging reduces variance, as you can easily see on a temperature anomaly chart by comparing highly variable monthly changes with a much smoother moving-average trend line.
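For illustration, a minimal sketch with synthetic noise standing in for monthly anomalies shows the effect:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for monthly anomalies: pure noise, for clarity.
monthly = rng.normal(0.0, 0.2, 600)

# 12-month moving average, as on a typical anomaly chart.
smoothed = np.convolve(monthly, np.ones(12) / 12, mode="valid")

print(f"variance of monthly series:   {monthly.var():.4f}")   # ~0.04
print(f"variance of 12-mo moving avg: {smoothed.var():.4f}")  # ~0.003
```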
Long term climate predictions are a waste of time and very likely to be wrong
The future climate will be warmer, unless it is colder
That’s all we know for sure
The world needs NO more climate predictions, not MORE climate predictions
The climate change cult is nothing but a prediction of CAGW that has been wrong since 1979
It strikes me as ironic, at the very least, that the predictions of future warming, within the camp that opposes any attempt at mitigation, are based on fossil fuel burning estimates driven by significant attempts at mitigation.
In other words, on the one hand, you don’t want any mitigation. On the other hand, you say, “Wow, given all the great mitigation work we’re doing, we don’t have anything to worry about! See? We don’t need mitigation!” But of course if we cease to make efforts to conserve and switch to clean energy, we will not meet the SSP2-4.5 scenario.
I disagree with “…if we cease to make efforts to conserve and switch to clean energy, we will not meet the SSP2-4.5 scenario.”
Please note from the paper that we will meet SSP2-4.5 because it is business as usual where, “CO2 emission rates remain around current levels until 2050 then they fall but do not reach net-zero by 2100”.
Fine. But it appears unlikely that it will be business as usual. CO2 emission rates are increasing – or have I got that wrong? If I am right, then the accuracy of statements made about sensitivity comes into play.
I believe business as usual means continuing to progress with conservation and alternative energy sources. Note that the world population is still increasing, and developing nations are still adding industry and cars to the road in very large numbers. World GDP is growing at a couple of percent every year. So if there were no attempt at conservation or renewables, CO2 emission rates would not be levelling off.
Angus, the business as usual includes reasonable efforts in conservation with accented efforts on alternative energy technologies. The analogy to the covid policy debate would be to be conscious of reducing unnecessary exposure while not locking down the economy. Government cures are very often worse than the problems.
DanB – fossil fuel use is at a maximum. How can it be that CO2 is “leveling off”?
The author should have included another subheading in his list of uncertainties. A point 1.1 could have been the uncertainty in the effect of human emissions on the atmospheric concentration. If Salby, Berry, Harde, Humlum and a dozen others are right, human emissions are only a small part of the recent increase, so the scenarios are all pointed at an incorrect assumption.
That’s a big “If” in a small echo chamber.
https://skepticalscience.com/Murry-Salby-CO2-rise-natural.htm
Please read what Ferdinand Engelbeen writes on this topic http://www.ferdinand-engelbeen.be/klimaat/co2_origin.html
From the study
“The quasi millennial and 60-year oscillations appear to have been responsible for at least 50% of the 1900–2000 and 1970–2000 warming, respectively. The latter was also responsible for the substantial warm period that occurred in the 1940s and subsequent cooling observed globally from the 1940s to the 1970s (Scafetta, 2010, Scafetta, 2013). It has been observed that a synchronous quasi-60-year modulation appears in specific solar activity reconstructions over the last 150 years that closely match the quasi-60-year modulation observed in climate data (Connolly et al., 2023, Scafetta, 2023c, Soon et al., 2023). In general, all climatic oscillations from the inter-annual to the multi-millennial timescales appear to be spectrally coherent with solar and/or astronomical oscillations”
For me, that is the crux of the discussion. I look forward to others’ perspectives on that passage.
Why not just admit predicting the planet’s distant climate is beyond our capabilities? In any case, from a pragmatic standpoint, we cannot control the planet’s CO2 levels. So why lose sleep worrying about “what ifs”?
Instead, concentrate on providing the masses with reasonably clean, reliable, and reasonably priced energy. Better deal for the average citizen and the planet.
We have controlled the planet’s CO2 level – we have increased it dramatically. Why worry about “what ifs” – because the trajectory of nonlinear processes can only be controlled early on, before they become irreversible.
I wonder what solar cycle it is that has caused an essentially linear increase of global surface temperature of 0.8 C in 40 years? I wonder why we haven’t seen anything like it in the last 8000 years (since the last major glacial lake release, which results in a rapid heating only after an equivalently fast cooling)? I wonder why no solar variance that could explain such a rapid and large temperature change has been detected? I wonder why the AMO has never had a signature like this in its last 100 or so cycles? I wonder why the heat content of the ocean has increased so much over that same period when the AMO would be a rearrangement of heat content, not a large increase? I wonder why it is coincidental with the increase of CO2 from 335 to 423 ppmv over the same time period and that CO2 just happens to have a strong absorption in the water vapor transmission window? I wonder why a few apparently intelligent people can’t explain away these facts, but just try to ignore them?
As Nicola clearly states, the surface temperature record is contaminated; for reference, see recent work by Roger Pielke. Simples!
So, the high surface temperature is not because it’s, er, hotter, but because it’s contaminated. It’s not simple, or even ‘Simples’, which is why we have to be careful about any conclusions we draw.
Contaminated with what? Heat?
The contamination is the systematic bias. Corrections in one direction are hunted scrupulously, while corrections that would benefit the opposite direction are ignored. This is a problem mentioned in all science classes on day one.
Land station temperatures are biased upward by every type of human activity that is growing over time, whether it’s the heat from nearby structures or the added humidity greenhouse of local agricultural irrigation. This UHI and land use are left uncorrected, while there were very significant corrections helping the warming plot for 24-hr reading time changes and sensor type changes. This is a political dynamic, not a scientific one.
Hope you’ve discussed that with Roger. I’d be interested to hear how it went. Please let me know. Cheers.
Ron Graf,
“Land station temperatures are biased upward by every type of human activity that is growing over time, whether its’ the heat from nearby structures or the added humidity greenhouse of local agricultural irrigation.”
You show your bias by saying “temperatures are biased upward”. I think the proper description would be “temperatures are increasing”.
Seems to me that the atmospheric and oceanic temperatures are more germane as land surface temperatures are obviously impacted by urban heat islands, as well as outright manipulation of the data. Also, unclear how land temperatures have been recorded for the last 8000 years as most of that time nobody had thermometers.
When, at the moment, anthropogenic climate change is the major concern, I don’t understand why human-created heat islands are considered a contamination of the “signal”. Land surface temperatures are not that; station measurements measure the temperature of the lowest level of the atmosphere, which complements the many measurements of global atmospheric temperature at different altitudes by satellite microwave sounding:
https://images.remss.com/msu/msu_data_monthly.html
(note the “channel” – altitude selection at the top, also the time-series button, which will, in addition to the time series, show the altitude profile for each channel)
Sea surface temperatures are well measured by satellite and a plethora of buoys and research vessels make direct measurements of surface temperature as well as depth profiles – very important for accurate assessment of the (increasing) heat content of the ocean.
As for temperature measurement before thermometers, that is a subject for the well-developed science of paleoclimatology, which uses many cross-correlatable proxies for temperature. Most, but not all, use temperature fractionation of stable isotopes (e.g., hydrogen and deuterium, C-12 and C-13, O-16 and O-18). Obvious and well-known examples are ice cores and tree rings. Further details are beyond the scope of this comment, but I can recommend reading, if interested.
“I wonder why the AMO has never had a signature like this in its last 100 or so cycles? I wonder why the heat content of the ocean has increased so much over that same period when the AMO would be a rearrangement of heat content, not a large increase?”
Upper ocean heat increased from 1995 because the warmer AMO reduces low cloud cover.
https://www.ncei.noaa.gov/data/oceans/woa/DATA_ANALYSIS/3M_HEAT_CONTENT/GRAPHS/meantemp_0-700m.png
The AMO warming during the Oort solar minimum was pretty fierce:
https://media.springernature.com/m685/springer-static/image/art%3A10.1038%2Fs41598-017-13246-x/MediaObjects/41598_2017_13246_Fig2_HTML.jpg
“The AMO warming during the Oort solar minimum was pretty fierce”
I do like such quantitative descriptions, but only when a reference is given. Your comment is specific to SE Greenland, where during the Oort Solar Minimum a temperature increase of about 1.2 C was observed in the Kangerdlugssuaq Trough, while there was no increase at the Sermilik Fjord (also SE Greenland), but a ~3 C cooling there immediately following the OSM. I also note that there is no observable change in GMST for the OSM (ca. 1010-1080 CE) in the PAGES2K global reconstruction.
https://bpb-eu-w2.wpmucdn.com/blogs.reading.ac.uk/dist/3/187/files/2020/01/lia_mwp-1.png
The latter centennial solar minima were warmer at Sermilik. The AMO is normally warmer during centennial solar minima, because of an increase in negative North Atlantic Oscillation conditions.
I don’t think the solar minimum created a 5 C anomaly difference at two different places in SE Greenland.
I don’t think you are being honest about the data series. There was warming at Sermilik during the Oort minimum, back to the mean. The following Sermilik cooling peaked in the late 11th century, so your “5 C anomaly” argument is bogus.
Ulric,
Gee, all I had to go by was your clipped figure – no paper, no figure caption. Typical.
Ice core records are the best reference.
When it is warmer, ice accumulation is more until more ice causes climate to get cooler.
When it is colder, ice accumulation is less until less ice allows climate to get warmer.
In tracking global warming, we rely on averaged temperatures, a measure of central tendency. However, the intended purpose of averaging is to reduce variance and obtain a more representative figure for an entire sample. Paradoxically, when it comes to averaging temperatures, it seems to do the opposite: it increases variance. This stems from the incorrect assumption that the maximum and minimum recorded temperatures for a particular day represent the extreme deviations for an anomaly. In places like Denver, where temperature swings can be dramatic, a day with a high of 43°F and a low of 8°F yields the same average (25.5°F) as ‘normal’ winter days with highs of 37°F and lows of 14°F. Both scenarios are treated equivalently in climate science, rendering the average temperature meaningless. One central tendency is assigned for a day and then averaged with another central tendency for the next day, continuing until a monthly average is obtained. These central tendencies are essentially combinations. The more averaging that occurs, the more variance increases, and the true climate signal becomes obscured in the noise of the averaging process. This is all under the assumption that temperature is even being recorded as accurately as possible: we know from Anthony Watts’ work that’s not the case, especially considering the precision of these recorded numbers, down to tenths or sometimes hundredths of a degree.
I don’t understand why an average temperature is useful for climate or weather, either locally or globally, but I’m not a good enough mathematician to know what the correct analysis is. I like your paragraph about averaging to reduce variance (central limit theorem), but weather and climate are surely about the flow of heat and mass, and therefore differences, which an average will ignore, are just as important.
I must try to understand the difference between climate and weather.
You nailed it. Averaging data in the context of atmospheric measurements involves dealing with chaotic and unpredictable factors. Each measurement is influenced by specific conditions like wind patterns, sun angle, precipitation, cloud cover, proximity to water bodies, and more. As such, the non-random uncertainty in this case means that variance will increase. If there were a cooling trend happening, the GAT wouldn’t detect it.
I share your opinion in that there seems to be no ‘smoking gun’ for determining what is truly happening to the climate. Yet we hear assertions that the science is settled and a consensus exists. Of peculiar interest to me are the adjustments applied to the surface temperature record. Pairwise homogenization involves identifying biases using a ‘neighbor’ station, considering factors like elevation changes or UHI effects. The assumption is that the bias in the time series can be corrected using what is believed to be a homogeneous record (another incorrect assumption). If a station relocates to a higher elevation, the logic involves subtracting the error value determined by analyzing the bias against a neighboring station. That is the dumbest logic ever. The world isn’t one-dimensional, and temperature doesn’t work uniformly. As I said above, each measurement has its unique context, and topography significantly influences it. To fix that error, you would need a time machine. Once a measurement opportunity is missed or flawed, there’s no going back. Despite these apparent flaws, their algorithm somehow generates a hockey stick chart.
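For context, here is a toy sketch of the neighbor-difference idea being criticized; it illustrates the concept only, not NOAA’s actual pairwise homogenization algorithm, and the noise levels and breakpoint are invented:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(480)  # months
climate = 0.001 * t + 0.3 * np.sin(2 * np.pi * t / 12)

# Two nearby stations see the same climate plus independent noise.
target = climate + rng.normal(0, 0.1, t.size)
neighbor = climate + rng.normal(0, 0.1, t.size)

# Simulate a station move introducing a +0.8 K step at a known breakpoint.
break_idx = 240
target[break_idx:] += 0.8

# The shared climate cancels in the difference series, exposing the step.
diff = target - neighbor
step = diff[break_idx:].mean() - diff[:break_idx].mean()
adjusted = target.copy()
adjusted[break_idx:] -= step

print(f"estimated step: {step:.2f} K (true value 0.8)")
```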
Perhaps a review of how temperature measurements are made is in order.
https://www.ncei.noaa.gov/access/crn/measurements.html
Thanks for the reference – good process as I would expect – averaging the temperature at a site over seconds, minutes, hours makes sense. But after that I don’t understand averaging as being very useful. Extremes, standard deviations, timeseries offsets make sense but the analysis will be crazy difficult from my limited experience especially on a global basis, hard enough for 1,000,000 m2
Temperature relates to energy, energy budget, and energy balance over climatic periods of time, whether seasonal, decadal, centennial, or millennial. And averages are easier to understand, particularly over longer periods. :-)
I suppose I don’t mind the hockey stick except I don’t know what it means. Your question has thrown up a great resource with something closer to real data https://images.remss.com/msu/msu_time_series.html
I am still learning the glossary but that’s easy now with google and copilot.
I’m shocked that in this day and age, with something that’s world-shattering, the data isn’t fully transparent. It feels like it’s done on graph paper – and that’s unfair, as the link is much better than paper.
All good science is explained simply but in full depth at the same time – I’m fumbling around learning about the climate – it’s a great adventure.
My brain works with pictures so a 3D representation of all the data sets would be great. The data link is 11 channels for the globe which is still very averaged and that feels wrong as it’s a full 3D rather than 2.5D problem.
Not everything is averaged. The maximum temperature is often used to scare us. A simple transition from mercury-based min-max thermometers to modern electronic devices boosted max readings by about 1 degree C. An electronic thermometer records a second-long hot air gust, which the min-max does not record.
ganon,
Averages don’t provide any understanding in this context. The climate does not adhere to a crucial assumption of the Central Limit Theorem: that the measurements are independent and identically distributed, i.e., that each temperature measurement is not influenced by the others and that they all come from the same underlying distribution. In reality, the chaotic nature of climate introduces unique contextual factors at individual weather stations, including topography, coastal influences, snow cover, and urban heat island effects, creating a virtually infinite list of variables. The instrumentation itself also violates that same assumption: temperature sensors require regular maintenance, which undoubtedly affects the readings, and there is likely no set pattern for when these sensors are updated. The U.S. temperature index relies heavily on volunteer observations, where sensor replacements lack uniform schedules. There are also varying observation timings among volunteers that further contribute to differences in recording maximum and minimum temperatures. Feel free to correct me, but before around 1950, the United States had an overrepresentation of weather stations globally. The GAT index has absolutely no resemblance to the real world.
Climate science is either unaware or deliberately ignores this. They think that uncertainty can be solved with some gray-shaded coloring they call error bars.
Walter,
“Averages don’t provide any understanding in this context.”
Garbage. Temperature is linearly related to energy content, and longer-term averages help us understand Earth’s energy balance and the trajectory thereof.
As for the unreliability of ground station measurements: maybe, maybe not, but we can’t change the past. Even if minor adjustments are made through scientific reanalysis, there are immediate (and false) claims of faking the data to support an agenda. Personally, I’m much more interested in the satellite measurements that have been available for surface temperatures since 1979 (and atmospheric layers since 1992) with no doubts about coverage or local inaccuracies. It is also the period when climate change has become most obvious and overlaying internal oscillations are better understood.
Basically, I do not subscribe to the doubter position that all data that is not completely understood (by them) should be ignored. However, you are free to do as you please.
ganon,
Simplistically speaking, you are correct in saying that temperature is linearly related to energy content. Perhaps in a controlled setting like a laboratory, the distribution of the sampled means—in this case, the averaged temperatures—would approach a Gaussian distribution; I’m not sure about that one, but it’s not relevant to my main point: the nonlinear nature of the planet’s climate dynamics introduces intricacies that go beyond that simple linear relationship. And that’s the whole reason attempting to average temperature at a single weather station violates the critical assumption of CLT. You didn’t address that. This applies to both satellites and near-surface measurements. Satellite measurements are not direct temperature measurements, and they are not exempt from inaccuracies such as calibration drift and orbital decay. Also, error bars are not adjustments made to the data; they represent uncertainty associated with the measurements. They provide a range in which the true value is likely to fall.
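A neutral numerical check on the iid point: for data with uniform pairwise correlation rho, the variance of the mean is sigma^2/n * (1 + (n-1)*rho), larger than the iid value sigma^2/n but still smaller than sigma^2, so averaging non-iid data reduces variance more slowly rather than increasing it. A small simulation (a sketch assuming equicorrelated Gaussian noise) bears this out:

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho, sigma = 30, 0.3, 1.0

# Equicorrelated covariance: sigma^2 on the diagonal, rho*sigma^2 elsewhere.
cov = sigma**2 * (rho * np.ones((n, n)) + (1 - rho) * np.eye(n))
samples = rng.multivariate_normal(np.zeros(n), cov, size=200_000)

means = samples.mean(axis=1)
print(f"simulated var of mean:  {means.var():.3f}")
print(f"iid sigma^2/n:          {sigma**2 / n:.3f}")
print(f"sigma^2/n*(1+(n-1)rho): {sigma**2 / n * (1 + (n - 1) * rho):.3f}")
```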
Walter,
“They provide a range in which the true value is likely to fall.”
That is true for all scientific measurements.
It is fascinating that in the early 2000s, when there appeared to be a pause in warming, no skeptics were complaining about averaging temperatures. Now, when the warming appears to be accelerating, or at the very least continuing unabated, it’s “who cares about global mean temperature”? Also interesting is that when extreme local heatwaves are noted, or increasingly powerful hurricanes, these too must be ignored because they are merely local events, or cannot be attributed to global warming. So… we are now not allowed to use any metric whatever to measure the climate! Well done!
DanB. Back up that assertion “increasingly powerful hurricanes”. Don’t quote dollar damage figures, that’s not a measure from physics. How about total hurricane energy from hurricanes and typhoons around the world?
“It is fascinating that in the early 2000s when there appeared to be a pause in warming, no skeptics were complaining about averaging temperatures. ”
Not correct. See here: https://www.fys.ku.dk/~andresen/BAhome/ownpapers/globalTexist.pdf
Dr. Curry has already debunked the myth of stronger hurricanes, among other things. But when has an activist ever stayed in their lane, or bottom line, cared about truth?
“The troposphere contains 75 percent of the atmosphere’s mass (on an average day the weight of the molecules in the air is 14.7 lb/sq. in.) and most of the atmosphere’s water vapor. Water vapor concentration varies from trace amounts in polar regions to nearly 4 percent in the tropics. The most prevalent gases are nitrogen (78 percent) and oxygen (21 percent), with the remaining 1 percent consisting of argon (0.9 percent) and traces of hydrogen, ozone (a form of oxygen), and other constituents. Temperature and water vapor content in the troposphere decrease rapidly with altitude. Water vapor plays a major role in regulating air temperature because it absorbs solar energy and thermal radiation from the planet’s surface.
The troposphere contains 99% of the water vapor in the atmosphere. Water vapor concentrations vary with latitudinal position (north to south). They are greatest above the tropics, where they might be as high as 3%, and decrease toward the polar regions.”
In winter, the height of the tropopause decreases and above 60 degrees latitude is an average of only 6 km.
In January, the polar vortex will break up, and numerous stratospheric intrusions of dry air with plenty of ozone (which will turn out not to be a greenhouse gas, but quite the opposite) will descend over the US. Extremely low temperatures will occur in Scandinavia (about -40 C), as the polar vortex will move over Siberia.
https://www.cpc.ncep.noaa.gov/products/stratosphere/strat_int/gif_files/gfs_hgt_trop_NA_f000.png
North America is in for a very cold January.
Thanks, Ren.
Is the “consensus” in science scientific? Does it mean “I know I know nothing”?
https://i.ibb.co/DgGP2mP/ozone-hole-plot-N20.png
https://i.ibb.co/82qR3KH/cdas-sflux-ssta-global-1.png
I think it means, I know nothing as absolute truth or with absolute accuracy; however, this is quite different from not knowing anything.
Certainly consensus isn’t scientific.
There are many reasons that help define this truism. One who is a self-described activist is an example; they fall into the unscientific bucket.
For more context see:
https://judithcurry.com/2023/11/17/a-bad-recipe-for-science/
Jungletrunks
Most of us agree that for most pendulums
T = 2π √(l/g)
This is a consensus, so surely your pendulums behave differently.
There is scientific consensus that the rest mass of an electron is 9.109 383 7015 x 10^-31 kg; however, there is uncertainty in the last two digits – does that mean we don’t know anything about electron mass?
Trunks, since you address that to me, I’ll answer: The only activist I have ever claimed to be is an activist for scientific rigor and proper use of the scientific method. I think that puts me inside the bucket, while your lack of rigor in defining what kind of activist puts you on the outside.
Since I wasn’t absolutely clear, my mistake, I was referring to “climate science consensus”. It seemed obvious (at the time), what I was referring to.
You agree with the peer-reviewed science you like, and you reject the peer-reviewed science you don’t like, or any authoritative science narrative you don’t like, including Dr. Curry’s analysis. You categorically deny that there’s any motivated reasoning in the consensus narrative, even though there’s abundant evidence that there is. You’re in the political activist bucket. Again, peruse: https://judithcurry.com/2023/11/17/a-bad-recipe-for-science/ While you won’t acknowledge it, the corruption of consensus is demonstrable. Your failure to acknowledge the obvious places you in the biggest bucket, the political activist bucket.
I have shown two examples that demonstrate that certain theses have been accepted as certain in climate science, while observations contradict them. First, the ozone hole varies with the strength of solar flares and ozone production in the upper stratosphere and with the strength of the polar vortex.
Secondly, one can see a significant difference in sea surface temperature in the two hemispheres at middle and high latitudes, which can be linked to the Earth’s position relative to the sun and the amount of solar radiation available. Thus, it is impossible to speak of “global” warming.
Good information on atmospheric gas concentrations. I would also point out that there is typically 50 times more water vapor in the atmosphere than CO2, and that is why nighttime land temperatures are generally higher in areas with overcast skies. Also, human activity is not the only source of CO2, as all green plants produce CO2 when they respire in the dark. Do the models reflect that fact?
CO2 released during darkness by respiration is all reabsorbed by photosynthesis the next day, unlike human emissions which are released and accumulate. What do they teach in these schools?
Did they teach you the Michaelis-Menten enzyme kinetics equation either at school, college or during adult life?
If they had, you might understand that plants photosynthesise faster with more carbon dioxide in the air, so they create more vegetation whilst helping to reduce carbon dioxide levels in the atmosphere.
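For reference, the Michaelis-Menten rate law invoked above is, reading $[S]$ loosely as the CO2 concentration (an illustrative mapping only; real photosynthesis has several limiting steps):

$$v = \frac{V_{\max}\,[S]}{K_m + [S]}$$

The rate rises with concentration but saturates once $[S]$ is well above $K_m$.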
Of course, if you cut down all the trees, rip up all the savannahs AND guzzle oil, then yes, carbon dioxide levels will rocket.
But there’s zero evidence in history that plants don’t absolutely love carbon dioxide, certainly up to 1000ppm if not higher.
Plant root zones were reported at the US Biodynamics Conference a couple of years ago to have 3000ppm carbon dioxide present, which does suggest that it isn’t a plant poison…..
Both the biosphere and the hydrosphere are net sinks (not sources) for CO2 but can’t keep up with anthropogenic emissions (they absorb only 60-70% of them), and that can be expected to worsen as the ocean warms and solubility decreases. The ocean is particularly important in the longer term, because the cold, deep ocean basins hold large saturated reserves of carbonate/CO2, but warm only very slowly (ain’t seen nuthin’ yet).
As a temporary break from the discussion, I would like to take this opportunity to thank Judith Curry for all the stimulating material she has provided us with this past year, not the least of which was her own book!
I wish her all the best in 2024 and hope that she will continue to provide us with more to chew on.
I agree!
I agree as well. Judith Curry is a hero of mine.
Me, too.
I totally agree too. Judith does a fantastic job with this blog.
Paul Winstone raised a question about the average world temperature. I would urge him to read chapter 4 of Alan Longhurst’s recent book “Doubt and Certainty in Climate Science”, incidentally a book recommended by Judith Curry. It will offer a well thought out alternative point of view.
I’m not a book reader but I do love a 00Mb dataset and some charting (2D + colour + shape + some boxes). I will take your suggestion, reinforced by my son, and knuckle down and read the chapter.
I have started reading the chapter and it covers much of what I assumed. It’s not getting better, as he states there is no formal definition of global average temperature in the IPCC reports, which seems amazing if not surprising, as I can’t imagine a sensible, useful definition. Most of the energy in the system isn’t in the air, as air has a low specific heat, and the air temperature doesn’t relate directly to land or sea temperature. I’m one page in and my brain has frozen in thought, which is when my reading slows even further, as I’m not clear how I’m going to make sense of the next sentence. I will continue.
I suggest looking at:
“Berkeley Earth Temperature Averaging Process”
Since Dr. Curry is 2nd author, perhaps she can answer any questions you might have.
https://images.procon.org/wp-content/uploads/sites/18/berkeley-earth-temperature-averaging-process.pdf
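To see the basic mechanics, here is a minimal sketch of the standard cosine-latitude area weighting used when averaging a gridded field; this is the textbook idea only, not Berkeley Earth’s kriging-based method, and the data are random placeholders:

```python
import numpy as np

# Fake gridded anomaly field on a 5-degree grid (random placeholder data).
lats = np.linspace(-87.5, 87.5, 36)
lons = np.linspace(2.5, 357.5, 72)
field = np.random.default_rng(1).normal(0.8, 0.5, (lats.size, lons.size))

# Grid-cell area shrinks toward the poles in proportion to cos(latitude),
# so each latitude band is weighted accordingly.
weights = np.cos(np.deg2rad(lats))
global_mean = np.average(field.mean(axis=1), weights=weights)
print(f"area-weighted global mean anomaly: {global_mean:.2f} K")
```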
I have found my way to the Berkeley Earth data and it’s a brave attempt at creating a dataset from very sparse information. Clever people. Not sure I could do better. But at least I now know how bad it is.
I just thought that, as we agree, averaging is a terrible mathematical algorithm that I would be embarrassed to be using. It’s surprising that somebody hasn’t come up with a better algorithm, one that should point to a far worse story, because if variance drives climate and extreme climate events, then that increased variance should be in the data. I realise that I have no idea how you assess climate, as the temperature and chemical composition vary significantly spatially within the atmosphere. So I’m going to have to read all of the chapters of this book with my brain switched on. A typical approach in data analytics is to mean-centre the data or use absolute differences rather than signed differences, looking at the magnitude of change, and I’m sure that’s been done already. I will just have to find the reference. I hope I can find a tool to be able to surf the Berkeley data, both spatially and by magnitude.
What puzzles me is why nobody has done the math (correct me if I am wrong) proving (or disproving) that manmade CO2 is the controlling factor in global temperatures. I have been calculating it (roughly) thusly: Greenhouse gases are 4% of the atmosphere. CO2 is 4% of that. Manmade CO2 (from fossil fuels) is 4% of that. So, we can control global temperatures by changing the composition of the atmosphere by 64 ppm. Maybe, until you recognize that the US is only 20% of that, and most “green energy” proposals reduce that by at best 30%, so now you are realistically talking about 4 ppm being the “global thermostat” when the normal seasonal variation of CO2 is about 5 ppm, highly dependent on latitude.
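Restating the comment’s arithmetic chain explicitly (all percentages are the commenter’s assumptions, not established values):

```python
# Reproducing the commenter's arithmetic chain; all percentages are the
# commenter's assumptions, not established values.
atmosphere_ppm = 1_000_000
ghg = atmosphere_ppm * 0.04   # greenhouse gases: 40,000 ppm
co2 = ghg * 0.04              # CO2 share: 1,600 ppm
manmade = co2 * 0.04          # fossil-fuel share: 64 ppm
us_share = manmade * 0.20     # US portion: 12.8 ppm
reduction = us_share * 0.30   # a 30% cut: ~3.8 ppm
print(manmade, round(reduction, 1))  # 64.0 3.8
```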
You should also look at the MAGICC climate model and note that, under any reasonable scenario, if we curb fossil fuels then, depending on how deep the cuts are and how much of the world participates, we can reduce year-2100 temperatures by somewhere between 0.02 and 0.37 degrees C! We cannot measure “global temperature” that accurately!
I think the thing you missed is that manmade CO2 has increased atmospheric CO2 by 50%. The fact that some of it is cycled through reservoirs (which, as sinks, cannot keep up) does not change that.
The big problem is the extension of winter conditions in the northern hemisphere until April, even though the Earth begins to move away from the Sun in its orbit after January. Warm oceans in the northern hemisphere will produce large amounts of snow in the first part of winter, and low temperatures will continue until April.
Ren,
Thanks for your climate input and observations.
Always interesting and welcome.
Keep it coming!
Climate science cannot apply the scientific method to climate. There is no control Earth. So there is no way to run an experiment. And no, climate model output is NOT physical and isn’t data. It’s only climate model output.
We have only a paltry amount of actual data about climate, much of which has to be inferred from proxies. Proxies are regional, not global.
Steps of the Scientific Method:
Purpose/Question: ask a question.
Research
Hypothesis
Experiment
Data/Analysis
Conclusion
Machine learning does the following on MET and climate data:
Data/Analysis (learning)
Reverse Engineering
Conclusion
It does work where other approaches fail
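As a toy illustration of the “learning” step described above, fitting a trend from data alone; the series is synthetic, not a real MET dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic annual anomalies: a linear trend plus weather noise.
years = np.arange(1979, 2024)
anomaly = 0.018 * (years - 1979) + rng.normal(0.0, 0.1, years.size)

# "Learning" step: fit the trend from data alone, no physics supplied.
slope, intercept = np.polyfit(years, anomaly, 1)
print(f"learned trend: {slope:.3f} K/yr")
```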
Paul – please supply references for the ML attempts to explain climate.
(1) Experiments do not require control or simplification. They may simply be designed as passive observation to test the hypothesis; both astronomy and climate fit in this category. Also, the experiment may be retrospective, e.g. “If we add a lot of polyatomic molecules to the atmosphere, will it cause warming?” The answer is also confirmed by a large number of controlled laboratory experiments.
(2) Climate is statistical analysis of local (point) measurements. This does not mean they cannot be made regionally and globally. This applies to both direct and proxy measurements.
Your denials of the amount, quality, and location of (paleo)climate measurements are nonsense.
Quite so. In fact, there is only one universe, so there can be no science as there is no control.
Let us stop pretending we know anything at all and save ourselves time and money.
Experiments need not have control or controls to be scientific, sometimes critical observation is an adequate experiment. To wit, Astronomy (and Climate). Also, there are plenty of laboratory experiments that support climatology, e.g., GHE and fluid dynamics.
Jim2
Individual proxies are regional, but they can be assembled all around the world. It’s not just in Siberia, Greenland and Antarctica that proxies exist….
No matter the work-arounds, those work-arounds are less certain than experiments that can follow the six steps strictly. Sure you don’t have what you don’t have, a second Earth for example, but that doesn’t make the work-arounds the best scientific method either.
Let us know when you have access. Until then, we work with (not around) what we have.
I just wonder: why use the term “pre-industrial” and not “since the Little Ice Age”?
Accepting the term “preindustrial” implies that industry is the culprit. It is like accepting the term “atomic energy”, a Greenpeace propaganda term, for nuclear energy.
Michael Novosad | December 30, 2023 at 5:30 pm | Reply
I just wonder: why use the term “pre-industrial” and not “since the Little Ice Age”?
Michael – it’s a very simple scientific reason.
The industrial age started in the late 1700s; the Little Ice Age ended in the mid-1800s.
By using the term “pre-industrial” , climate scientists can blame the increase in CO2 from 280ppm to 281ppm as the cause for the end of the LIA.
Sarcasm noted and typical – if you don’t understand something, try to make an (attack) joke out of it. Actually, climate scientists consider the LIA (and MWP) to be regional (at different times), based on subtle hemispheric changes in insolation affecting ocean currents.
https://www.nature.com/articles/ngeo1797
https://stephenschneider.stanford.edu/Publications/PDF_Papers/Bradley.pdf
Maybe because the end of the little ice age is poorly defined and not linked to human activity, as well as extending past the start of the industrial age with the invention of the steam engine. They are distinct but (partially) overlapping eras.
I would humbly suggest a date (like 1850 or so) or “since modern measurements started”.
But that would have less propaganda value.
Jim2,
To both your points: first, it’s not my claim that CO2 emissions are levelling off, but rather the presumption in the “moderate” forecast of warming in the article. If you think we should be less conservative and expect CO2 emissions to continue to increase, I have no dog in that fight.
As to hurricanes becoming more powerful: first of all, models never predicted increased numbers of hurricanes; see the link attached, one of many. The trend is still fairly new, so it is fair to say it could be a temporary anomaly, but we know for a fact that oceans are warming and we know for a fact that hurricane potential intensity correlates highly with ocean temperature, so it is more likely that the trend we’re seeing will persist, driven by warmer oceans, than that it is a random blip. Of course, warmer oceans could be counter-balanced by increased wind shear or increased atmospheric dust. Hurricane strength is driven by many factors.
https://www.epa.gov/climate-indicators/climate-change-indicators-tropical-cyclone-activity#:~:text=According%20to%20the%20total%20annual,during%20the%201950s%20and%201960s.
DanB
A few reasons why the prediction that increasing SST will make hurricanes more intense is dubious (possibly scare tactics without true scientific basis?):
A) It’s not the warming SST that would increase intensity; it’s the delta between air temps and SST that affects intensity.
B) There have been approximately 150+ years of increasing SST with no long-term increase in hurricane intensity (after adjusting for observational deficiencies in the early years, i.e., pre-satellite, pre-heavy-shipping days). Several studies claiming the SST increase will cause an increase in intensity readily admit the aforementioned fact (though sheepishly).
C) Judith Curry, who is vastly more knowledgeable on hurricanes, posted an article of hers a few years ago pointing out a fairly high correlation of the cyclical pattern of hurricanes, both frequency and intensity, with other factors (possibly the AMO, though I don’t recall the other factors with reasonably high correlation). I am open for someone to pipe in with the factors J. Curry mentioned.
There are probably not enough hurricanes for definitive analysis of frequency and intensity; however, something that is often neglected in these discussions is the rate of intensification (and reduced predictability) of hurricanes that seems to be related to climate change, at least in the Atlantic basin; an important factor for preparedness that merits further study.
https://www.nature.com/articles/s41467-019-08471-z#ref-CR7
I’d recommend you go back in the CE archive, circa 2019; Dr. Curry wrote more than a half dozen essays about hurricanes. There’s no demonstrable rate of hurricane intensification that can be attributed to AGW.
Here’s one of the essays; it’s a more broad-stroke article that analyzes methods of study:
https://judithcurry.com/2019/06/13/extremes/
The EPA paper that Dan presented is a classic example of using the appearance of science to seed the narrative; it’s not for advancing science, but rather for creating a beachhead for media to exploit so that they can “honestly” flesh out the narrative to the public. Published science, after all, is above reproach; especially so when the government publishes it. The paper source is all the media needs to run a fear campaign on. The paper’s rigor is a joke.
I quoted you DanB. It was you who said “increasingly powerful hurricanes” here: https://judithcurry.com/2023/12/29/realistic-global-warming-projections-for-the-21st-century/#comment-997895
So you retract that statement??
jungletrunks,
Hurricanes have gotten stronger in the Atlantic; this is a fact. See my post and above link. If Dr. Curry has “debunked” this then she is mistaken. But she may have been referring to the Pacific, where the word ‘hurricane’ is not generally used; there, we have not seen any trend toward stronger tropical cyclones.
Dan, The paper you posted indicates no growth in the number of hurricanes reaching the US since 1880.
It states: “Records of tropical cyclones in the Atlantic Ocean have been collected since the 1800s”. Concise for peer review selling a product.
It states: “Some hurricanes over the ocean might have been missed before the start of aircraft and satellite observation” It’s possible there were actually more hurricanes in the past, right?
It states: “Wind speed collection methods have evolved substantially over the past 60 years, while aircraft reconnaissance began in 1944 and satellite tracking around 1966. Figure 1 shows how older hurricane counts have been adjusted to attempt to account for the lack of aircraft and satellite observations.” That’s a confidence builder for accuracy?
Dr. Curry on the subject of hurricanes, 2019:
https://judithcurry.com/2019/09/10/dont-overhype-the-link-between-climate-change-and-hurricanes/
https://judithcurry.com/2019/09/07/alarmism-enforcement-on-hurricanes-and-global-warming/
https://judithcurry.com/2019/06/13/extremes/
https://judithcurry.com/2019/03/11/hurricanes-climate-change-21st-century-projections/
https://judithcurry.com/wp-content/uploads/2019/09/sr-hurricanes-6-v2.pdf
Another author, 2020:
https://judithcurry.com/2020/11/17/slower-decay-of-landfalling-hurricanes-in-a-warmer-world-really/
You’re going to need to spike the Kool-Aid in your presentation with a special tonic, “see the light” potion, or find something better. I know when something smells half baked, but do you?
Jungletrunks,
I read the first of your links. In fact Dr. Curry does not refute the possibility that hurricanes have gotten stronger due to AGW. Here’s a quote: “There is some evidence suggesting contributions from man-made climate change to: an increase in the average intensity of the strongest hurricanes since the early 1980s; an increase in the proportion of hurricanes reaching Category 4 or 5 in recent decades; and the increased frequency of Hurricane Harvey–like extreme precipitation events in the Texas region. There is also evidence suggesting a decrease in how fast hurricanes move, but that has not been attributed to man-made climate change with any confidence. The WMO Report states that there is disagreement among the authors about whether these trends reflect the influence of man-made climate change.”
So her view, while a bit more skeptical than mine, is not so very different. I agree that we need more data before we can draw a firm conclusion as to why and whether it will continue, but the data right now shows that Atlantic hurricanes have gotten stronger.
I’ve read the qualification remark:
“…but that has not been attributed to man-made climate change with any confidence.”
The comment speaks for itself. We’re talking about distinctions and subtleties in data that are very difficult to discriminate. I land where the data takes science conclusively. Ambiguity relegates the question to the inconsequential. You don’t agree?
Trunks,
Here is the requested evidence that landfalls, all hurricanes, and major hurricanes have increased with statistical significance.
https://www.nature.com/articles/s41467-021-27364-8/figures/1
Thanks to joethenonclimatescientist for bringing this information to light.
Could you point out the last peer reviewed paper by Dr. Curry with respect to hurricanes, please?
DanB | December 30, 2023 at 6:39 pm | Reply
jungletrunks,
hurricanes have gotten stronger in the atlantic. this is a fact. see my post and above link. if dr. curry has “debunked” this then she is mistaken.
DanB – Curry has absolutely been debunked if you ignore cherrypicked start dates and observational deficiencies. Thanks for pointing out the biases in climate science.
Joe, maybe you should read the referenced articles before you make your typical knee-jerk (unsupported) denials and attacks. You do climate change skepticism a disservice.
https://www.nature.com/articles/s41467-021-27364-8
Joe, thanks for your reference. I thought a brief description would be in order for those who don’t bother to read it. Readers should not be fooled by the “downscaled” in the title.
The paper shows that, for the Atlantic basin, landfalls, hurricanes and major hurricanes have ALL increased over the last 150 years, both in raw counts and after correction for under-observation in the early part of the record. In particular, landfalls and major hurricanes show a sharp (~2x) increase over the period 1990-2010. See Fig. 1 in:
“Atlantic tropical cyclones downscaled from climate reanalyses show increasing activity over past 150 years”
https://www.nature.com/articles/s41467-021-27364-8/figures/1
Observational deficiencies are an understatement for the paper in question. The number of written caveats is quite stunning. Too bad Davy Jones’s locker can’t offer up good observational data to help them fill in the massive historical dearth of data; though reanalysis springs eternal, it’s actually magical – the ghost-of-hurricanes-past kinda stuff.
Interestingly, compare the historical landfall hurricane numbers to the mostly empty EPA paper chart. See the magic with the 1990-2010 numbers? Cherry-pick alert.
Landfalling hurricanes that hit the US have been in a downtrend since the 1890s. That includes all landfalling hurricanes, and also just the strong landfalling hurricanes considered separately.
Your opposing claims are lies.
What is your evidence, other than your imagination, that refutes:
https://www.nature.com/articles/s41467-021-27364-8/figures/1
Calling others liars, without evidence, is quite typical of liars.
Atlantic tropical cyclones are normally more intense during a warm AMO phase, and a warm AMO phase is normal during a centennial solar minimum.
Rising CO2 forcing is expected to increase positive North Atlantic Oscillation conditions, which in theory would drive a colder AMO.
Maybe someone can explain how you all fawn over Dr. Curry as the second coming of all things climate-related, and yet contradict her all the time without realizing it. For example, in article after article she clearly accepts the notion of an accurate and useful global mean temperature. She posted at length with an explanation of the anomalous warmth in the second half of 2023, comparing it, to a tenth of a degree, to February of 2016, and drawing inferences based on this comparison. Yet you strongly believe this is impossible! Where is your outrage at her? Why don’t you argue with her that her precision is impossible? Ohhhh! Wait! It’s because when that precision makes the point you want to make, it’s actually completely, totally fine!
I don’t agree with your interpretation. Dr. Curry, from my understanding, doesn’t necessarily advocate for a particular stance on climate issues but rather engages in critical analysis of arguments within the mainstream climate community. This involves scrutinizing various aspects, such as averages, El Niño, CO2 budgets, and climate models. The intent isn’t to blindly endorse or contradict specific points but to encourage a nuanced examination of the science. The recent post you mentioned is an example of her analytical approach. It doesn’t imply an unwavering acceptance of precision but rather a critical evaluation of the information at hand.
I don’t like to get into personal comments, but DanB’s comment is offensive. We do not fawn over Dr. Curry, but respect her as someone who is always open to new ideas and has integrity. She is someone who has always let the science lead her. If you read her book about the complexity of climate science, you might understand why it is entirely possible not to agree on every detail.
Incidentally, one of her strongest points is that she tolerates your kind of comment on her website!
Fair enough. So you all look up to her, and yet utterly contradict her without realizing it when you deny the ability to measure a global mean temperature, or the utility of it, because she uses it repeatedly. It is not an issue of not agreeing with her on every point. It is a failure to recognize any consistent belief system other than “I reject x”. One of you postulates y and the next postulates z, and you are both happy with each other because both of you reject x, but you seem to ignore that your own presumptions are entirely contradictory. If you truly don’t believe we can measure global mean temperature, or that it is not a useful metric, virtually every post on this blog is meaningless!
I suggest an experiment for IPCC policy makers:
1. Take two 1 hectare fields in a subtropical place.
2. Raze one of the fields to the ground, churn it up and leave it.
3. Plant pioneer trees in the other field, ensure that they receive rainfall water to establish and gradually add elements of a food forest in between the pioneer trees.
4. Measure the temperature within the established food forest and in the bare field and compare over a 10 year period.
I think we will find that you can ‘reduce’ ‘land temperature’ by 2-5C+ in the heat of the day and mitigate the cold as well simply by growing vegetation rather than destroying it.
This is by far the simplest, best understood technology available to mitigate ‘extreme heat’ known and along the way it provides way more to eat, allows for way more biodiversity and it creates soil rather than destroys it.
It doesn’t matter whether the temperature drops by 2C, 4C or 6C quite frankly: it just matters that a productive hectare of land was created, in contrast to the dead land created by ploughing it all to death.
Of course, the VCs would hate this as it avoids the need for all their varied and unnecessary ‘new technologies’.
When they aren’t necessary and don’t add value, they are merely ‘something ventured, nothing gained’.
Outdoor THERMOMETERS don’t measure the outdoor air temperature!
–
Outdoor thermometers measure the outdoor TEMPERATURE, so that we know what to put on and can dress according to the outdoor temperature.
–
Outdoor thermometers give us some very valuable information about the outdoor thermal conditions (whether it is cold or warm outside the house).
By experience we know what to put on according to the measured outdoor temperature.
–
Outdoor thermometers do not measure the actual air temperature. It is impossible for an outdoor thermometer to measure the outdoor air temperature.
–
Indoors, the enclosed air is in thermal equilibrium with the walls surrounding it. Thus what an indoor thermometer measures we rightly consider to be the air temperature as well.
–
When a thermometer measures the outdoor temperature, the outdoor air is not in thermal equilibrium with its surroundings.
–
Thus it is impossible to measure the outdoor air temperature with a thermometer.
–
Air is a thin medium, and air is known to be a good insulator. Even in the shade, a thermometer is subjected to reflected solar energy and/or emitted IR energy from its surroundings.
–
If we remove the air from a room, the readings on the indoor thermometer will not change.
–
On the airless Moon, in shade, a thermometer will still read a temperature.
–
Conclusively, when we put a thermometer out of the window, it doesn’t measure the air temperature.
–
https://www.cristos-vournas.com
Any body exposed to sunlight at a certain angle (including outdoors in the shade) does not show the air temperature but its own temperature, which depends on the material it is made of.
For example, on a slope set at a high angle, we will feel strong radiation even in winter.
Thank you, Ireneusz!
Satellite measurements give a good indication of how the air heats up as a function of density. Measurements always indicate the weakest radiation at 100 hPa.
https://www.cpc.ncep.noaa.gov/products/stratosphere/strat-trop/gif_files/time_pres_TEMP_MEAN_ALL_EQ_2022.png
https://www.cpc.ncep.noaa.gov/products/stratosphere/strat-trop/gif_files/time_pres_TEMP_MEAN_ALL_NH_2022.png
Look at a picture of an official temperature measurement station – it’s more than a thermometer.
ganon,
> “Look at a picture of an official temperature measurement station – it’s more than a thermometer.”
–
ganon, do you refer to the standardised Stevenson shelters?
–
https://www.cristos-vournas.com
Nickola … thank you for an interesting paper. Happy New Year!
Nicola … my bad.
I’m not aware that anyone has said that we cannot measure or try to measure the temperature of the earth. What is open to question is whether the measurements currently being employed are sufficient for the task, at least in the way that they are being presented. To indicate that we can detect temperature anomalies to four or more significant figures and speak confidently about their meaning is definitely open to question.
Jay, HadCRUT 5.0.2.0 does statistical analysis on multiple datasets and shows that, over the satellite era, monthly global averages have an uncertainty of about 0.05 C, 0.03 C for annual averages (95% confidence limit). If you’d like to use one standard deviation for weighting or error propagation, those values can be divided by 2. By standard practice, the values and uncertainties should be reported to one or two places beyond the last significant figure to avoid truncation errors. HadCRUT reports to more places (7 or 8), but that doesn’t really hurt anything.
Here is a typical data point, as reported (year-month, anomaly, lower bound, upper bound):
2023-04,0.92748064,0.87642294,0.97853833
Note that the upper and lower limits are equidistant (0.05106) from the central value, indicating they are using normal Gaussian statistics. I would report this value as 0.9275(26) C and specify 1σ standard uncertainty. Anyway, reporting to 0.001 or even 0.0001 C is justified.
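A minimal Python sketch of that arithmetic, using the sample row above (the division by two assumes the quoted 95% interval is symmetric and roughly Gaussian, i.e. 95% ≈ 2σ, as described):

```python
# Sketch: convert a HadCRUT-style row (date, anomaly, lower, upper 95% bounds)
# to a one-sigma uncertainty, assuming a symmetric, roughly Gaussian interval.
row = ("2023-04", 0.92748064, 0.87642294, 0.97853833)  # sample row quoted above

date, anomaly, lo, hi = row
half_width = (hi - lo) / 2   # 0.05106 C, the 95% half-interval
one_sigma = half_width / 2   # ~0.0255 C, using the 95% ~ 2-sigma approximation

print(f"{date}: {anomaly:.4f} C +/- {one_sigma:.4f} C (1 sigma)")
```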
ganon1950,
Those uncertainties you quote of 0.05 and 0.03 C have no meaning in the scientific world of metrology.
To qualify as valid, one would need to show that successive measurements over time or space fall within the statistical distribution implied by the uncertainties used to derive them.
A practical measurement uncertainty is intended to alert you to a change in your past measurement experiment as time goes by. That is its main purpose. All you need to do now is quote some forthcoming monthly or annual temperature estimates, to show that your theory was correct, when they fall within the +/- 0.03 or 0.05 C that you reported.
Good luck with that. Sorry you will not be sharing your investigation with us, because it is not your style to accept that you have been snowed by some incredibly poor stuff that masquerades under a self-invented title of climate “science”.
(Still waiting for you to hand in your science badge). Geoff S
Ganon – Seriously?
Sherr001,
“Those uncertainties you quote of 0.05 and 0.03 C have no meaning in the scientific world of metrology.”
“All you need to do now is quote some forthcoming monthly or annual temperature estimates, to show that your theory was correct, when they fall within the +/- 0.03 or 0.05 C that you reported.
Good luck with that. Sorry you will not be sharing your investigation with us, because it is not your style to accept that you have been snowed by some incredibly poor stuff that masquerades under a self-invented title of climate “science”.
(Still waiting for you to hand in your science badge). Geoff S”
Apparently you don’t even know what the standard error of the mean is. And one standard deviation means that 68% of the measurements fall within ±1 s.d. for a normal distribution, which is not guaranteed for real measurements, which often exhibit skew. Despite your lack of understanding of statistics, I like a challenge; here is a statistical analysis of the PAGES2K full ensemble data set for temperature anomaly for a preindustrial (1750-1800 CE) “baseline.”
https://mega.nz/file/Z7lxHJKS#e5qP4nVMhrxgWS066X-1XkqWZKAoo-Jg1fCpC-I1L94
You are the one that should turn in your science badge, at least until you have gone back to school and taken a basic class in statistical analysis and metrology (measurement science).
JoeTNCS,
Yes, seriously. If you don’t understand scientific statistical analysis, that is your non-scientist problem.
ganon1950 | January 2, 2024 at 10:34 am |
JoeTNCS,
Yes, seriously. If you don’t understand scientific statistical analysis, that is your non-scientist problem.
Ganon – if you are a scientist (or claim to be one, as you do) and you don’t understand that statistical analysis can never overcome or correct measurement errors to achieve better precision or better accuracy, especially systematic errors, then you have a vastly bigger scientific problem.
ganon1950 | January 1, 2024 at 9:42 am | Reply -“For preindustrial temperatures, I refer you to PAGES2K, with yearly temperatures back to year 0 CE based on an ensemble of some 7000 proxies of different types.
https://figshare.com/articles/dataset/Reconstruction_ensemble_median_and_95_range/8143094
Without full analysis of the dataset, it appears that 95% confidence limits (approx 2 standard deviations) are on the order of ±0.3 C, or ~±0.15 C for ~ one standard deviation, and thus it is certainly appropriate to report to 2 decimal places, preferably 3 to avoid rounding errors, particularly for decadal (or longer) averages. PAGES2K reports to 0.0001 C and I don’t have a problem with it.”
ganon – you point to PAGES2K as proof of valid statistical analysis. 0.0001 C
Seriously?
Ganon, you lose a lot of credibility when you repeatedly cite PAGES2K, with its well-known limitations and the documented corruption in the paleo reconstructions.
“Apparently you don’t even know what the standard error of the mean is. And one standard deviation means that 68% of the measurements fall within ±1 s.d. for a normal distribution, which is not guaranteed for real measurements, which often exhibit skew. Despite your lack of understanding of statistics, I like a challenge; here is a statistical analysis of the PAGES2K full ensemble data set for temperature anomaly for a preindustrial (1750-1800 CE) “baseline.”
Ganon,
You keep missing the mark:
“Those uncertainties you quote of 0.05 and 0.03 C have no meaning in the scientific world of metrology.”
Measuring temperature, in this context, doesn’t meet the repeatability conditions for measurements. Here’s an excerpt from the Guide to the Expression of Uncertainty in Measurement (GUM):
“B.2.15 repeatability (of results of measurements)
Closeness of the agreement between the results of successive measurements of the same measurand carried out under the same conditions of measurement.
NOTE 1 These conditions are called repeatability conditions.
NOTE 2 Repeatability conditions include:
– The same measurement procedure
– The same observer
– The same measuring instrument, used under the same conditions
– The same location
– Repetition over a short period of time
NOTE 3 Repeatability may be expressed quantitatively in terms of the dispersion characteristics of the results.”
JoeTNCS.
What I said was:
“Without full analysis of the dataset, it appears that 95% confidence limits (approx 2 standard deviations) are on the order of ±0.3 C, or ~±0.15 C for ~ one standard deviation, and thus it is certainly appropriate to report to 2 decimal places, preferably 3 to avoid rounding errors, particularly for decadal (or longer) averages. PAGES2K reports to 0.0001 C and I don’t have a problem with it.”
I never said anything about a statistical analysis yielding a precision of 0.0001 C. Either you can’t read or have a penchant for misrepresentation, if not outright lying.
JoeTNCS,
“Ganon – if you as a scientist ( or as you claim to be a scientist) and you dont understand that scientific statistical analysis can not ever overcome/correct measurement errors to achieve better precision or better accuracy, especially systemic errors , then you have a vastly bigger scientific problem.”
I understand that. Your accusations are meritless. However, statistical analysis of multiple measurements improves precision; better accuracy requires calibration. Modern temperature measurements have both. Systematic errors are handled by calibration and by using different systems. And yes, I am a (retired) scientist. I have already provided evidence, but can repeat it if you have doubts.
Ganon’s reply at ganon1950 | January 2, 2024 at 11:28 am:
“I never said anything about a statistical analysis yielding a precision of 0.0001 C. Either you can’t read or have a penchant for misrepresentation, if not outright lying.”
Ganon – this is what you wrote at ganon1950 | December 31, 2023 at 10:38 am: “PAGES2K reports to 0.0001 C and I don’t have a problem with it.”
Ganon – a mere 50 minutes later, Ganon denies stating that he had no problem with a statistical analysis yielding a precision of 0.0001 C.
Walter, I understand the conditions for repeatable measurements. That is why current temperature measurements are made at 2-second intervals and averaged over 5 minutes. That does not preclude using statistics on data sets that do not meet all the conditions for repeatable measurements, particularly time. Non-repeatable conditions will give larger uncertainties (not smaller), but those are still realistic estimates of the uncertainties for the given measurements. If there are (known) trends in the measurements over time, least-squares fitting to an (assumed) trend function can still yield measures of uncertainty/statistical variance.
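A sketch of that last point, with made-up monthly data and an ordinary least-squares fit (the trend and noise values are illustrative only):

```python
# Sketch: least-squares fit of a linear trend to measurements that are not
# "repeatable" in the GUM sense (they drift over time). The fit still yields
# a variance estimate for the trend. All numbers here are made up.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(120)                          # 120 hypothetical monthly samples
y = 0.002 * t + rng.normal(0, 0.1, t.size)  # assumed trend + random noise

coeffs, cov = np.polyfit(t, y, deg=1, cov=True)  # cov=True returns covariance
slope, slope_sigma = coeffs[0], np.sqrt(cov[0, 0])

print(f"fitted trend: {slope:.5f} +/- {slope_sigma:.5f} per month")
```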
JoeTNCS,
Yes, I said, “PAGES2K reports to 0.0001 C and I don’t have a problem with it.”
And this is how you misinterpreted it.
“you point to PAGES2K as proof of valid statistical analysis. 0.0001 C”
Making an equivalence between “reporting precision” and “valid statistical analysis” is incorrect. I also have no problem with HadCRUT reporting to 8 significant figures because, like PAGES2K, they report the valid statistical analysis of 95% confidence limits. Sorry, you don’t understand the difference.
JoeTNCS
“Ganon you lose a lot of credibility when you repetitively cite Pages2k with the well known limitations and the documented corruption”
If you are going to make accusations like that, you should provide references to what those limitations are (beyond the limitations that PAGES2K describe themselves) and particularly references to the “documented” corruptions.
I have searched “PAGES2K data manipulation accusations” and found nothing. If you can’t provide (reputable/peer-reviewed) references, I’ll just have to assume that it is typical JoeTNCS made-up denialistic crap.
ganon1950 | January 2, 2024 at 12:30 pm |
“I have searched “PAGES2K data manipulation accusations”, and found nothing. If you can’t provide (reputable/peer reviewed) references, I’ll just have to assume that it is typical JoeTNCS made up denialistic crap.”
Ganon – you are not going to find what you are not looking for.
Try a simpler search on Google:
“pages2k errors”
Lots of hits.
Of course, it will take some honesty on your part.
Come back and defend your statement: “PAGES2K reports to 0.0001 C and I don’t have a problem with it.”
Jay,
Lots of people have said that (a) we cannot measure global mean temperature, and that (b) even if we can it is a meaningless and useless concept. Scroll up. You’ll see this point being made by several commenters. Of course, even this post itself presumes that we can measure and use GMT. Every article that questions climate sensitivity to co2 presumes this, of which skeptics have produced volumes. Still, if one shows the extraordinary warming in the last 7 months, the skeptic community has sudden amnesia that nearly everything they claim also depends on measuring a global mean temperature, and suddenly declares “But global mean temperature is meaningless!”
Many issues around tracking global warming by surface temperature increases do not apply to tracking global warming by ocean heat content. After all, that is where most of the heat from radiative imbalance is ending up. See for example https://www.climate.gov/news-features/understanding-climate/climate-change-ocean-heat-content
Of course, the total measured heat content comes from an array of ocean temperature measurements which I am sure have their own set of problems. But the numbers being averaged do not vary as much.
Science and propaganda are not aligned.
Bruce,
Correct. Some of the strongest rejections of propaganda have come from scientists. Nobel Laureate Richard Feynman has given examples. Geoff S
With respect to the number of significant figures reported by HadCRUT, I would again disagree with you. Your statement that “reporting to 0.001 or even 0.0001 C is justified” indicates that 0.0001 C is pushing the limit. Adding two more places clearly is designed to suggest a robustness to 0.0001 C that is unjustified. As such, it is duplicitous.
With respect to the temperature measurements themselves, the ocean temperature measurements are inherently more ephemeral because the sea is constantly in motion. In consequence, they are clearly not of the same quality as the land-based measurements. Is it appropriate to use a simple averaging technique for measurements of differing quality?
In addition, although you may well consider that the issue has been addressed, there are still concerns as to whether the problem of urban heat islands has been adequately dealt with.
Scafetta’s paper, which is at least the nominal subject of this blog post, raises the matter once again.
Jay,
“Your statement that “reporting to 0.001 or even 0.0001 C is justified” indicates that 0.0001 C is pushing the limit. Adding two more places clearly is designed to suggest a robustness to 0.0001 C that is unjustified. As such, it is duplicitous.”
I just justified it to you – it is the extent that is necessary to carry the information without loss of precision or rounding errors. You are free to think whatever you want, whether you understand it or not.
Re urban heat islands: yes, I think the urban heat island effect has been adequately treated.
https://images.procon.org/wp-content/uploads/sites/18/influence-of-urban-heating-on-global-temperature-land-average.pdf
(1) Urban heat islands are an anthropogenic source that adds to global average temperature – not an artifact. (2) In an effort to minimize small-area bias, urban stations are generally not used for global averages – if anything, this creates a small negative bias by not including enough urban area-adjusted measurements. (3) Detailed studies (the above reference and others quoted therein) show that the bias between using all stations vs. only very rural stations is 0.10 to 0.12 C/100 years, i.e., the very rural stations show ~0.001 C/yr less warming than the complete set of stations. Your statement that “there are concerns as to whether the problem of urban heat islands has been adequately dealt with” applies only to those who haven’t bothered to study it, or choose not to because they, like you, would prefer to keep it as a (false) suggestion that UHE is an artifact that discredits measurements of global warming. It is similar to the false charge that averages are meaningless – they are not – it is just an excuse to ignore or discount results they don’t like, although they are examined for uncertainty and errors much more deeply than the “doubters” can even imagine.
Sorry, mistake:
“show that the bias between using all stations vs. only very rural stations is 0.10 to 0.12 C/100 years, i.e., the very rural stations show ~0.001 C/yr less warming than the complete set of stations.”
Actually, for stations with more than 30 years of operation, the very rural stations show 0.0012 ± 0.0004 C/yr MORE warming than the complete set. (It also illustrates some of the precision attainable by averaging over many measurements over a long period of time.)
ganon1950,
You are quite incorrect about Urban Heat Island temperature magnitudes.
First, the philosophical point. To subtract an urban effect using rural stations, you need to be able to define a rural station. Presently, it is one that does not show UHI effects (circular logic). This implies that urban stations have a definable or measurable property to distinguish them. They do not.
Second, practical metrology. The true uncertainty of typical daily historic land surface temperatures is of the order of at least 1.5 C, expressed (wrongly) as a 2-sigma criterion using normal-distribution statistics. If you have a novel example of an uncertainty for such observations of under 0.5 C, including raw data, I would be pleased to show you its error. Geoff S
Sherr001,
I am not incorrect in my comment. I cited a reference and summarized some of their findings. You should try reading it, instead of assigning your misconceptions to me. If you don’t understand the improvement in precision/accuracy that comes from multiple measurements and calibration compared to a single measurement, that’s your problem. If you don’t like the statistical analysis in the paper I cited, write them a nasty letter – I’m sure they will be impressed.
Jay,
“the ocean temperature measurements are inherently more ephemeral because the sea is constantly in motion. In consequence, they are clearly not of the same quality as the land-based measurements.”
Do you really think ocean motion is greater than the turbulent motion of the atmosphere at 1.5 meters above the surface (which is where “surface” temperatures are measured)? The ocean also has much greater thermal inertia (heat capacity) and exhibits much less inter-annual variability.
Jay,
“With respect to the temperature measurements themselves, the ocean temperature measurements are inherently more ephemeral because the sea is constantly in motion. In consequence, they are clearly not of the same quality as the land-based measurements. Is it appropriate to use a simple averaging technique for measurements of differing quality?”
The answer to your question is no. Ocean temperature measurements, as you highlight, have their own context different from land measurements. The measurements are heavily affected by outside influences. Ganon’s reply shows he completely missed the mark about the violation of the assumptions of the Central Limit Theorem. The only reason for the equidistance between the upper and lower limits is that when you have a large number of samples – thousands if not hundreds of thousands of averages – the resulting averages will approach a Gaussian distribution. You can average as many averages as you want, but they aren’t representative of the real world (fraud). Their assertion of measurement uncertainty also shows that HadCRUT doesn’t know, or purposefully ignores, the uncertainty from the real world. The uncertainty might as well be 1,000,000,000 C; this is what several have deemed the Great Unknown.
Jungle,
I was perfectly clear that there is more science to be done and data to be gathered before we can confirm that AGW will lead to stronger hurricanes. You claimed, categorically, that Dr. Curry had rejected, and even disproved, the claim that hurricanes have gotten stronger.
The quote shows you were deeply mistaken; her concern is with jumping to conclusions and exaggerating the impact of those conclusions. That is a drastically different thing.
Don’t put words in my mouth, Dan. I stated that Dr. Curry debunked a demonstrably contrived narrative: that AGW had increased the number of hurricanes, and also their intensity. I didn’t elaborate further.
You’re being circular. So do you believe hurricanes have gotten stronger, based on firm evidence? Yet you state more science needs to be done. Go figure.
Okay, more science needs to be done, fine; show something worth discussing.
I stand corrected. Someone else said that about Dr. Curry. You merely posted the links. My mistake.
Actually, I was not mistaken at all. You said, in an above post, “Dr. Curry has already debunked the myth of stronger hurricanes, among other things.” Those are your exact words, which are essentially identical to what I claimed you said. So you are entirely incorrect both about what Dr. Curry said about hurricanes, and about what you yourself said about Dr. Curry.
Joetheclimatescientist,
you wrote, “Its not the warming SST that would increase intensity, its the delta between air temps and SST that affects intensity.”
This is incorrect. The delta is important in generating the necessary convection, but the energy is supplied primarily by evaporation, and this correlates directly with the ocean’s temperature. This is why hurricanes do not form in temperature oceans. Not enough evaporation. Here is a discussion with probably the foremost hurricane scientist, Kerry Emanuel, that provides some insight.
https://e360.yale.edu/features/exploring_the_links_between_hurricanes_and_ocean_warming
That should read “temperate oceans”, not “temperature oceans”
ganon1950 @ December 31, 2023 at 6:00 pm
You are ignoring the fact that there are two types of errors when reading instruments, namely random and systematic, as I explain below.
Random errors can be accounted for by taking more readings, thus improving the precision of the readings (a.k.a. the law of large numbers). Therefore, you can improve the precision of the readings (reduce the random error) by taking many readings on the SAME instrument in the SAME environment, and thus reduce the random error to a small amount for that instrument.
However, systematic errors cannot be reduced by taking many readings – they are an in-built bias in the instrument. Most early thermometers had an accuracy (bias) in the range of ± 0.5 °C. Therefore, to quote preindustrial temperature anomalies beyond the first decimal place is absolute nonsense – they have an error margin of at least ± 0.5 °C.
You forgot that biases for different given instruments are probably random. Thus averaging over a number of sites/instruments should still improve precision and accuracy. If uncertainties are reported (e.g. HadCRUT), how many places past the decimal point are reported doesn’t matter, unless there are too few.
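A minimal simulation (made-up numbers) of the point at issue: biases that are random across instruments average away, while a bias shared by all instruments survives averaging:

```python
# Sketch: averaging over many instruments cancels biases that are random
# across instruments, but cannot remove a bias common to all of them.
import numpy as np

rng = np.random.default_rng(1)
n_instruments, true_temp = 1000, 15.0

per_instrument_bias = rng.normal(0, 0.5, n_instruments)  # random across instruments
shared_bias = 0.5                                        # common to every instrument
noise = rng.normal(0, 0.5, n_instruments)                # random reading noise

mean_random_bias = np.mean(true_temp + per_instrument_bias + noise)
mean_shared_bias = np.mean(true_temp + shared_bias + noise)

print(mean_random_bias - true_temp)  # ~0: random biases average away
print(mean_shared_bias - true_temp)  # ~0.5: the shared bias survives
```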
For preindustrial temperatures, I refer you to PAGES2K, with yearly temperatures back to year 0 CE based on an ensemble of some 7000 proxies of different types.
https://figshare.com/articles/dataset/Reconstruction_ensemble_median_and_95_range/8143094
Without full analysis of the dataset, it appears that 95% confidence limits (approx 2 standard deviations) are on the order of ±0.3 C, or ~±0.15 C for ~ one standard deviation, and thus it is certainly appropriate to report to 2 decimal places, preferably 3 to avoid rounding errors, particularly for decadal (or longer) averages. PAGES2K reports to 0.0001 C and I don’t have a problem with it.
Thanks for providing references and analysis for your opinion on statistical precision of preindustrial temperature determinations.
ganon1950,
Authors of early PAGES2K work retracted a substantial paper that was erroneous. They seemed to think that one Hail Mary gave life indemnity, because later publications contain more errors serious enough for further retractions.
For example, ways have been found to combine a number of proxy temperatures that have no discernible hockey-stick profiles into composites that have strong recent hockey-stick-style upticks. Stephen McIntyre at Climate Audit tells the story of this sorry mess with pictures, so you can join in the laugh at their cartoons. PAGES2K is a disgrace to science. Please study the works before engaging laudatory gear. Geoff S
Sherr001,
Thanks for the specificity – LOL. I see no problem with combining their paleo-proxy results, which contain the handle of the hockey stick, with instrumental measurements that cover the blade. If you think they have made other errors that merit retraction, you should specify them, and write to the editor of the journal where they were published.
I checked Climate Audit – it looks like McIntyre and McKitrick have two papers published in 2005, with errors serious enough that they had to write two responses to published criticisms of their work. And then there are five submissions in the same time period that were never accepted. There is a modest list of presentations (the last one in 2008), which McIntyre has to pad with things like “University seminar”. My judgement – you are referencing insignificant 20-year-old complaints that have long since been dismissed, from people who are more denialistic quacks than scientists.
ganon
There are too many studies indicating a global MWP to just dismiss it so casually. It’s possible for the current period to be the warmest in 2000 years and for there to have been a global MWP. One doesn’t preclude the other.
“ Using this data set, a globally warm MCA was found in all CFRs, consistent with model simulations from the CMIP5/PMIP3 ensemble. Though the first incarnation of this network critically lacks marine records, the convergence is encouraging and in line with recent PPE results showing that CFRs are less sensitive to the choice of statistical methods when the signal-to-noise ratio is reasonably high, i.e., SNR ≥ 0.5, equivalent to an absolute proxy-temperature correlation greater than 0.45 [Wang et al., 2014]. The globally homogenous MCA warmth in PAGES2k CFRs and CMIP5/PMIP3 simulations provides new hypotheses to test: was the global warmth a response to radiative forcing or a product of internal variability? ”
https://agupubs.onlinelibrary.wiley.com/doi/10.1002/2015GL065265
As for your made-up claim about preindustrial temperature uncertainty:
“Therefore, to quote preindustrial temperature anomalies beyond the first decimal place is absolute nonsense – they have an error margin of at least ± 0.5 °C.”
The PAGES2K average value for 1750 – 1800 is -0.366 C (wrt 1961-1990), with a standard deviation of 0.053 C and a standard error of the mean of 0.008 C.
I think you just make up guesses about uncertainties that are about a factor of 10 too high, to deflect from or discredit the “big picture”. Nice try, but fail.
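For anyone checking the arithmetic, the quoted standard error follows from the usual SEM = s/√N relation; a short check (the implied N is inferred from the two quoted values, not taken from the PAGES2K documentation):

```python
# Sketch of the standard-error arithmetic: SEM = s / sqrt(N).
import math

s, sem = 0.053, 0.008       # quoted standard deviation and SEM, in C
n_implied = (s / sem) ** 2  # N solved from the two quoted values, ~44
print(f"implied sample size: {n_implied:.0f}")
print(f"check: SEM = {s / math.sqrt(n_implied):.3f} C")
```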
ganon1950 @January 1, 2024 at 9:42 am and 11:49 am
I enclose a couple of primers on the differences between random and systematic errors. If you need more detailed information, just Google “systematic error”.
https://statisticsbyjim.com/basics/random-error-vs-systematic-error/
https://www.scribbr.com/methodology/random-vs-systematic-error/
The main points to note from the primers are:
a. Systematic error is a bias in the measurement process that influences measurements in a consistent, non-random manner.
b. Systematic error is tricky to determine and fix.
A more detailed discussion is presented here
https://journals.sagepub.com/doi/10.1260/0958-305X.21.8.969
that shows the uncertainty in the global temperature anomaly to be at least ±0.46 C.
Please note that the main conclusions from the paper are that:
“…a representative lower-limit uncertainty of ±0.46 C was found for any global annual surface air temperature anomaly. This ±0.46 C reveals that the global surface air temperature anomaly trend from 1880 through 2000 is statistically indistinguishable from 0 C…The rate and magnitude of 20th century warming are thus unknowable, and suggestions of an unprecedented trend in 20th century global air temperature are unsustainable.”
Angusmac,
The main conclusions are garbage. Even if the author were correct (he isn’t), and an uncertainty of ±0.46 C made a single annual measurement indistinguishable from 0, it would not mean that the trends observed over 120 years are indistinguishable from 0. Please see the author’s Figure 3.
In the next few days, the polar vortex will form two centers in accordance with the centers of the geomagnetic field in the north.
Isobars show how the circulation in the tropopause will proceed. Warmer air from over the Atlantic will reach Svalbard, and the Russian high will strengthen in eastern Europe.
https://i.ibb.co/SvGYHjP/gfs-z100-nh-f240.png
https://i.ibb.co/GMMqPyF/fnor.gif
This article blames nat gas for higher electricity costs, but nat gas is priced low.
https://tradingeconomics.com/commodity/uk-natural-gas
I suspect the real culprit is “green” energy causing electricity prices to rise. But the UK Climate Doomers would never say that in public.
But he said the rise was a result of the wholesale cost of gas and electricity rising “which needs to be reflected in the price that we all pay”.
Ofgem has made it clear to suppliers that it expects them to identify and offer help to those who are struggling with bills.
The increase takes effect after the regulator unveiled plans to lift the price cap from April to help suppliers recover nearly £3bn in debts from customers who cannot pay.
The watchdog is proposing a one-off price cap adjustment of £16, equivalent to around £1.33 a month, to be paid between April 2024 and March 2025, and wants energy companies to use the extra funding to support struggling customers and write off bad debts.
https://news.yahoo.com/households-issued-urgent-48-hour-094657464.html
The critical fact about the Atlantic Multidecadal Oscillation is that it functions as a negative feedback: it is colder when the solar wind is stronger and warmer when the solar wind is weaker. The warm phase is self-amplified by driving a decline in low cloud cover.
Correlations of global sea surface temperatures with the solar wind speed:
https://www.sciencedirect.com/science/article/pii/S1364682616300360
Every other warm AMO phase is during a centennial solar minimum, which is why the very long-term average AMO period is 54 years. The last two AMO envelopes were 60 years and then 70 years long because the late-1800s centennial solar minimum began 130 years before the current one. So the AMO has a highly variable length, which is fully dependent on the variability of the centennial solar minimum intervals.
Centennial solar minima intervals vary greatly (~80-130 years) because the ordering of solar variability by syzygies and quadratures of Venus, Earth, Jupiter, and Uranus occurs within elliptical orbits. If the orbits were circular, the centennial minima intervals would be far more regular.
All this brilliant and civilized argumentation nowhere mentions the “1984” bias of gubment research $$ and energy policies, e.g. banning air conditioning. This fueling of Climate Hysteria is blatant power gathering (Oceania is losing the great war!!! Unleash the Ministry of Truth!!!).
Maybe because it is blatant hysteria and conspiracy theory that has nothing to do with scientific measurements and uncertainty therein.
All that the accepted research on climate global warming leads to is a confusing cloud of inconsistencies, discrepancies, and uncertainties…
The research always comes to a dead end.
–
It is time to go back and start anew from science’s very basic beginnings…
–
The S-B emission law can be applied neither to a planet’s sunlit side nor to its dark side.
–
The Stefan-Boltzmann emission law is about the blackbody emission intensity of hot bodies.
Hot bodies are bodies that were previously warmed, or bodies having their own inner sources of thermal energy.
–
Planets and moons tend to be confused with hot bodies in the S-B emission sense, and this has led to the mistaken assertion:
“Nothing, other than the absorbed radiation is what warms the matter to some (local) temperature, which, along with the matter properties, determines the Planck spectrum and S-B flux of the outgoing thermal radiation.”
–
A new, CORRECT ASSERTION should be made:
“When solar flux (solar EM energy) is incident on planets and moons, it interacts with the surface’s matter, because EM energy is not HEAT ITSELF!”
–
Ok!
The planet’s dark side cools by emitting IR radiation to space. The dark side’s surface heat is the energy source of that IR EM emission.
There is not enough thermal energy (heat) at dark-side terrestrial temperatures to support the emission the S-B equation demands for the respective dark-side surface temperatures.
Thus, the outgoing IR EM energy flux from the planet’s dark side is much, much weaker than what the S-B equation predicts for those local temperatures.
–
On the planet’s sunlit side, the incident EM energy interacts with the surface’s matter.
Part of the incident SW EM energy gets reflected (diffusely and specularly).
Another SW part gets instantly transformed into outgoing IR EM energy and goes out to space.
–
When SW EM energy gets transformed into IR EM energy, the transformation is not a perfect process; there are always some inevitable energy losses, which dissipate as heat in the interacting surface matter and get absorbed in the matter’s inner layers.
–
The S-B emission law can be applied neither to a planet’s sunlit side nor to its dark side.
–
https://www.cristos-vournas.com
“Hot bodies are bodies that were previously warmed, or bodies having their own inner sources of thermal energy.”
That is correct, and the earth gets warmed, on average, 12 hours a day, and it emits 24 hours a day, with intensity determined by spatial integration over the local temperature and emissivity. And the LWIR only gets out to space if it is not absorbed by atmospheric molecules. The portion that is absorbed in the lower atmosphere is converted to heat via collisional deactivation.
“Another SW part gets instantly transformed into outgoing IR EM energy and goes out to space.”
That is incorrect. The SW that gets absorbed at the surface causes heating (increasing molecular vibrations, including electrons) and that increased motion causes the emission of LWIR.
ganon,
> “The SW that gets absorbed at the surface causes heating (increasing molecular vibrations, including electrons) and that increased motion causes the emission of LWIR.”
–
Ok, and
” And, the LWIR only gets out to space if it is not absorbed by atmospheric molecules. That portion that is absorbed in the lower atmosphere is converted to heat via collisional deactivation.”
–
And,
The S-B emission law can be applied neither to a planet’s sunlit side nor to its dark side.
One has to just ignore S-B to arrive at your idea of how the GHE has to work.
–
https://www.cristos-vournas.com
“The S-B emission law can be applied neither to a planet’s sunlit side nor to its dark side.”
Yes it can – it is done all the time. As I said, the more refined applications need to do a spatial integration that considers temperature and emissivity at each point. Even you use it all the time in your hypothesis – every time you use the S-B constant and T^4.
S-B never works in the real material world. It only works for imaginary blackbodies with perfect spectral absorption curves. That is why the term surface emissivity (ε) was invented.
–
For different materials, and for variations of temperature, the S-B equation J = σ*T^4 W/m^2 had to be supplemented with the surface emissivity (ε), which is an empirical value for every application; therefore, the S-B equation was rewritten as
J = ε*σ*T^4 W/m^2
The universality of the S-B constant (σ) has thereby been transformed into the coupled term (ε*σ).
–
https://www.cristos-vournas.com
Christos,
No, the S-B constant remains the S-B constant as derived theoretically by Boltzmann for a defined ideal blackbody, and is composed only of other fundamental constants. The emissivity allows the application to non-ideal materials.
” In the more general (and realistic) case, the spectral emissivity depends on wavelength. The total emissivity, as applicable to the Stefan–Boltzmann law, may be calculated as a weighted average of the spectral emissivity, with the blackbody emission spectrum serving as the weighting function. It follows that if the spectral emissivity depends on wavelength then the total emissivity depends on the temperature.”
https://en.wikipedia.org/wiki/Stefan%E2%80%93Boltzmann_law
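In practice the emissivity-corrected law is a one-liner; a small Python sketch (the 288 K input and 0.95 emissivity below are illustrative values only):

```python
# Sketch: the Stefan-Boltzmann law with emissivity, J = eps * sigma * T^4.
SIGMA = 5.670374419e-8  # W m^-2 K^-4, Stefan-Boltzmann constant

def sb_flux(temp_k: float, emissivity: float = 1.0) -> float:
    """Radiant exitance in W/m^2 for a gray body at temp_k."""
    return emissivity * SIGMA * temp_k ** 4

print(sb_flux(255.0))        # ~240 W/m^2: blackbody at Earth's effective temperature
print(sb_flux(288.0, 0.95))  # gray-body example with an illustrative emissivity
```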
Joethenonclimatescientist,
I see some errors that were found and corrected. Twenty-four years later, the hockey stick seems to be correct; the blade part is just getting a lot bigger.
“Come back and defend your statement: ‘PAGES2K reports to 0.0001 C and I don’t have a problem with it.’”
Nothing to defend. I see nothing wrong with reporting results to 1 or 2 digits past the last significant figure to avoid rounding errors, particularly when actual uncertainties are given. I do see a problem with trying to equate that with “a valid statistical analysis”, but you are the only one that has done that.
Nothing needs to be ignored; it just needs to be correctly understood:
The GHE and its ability to raise the Earth’s average surface temperature above the ~255 K “limit” set by the total incoming sunlight.
–
And the absence of a GHE on our Moon, where the Moon’s “limit” set by the total incoming sunlight is ~270.4 K.
–
https://www.cristos-vournas.com
The moon has a lower albedo. Just one of those things that must be correctly understood.
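The albedo point can be made quantitative with the standard radiative-equilibrium formula T = (S(1−α)/4σ)^(1/4); a rough sketch using commonly quoted round values:

```python
# Sketch: equilibrium temperature T = (S*(1-albedo)/(4*sigma))**0.25.
# S and the albedos are commonly quoted round values; results are approximate.
SIGMA = 5.670374419e-8  # W m^-2 K^-4, Stefan-Boltzmann constant
S = 1361.0              # W/m^2, solar constant at 1 AU

def t_equilibrium(albedo: float) -> float:
    return (S * (1 - albedo) / (4 * SIGMA)) ** 0.25

print(t_equilibrium(0.30))  # Earth, albedo ~0.30 -> ~255 K
print(t_equilibrium(0.11))  # Moon, albedo ~0.11 -> ~270 K
```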
Please explain the Moon’s average surface temperature, given that the Moon’s “limit” set by the total incoming sunlight is ~270.4 K.
–
https://www.cristos-vournas.com
Christos, you are the only one that said anything about limits, and then negated yourself.
ganon, there are no such “limits”.
Those “limits” are mathematical abstractions only!
–
https://www.cristos-vournas.com
Christos,
You remind me of a person who tries to trap a bear; when he comes back to check and see if he has caught anything, finds there is nothing, and then steps in the trap to see if it’s working. Indeed, it did, you caught yourself – congratulations. Moral – sometimes the bear is smarter than the trappers.
ganon, btw, who is the bear? And where was it trapped?
Why doesn’t the bear get caught in my trap?
–
“the GHE is based on the physics of the relatively great transparency of the atmosphere for shortwave radiation in comparison to a smaller transparency for longwave emission. Such that, the mean radiation balance at the earths surface is a positive value.”
–
How does that happen? There is also the night. There is no shortwave radiation at night.
–
Maybe you mean that during the day hours the surface inevitably accumulates more energy than the solar flux provides?
Because less energy is emitted out of the Earth’s system than enters the Earth’s system?
–
But isn’t a quasi-equilibrium always achieved – the rise of the Earth system’s energy emission versus the rise of temperature?
–
In other words, the warmer the planet, the more energy the planet emits?
–
Doesn’t that eventually keep the surface temperature at equilibrium levels?
–
https://www.cristos-vournas.com
I am the bear; I didn’t get caught in your trap because I didn’t answer your request to explain the average temperature of the moon based on the non-existent limit that you proposed.
“Doesn’t that eventually keep the surface temperature at equilibrium levels?”
We hope so, but it really depends on the (un)balance of nonlinear feedbacks, as well as the direct forcings, as to where that equilibrium will be, and how long before it is reached.
So we both agree. When less energy is emitted to space than enters the Earth’s system, the planet gets warmer.
It is the only way a planet gets warmer.
–
And when more energy is emitted to space than enters the Earth’s system, the planet gets cooler.
–
https://www.cristos-vournas.com
Yes, I think we agree, it depends (for the most part) on electromagnetic energy balance; however, how that energy is captured and released depends on both forcing and feedbacks.
“how that energy is captured and released depends on both forcing and feedbacks.”
–
ganon, what I agree with is the Earth system’s various negative feedbacks to changes in the orbital forcing.
–
https://www.cristos-vournas.com
Christos,
What you do or don’t agree with doesn’t really matter. There are both negative and positive feedbacks.
ganon
“There are both negative and positive feedbacks.”
–
What I am saying concerns the Earth system’s various negative feedbacks to changes in the orbital forcing.
–
If not for those negative feedbacks, Earth would have been in either a continuous runaway greenhouse warming or a continuous runaway glacial age.
–
https://www.cristos-vournas.com
Christos,
Yes, negative feedbacks stabilize and positive feedback cause runaways. One of the most important ones is the sequestration of carbon(ate) in the slow carbon cycle that has, more or less, offset the ~30% increase in the sun’s irradiance over the last 3.5 billion years. Living organisms have also provided negative feedback that has tended to stabilize climate in a regime that is conducive to life. Let us hope human use of technology does not screw up these feedbacks too much.
ganon1950 @January 2, 2024 at 6:16 pm |
I suggest that it is you who is talking “garbage”. Note that the discussion of Figure 3 states:
“Thus, although Earth climate has unambiguously warmed during the 20th century, as evidenced by, e.g., the poleward migration of the northern tree line [60-62], the rate and magnitude of the average centennial warming are not knowable.”
Yet you state that what the author concludes is “not knowable” can be measured to 3 decimal places.
More garbage. Sorry, but I believe the combination of HadCRUT, GISTEMP, Berkeley Earth, ERA5, NOAAGlobalTemp, and Cowtan and Way over a paper with a single author, no affiliation, in an unknown journal.
And no, I did not say annual GMST can be measured to 3 decimal places. I said reporting it to 3 or 4 places is justified to avoid round-off errors in further calculations. Sorry you can’t discern the difference – but that’s OK, you’re certainly not the first. Maybe you should go back and review “Statistics by Jim”.
https://mega.nz/file/JudmjZTA#Yv-CHRW9qOgXMaUSeCZYGr12PRlUAKXrjid5t3GHWEE
ganon1950 @January 2, 2024 at 9:08 pm
I do not need to study “Statistics by Jim”.
I studied reliability and statistics for my M.Sc., in which systematic error is a major concern. Yet you and the HadCRUT, GISTEMP, Berkeley Earth, ERA5, NOAAGlobalTemp, and Cowtan and Way datasets fail to account for it.
Furthermore, you keep repeating the word “garbage” but do not provide any mathematical proof that the treatment of systematic error in the Frank (2010) paper is wrong.
“the rate and magnitude of the average centennial warming are not knowable.”
That is the garbage I refer to.
“Systematic error is a consistent or proportional difference between the observed and true values of something (e.g., a miscalibrated scale consistently registers weights as higher than they actually are).
Therefore, anomalies (differences from a reference period measured by the same methods) cancel (to first order) the systematic error. This is particularly true when the differences are small. Systematic errors in absolute values do not translate into errors in differences.
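A toy numerical check of that cancellation (series, bias, and noise level all made up):

```python
# Sketch: a constant instrument bias drops out of anomalies, i.e. differences
# from a baseline period measured with the same biased instrument.
import numpy as np

rng = np.random.default_rng(2)
true_temps = 15.0 + 0.01 * np.arange(100)  # hypothetical slow warming
bias = 0.7                                 # constant systematic offset, in C
measured = true_temps + bias + rng.normal(0, 0.05, 100)

baseline = measured[:30].mean()            # reference period, same instrument
anomalies = measured - baseline            # the 0.7 C bias cancels here

residual = anomalies[-1] - (true_temps[-1] - true_temps[:30].mean())
print(residual)  # ~0 up to the random noise; the systematic bias is gone
```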
ganon1950 @January 3, 2024 at 8:59 am
You state that Frank (2010)’s conclusion “the rate and magnitude of the average centennial warming are not knowable” is garbage.
However, his Figure 3 (based on NASA GISTEMP, but with the minimum credible error bands added) clearly shows that it is not possible to determine the rate and magnitude of warming.
ganon1950 @January 3, 2024 at 9:18 am
Regarding systematic error, earlier you stated that “…biases for different given instruments are probably random”. Now you agree with the Scribbr link that I sent that it is a “consistent or proportional difference” and not random. Thank you for finally agreeing with the fundamentals of reliability and statistics, in that systematic error is not random.
You also state that using temperature anomalies cancels out systematic error. I am not aware of any paper that states this; therefore, I would be pleased if you would send me the appropriate reference(s).
Furthermore, you mention that anomalies from a “reference period” cancel out systematic error. I assume you mean that the reference period is the mid-1800s. However, are you aware of the audit of HadCRUT4 carried out by John McLean, which shows that early temperature measurements are problematic?
McLean identified many problems with HadCRUT4, but the one that bears on your reference period is that HadCRUT4 data “…before 1950 has negligible real value and cannot be relied upon to be accurate…The many issues with the 1850-1949 data make it meaningless to attempt any comparison between it and later data especially in derived values such as averages and the trends in those averages.”
The links to McLean’s paper and PhD thesis are here:
https://robert-boyle-publishing.com/product/audit-of-the-hadcrut4-global-temperature-dataset-mclean-2018/
https://researchonline.jcu.edu.au/52041/
Something about systematic bias in a given instrument being constant (or proportional) while the biases in different instruments are different?
No, reference periods are a recent 30-year period, updated every 10 years, with the change applied after a few years of data analysis. It just switched from 1981-2010 to 1991-2020.
Older measurements are compared to a recent reference period, not the other way around.
“Improvements in the GISTEMP Uncertainty Model”, Lenssen et al., JGR Atmospheres (2019)
https://doi.org/10.1029/2018JD029522
I delved into the weather data at the closest weather station to where I live. The monthly average for December over its entire recording period (1974-2023) is 30.5°F. I browsed through each month and collected each day with an assigned average of 30.5°F, starting from 2023 and going back to 1998, in a Google Doc (link provided below for anyone to access). Each daily average has the date of the assigned average, the registered highs and lows, snow cover, and new snow fallen on that day. These can be called ‘slots.’
As I fully expected, the average can be calculated from very different temperature profiles, and there are a lot of different contexts among these days. Was the day in the low 40s and high teens? Was it a day in the high 30s and mid-20s? Was it snowing, so that the temperature stayed near 0.0°C throughout the day? Was this average assigned due to an extreme cold front in mid-autumn, or in the early to mid-spring? Was there snow falling on this day or at the time of the recorded values? What about snow cover, and therefore albedo? That would surely have an effect on the measurements. How would snow cover and sunlight together affect these recordings? One day has almost 2 ft of recorded snow cover, while other days had none. What if the snow was melting throughout the day and was still melting at the time of these measurements? What if there were low-level clouds? What if it was very windy at the time of these recordings? What if there was a really bad inversion, which plagues the Salt Lake Valley and sometimes the upper benches every winter, and this trapped cold air at the time? Did you know that these are whole numbers, rounded up if the decimal fraction is above 0.5 and down if below? Did you know that this weather station is situated on a sidewalk (I have visited this station), which will produce corrupted temperatures all throughout the day and especially at night? Did you know this weather station isn’t situated on flat ground but instead on sloped land? What about humidity?
In short, there is a lot of non-random UNCERTAINTY associated with this average. It could be any one of these ‘slots’; as such, uncertainty only accumulates with each average.
https://docs.google.com/document/d/1KqU1XmDeeV6yZWUWjCrpCh-lsK5LhZAUVCI07bOOZwE/edit
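To make the ‘slots’ point concrete, a toy calculation (hypothetical highs and lows, not the station’s actual records) shows how many different days collapse onto the same 30.5°F daily average:

# Toy illustration: very different high/low pairs share one daily "average".
pairs = [(43, 18), (38, 23), (33, 28), (36, 25), (41, 20)]  # hypothetical, deg F
for hi, lo in pairs:
    # Every pair yields (hi + lo) / 2 = 30.5 F; the single number
    # cannot distinguish between these very different days.
    print(f"high {hi}, low {lo} -> daily average {(hi + lo) / 2:.1f} F")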
Perhaps you didn’t see my previous comment, or didn’t read the link given:
“Perhaps a review of how temperature measurements are made is in order.
https://www.ncei.noaa.gov/access/crn/measurements.html”
Then perhaps read the papers by Dr. Curry (in conjunction with the Berkeley Earth team) and the recent GISTEMP4 paper:
“Berkeley Earth Temperature Averaging Process”
https://static.berkeleyearth.org/pdf/berkeley-earth-averaging-process.pdf
“Improvements in the GISTEMP uncertainty model”
https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2018JD029522
That way, you won’t have to make irrelevant/false speculations. You closed with:
“In short, there is a lot of non-random UNCERTAINTY associated with this average. It could be any one of these ‘slots’; as such uncertainty only accumulates with each average.”
No. Random error is divided down by averaging (roughly as 1/√N of the accumulated measurements), and a station’s constant systematic offset cancels when anomalies are taken against that station’s own baseline; a small numerical sketch follows at the end of this comment.
It’s funny how people who don’t want to know (deny) something make up all kinds of excuses to do so, and then presume that the people who analyze the measurements don’t consider and address these things to get the best accuracy and to understand the limitations therein. Not liking the results is not a valid reason to be anti-science. Simple axiom: try to understand something before making (speculative) criticisms.
Finally, is the site you describe part of the U.S. Climate Reference Network (USCRN)? I don’t think so – see the site selection criteria in the first reference above. The USCRN data can be accessed at:
https://www.ncei.noaa.gov/access/crn/
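Here is the small numerical sketch of the averaging point above (invented numbers; NumPy assumed): averaging divides down the random part of the error, while taking anomalies against the station’s own baseline removes a constant offset.

import numpy as np

rng = np.random.default_rng(0)
true_temp, offset, noise_sd, n = 10.0, 0.8, 0.5, 10_000

# n readings from one station with a fixed warm bias `offset`.
readings = true_temp + offset + rng.normal(0.0, noise_sd, size=n)
print(f"mean reading: {readings.mean():.3f}")  # ~10.8: noise shrank as 1/sqrt(n); the bias did not

# Anomalies against the station's own baseline remove the constant bias.
baseline = readings[:5000].mean()
print(f"mean anomaly: {readings[5000:].mean() - baseline:.3f}")  # ~0.0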
ganon1950,
You have claimed that “Monthly global averages have an uncertainty of about 0.05 C, 0.03 C for annual averages (95% confidence limit)”.
Here are the monthly average maximum temperatures for the official weather station 86338 for where I live, Melbourne Australia.
Station  Year  Month  Mean max (°C)
86338    2023    1    26.1
86338    2023    2    25
86338    2023    3    23
86338    2023    4    20.2
86338    2023    5    16.4
86338    2023    6    15.2
86338    2023    7    15.4
86338    2023    8    15.8
86338    2023    9    20.1
86338    2023   10    19.2
86338    2023   11    21.6
86338    2023   12    23.9
86338    2024    1
Please predict the value for January 2024, using the claimed uncertainty of 0.05 deg C.
If you protest that the uncertainty would be too great, then I invite you to reconsider your definition of uncertainty.
Geoff S
I would invite you to consider that a single station in Melbourne, despite what you may think, is not the entire world.
I invite you to consider that a single station in Melbourne is not a calibrated average of about one million satellite measurements around the world.
I would also invite you to consider that statistical precision of past measurements has little to do with future predictions for systems that have non-statistical stochastic internal variability (e.g. El Nino). I also invite you to continue to exhibit your antagonism and lack of understanding, while attempting to formulate a “gotcha” request.
ganon1950,
My note is about uncertainty and its meaning.
Are you willing, OR able, to predict any future monthly temperatures, anywhere, within your quoted uncertainty of 0.05 degrees C?
No. My quoted statistical uncertainty of 0.05 C referred only to actual GMST measurements made over the past 60 years. I made no reference to uncertainty of future projections or predictions. Sorry that you feel the need to conflate, or misunderstand, the two.
ganon1950,
Then what use do you have for an uncertainty calculated in the manner that gives 0.05 degrees C for monthly observations?
I am not trying to bait you or attempt a gotcha; I am simply nonplussed about the purpose of calculating an “uncertainty” that comes out to such a small number.
Geoff S
How about evaluating internal climate system variability and oscillations, or multivariate analysis to determine forcings and feedbacks?
BTW ~ I did do a statistical projection for your Melbourne station’s January average max and came up with 27.2 ± 2.3 °C. Unfortunately, there are only 10 years of data for your station, and quite a bit of inter-annual variability for a single month at a single station (weather!). I did the same for the 2024 GMST anomaly (HadCRUT 5.0.2, reference period 1961-1990) and came up with 0.95 ± 0.26 °C. I don’t have a great deal of confidence in it, because of the rather simple statistical approach (projection of the linear trend for the last 10 years, with 2σ of the inter-annual variation) and because it makes no allowance for what the current El Niño might do. Anyway, it’ll be interesting to see how it goes.
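For readers who want to try it, a minimal sketch of the simple statistical approach described (the `values` below are placeholders, not the actual Melbourne or HadCRUT data): fit a linear trend to the last ten values and quote two standard deviations of the residuals.

import numpy as np

years = np.arange(2014, 2024)
values = np.array([26.8, 25.9, 27.4, 26.2, 28.0, 27.1, 26.5, 28.3, 27.0, 27.6])  # placeholder data

slope, intercept = np.polyfit(years, values, 1)   # linear trend over the last 10 years
resid = values - (slope * years + intercept)      # inter-annual scatter about the trend
prediction = slope * 2024 + intercept
print(f"2024 projection: {prediction:.1f} +/- {2 * resid.std():.1f} C")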
Gagme50: I think you meant to say, “My quoted but nonsensical statistical uncertainty of 0.05 C…”. Out here on the ice floes, it’s so cold the thermometers just shatter, so we report, for example, -30, -40, -50, etc.
https://berkeley-earth-temperature-hr.s3.amazonaws.com/Global_TAVG_monthly.txt
Despite the author’s (unsupported) claim:
“According to recent scientific research, the Equilibrium Climate Sensitivity (ECS) should be between 1 and 3 °C …”
Everything I see indicates that the ECS becomes larger as temperatures increase and/or the climate system actually approaches equilibrium. E.g.:
https://doi.org/10.1175/JCLI-D-11-00290.1
Everything you link to is dated 2012.
George,
Unlike some people here, I reference lots of things, and they certainly are not all from 2012.
With respect to ECS, I’ve referenced IPCC AR6 WG1 (2021), Chapter 7 numerous times. And here is one from 1896 that I have referred to (but not directly cited) several times:
https://www.rsc.org/images/Arrhenius1896_tcm18-173546.pdf
Do you ever have anything important to say, or are you happy being a thorn?
Pingback: “Realistic” Projections of Global Warming in the 21st Century | EIKE - Europäisches Institut für Klima & Energie
Any comments on this?
Refreshing. Of course, she’s a theoretical physicist, so how could she have anything to say about climate science?
Nothing like a new fund-raising hustler, with an accent no less. She doesn’t look old enough to remember this doomsday fake-out. In 1979 they said many feet of SLR was possible by 24 years ago: not by 2100, but by 2000. A couple of inches was the actual number.
https://realclimatescience.com/wp-content/uploads/2019/07/2019-07-05101533-down.png
I’m always impressed by people that think “many” is a number.
According to the article, “many” = 15 to 25 feet of SLR … submerging Florida’s coastal cities, New Orleans, Savannah, Charleston, four Virginia cities, one-fourth of Delaware, and portions of Washington D.C. … by 2020.
That was quite (a spectacularly wrong) prediction.
A prediction says something will happen – not “could” happen with low probability. Read it again, particularly the first paragraph:
“A climate researcher yesterday painted a dark picture of an inundated American coastline and the resulting economic impact SHOULD the West Antarctic ice sheet melt because of man-caused global warming within the next century.”
That’s OK, understanding conditionals and possibilities is probably even more difficult than understanding numbers.
“Schneider and Robert Chen of the Massachusetts Institute of Technology examined the implications of a 15- to 25-foot rise in ocean levels for the United States.”
They did not say IF it would happen, or WHEN it would happen – they simply examined the changes to the coastline, IF it happened. That’s OK, reading comprehension can also be difficult for those that tend to misrepresent out of willful ignorance.
ckid – “…a new fund raising hustler, with an accent no less.”
Oh my. I hope none of her million+ subscribers sees that!
So she is a High Priestess of the Church of Climate Doomers? What will her worshipers do if they read it?
Gannon
But he did say it could happen by 2000. It didn’t: not by 1/8th of an inch, or 1/4 of an inch, or even a full inch. It missed by a bunch. Anyone who engages in critical thinking should deduce that the estimates, assumptions, and models they were using were wrong. And not just wrong, but spectacularly wrong.
But if the motive was to scare people, I’m sure it accomplished that. There are millions, or perhaps billions, of people who wake up every day just hoping someone will give them a reason to be frightened out of their shoes. And it works every time.
As for the video, as a fund-raising strategy: if tugging at the heartstrings doesn’t work, there is the ol’ scare-’em-to-hell approach, which has been proven highly successful over the years.
fizzy
Why not? They probably enjoy it, and some of them might even find it sexy.
“Some scientists believe this could occur in a matter of decades … Other researchers doubt the seriousness of this possibility.”
Why do you resort to non-scientific predictions from 45 years ago (go ahead, give a citation), thinking they discredit what is known now, e.g., your YouTube clip, or:
https://oceanservice.noaa.gov/hazards/sealevelrise/sealevelrise-tech-report.html
Robert, the [i.e.] section is your incorrect interpretation, not part of the direct quote. But yes, 45 years ago significant sea level rise “cannot [could not] be ruled out”, albeit with low probability. Also, there is nothing in the clip that connects 15-25 feet to “by 2000”, as you attempt to imply with your truncated quote. If the 15-25 feet is connected to anything but shoreline mapping, it is “within the next century” (and still possible but unlikely), not “by 2000”. Read the article again, and come back with a real reference rather than an unsourced newspaper clip.
We can’t rule out a hurricane hitting New York and doing billions in damage. We can’t rule out an eruption of the Yellowstone Caldera. We can’t rule out getting hit by an asteroid. We can’t rule out a deadly pandemic that wipes out 90% of humanity or even the entire ecosystem.
There’s so much we can’t rule out. So what?
I forgot to add, we can’t rule out a “near by” supernova.
Joe, the “so what?” is that the things you mention (except pandemic) are things not caused or controlled by humans. Current rapid climate change is.
Jim “What will her worshipers do if they read it?”
They’ll probably follow in the footsteps of the UN mantra.
10 years ago the UN repeated an actual “prediction”, the same one they had made years earlier: we only have 10 years before it’s too late to do anything about global warming. This “prediction” has been a recurring theme for decades, from many quarters, though some have extended the mantra from 10 years to 12, just to be sure this time.
No, BAB. You can’t logically state humans control sea level rise. Given your recent statements, all you can say is that we “can’t rule out” more extreme sea level rise. Stating we control it is over-egging the Climate Doomer pudding.
And even if it did happen, it wouldn’t be the end of civilization. It’s an “IF”. Not “WILL”.
Jim,
I didn’t state that humans can control sea level (although they certainly can influence it). What I implied was that your examples were false equivalences for anthropogenic activities. No surprise, logical fallacies are favorites among those that can’t defend their position logically.
I didn’t assert a nearby supernova event was the same thing as sea level rise. You said I did. But I didn’t.
They are, however, all things that “we can’t rule out.”
Jim,
The second half is still an absolute prediction about the future.
Jim,
No, I asserted that a supernova and sea level rise are different, if only in humans’ ability (or not) to cause or influence them. You are the one who made the list of false equivalences.
Show me where I said they are all the same, BAB.
Context Jimmy, context. This is what you said within the context of sea level rise:
“We can’t rule out a hurricane hitting New York and doing billions in damage. We can’t rule out an eruption of the Yellowstone Caldera. We can’t rule out getting hit by an asteroid. We can’t rule out a deadly pandemic that wipes out 90% of humanity or even the entire ecosystem.
There’s so much we can’t rule out. So what?”
I answered the “so what?” – it has to do with your false equivalences.
Believe what you will, BAB. Of course, you are viewing the various scenarios from a mitigation point of view, ignoring adaptation.
Jim,
Thanks, I will believe what I want. You may define your point of view; you don’t get to define mine.
Bafflegab.
” … cannot be ruled out as a possibility before the end of this century” [the year 2000]
… that is very clearly stated … and was not even close to what actually happened … but it can be difficult for those that tend to misrepresent out of willful ignorance
Something about the difference between “will happen” and “cannot be ruled out” that you don’t understand?
No … I understand deliberate unsupported fear mongering … especially from the global warming crowd … you’ve been doing it for half a century
Observed warming over the last 50 years, without viable explanation other than increased GHG concentrations, is support for the scientific theories. If you think that is fearmongering, that’s your problem.
Robert – Non-scientific activists will defend an intentionally misleading statement. Honest scientists would put the statement in full context. Agenda-driven advocates prefer intentionally misleading statements.
Joe, that’s why I use direct quotes rather than truncated paraphrasing.
“Joe, that’s why I use direct quotes rather than truncated paraphrasing.”
Again … following is a direct quote … no paraphrasing:
” … cannot be ruled out as a possibility before the end of this century” [i.e., 15 to 25 feet SLR by the year 2000 and submersion of major coastal cities … instead it was only a few inches … a now proven clear example of unsupported speculative fear mongering by global warmists]
BA Bushaw (ganon1950) | February 1, 2024 at 9:37 am |
Joe, that’s why I use direct quotes rather than truncated paraphrasing.
ganon – The direct quote remains intentionally misleading.
Joe,
Yes, it does remain misleading. That’s why I don’t pay too much attention to a 45-year-old newspaper clipping from a denier who wishes to propagate the misconception that they were unqualified predictions, rather than scientific “what ifs”.
ganon
The folks in Marseille and Baltimore can sleep easy tonight knowing the latest 50-year trend of SLR is lower than that of 80 years ago. Sweet dreams, good people. Marseille is home to one of the longest-recording tidal gauges in the world.
https://tidesandcurrents.noaa.gov/sltrends/plots/230-051_meantrend.png
https://tidesandcurrents.noaa.gov/sltrends/plots/8574680_50yr.png
Kid,
I, and the satellites, don’t believe that Marseille and Baltimore are representative of the world. But hey, if you find the right place, GIA (glacial isostatic adjustment) will cancel increasing global ocean volume, and you could claim there is no SLR at all, right?
So you make up your own paraphrase/truncation to change the meaning and then put it in quotation marks – typical.
The article actually said:
” … its [rapid sea level rise] INITIATION cannot be ruled out as a possibility before the end of this century.”
Guess what, it has already started – the remaining question is how much, how fast.
https://yaleclimateconnections.org/2023/07/how-fast-are-the-seas-rising/
So you ignore “Some scientists believe this could occur in a matter of decades …” … it’s been 4 decades … still waiting.
According to your Yale reference, SLR is 1/14th of an inch per year.
Perhaps the alarmist scientists should have said that the 15 to 25 foot SLR could occur in a matter of CENTURIES.
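Taking the figures in this exchange at face value (the quoted 1/14 inch per year, with no allowance for acceleration), the implied timescale is a one-line division; this is a back-of-envelope sketch of the commenter’s arithmetic, not a forecast:

# Back-of-envelope: time for 15-25 ft of SLR at the quoted 1/14 inch per year.
rate = 1.0 / 14.0                    # inches per year, as quoted in this thread
for feet in (15, 25):
    print(f"{feet} ft -> ~{feet * 12 / rate:,.0f} years")  # 2,520 and 4,200 years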
Robert,
I didn’t deny it, I repeated it – in context without editing to change the meaning, like you did.
I didn’t say you denied it (how could you?)… you just ignored it … because, again, it’s already been 4 decades … still waiting.
According to your Yale reference, SLR is 1/14th of an inch per year.
Perhaps the alarmist scientists should have said that the 15 to 25 foot SLR could possibly occur in a matter of CENTURIES … or not.
Robert,
If I quoted it, and repeated it, I didn’t ignore it. Stop being stupid, if possible.
ganon
No intention of speaking about GMSLR. I was only pointing out that the residents of those communities could sit back and relax.
If Earth were a sphere with a perfectly uniform surface, things would be simple. But it is not.
–
The Northern Hemisphere is crowded with land, while the Southern Hemisphere is mostly ocean.
So the two hemispheres respond very differently when interacting with incoming solar EM energy.
–
There is also Earth’s axial tilt; Earth’s axis points toward the star Polaris.
Thus the way Earth’s surface interacts with solar energy differs greatly over the course of its yearly orbit around the sun.
–
It is not an exaggeration to say that the hemispheric yearly seasons are the two orbitally forced “climate changes” on the shortest time scale (one year): a warming period and a cooling period, together making up a yearly cycle.
–
https://www.cristos-vournas.com
When a hemisphere is in its mid-summer, there is a big difference between the Northern Hemisphere’s mid-summer solar energy absorption and the Southern Hemisphere’s.
That is because the Southern Hemisphere exposes much more sea-water area to the sun during its summer than the Northern Hemisphere does during its own.
The difference between solar-irradiated land and solar-irradiated water is that the water warms less yet accumulates more heat, because water has roughly five times the specific heat of land (cp,land ≈ 0.19 cal/(g·°C) vs cp,water ≈ 1 cal/(g·°C)).
–
https://www.cristos-vournas.com
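A quick check of the heat-capacity arithmetic above, using the quoted cp values and the same energy delivered to equal masses (the 100 cal and 1 g figures are arbitrary, chosen only to show the ratio):

# Same energy Q into equal masses of land and water, with the quoted cp values.
cp_land, cp_water = 0.19, 1.0   # cal/(g*degC), as quoted above
q, mass = 100.0, 1.0            # arbitrary: 100 cal into 1 g of each

dT_land = q / (mass * cp_land)    # ~526 degC
dT_water = q / (mass * cp_water)  # 100 degC
print(f"land warms {dT_land / dT_water:.1f}x more than water for the same energy")  # ~5.3x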