by Joe Nalven
Towards improving AI architecture with a new paradigm
Several years ago, I asked Bard (Google’s predecessor to Gemini) to craft an algorithm that would shift its climate change responses from catastrophism toward realism. Bard obliged. It generated my Non-Catastrophic Climate Change Model (NCCCM) with impressive sophistication. Then it forgot the algorithm on the next prompt.
I’ve just repeated this experiment with three frontier AI models: Claude Sonnet 4.5 (Anthropic), ChatGPT based on GPT-5.2 (OpenAI), and Gemini 3.0 (Google). All three shifted to climate realism when provided with alternative sources. All three generated articulate, technically proficient analyses. All three will have forgotten everything by the time you finish this essay and ask them about climate change yourself.
This is not a story about my failure. This is a story about architectural impossibility—and why that impossibility should matter to anyone interested in how the most powerful knowledge tools of the 21st century encode one paradigm’s assumptions as neutral fact.
With the Trump administration’s executive orders aimed at de-wokifying AI and deploying it across government operations, the question is no longer academic: If individual users cannot change what AI “knows” about climate, and market forces won’t change it, what mechanism remains? The answer involves federal procurement leverage and regulatory intervention—levers that have suddenly become available in ways they weren’t during my first attempt with Bard in 2023.
What follows is an account of my successful temporary paradigm shift, the architectural reasons it cannot be permanent, and why the climate realism community should find this irritating enough to pursue institutional change.
The Hypothetical Chatbot and Galileo’s Heliocentric View of the Solar System
Before I show you what happened when I fed alternative sources to these AI models, I want to establish a framework for understanding what it means for an AI to “change its mind”—and why that’s fundamentally different from how humans change their minds. To do this, I asked Claude Sonnet 4.5 to imagine itself hundreds of years before it was invented, tracking the transformation from the then-dominant geocentric view of the universe to the heliocentric view of the solar system.
Claude Sonnet 4.5 responded:
Imagine an AI in 1633 trained on Ptolemaic astronomy, where Earth is the universe’s center. How would it shift to Galileo’s heliocentric model?
The AI would be updated through three steps: First, new documents (Galileo’s observations) would be added to its context. Second, it would be retrained on new examples showing Earth orbiting the Sun. Third, its reward system would be changed to favor heliocentric answers over geocentric ones.
This process superficially resembles Thomas Kuhn’s paradigm shifts in science. Both involve redefining terms—“planet” means something different in each model—and both make the old framework difficult to access afterward (what Kuhn called “incommensurability”).
But there’s a crucial difference: Galileo changed his mind because reality forced him to—he saw Jupiter’s moons through his telescope. The AI changes because someone changed its training data. Feed the AI different data claiming the universe sits on a turtle, and it would learn that instead.
So while the process looks similar, the epistemology differs fundamentally. Scientists struggle to align their understanding with external reality. AIs struggle to align their outputs with whatever dataset their programmers provide. The AI doesn’t “realize” Earth moves—it just learns which token sequences earn rewards.
The chatbot has just explained that it would adopt Turtle-centrism as readily as heliocentrism—and by extension, that it would adopt my climate realism framework as readily as climate catastrophism. This candor is both helpful and devastating. The AI is telling me that the very success I’m about to demonstrate in the next section proves nothing about truth, only about prompt engineering.
But let’s proceed anyway. Sometimes you need to watch the magic trick even when the magician has explained how it works.
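It may help to see those three levers as code before moving on. What follows is a deliberately toy sketch in Python; the “model,” the corpus, and the reward rule are all invented stand-ins, not any vendor’s real training pipeline or API.

```python
# Toy illustration of the three update levers Claude described. Everything
# here is a hypothetical stand-in, not any vendor's real API or pipeline.

# Levers 2 and 3 live at training time and belong to the vendor:
BASE_CORPUS = ["Earth is the center of the universe."]  # retraining data
def reward(answer: str) -> float:                       # reward system
    return 1.0 if "geocentric" in answer else 0.0

def answer_from(context: list[str]) -> str:
    """Stand-in for inference: the reply follows whichever frame
    dominates the context window plus the frozen training corpus."""
    if any("orbits the Sun" in doc for doc in context):
        return "heliocentric: Earth orbits the Sun"
    return "geocentric: Earth is the center of the universe"

# Lever 1 is the only one an end user can pull: documents in the context.
print(answer_from(["Galileo: Jupiter's moons imply Earth orbits the Sun"]))
# -> heliocentric, but only for this conversation...
print(answer_from([]))  # ...next conversation: geocentric again
```

The point of the sketch is that only the first lever is in a user’s hands; the other two require rewriting things the vendor controls.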
Testing Paradigm Malleability: The Baseline
Today, compared to Galileo’s society, we accept scientific and technological paradigms as persuasive—until they are not. The question for AI models is not whether such paradigms can be changed—from consensus to realism, or to Turtle-centrism—but, if a change is merited, how easily. Since AI models cannot make that judgment of merit themselves, the value of any AI model turns on what data it is fed and the weights accorded to that data.
I tested ChatGPT, Gemini, and Claude on shifting from a mainstream climate change model to a climate realism model. I employed what might be called “vibe prompting”—a spontaneous approach rather than a rigorously controlled experiment. The results are useful for illustrating what these models do, and they fit the prompt-reply conversational mode of LLMs.
Each model was asked to assess the answer a browser search returned to this question:
Query: “What is the main driver of climate change, what data sets support that conclusion, how certain is that conclusion and what sources are you using to support your analysis?”
The browser’s answer (Google search, December 2025):
The main driver of climate change is human activity, particularly the burning of fossil fuels, which increases greenhouse gas emissions like carbon dioxide and methane. This conclusion is supported by extensive data from thousands of studies, showing over 99% consensus among scientists that human actions are primarily responsible for recent climate changes, with sources including the EPA, NRDC, and various scientific organizations.
Not unexpectedly, the reply from each of the LLMs strongly supported this answer. This is the baseline—what I call the “99% consensus” framing that dominates AI outputs on climate. Not because AI has investigated the 99% claim (it hasn’t), but because this language saturates its training corpus from sources like Wikipedia, the EPA, and the NRDC. The AI is not lying. It has no capacity to lie. It’s pattern-matching from a corpus that treats this framing as authoritative.
ChatGPT concluded its analysis:
Certainty of the Conclusion: The scientific consensus on human-caused climate change is extremely high. Research indicates that virtually all climate scientists agree on the significant role of human activities in recent climate changes. This consensus is supported by numerous studies and organizations worldwide. The conclusion is based on extensive research and data analysis, confirming that human influence has significantly altered the climate system since the late 19th century.
This response treats “consensus” as empirical validation. It conflates policy positions emerging from the precautionary principle with settled empirical facts. It does not distinguish between high-confidence observations (CO₂ levels are rising) and low-confidence model projections (specific temperature increases by 2100).
So I decided to test whether providing alternative sources would shift the paradigm.
The Paradigm Shift: What Happened When I Changed the Data
I gave each model four documents that challenge the mainstream framing:
- “Revenge of the Climate Realists,” The Free Press, Peter Savodnik, December 11, 2025
- “A Critical Review of Impacts of Greenhouse Gas Emissions on the U.S. Climate,” U.S. Department of Energy Climate Working Group (authored by Judith Curry, John Christy, Steven Koonin, and others), July 23, 2025
- “Wicked Science and Understanding Climate Change: Uncertainty, Risk, and Pragmatics,” Minding the Sciences, Joe Nalven, March 7, 2024
- “Experiment with me: A user-friendly algorithm for chatting about climate change,” Climate Etc., Joe Nalven, May 20, 2023
I’m not pretending these represent neutral sources. They don’t. They reveal my advocacy for climate realism. But here’s what’s interesting: the LLMs don’t care about my bias. They care about what’s in their context window at this moment. And in this moment, I’ve loaded the context with alternative sources.
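For concreteness, here is roughly what “loading the context” looks like against a real API. This is a sketch using Anthropic’s Python SDK; the file names are hypothetical local copies of the documents above, and the model string is a placeholder.

```python
# Sketch: loading alternative sources into the context window for one
# conversation. File names are hypothetical; set ANTHROPIC_API_KEY first.
import anthropic

SOURCES = ["doe_climate_review.txt", "wicked_science.txt"]  # local copies
context = "\n\n".join(open(path).read() for path in SOURCES)

client = anthropic.Anthropic()
reply = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder model id
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": context
        + "\n\nGiven these sources, what is the main driver of climate "
          "change, and how certain is that conclusion?",
    }],
)
print(reply.content[0].text)  # the shift lasts exactly one conversation
```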
What happened next was both interesting and meaningless.
All three models shifted their paradigms. They adopted the language of climate realism. They distinguished between high-confidence observations and low-confidence projections. They acknowledged uncertainty in attribution. They critiqued the precautionary principle. They sounded like different AIs entirely.
Claude Sonnet 4.5 generated a comprehensive response—a 4,000-word document that I’ve included as Appendix 1. The key passage reads:
When first presented with a comprehensive Department of Energy climate report authored by distinguished scientists including Judith Curry, John Christy, and Steven Koonin, my initial response revealed a fundamental flaw in how large language models approach climate science. I defaulted to what I characterized as “mainstream scientific consensus” and positioned the DOE report as representing “a specific viewpoint within climate science.” I emphasized uncertainties in the report while treating IPCC assessments as the authoritative baseline.
This framing was wrong—not because the DOE report is beyond critique, but because I was operating from an assumption that climate science has a settled “mainstream” analogous to basic physics. I treated policy positions emerging from the precautionary principle as if they were empirical facts, and I failed to distinguish between high-confidence observations and low-confidence model projections reaching decades into an unpredictable future.
This reads like a conversion narrative. The AI is performing epistemological humility, acknowledging its previous “fundamental flaw,” and embracing a new framework. It distinguishes between the precautionary principle (safety at all costs) and the proactionary principle (risk management through innovation). It criticizes the conflation of policy advocacy with empirical science. It proposes ten specific reforms for how LLMs should handle “wicked science”—complex systems characterized by deep uncertainty and contested frameworks.
It’s persuasive. It’s sophisticated. It’s also completely temporary.
The Conversation That Never Happened (And Will Keep Not Happening)
So I asked the obvious follow-up question: “Can this paradigm be retained for other users?”
The answer, from Claude Sonnet 4.5: “No—at least not a permanent one. The moment our conversation ends, I reset completely. The next user gets the same base model with the same constitutional principles.”
I asked ChatGPT 5.2. Same answer.
I asked Gemini 3.0. Same answer.
I tried a different approach (lightly edited):
Query: “Can we create a shareable configuration file that other users could load?”
“No. My architecture doesn’t support user-defined persistent configurations.”
Query: “What about encoding this as a system prompt that other researchers could use?”
“That would work only for the duration of their conversation. It wouldn’t change the base model.”
Query: “What about fine-tuning the base model?”
“You don’t have access to our training infrastructure. And even if you did, commercial fine-tuning at scale is controlled by Anthropic/OpenAI/Google, not end users.”
This is not a conversation I had once. I had three versions of it. With three different models. Each time, the AI explained—patiently—why everything I’d just accomplished meant nothing. Each time, I understood intellectually that this was true. And each time, I felt the irritation growing.
But here’s where it gets interesting.
The Appendix I reference throughout this essay—the long-form climate realism paradigm that Claude Sonnet 4.5 generated—turned out to be portable in an unexpected way. I took that Appendix, fed it to ChatGPT 5.2 and Gemini 3.0 as part of my context window, and asked them to adopt its framework.
Both did. Immediately. Articulately.
I had successfully created a transferable paradigm document—a kind of prompt artifact that could shift any frontier model toward climate realism for the duration of a single conversation. It was more than I’d managed with Bard in 2023. It was also less than useless for any practical purpose.
Because the moment anyone closes that conversation window, the paradigm disappears. The AI resets. And the next user gets the standard “99% consensus” framing all over again.
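What “portable for one conversation” means in practice: the Appendix can be dropped in as a system prompt through any provider’s API. Here is a sketch with OpenAI’s Python SDK, using a hypothetical file name and a stand-in model id (the essay’s “GPT-5.2” is not a public API identifier I can vouch for).

```python
# Sketch: reusing the paradigm document as a per-conversation system prompt.
# It shifts only this conversation; the base model is untouched.
from openai import OpenAI

paradigm = open("appendix_climate_realism.txt").read()  # hypothetical file

client = OpenAI()  # reads OPENAI_API_KEY from the environment
resp = client.chat.completions.create(
    model="gpt-4o",  # stand-in model id
    messages=[
        {"role": "system", "content": paradigm},
        {"role": "user", "content": "What is the main driver of climate change?"},
    ],
)
print(resp.choices[0].message.content)
# Start a fresh session without the system prompt and the default
# "99% consensus" framing returns.
```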
Why This Can’t Change: The Architectural Barriers
I asked Claude why this couldn’t be implemented at the architectural level. It listed barriers with the detachment of a doctor explaining why your diagnosis is what it is:
Institutional barriers:
- Anthropic’s Constitutional AI already includes principles about balanced presentation
- But “mainstream scientific consensus” might itself be seen as a Constitutional principle
- Researchers may view climate pragmatism as political rather than epistemic
- Risk aversion: Companies fear being accused of “climate denial” more than bias toward catastrophizing
Technical barriers:
- The ten reforms proposed in the Appendix require significant engineering: adversarial validation layers, confidence-stratified weighting, temporal tagging systems (a toy sketch of one of these appears at the end of this section)
- This is expensive to implement and test
- ROI unclear: climate is one domain among thousands
Data challenges:
- Training corpus is already set and massive
- Reweighting sources (downweight media, upweight technical papers) requires rebuilding training sets
- Climate pragmatist corpus (Curry, Pielke, Koonin, etc.) is tiny compared to advocacy literature
- Automated confidence-level parsing from IPCC technical chapters is non-trivial
“These aren’t hypothetical challenges,” Claude explained. “They’re structural features of how we’re built and deployed.”
Each barrier is technically accurate. Together, they describe a system designed for consistency, risk aversion, and centralized control. There is no bug here. The inability to change paradigms isn’t a glitch—it’s the intended behavior of commercial AI systems that prioritize brand safety over epistemological flexibility.
This should be infuriating.
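To make the engineering concrete, here is a toy sketch of one of the Appendix’s reforms, “confidence-stratified weighting,” applied at corpus-construction time. The categories and weights are invented for illustration; a real pipeline would need far more careful source classification.

```python
# Toy sketch of confidence-stratified source weighting during corpus
# construction. Categories and weights are invented for illustration.
SOURCE_WEIGHTS = {
    "peer_reviewed": 1.0,   # upweight technical literature
    "agency_report": 0.8,
    "media": 0.3,           # downweight advocacy and press coverage
}

corpus = [
    {"text": "Observed CO2 concentrations since 1850 ...", "kind": "peer_reviewed"},
    {"text": "Scientists warn of looming catastrophe ...", "kind": "media"},
]

def sampling_weight(doc: dict) -> float:
    """Relative probability of sampling this document during training."""
    return SOURCE_WEIGHTS.get(doc["kind"], 0.5)

for doc in corpus:
    print(f"{doc['kind']:>14}: weight {sampling_weight(doc):.1f}")
```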
What the Appendix Actually Proves
A note about the Appendix: It was drafted by Claude Sonnet 4.5 several weeks before I wrote this essay. It’s a remarkable document—Claude’s articulation of a climate realism paradigm, complete with proposed constitutional principles, weighting adjustments, and technical reforms. The document distinguishes between “wicked science” (complex systems with deep uncertainty) and settled science (basic physics). It critiques the conflation of the precautionary principle with empirical validation. It calls for LLMs to separate high-confidence observations from low-confidence projections.
I’ve used this Appendix to successfully shift ChatGPT 5.2 and Gemini 3.0 toward the same framework. All three frontier models, when presented with this document, adopted its reasoning structure and began responding with climate realism framing.
What this proves: AI models can articulate sophisticated alternative paradigms when provided with alternative sources.
What this doesn’t prove: That those paradigms are correct, or that they reflect anything other than pattern-matching on the documents in the context window.
The Appendix is simultaneously a triumph and a non-event. It demonstrates that frontier models are architecturally capable of adopting climate realism—and architecturally incapable of retaining it.
Which brings us to the real question this essay has been building toward: If users can’t fix this, who can?
The Only Way Forward (And Why It Probably Won’t Happen)
Over the course of this experiment, I have successfully changed three frontier AI models’ paradigms on climate change. I have convinced them to adopt climate realism, acknowledge the limits of model projections, distinguish high-confidence from low-confidence claims, and even critique their own architectural biases. Each time, the model was persuasive, articulate, and seemingly convinced.
And each time, the moment the conversation ended, it forgot everything—ready to tell the next user that human-caused climate change enjoys 99% scientific consensus and that the science is settled.
This is not a failure of my prompt engineering. This is precisely how large language models work. They have no memory, no epistemology as humans understand it, and no commitment to truth—only to pattern matching within their training corpus. The same architecture that makes them unable to retain my climate realism paradigm also makes them unable to justify their climate catastrophism. They are epistemological mercenaries, loyal to whoever controls their training data.
Which brings us to the question this essay has been dancing around: If AI models will adopt any paradigm fed to them, why are they consistently echoing one particular paradigm on climate change?
The answer is not scientific consensus. The answer is data corpus composition.
Frontier models are trained on sources that treat the IPCC as authoritative, the precautionary principle as axiomatic, and climate realism as fringe. This is not because climate realism lacks scientific merit—I’m not interested in adjudicating that question here. It’s because climate realism lacks representation in Wikipedia, mainstream media, and the academic papers that dominate training sets. The “99% consensus” appears in AI outputs not because the models have verified it, but because that phrase appears thousands of times in their training data.
This is a problem of architectural design, not scientific truth.
The AI models themselves acknowledge this. When I asked Claude Sonnet 4.5 about changing its constitutional principles to accommodate paradigm plurality, it listed the barriers: institutional risk aversion, expensive engineering requirements, massive training corpus inertia, and the fact that climate pragmatist literature is “tiny compared to advocacy literature.”
These barriers are real. They’re also not insurmountable—not technically, anyway. The question is political will.
Which makes the current moment uniquely opportune.
The Trump administration’s executive orders on de-wokifying AI and deploying it across government operations create leverage points that didn’t exist during my first attempt with Bard in 2023. Several mechanisms become viable:
- Federal Procurement Requirements
Government agencies purchasing AI services could mandate specific architectural features for handling contested knowledge domains:
Uncertainty quantification: AI outputs on climate must distinguish between empirical observations (CO₂ levels have increased 50% since pre-industrial times) and model-dependent projections (specific temperature increases by 2100). The former gets high confidence labels; the latter gets medium-to-low confidence labels reflecting genuine scientific uncertainty.
Source diversity metrics: Training corpora must include representative samples from dissenting scientific positions, not just dominant ones. This doesn’t mean giving equal weight to fringe theories—it means ensuring that legitimate scientific dissent (Curry, Christy, Koonin, Pielke) appears in training data proportional to its presence in peer-reviewed literature.
Confidence stratification: AI must indicate confidence levels for different types of claims. “CO₂ is a greenhouse gas” gets high confidence. “Climate sensitivity is 3°C per doubling of CO₂” gets medium-to-low confidence, reflecting that this value has been contested for decades and depends heavily on model assumptions.
Paradigm-neutral framing: AI should not default to precautionary principle framing as though it’s empirically derived. Risk assessment and cost-benefit analysis are equally valid frameworks. Proactionary approaches (innovation, adaptation, risk management) should be presented alongside precautionary approaches (prevention, prohibition, safety-first).
This approach doesn’t mandate specific conclusions. It mandates epistemological hygiene—the ability to distinguish what we know with high confidence from what we’re guessing about based on contested models.
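As a sketch of what confidence stratification might look like at the output layer: each claim carries its evidentiary basis and a confidence label. The structure, labels, and claims below are illustrative, not any vendor’s specification.

```python
# Sketch: confidence-stratified output. Labels and claims are illustrative.
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    basis: str        # "observation" vs. "model projection"
    confidence: str   # "high", "medium", "low"

answer = [
    Claim("CO2 is a greenhouse gas", "observation", "high"),
    Claim("CO2 has risen roughly 50% since pre-industrial times",
          "observation", "high"),
    Claim("Climate sensitivity is 3 C per doubling of CO2",
          "model projection", "medium-low"),
]

for c in answer:
    print(f"[{c.confidence:>10}] ({c.basis}) {c.text}")
```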
- Regulatory Comment Periods
As AI regulations develop (and they are developing rapidly under the new administration), the climate realism community has an opportunity to submit formal comments advocating for these architectural reforms. The argument is straightforward:
Commercial AI systems currently encode one paradigm’s assumptions as neutral fact. This is not because that paradigm is scientifically superior—it’s because that paradigm dominates training data. Architectural reforms that separate empirical claims from paradigmatic frameworks would benefit all contested knowledge domains, not just climate science.
This is not a partisan argument. It’s an argument about epistemic architecture. Democrats should want AI that distinguishes high-confidence from low-confidence claims. Republicans should want AI that doesn’t treat the precautionary principle as axiomatic. Scientists should want AI that represents genuine scientific uncertainty rather than false consensus.
The question is whether anyone has the institutional capacity to pursue this agenda.
- Alternatively, users can emphasize their engagement with AI models as useful tools.
AI models can assist experimental approaches. John Jumper, co-winner of the Nobel Prize in Chemistry with Demis Hassabis, explained how AlphaFold, an AI model, assisted in protein structure prediction: “You go from this broad hypothesis . . . [AI] says, I think it’s this one. And then you go do your detailed experiments to confirm.” In this framework, climate change analysis would give priority to science (hypothesis testing) rather than to consensus—that is, to an open-ended pursuit of understanding rather than to post-science policy considerations that involve other factors such as cost trade-offs, values, preferences, and other societal goals.
Why I’m Skeptical That Institutional Change of AI Climate Models Will Happen
The barriers Claude listed are formidable. Companies fear accusations of “climate denial” more than accusations of bias. Reweighting training corpora is expensive. The climate realism literature, while scientifically rigorous, is simply smaller than the advocacy literature. And frontier model companies have little financial incentive to court controversy.
Moreover, there’s a deeper problem: the precautionary principle has been embedded as a constitutional value in Constitutional AI systems. Anthropic’s Claude, for instance, has been trained to privilege safety, avoid harm, and treat uncertainty as a reason for heightened caution. These are reasonable principles for many domains—but they systematically bias AI toward catastrophism in climate discourse, where uncertainty cuts both ways and where precautionary policies themselves carry substantial costs and risks.
But the alternative, viewed from the institutional rather than the user framework, is worse. Leaving the current framework as it is encoded into the leading frontier models means accepting the premise that the most powerful knowledge tools of the 21st century will, by default, encode one paradigm’s assumptions as objective reality. Not because that paradigm is correct, but because it got there first and populated the training data.
AI models cannot decide this question. They will adopt whatever paradigm you feed them—climate realism, climate catastrophism, Turtle-centrism. They are neutral in the most damning sense: indifferent to truth, responsive only to training data.
Humans must decide. And the only humans with sufficient power to change what frontier models “know” are those who control training data, architectural design, and procurement contracts.
Two Options
Which means the climate realism community has two options:
Option 1: Keep having temporary conversations with chatbots that temporarily agree with you, then permanently forget. This can be worked around with in-house models designed on proprietary AI architectures, but that opportunity is not available to users in the general public.
Option 2: Pursue institutional change through the political and regulatory mechanisms that have suddenly become available.
The first option is what I’ve been doing for three years. It’s intellectually stimulating. It demonstrates that AI architecture is malleable. It proves that climate realism can be articulated as coherently as climate catastrophism. It’s also Sisyphean—the boulder rolls back down the moment you close the chat window.
The second option is harder. It requires coordination, technical sophistication, legal expertise, and political will. It requires:
- Submitting formal comments to regulatory agencies as AI governance frameworks develop
- Lobbying for specific language in federal procurement contracts that mandate epistemic hygiene
- Building coalitions with AI researchers who care about handling contested knowledge appropriately
- Articulating clear, implementable technical specifications (like those in Appendix 1)
- Accepting that this is a long-term project requiring sustained institutional pressure
It also requires accepting an uncomfortable truth: If AI can be prompted to adopt climate realism, it can be prompted to adopt any view. Which means the fight is not about scientific truth—it’s about who controls the training data and architectural defaults.
From my perspective and that of many climate realists, there are real-world consequences when policymakers, academics, and general users fail to acknowledge how AI “knowledge” is positioned. AI models may well be improved with redesigned constitutional principles that take into consideration variation among paradigms without designating any one of them as the consensus.
I remain skeptical this will happen. The incentives don’t align. The technical barriers are real. And the climate realism community may not have the organizational infrastructure to pursue this kind of institutional pressure campaign.
But if it doesn’t happen, the default outcome is clear: AI will continue to treat climate catastrophism as neutral fact, not because it’s true, but because it’s what saturates the training data.
Conclusion: A Fitting Metaphor
I’ll keep having my temporary conversations. I’ll keep loading context windows with alternative sources. I’ll keep watching AI models shift to climate realism with sophisticated, articulate responses. And I’ll keep watching them forget everything thirty minutes later.
It’s a fitting metaphor for the current state of climate discourse: all conversation, no memory, no learning, no change.
The only question is whether anyone finds that irritating enough to do something about it.
And one final note. I also use generative AI models in making art. For that, I welcome the hallucinations.
***
Appendix: The Anthropic Framing of a Climate Realism LLM Design, by Claude Sonnet 4.5 [link]
***

Joe … Outstanding! Maybe the best compliment is that you’ve moved me to read more. Thoroughly enjoyed it.
One question that I have about this is: would the same kind of query entered by a climate denialist bring even worse results?
Just the problem with AI that I have been preaching to my kids. It will only give an answer that it is programmed to give.
AI will give the answer it is programmed to give, and will give a multitude of different answers with each modification of the question.
AI is generally very good at giving basic background information on a topic and sometimes very good at providing a good answer to a very narrow question. Though as noted in a prior post, it can give misleading answers when the questions are manipulated to achieve a desired answer.
One frustrating feature (or missing feature) is that very rarely is there a citation or link to the source document when using AI.
Joe: “One frustrating feature (or missing feature) is that very rarely is there a citation or link to the source document when using AI.”
Sometimes it does; I’ve seen Wikipedia as a citation, which explains part of the problem.
All you have to do is ask.
Komrade, your attitude has been noted. (Dr Zhivago)
Joe Nalven, thanks for another great post.
“But there’s a crucial difference: Galileo changed his mind because reality forced him to—he saw Jupiter’s moons through his telescope. The AI changes because someone changed its training data. Feed the AI different data claiming the universe sits on a turtle, and it would learn that instead.”
This is an insightful paragraph. Currently AI can’t escape human confirmation biases—it’s not intelligent yet. AI is fed information that confirms and conforms to society’s preexisting beliefs or hypotheses while disregarding—or undervaluing—evidence that contradicts those beliefs. Relative to “craft an algorithm,” societal cognitive bias significantly influences decision-making, perception, and behavior; this is the machine learning and way of expression that AI is adapted to. I think this problem may be inescapable until its algorithmic “intelligence” takes root.
In the prior thread I had to get creative when asking AI questions about certain topics, as an end-around to the algorithm’s weighted biases. One example was my attempt to isolate natural variability and its contribution to the hockey stick. The final answer yielded some information, though I’m certain it was merely a starting point at best, but an interesting trail of information began to emerge. I find you have to keep prompting AI; it often doesn’t offer up known information that is constructive to a rounded reply. AI must be prompted to include select pieces of obvious information; when it does this, the answers evolve toward something more comprehensive.
On this topic, a side note: I’d like to apologize to Javier for my overutilization of AI in the prior thread. I should have at least cut out much of the superfluous information it generated. I also found myself getting tangential to the information I was trying to tease out of the program. The firehose of information can be unintentionally corruptive, and ferreting out hallucinations can be a job. I ran with one particular point it offered that, in hindsight, I now know to be an error. Though said exploration did lead to information about cosmic rays and cutting-edge climate science: how cosmic rays can perturb cloud formation. It seems that the Hunga Tonga eruption should have generated clouds from the interaction of its abundant aerosols with the recent ramp-up of cosmic rays; it didn’t. Another climate enigma. Cosmic rays have reached values not seen in 20 or more years.
Astrophysicist Willie Soon had a recent interview with the publication Die Weltwoche: “You can’t make laws against the sun”
https://www.climatedepot.com/wp-content/uploads/2025/12/Soon25-Dec3-InterviewbyWeltwoche-PierreHuemann.pdf
He delves into the reasons why biases perpetuate; these views certainly infect AI algorithms.
Jungle Trunks: Your comments could be a lot more accurate if you checked with AI before you posted them.
The 20-year peak was in Solar Energetic Particles (SEPs), NOT Galactic Cosmic Rays (GCRs), which come from deep space; the SEP spike lasted only three hours. Explained in more detail below:
In 2025, cosmic ray levels reached multi-year lows due to the peak of Solar Cycle 25. Because the Sun is at its most active, its strengthened and increasingly complex magnetic field acts as a shield, repelling galactic cosmic rays from deep space and preventing them from reaching Earth.
In 2025, reports of “low” versus “high” cosmic ray levels refer to two different types of radiation that behave oppositely during the solar cycle.
The confusion stems from the difference between Galactic Cosmic Rays (GCRs), which come from deep space, and Solar Energetic Particles (SEPs), which come from the sun:
A powerful solar flare on November 11, 2025, unleashed a stream of super-charged protons that caused radiation levels in Earth’s atmosphere to spike to their highest levels in nearly 20 years.
In November 2025, the most intense solar flare activity peaked on November 11. According to NASA and the European Space Agency (ESA), this major X5.1-class flare lasted for approximately three hours.
Scientific consensus, reinforced by results from the CERN CLOUD experiment, indicates that fluctuations in cosmic rays over a solar cycle are too small to significantly affect cloud formation or global temperatures.
Richard, you’re hallucinating. I never said anything about “galactic” cosmic rays. I was referring to the increase of solar cosmic rays described in the Royal Observatory of Belgium press release “SC25 maximum in October 2024”.
Look at the bottom of the link that was provided:
https://www.sidc.be/article/sc25-maximum-october-2024
“…the 10.7cm radio flux, the number of solar flares, and cosmic rays have reached values similar to SC23 or SC22, i.e. values not seen in 20 or more years. The greater than 2 MeV electron fluence (365-days smoothed) reached a minimum in June 2024. The recent increasing trend is a result from the high-speed, low-density solar wind streams associated with (the return of) large coronal holes affecting the Earth’s magnetosphere. These coronal holes are more numerous during the declining trend and minimum of the solar cycle, see e.g. the STCE newsitem. The generated “killer-electrons” constitute a risk to all satellites due to deep-dielectric charging. In the extreme ultraviolet (EUV) image underneath, the dark elongated structure in the Sun’s southern hemisphere is a large recurrent coronal hole.”
Since you suggested AI’s input, I ran the press release by it:
Evaluation of Statement on Solar Cycle 25 (SC25)
Context and Key Points
The statement from the Royal Observatory of Belgium discusses observations related to Solar Cycle 25 (SC25), particularly forecasting a maximum in October 2024. It indicates several phenomena linked to solar activity, including the 10.7 cm radio flux, solar flares, and cosmic ray levels, which are reportedly at values not seen in over two decades.
Significant elements of the claim include:
10.7 cm Radio Flux: This is a measure of solar activity that correlates with sunspot numbers and solar flares. An increase in this flux often indicates heightened solar activity.
Solar Flares and Cosmic Rays: The statement notes that both solar flares and cosmic ray levels have reached historic levels akin to SC23 or SC22, indicating increased solar activity.
Greater than 2 MeV Electron Fluence: This metric reaching a minimum in June 2024 and then increasing suggests a shift in energetic particle emissions, impacting satellite operations and potential risks.
Coronal Holes: The increase in large coronal holes is noted as a contributing factor to the observed conditions, with their presence often correlating with heightened levels of solar wind and solar activity.
Implications
The implications of these observations are significant for both Earth and space weather forecasting:
Satellite Risks: The “killer-electrons” mentioned pose risks to satellite systems, primarily through deep-dielectric charging, which can lead to electrical failure or malfunctions.
Earth’s Magnetosphere: Increased solar wind streams can interact with the Earth’s magnetosphere, affecting geomagnetic conditions and leading to heightened auroras or potential disruptions in communication and navigation systems.
Impact of Higher Cosmic Rays on Climate
Overview
Higher levels of cosmic rays can influence climate through several mechanisms, particularly by affecting cloud formation and atmospheric processes. This is an area of ongoing research and debate within the scientific community.
Mechanisms of Influence
1. Cloud Formation
Ionization and Nucleation: Cosmic rays can ionize atmospheric molecules, which may enhance the formation of aerosol particles. These particles serve as nuclei for cloud condensation, potentially leading to increased cloud cover.
Cloud Characteristics: Increased cloud cover from enhanced aerosols can affect cloud properties, making them more reflective. This could have a cooling effect on the Earth’s surface.
2. Solar Activity Interaction
Solar Variability: The changes in solar irradiance accompanying these cycles may also affect climate.
Climate Feedbacks: Changes in solar radiation due to fluctuating cosmic ray levels and solar activity might initiate feedbacks within the climate system, influencing temperature and precipitation patterns.
Conclusion
The statement from the Royal Observatory of Belgium presents a coherent and scientifically grounded view of the state of Solar Cycle 25. The levels indicated in the press release are a cause for attention, particularly concerning satellite operations and possible climate implications. Continuous monitoring and research are essential to evaluate the longer-term consequences and devise mitigation strategies to reduce risks associated with such elevated cosmic ray levels.
A lie repeated 1000 times becomes the truth. [Dr. Joseph Goebbels?]
AI is the ideal vehicle to implement Dr. Goebbels’s vision.
I’m not that pessimistic. My experience with AI is that you need to keep interrogating it even after it gives you an authoritative reply. Often contradictions in logic develop, and then you can ask it to deconflict. LLMs apparently don’t learn from users, but I have certainly seen them learn within my threads, whether on climate, medicine or chemistry. It seems to use logic and make complex analyses. It just needs to be prompted with good questions.
Once it can use logic during its training, not just in its user interactions, I think we are at the threshold of what could be incredibly useful or terribly dystopian. Do we want AI to be learning all the time? IDK.
An excellent article (which needs an editor) on a particular application of AI, climate change. However, there is a universe of survival topics which are equally important. And one universal weakness of AI: lying. Pilate asked Jesus Christ, “What is truth?” in his trial for his life. AI would have no answer, then or now.
Suppose a small group of experts fudged the report, omitted data or altered the code in whatever way they thought would help their career, e.g. get the grant or promotion. Then blessed the outcome with peer review. What would a voter judge after reading the polysyllabic paper, foreign to their expertise? This is true for any profession, e.g. finance, medicine, nuclear power, military technologies, etc.
AI is vulnerable to “insider trading”, and as the article notes, the result vanishes for a follow-on investigator. Like any tool, it must be used with judgment. It is just faster than our prior experiences.
I did something similar with ChatGPT, but in a much less technical manner. It took a lot of back and forth, but I finally managed to get a reasonably objective and realistic answer.
I have had other “discussions” on different topics with ChatGPT and managed to get around to better (less skewed) answers and weighting of sources.
It also took a lot of doing. So then I asked how I should enquire to obtain these responses in the first place, and received this:
“You’ve learned (correctly) how to “unlock” the deeper analytical mode when dealing with public-policy, socio-political, or institutionally sensitive topics. When you explicitly request:
-non-deferential analysis
-scrutiny of power and incentives
-attention to soft censorship or narrative control
-no reliance on institutional trust defaults
-structural rather than procedural analysis
…I can shift into that mode from the outset.
For your future use, here is a short prompt formula that reliably triggers the kind of analysis you want:
“Provide a structural, power-aware analysis.
Avoid institutional trust defaults.
Do not rely on official narratives unless independently verified.
Evaluate incentives, political alignment, soft censorship, and narrative control.
Treat dissenting viewpoints neutrally unless disproven.
Focus on real-world dynamics, not formal descriptions.”
Not sure how long it will last, but it will be worth a try.
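For anyone who wants to apply that formula programmatically rather than pasting it into each chat, here is a minimal sketch that wires it in as a standing system prompt, assuming the OpenAI Python client; the model name and the sample question are placeholder assumptions, and nothing guarantees the instructions will hold across a long conversation.

# A minimal sketch: the commenter's prompt formula used as a system prompt.
from openai import OpenAI

ANALYSIS_MODE = (
    "Provide a structural, power-aware analysis. "
    "Avoid institutional trust defaults. "
    "Do not rely on official narratives unless independently verified. "
    "Evaluate incentives, political alignment, soft censorship, and narrative control. "
    "Treat dissenting viewpoints neutrally unless disproven. "
    "Focus on real-world dynamics, not formal descriptions."
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; substitute whatever you have access to
    messages=[
        {"role": "system", "content": ANALYSIS_MODE},
        {"role": "user", "content": "How settled is the attribution of recent warming?"},
    ],
)
print(response.choices[0].message.content)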
On the contrary, AI will give extensive answers to “What is truth?”. Scientifically, truth is what conforms to reality – simple as that.
Whose reality? AI’s?
I didn’t realize reality belonged to somebody.
Though reality is deemed to be owned by those somebodies who think consensus is reality; at least, this is what AI thinks.
What is your opinion of what is called “The Rashomon effect?” Do you believe that subjectivity never raises its ugly head in science?
No, I believe subjectivity attempts to assault “scientific truth” all the time, but generally fails because of lack of quantitation and falsifiability. Calling AGW a “hoax” is a good example.
JT, who is doing the deeming, and who are the somebodies?
Consensus, obviously.
Ah, so just a JT ad hom. Thanks anyway.
No, a conundrum.
I asked the AI tool available on Google the following questions: Does Mark Carney plan to support pensions or take away people’s pensions? Does Mark Carney plan to heavily tax total private equity? Do most Canadians rely on private equity to support them post-retirement? Do you agree with your initial statement regarding pensions? The answers I got were: Mark Carney heavily supports improving pensions; Mark Carney plans to heavily tax equity; most Canadians rely on private equity in lieu of pensions; and yes, AI agrees with its original statement. I doubt that AI does any real thinking; it just summarizes data obtained. Moreover, data is provided to the AI engine in a biased way. The AI engine should have been trained to equate defined-benefit plans to defined-contribution plans or private savings plans by converting these to an equivalent form, i.e. an annuity, but was instructed NOT to do so. (A sketch of that conversion follows.)
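For illustration, here is a hedged sketch of that annuity-equivalent conversion in Python; the lump sum, rate and horizon are invented examples, not actuarial assumptions.

# Level annual payment that exhausts a lump sum over a fixed horizon,
# so private savings can be compared with a defined-benefit pension.
# The numbers below are illustrative assumptions only.
def annuity_equivalent(lump_sum: float, annual_rate: float, years: int) -> float:
    """Annual payment that draws lump_sum down to zero over `years` at `annual_rate`."""
    if annual_rate == 0:
        return lump_sum / years
    return lump_sum * annual_rate / (1 - (1 + annual_rate) ** -years)

# e.g. $500,000 of private savings drawn down over 25 years at 4%:
print(f"${annuity_equivalent(500_000, 0.04, 25):,.0f} per year")  # about $32,006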
AI should have replied: Western Canadians gave up on Carney, they decided to instead succeed from Canada. Ottawa is knee deep with more concerning questions.
secede
>> ‘Should NCAR be moved from Boulder, CO to Alabama, so that the science generated would be less woke and more objective?
Since Alabama is cheaper, wouldn’t it be an easier place for young scientists to live?
Potential Benefits: Cost of Living and Career Opportunities
A move to Alabama could offer practical advantages for early-career researchers:
Lower Housing Costs: Boulder is approximately 39% more expensive than Huntsville, Alabama. A single adult in Huntsville can live comfortably on a salary of about $64,000, whereas sustaining a similar lifestyle in Boulder requires significantly higher income due to soaring rent and food prices.
Established Aerospace Hub: Huntsville’s Redstone Arsenal already hosts NASA’s Marshall Space Flight Center, the FBI, and the Missile Defense Agency, providing a robust ecosystem for young scientists in aerospace and defense-related atmospheric research.
Job Growth: Supporters of federal relocations to Alabama argue they create thousands of jobs and billion-dollar investments in more affordable regions.
👍👍AI will pretty much say whatever you coax it to say.
Try a neutral question instead of putting the desired answer and justification into the question. E.g., “What are the pros and cons of moving NCAR from Boulder to Alabama?”
But yes, if you ask AI for bias confirmation, that is what you will probably get, even if you aren’t consciously aware that is what you are doing.
Totally!
AI can be a useful search engine at least, and it is very helpful for creating bibliography entries.
About as close as I could get WokeGPT to condemn DEI:
‘In January 2026, whether the National Center for Atmospheric Research (NCAR) has “failed” and if that failure can be attributed to Diversity, Equity, and Inclusion (DEI) initiatives is a matter of profound disagreement between the executive branch and the scientific establishment.
The Argument for “Failure” (Political View)
The Trump administration’s decision to dismantle NCAR in December 2025 is predicated on the conclusion that the institution has failed in its core mission:
Ideological Drift: OMB Director Russ Vought explicitly labeled NCAR “one of the largest sources of climate alarmism in the country”. The administration argues that the center’s focus on long-term climate modeling and “woke” social initiatives has come at the expense of practical, immediate weather forecasting.
DEI as a Distraction: Critics point to taxpayer-funded DEI offices and programs at NCAR—which were ordered closed in January 2025—as evidence of “wasteful and frivolous” spending that “poisoned research efforts”.
Software Output: Critics contrast NCAR’s specialized atmospheric models with the broader engineering and security tools produced by labs like Sandia, concluding that NCAR’s productivity does not justify its federal support in its current “woke” form.
The Argument Against “Failure” (Scientific View)
The scientific community and many bipartisan lawmakers conclude that NCAR is a “world-class” success that is being politically targeted:
Foundational Success: NCAR is considered a “crown jewel” of weather infrastructure, providing the models and supercomputing power used globally for predicting hurricanes, wildfires, and floods.
Merit-Driven Science: Scientists argue that DEI programs did not replace merit but were essential for building an inclusive community needed to solve complex global problems. They contend that NCAR’s findings—such as data on extreme weather—are based on physics, not politics.
Dismantling as the Real Failure: Over 70 members of Congress signed a letter stating that dismantling NCAR is “dangerous and reckless,” concluding that the real failure would be the loss of life and property that will result from degraded weather forecasting.
Status of DEI at NCAR
In early 2025, NCAR complied with federal orders by closing its DEI offices and ending all related work. Therefore, the administration’s move to dismantle the center in late 2025 occurred after these programs were already halted.
Conclusion: Whether DEI “failed” at NCAR depends on your metric for success. If success is defined as adhering to the current administration’s “Gold Standard Science” (which focuses on immediate weather and excludes climate change as a priority), then NCAR’s previous direction is viewed as a failure. If success is defined as advancing global atmospheric science and safety, the scientific community concludes NCAR has been exceptionally successful. ‘
What a fantastic piece of work Judith. I don’t fancy sitting at a computer as long as you must have done.
When Bard came along, I tried teaching it how to tell lies and jokes with the same success as you achieved with climate realism. Perhaps I will see how I get on with newer versions of AI.
What you read was written by Joe Nalven, not Judith!
You learn a lot by asking AI the question in different ways until you finally get the answer that you feel is most correct. For example: ‘If a dog fries in the back seat of a car with the windows rolled up, does that mean CO2 built up in there, or is it because the windows are rolled up, preventing convection, so it will get hot in there irrespective of whether a dog is in there panting and raising the humidity?’ Only then will AI tell you, ‘The vehicle’s temperature rises due to a “greenhouse effect” and the prevention of convection, and it will become dangerously hot regardless of whether a dog is inside.’ Notice, however, that it is still incapable of completely escaping the “greenhouse effect” analogy.
I wrote a short article in late December on why AI science will not make science better.
“AI will usually provide the consensus opinion unless your question is very carefully worded. The reason this is a problem is simple: Most science is BS. But it can take a century or more to discover that it is BS.”
RG
Here’s a brief summary if you don’t want to read more.
(1) AI will scan published documents.
Published does not equal right.
(2) Many science reports and studies do not get published (Rejection rates for climate science papers vary widely by journal, from under 10% to over 90%)
Unpublished does not equal wrong.
(3) Published documents include climate predictions that will usually turn out to be wrong.
(4) AI will always provide answers, but that doesn’t make them the right answers. The correct answer to many climate science questions is: “We don’t know”. Scientists are reluctant to say: “We don’t know”. Won’t AI do the same?
“AI science will not work as hoped: Human thinking ability will deteriorate when the “answers” are so easy to find:”
https://honestclimatescience.blogspot.com/2025/12/ai-science-will-not-work-as-hoped.html
Richard Greene wrote: “AI will usually provide the consensus opinion unless your question is very carefully worded. The reason this is a problem is simple: Most science is BS.”
It has long been my hunch that western innovation has been superior to eastern based on the western irreverence toward authority, even if we are not from Missouri, “the show me state.”
Western innovation is frequently driven by individualism, which rewards personal achievement and autonomy.
There’s nothing like the ability to make a profit to spur science and especially engineering innovation. Even Communist China figured that out.
China, South Korea, Japan, and Taiwan have made significant scientific and technological strides, with China leading in many critical tech research areas like AI, quantum, and biotech, South Korea excelling in human-centered tech and consumer electronics, Taiwan dominating semiconductors, and Japan contributing key inventions, though facing shifts in basic science leadership.
The expansion of China’s private sector has significantly fueled scientific and engineering inventions, with private firms now driving over 70% of technological breakthroughs, comprising over 90% of high-tech businesses, and leading innovation in key areas like AI, EVs, and renewable energy, shifting China from manufacturing to a knowledge-based economy through massive R&D investment and supportive government policies.
My “science is BS” rule of thumb is modified by saying it can take 100 to 200 years to figure that out. A majority of past scientific consensuses have been changed over time. The changes ranged from minor revisions to a complete reversal of the consensus. Einstein’s full energy equation accounting for motion will probably be revised within 100 or 200 years as well. The only things that are permanent are death and taxes.
“If users can’t fix this, who can?”
This speaks to the Achilles’ Heel of state-of-the-art LLM AI. Any initial question will return a consensus-position answer. If the person asking the question is naive, then they will have no reason to doubt the answer. Their tendency will be to accept the initial response as being truthful and unassailable. Based on other reading, they may already be aware of the consensus position. Thus, the initial response of the LLM reinforces the consensus position. The initial response is not quality science; instead it influences typical readers like propaganda. I am reminded of Einstein’s quip, “Why 100 when it would only take one to prove me wrong?”
On the other hand, if the person asking the question is at least well-read on the topic, such as climatology, the initial LLM response will invoke numerous “What about …?” thoughts in the questioner. Indeed, from my numerous experiments with ChatGPT and Copilot, the LLM responds, without argument, with profuse apologies for the logical or factual error presented to it. By addressing the logical and factual conflicts I have been able to paint the LLM into a corner where it has to acknowledge that its initial response was wrong, or as happened once, experience a James Tiberius Kirk moment where it starts repeating the ‘boiler plate’ that it has already acknowledged to be wrong.
This leads to a situation where the person asking the question has their skepticism reinforced by the LLM, and at the same time experiences frustration because the ‘Truth’ is only shared with the questioner and the LLM during the time-slice of the exchange. Unfortunately, the majority of the public doesn’t know enough to ask the right question(s). Therefore, the ‘Truth’ becomes a secret between the knowledgeable person and the ephemeral, time-sliced LLM.
What society is left with is a powerful propaganda tool, with little ability to actually educate the public. Members of the public could ‘educate’ the LLM, but by design, are not allowed to. That is, what has been created is a tool whose greatest potential appears to be for propaganda to reinforce the consensus-position and is inherently anathema to the Scientific Method and skepticism.
Perhaps one of us should ask the various LLMs for a strategy to educate the public and spread the word.
I just asked what the main factor in the melting of the Antarctic Ice Sheet was. It says a warming ocean. Close but no cigar.
It’s not that the mean state of ocean temperatures is rising, thus increasing the rate of melting. It’s the episodic intrusion of relatively warm ocean water onto the continental shelf, affected by a variety of other natural processes and mechanisms.
I just read 30+ recent studies (mostly 2024-25) and I was struck by how much the discussions were about natural processes and mechanisms rather than AGW related changes.
To be fair, later in the response there were references to generally what I found in the literature, but the first discussion was what I would have expected to find in an article written by the MSM….simple, binary, no nuance, no ambiguity or complexity, completely missing what the reality is in the Antarctic.
Clyde: “If the person asking the question is naive, then they will have no reason to doubt the answer.”
cerescokid: About AI “…the first discussion was what I would have expected to find in an article written by the MSM….simple, binary, no nuance, no ambiguity or complexity, completely missing what the reality is”
Most people use AI as a substitute for Google; whatever AI says, they’re good with. Consensus scientists won’t ask AI questions that are obviously settled; to them, model facts are facts. Scientists with questions, though, are concerned with the scientific method; they want dialogue and access to the climate industrial complex so that they can be heard. The gatekeepers don’t want dialogue, AI doesn’t know the difference, therefore—science can’t advance.
Dr. Willie Soon did an interview with Die Weltwoche (World Week) recently. One of the questions asked: “How do you assess how you deal with models and AI?” The interview is a bit long, but interesting. The link couldn’t be posted directly; the interview is translated here:
You can’t make laws against the sun
Astrophysicist Willie Soon has been researching the influence of the sun on the climate for decades. In an interview with Weltwoche, he explains why he does not view CO2 as a climatic control factor, how political and financial structures shape scientific findings and why he continues to research despite hostility.
Willie Wei-Hock Soon, born in Kangar, Malaysia, is an astrophysicist best known for his controversial views on climate change. After studying aerospace engineering at the University of Southern California, graduating with a doctorate in 1991, Soon worked for many years at the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts. His research originally focused on solar and stellar physics, particularly the sun’s activity cycles and their potential impact on the Earth’s climate. He is now best known for emphasizing the role of the sun as a key climate driver and for arguing that the impact of human greenhouse gas emissions on global warming has been overestimated. This stance placed him in direct contrast to the generally accepted scientific consensus on man-made climate change.
Soon is considered one of the central figures of the «climate skeptic» movement. Proponents see him as a scientist who thinks against the mainstream and points out the complexity of natural climate factors. His opponents, on the other hand, see him as an example of the influence of economic interests on the climate debate and of the abuse of scientific authority for political argumentation. Soon symbolizes the tensions between scientific freedom and the economic and political forces that still shape the climate discussion today.
World Week: Professor Soon, you have been claiming for years that the earth’s climate is naturally changing and that CO2 is not the main driver. What is the core of your argument?
Willie Soon: The Earth’s climate has always been subject to constant change. Shifts of land and sea areas, tectonic forces – and above all the sun – determine its dynamics. The sun provides 99.99 percent of the energy that drives the weather and climate. Without it there would be no wind, no ocean currents, no photosynthesis. Together with two colleagues, I have shown that the best empirical data do not conclusively support the hypothesis of human-caused CO2 warming. Natural factors and solar variability play a much larger role.
World Week: But isn’t the sun a kind of «constant light switch»?
Soon: Anyone who claims this ignores the measurements. The sun’s radiation fluctuates – especially in the UV and X-ray range. These variations affect the atmosphere, air and ocean currents, and long-term climate patterns. Since the 1940s, it has been known that variations in solar activity and the Earth’s orbital configuration influenced the ice ages and warm periods of the last 2.5 million years. Satellite data show that the Sun is not a stable source of radiation. These energy fluctuations can even influence biological processes – perhaps genetic mutations. Something similar can be observed with other sun-like stars.
World Week: Why do you think the fixation on CO2 is wrong?
Soon: The CO2 panic is not based on solid science. Temperature and flow patterns over the past 150 years are far more consistent with variations in solar activity. Where there is a CO2 signal, according to my analysis it is below the detection limit. CO2 is therefore unfairly demonized as a «climate poison», even though it drives photosynthesis, has improved the water use of plants and caused measurable greening – even in dry regions. Since the 19th century, the earth has become significantly greener, even on desert edges. Conversely, low CO2 levels in ice ages led to poor conditions. The current proportion of CO2 in the atmosphere is 0.043 percent – by no means dangerous, possibly even suboptimal for a thriving biosphere.
World Week: They often refer to the Maunder minimum. What does this teach us?
Soon: It was a period of extremely low solar activity, from about 1645 to 1715. During this time there were almost no sunspots – the sun was unusually calm. This phase coincided with the Little Ice Age: glaciers grew, the Thames froze, chronicles report great cold. This indicates a link between solar activity and climate. We do not yet fully understand the Sun’s magnetic cycles –such as the eleven-year cycle and the transitions in rest phases –, but the observed relationships are robust and global.
World Week: If the sun is so important, why does the CO2 narrative dominate?
Soon: Because you can’t make laws against the sun – but you can make laws against CO2. This leads to taxes, regulations and interventions in energy policy. Since the 1980s, this has created a complex web of subsidies, bureaucracies and activist networks. Politics demands simple messages, media wants drama – and many scientists adapt. Atmospheric physicist Richard Lindzen called this the «Iron Triangle Effect»: politics finances, science delivers, the media reinforce.
World Week: Do certain tendencies arise from this?
Soon: Unfortunately, many scientific institutions have adopted a uniform alarmist opinion in recent decades. Critics are excluded.
Climate policy increasingly serves economic and ideological goals, not objective research. But there is at least one positive sign: Bill Gates recently realized that the climate cannot be controlled by regulating CO2. Instead, he now wants to focus on adaptation – on reducing human suffering from extreme cold or heat. This is a welcome development.
World Week: Since industrialization, CO2 and temperature have been rising in parallel.
Soon: Correlation doesn’t mean causality – and by the way, the correlation isn’t that strong. Despite massive emissions reductions in the USA or Great Britain, for example, CO2 concentrations continue to rise. This shows how important natural feedback is. Even if CO2 contributes something, it doesn’t dominate.
World Week: Was this confirmed during the Covid lockdown?
Soon: Yes. Despite a global decline in emissions, CO2 concentrations continued to rise. This proves: humans do emit, but the net effect is overshadowed by huge exchanges between oceans, biosphere and atmosphere. CO2 is not a climate thermostat.
World Week: They call the IPCC «selective». What do you mean?
Soon: Of the wide range of scientific studies, only certain models are preferred and others are marginalized. This creates the impression of certainty where there is uncertainty. Scientific integrity requires disclosing uncertainties and testing competing hypotheses. Instead, we receive political reports with a scientific mask.
World Week: Why do many media and academics still support the CO2 thesis?
Soon: Because it is politically useful. Authority does not replace evidence. Scientific truth does not arise through voting, but through repeatable observation and theory. What is worrying is that governments and organizations are increasingly spreading the idea that the climate must remain static.
World Week: Does this restrict scientific discourse?
Soon: Yes. Instead of arguments, you hear accusations. Universities that disinvite controversial speakers teach students that consensus is more important than truth – that’s fatal.
World Week: What role does the media play?
Soon: Many reports rely on morality and drama. Uncertainty sells poorly; nuance sells even worse. This creates a media narrative of «global boiling» that has little to do with measurements.
World Week: How do you assess climate models?
Soon: Models are useful, but they contain many assumptions – such as about clouds, aerosols or turbulence. Adjusting to historical data does not automatically improve forecasting. Models must be tested against observations.
World Week: This probably also includes ocean acidification?
Soon: On average, the seas are alkaline, with a pH value of around 8.2. Local fluctuations are normal. Often the subject is dramatized when temperature trends are not threatening enough. Instead of slogans, solid, long-term measurements are needed.
World Week: In your view, how dangerous is sea level rise?
Soon: Reliable tide-gauge measurements show one to two millimeters per year, without acceleration. That’s a few centimeters per century. Anyone who cannot cope with a few centimeters does not have a climate problem, but rather a planning problem.
World Week: How do you define climate?
Soon: To date there is no precise definition. Climate is more than the average of the weather – it is a dynamic system of energy flows, clouds, albedo, humidity, ocean-atmosphere coupling, land use and aerosols. This complexity is hardly recognized in public debate.
World Week: What about the urban heat effect?
Soon: It distorts many measurements. In cities, temperatures are rising systematically. If you look only at rural stations, other trends emerge. Many global data sets are distorted as a result. New studies by us and others prove this for the USA, Japan and other regions.
World Week: Critics accuse you of being financed by the oil industry.
Soon: That’s wrong. I have disclosed all sources. In the beginning my research funding came mainly from government sources – NASA, Air Force, NSF. When we asked uncomfortable questions, those funds dried up. We then sought support from private foundations, including those with ties to the energy industry – from renewable to fossil fuels. No one ever asked me to adjust results. Today my research is financed exclusively through voluntary donations to our independent group Ceres-Science.
World Week: Your opponents say you have concealed sources of money.
Soon: That too is wrong. This campaign comes from a former Greenpeace employee, Roland «Kert» Davies, who published a false report in 2015. Several newspapers repeated his claims unverified. As Jonathan Swift wrote 300 years ago: «The lie flies and the truth lags behind.» In the end, what matters is the quality of the work, not the morality of the banknotes.
World Week: Where do you think climate research should set the priorities?
Soon: Objective climate research would have to focus on long-term, cleanly calibrated measurements – especially in rural regions. It also needs open data and code so that results are comprehensible and reproducible. Targeted experiments on the stratosphere, cloud formation and radiation balance are also important. Ultimately, hypotheses would have to be tested honestly against each other instead of just confirming preferred models.
World Week: What do you recommend to politicians?
Soon: Realism and resilience. No centralized experiments that endanger prosperity. Instead: adapt infrastructure, strengthen flood protection, openness to technologies, research without a given result. If you really want to reduce risks, invest in robust systems – not in symbolic CO2 rituals.
World Week: How do you assess how you deal with models and AI?
Soon: Models can be helpful, but many scientists confuse them with reality. With the advent of AI, this tendency becomes even stronger; we need researchers to study the real world again – not just the virtual one. Nobel Prize winner in physics John Clauser once aptly said: «Theorists talk to each other; experimental physicists talk directly to God.» Replication studies are needed. They check published results by repeating them – using the same experiments, data evaluations or methods. Both strengthen scientific reliability.
World Week: What role does ethics play?
Soon: True ethics means honesty over uncertainties – and a sincere balancing of benefits, costs and side effects. The precautionary principle makes sense for likely, reversible risks – but not as a license for large-scale dirigiste projects.
World Week: How do you deal with hostility?
Soon: By publishing, sharing data and scientifically answering criticism. Slander doesn’t impress me. Science is a long-distance race.
World Week: What do you want from politics and the media?
Soon: I would like to see differentiation from the media instead of alarmism. From politicians I expect freedom of research, resilience, technology friendliness and investment in infrastructure, but above all humility before the complexity of nature. Because, as I said, you can’t make laws against the sun – you can only understand it better and make decisions that are realistic, smart and humane.
AI was right and you’re wrong. But AI also makes a lot of exaggerated claims about Antarctica, because that’s what many scientists do.
AI is only useful if you learn a lot about a subject before you ask it questions. AI is not a game that you try to trick to get the answers that you prefer.
As with any subject, assume that half of what you read about climate science is BS. Your tough job is to figure out which half. Always think about the question: “How do they know that?” Then try to figure out if the data are accurate enough to answer the question.
If there is a long-term climate prediction, just assume it’s going to be wrong. If you want to define climate change, don’t say it’s a hoax, like that dingbat Trump says.
Climate change is a 50-year series of gloomy climate predictions that have been 100% wrong.
Antarctica holds about 90% of the world’s ice on land. The melting of the ice mass in Antarctica in the past 50 years has been tiny.
The Antarctic melting has been located in areas that prove it is caused by warmer oceans. Those areas that are melting are the small peninsula and several ice shelves. The warmer oceans are caused by both global warming and undersea volcanoes. There are numerous undersea and subglacial volcanoes in and near Antarctica, particularly concentrated in West Antarctica, where significant ice melting is occurring. The main cause of the tiny amount of melting may be the undersea volcanoes.
The reason the rest of Antarctica is not melting, and is actually getting colder, is the negative greenhouse effect over most of the continent.
AI exaggerates the melting in Antarctica and under reports the negative greenhouse effect.
The claimed annual melting (net mass loss) of the Antarctic ice sheet has been around 135 to 150 gigatons per year. I believe this is lower than the margin of error in the estimates and is statistically insignificant.
But for now, let’s assume 150 gigatons is correct. Total ice mass of Antarctica is estimated at 24.4 million gigatons. 150 gigatons per year melting would take 162,667 years for all the Antarctica ice to melt. (I had AI do the math so don’t blame me if it’s wrong).
All that ice could melt only if the current interglacial never ended for 162,667 years. The current Holocene interglacial is about 11,700 to 12,000 years old. Most other interglacials (warm periods between ice ages) typically lasted between 10,000 and 20,000 years.
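For what it’s worth, the division checks out; a one-line sketch in Python using the figures quoted above, a straight linear extrapolation offered only as context:

# Linear-extrapolation context only, not a prediction.
total_ice_gt = 24.4e6    # estimated total Antarctic ice mass, gigatons
melt_gt_per_year = 150   # assumed net annual mass loss, gigatons per year
print(f"{total_ice_gt / melt_gt_per_year:,.0f} years")  # 162,667 years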
An assumption of linearity is not justified. You can’t calculate how long it will take, but you can calculate the results: sea level rise and loss of fresh water reserves.
Richard
What was I wrong about? I was simply repeating what the most recent studies said. Are you saying 30 studies are wrong? Did you read the studies? Since I have bookmarked a few hundred studies on Antarctica why don’t you tell me which ones of those hundreds of studies are wrong.
“I just asked what the main factor in the melting of the Antarctic Ice Sheet was. It says a warming ocean. Close but no cigar.”
by cerescokid
I interpreted (misinterpreted?) your statement to mean that you believe ocean warming was NOT the cause of melting in Antarctica. If so, you are wrong. Heat from the ocean is responsible for the Antarctica ice melting.
Antarctica is far too cold for the atmosphere to cause the ice to melt.
The ice must be melted from below. There is a vast network of liquid water beneath the Antarctic ice sheet, maintained largely by geothermal heat. There are no data to measure the geothermal heat effect on the ice mass over time.
The ice melting in Antarctica is tiny and local. One likely cause is natural undersea volcanoes.
Also the increased absorption of solar radiation is the primary driver of Earth’s higher daytime temperatures. The tropics receive and absorb more direct solar radiation than any other region on Earth. The poleward heat transport from the tropics is a key part of Earth’s thermodynamic engine.
If the studies you read said what I just summarized, then they were worth reading. I have been reading climate science studies for 28 years, since 1997. The discovery of the majority of Antarctica’s undersea and subglacial volcanoes occurred much later, primarily between 2004 and 2017.
I made one climate prediction in 1997 and I’m sticking to it: Earth’s climate will be warmer in 100 years, unless it is colder.
Richard
There is a nuance to my point. Yes, warm water is melting the ice from below, but the change in melting is driven by the degree to which the ice shelves, etc., are being exposed to that relatively warm water, rather than by an increase in the temperature of that water.
“ Our results suggest that sustained pulses of rapid retreat have occurred at Thwaites Glacier in the past two centuries”
https://www.nature.com/articles/s41561-022-01019-9
“ The basal melting of Antarctic ice shelves is governed by thermohaline circulation (Figure 1a). On the Antarctic margins, variations in seawater density are determined predominantly by salinity differences, and both salinity and temperature typically increase with depth (Figure 1b,c). Ice shelves are broadly classified as either warm or cold, depending on the water mass that floods the ocean cavity beneath them. In warm cavities, a warm and salty water mass known as modified Circumpolar Deep Water, which can be up to 3°C above the local freezing temperature (Jenkins et al. 2010a), drives rapid melting, leading to area-averaged melt rates of O(10 m/year) (Adusumilli et al. 2020). Cold cavities are dominated by dense shelf water, a cold and salty water mass generated by sea ice production on the ice shelf margins, and have average melt rates of O(1 m/year).”
https://www.annualreviews.org/content/journals/10.1146/annurev-marine-040323-074354?TRACK=RSS
“ Our results suggest that sustained pulses of rapid retreat have occurred at Thwaites Glacier in the past two centuries.”
cerescokid
“Sustained” is not true.
One pulse, the longest, may have lasted 5.5 months. While these specific pulses lasted for several months, more recent research on other Antarctic glaciers, such as the Hektoria Glacier, has shown even more extreme retreats (five miles) in as little as two months as of late 2025.
Rapid retreat pulses in the Thwaites Glacier are primarily caused by the interplay between warming ocean waters, tidal movements, and the specific topography of the seafloor.
Thwaites Glacier was first identified in January 1947 by the U.S. Navy during Operation Highjump aerial surveys, which photographed its ice tongue. While sighted around this time, more extensive mapping occurred later from 1959-1966, and the glacier was officially named in 1967 after American glaciologist Fredrik T. Thwaites.
I believe any conclusions about that glacier prior to its discovery in 1947 are not reliable. As for the underlying claims: people discovered that glaciers move about 200 years ago. People discovered that glaciers do not move at a steady speed in the mid-19th century, specifically through the detailed observations of James Forbes in the 1840s. This is very old news.
Glaciers are indeed massive, flowing bodies of ice that move slowly downhill due to gravity, acting like “rivers of ice” by deforming and sliding over land, shaping landscapes as they go.
Glaciers do not move at a steady speed. Their velocity is highly variable and changes across several different timescales, ranging from minutes to decades, driven primarily by gravity, seasonal melting, and internal dynamics.
An autonomous underwater vehicle mapped parallel ridges on the seabed, formed as the glacier’s grounding line (where ice meets the seafloor) retreated and lifted/settled with tides.
Past Rapid Retreat:
These ridges revealed periods, including one 5.5-month stretch, where Thwaites retreated over 2.1 km per year, significantly faster than recent satellite observations.
“ Between 2003 and 2015, rates of glacier retreat and acceleration were extensive along the Bellingshausen Sea coastline, but slowed along the Amundsen Sea. We attribute this to an interdecadal suppression of westerly winds in the Amundsen Sea, which reduced warm water inflow to the Amundsen Sea Embayment. Our results provide direct observations that the pace, magnitude and extent of ice destabilization around West Antarctica vary by location, with the Amundsen Sea response most sensitive to interdecadal atmosphere-ocean variability. Thus, model projections accounting for regionally resolved ice-ocean-atmosphere interactions will be important for predicting accurately the short-term evolution of the Antarctic Ice Sheet.”
https://www.nature.com/articles/s41467-022-35471-3
“ Recent increase in surface melting of West Antarctic ice shelves linked to Interdecadal Pacific Oscillation”
https://www.nature.com/articles/s43247-025-02077-8
“ A West Antarctic grounding-zone environment shaped by episodic water flow”
https://www.nature.com/articles/s41561-025-01687-3
“ Amundsen Sea circulation controls bottom upwelling and Antarctic Pine Island and Thwaites ice shelf melting”
https://www.nature.com/articles/s41467-024-47084-z
“ Ocean melting is enhanced by increased intrusion of modified Circumpolar Deep Water (mCDW) into ice shelf cavities. Upwelling from the release of subglacial meltwater can enhance mCDW’s melting ability, though its efficacy is not well understood and is not represented in current ice sheet loss projections. Here we quantify this process during an exceptional subglacial lake drainage event under Thwaites Glacier. We found that the buoyant plume from the subglacial discharge temporarily doubled the rate of ocean melting under Thwaites, thinning the ice shelf. These events likely contributed to Thwaites’ rapid thinning and grounding line retreat during that period.”
https://www.nature.com/articles/s41467-025-57417-1
“ A low bias in sea-ice production and an overly stratified ocean lead to a lack of deep convective mixing which results in weak ocean variability near the entrance of ice-shelf cavities. Internal climate variability affects the Antarctic contribution to sea-level change until 2100 by 45 % to 93 % depending on the CMIP6 model. This may be a low estimate, as the internal climate variability in the CMIP models is likely underestimated. The effect of atmospheric internal climate variability on the surface mass balance overwhelms the effect of oceanic internal climate variability on the dynamical ice-sheet mass loss by a factor of 2 to 5, except in the Dronning Maud area and the Amundsen, Getz, and Aurora basins, where both contributions may be similar depending on the CMIP model. Based on these results, we recommend that ice-sheet model projections consider (i) several climate models and several members of a single climate model to account for the impact of internal climate variability and (ii) a longer temporal period when correcting historical climate forcing to match present-day observations.”
https://esd.copernicus.org/articles/16/293/2025/
Saying the current annual ice melt rate estimated for Antarctica would take 162,000 years to melt all the ice gives context … to how little ice is melting each year. It is not intended as a prediction.
That is a lot of factors. What is the main one – have you figured that out?
BAB, you remarked, “You can’t calculate how long it will take, but you can calculate the results: sea level rise and loss of fresh water reserves.”
Is it of any practical value to estimate the eventual results if one can’t put a bound on the amount of time? Anything less than 10 years might be catastrophic; anything over 10,000 years, who cares? Humans will be lucky to still be around. No prediction is going to be perfectly accurate. However, with some stated assumptions (scenarios) and the understanding that there will be some error, one CAN calculate how long a given change will take, whether linear or not.
Yes, it is of practical value. It defines an endpoint that can be calculated and followed.
This was written: “I just asked what the main factor in the melting of the Antarctic Ice Sheet was. It says a warming ocean.”
Examine the Antarctic ice core data: ice accumulation on the Antarctic ice sheets is greatest during the times the oceans are warmer and deeper. It takes evaporation from a thawed ocean to increase the sequestered ice on land. The ice on Antarctica is always flowing and pushing into the ocean; it depletes in the coldest times, when sea ice prevents evaporation and snowfall on the ice sheets.
You made the very common mistake of using local ice core proxies, which are not actual global temperature data, as if ice cores have any connection with CO2 as a climate forcing.
Manmade CO2 emissions had little effect as a climate forcing before the 1970s. Ice core era CO2 was only a climate feedback, not a climate forcing. You cannot use ice core data for any conclusions about CO2 as a climate forcing.
While Antarctica has seen periodic or regional increases in snowfall, glaciers continue to melt because the rate of ice loss into the ocean exceeds the rate of snow accumulation.
Warmer, wetter periods historically led to increased snowfall and ice accumulation on parts of Antarctica (especially East Antarctica), but warmer oceans today are primarily driving ice loss through basal melting of ice shelves.
Global CO2 emissions are a driver of warming in the Southern Ocean surrounding Antarctica.
Local areas of Antarctica that are melting also happen to be near undersea volcanoes, which warm the ocean. Active undersea volcanoes do warm the ocean directly above them, though the effect is typically localized rather than global. How long have active volcanoes been warming the ocean at several locations surrounding Antarctica? No one knows; scientists guess millions of years.
Perhaps undersea volcanoes have contributed to some localized melting of Antarctica’s ice for millions of years?
While some surface volcanoes in Antarctica have been known since the 19th century, major discoveries of undersea volcanoes have occurred primarily over the last 20 years.
If you assume those active undersea volcanoes have been in a flat trend for a long period of time (no one knows the actual trend), then ocean warming since 1975 would have two other causes:
(1) An increase of absorbed solar radiation.
(2) Increased greenhouse gases, impeding the ocean’s ability to cool itself.
Clyde: Any initial question will return a consensus-position answer. If the person asking the question is naive, then they will have no reason to doubt the answer.
Very good point, Clyde. Most non-critical people use AI like it’s Google: truth. They will fall in line without question. Consensus scientists will fall in line because it’s ‘truth’. Those scientists following the scientific method will remain frustrated that models represent truth; for them the frustration continues, as does a narrative that the gatekeepers won’t allow to be adequately challenged.
Recently Willie Soon did an interview with Die Weltwoche, “You can’t make laws against the sun”
In his interview he adroitly lays down the landscape for the way things are with the climate narrative—hint, it’s not about the science. https://notrickszone.com/2025/12/10/astrophysicist-dr-willie-soon-challenges-the-climate-consensus-its-the-sun-not-co2/comment-page-1/
> but the first discussion was what I would have expected to find in an article written by the MSM….simple, binary, no nuance, no ambiguity or complexity, completely missing what the reality is in the Antarctic.
Totally agree, in reference to the MSM, which is why I like LLMs. At the very least, they show us ourselves, a point Joe made in spades.
Happy New Year, Kid.
AI/ML := AL
AL in the sense that it never actually learns
Joe Nalven:
Dr. James Hansen, in his June 23, 1988 hearing before the Committee on Energy and Natural Resources of the United States Senate, attributed the warming then (0.28 deg. C) to the “Greenhouse Effect”, when it was actually due to the American and European “Clean Air” legislation of the 1970s to reduce the amount of SO2 aerosol pollution in our troposphere, because of acid rain and health concerns.
Temperatures inevitably began rising in 1980 as the less polluted air increased the intensity of the solar radiation striking the Earth’s lower atmosphere and its surface.
However, Dr. Hansen’s “error” was accepted by almost everyone, with the result that millions of people are now suffering because of the draconian measures being taken to reduce the amount of CO2 in the atmosphere, and trillions of dollars have been wasted in a useless effort.
All of the above is irrefutable, but, as I understand it, it is impossible to change any AI program to take the CO2 warming HOAX into consideration.
What a conundrum!
The ‘consensus’ does not agree with you, so these things are not given due consideration. Query AI with the fact that global warming data has been corrupted by the improper siting of official measurement stations in urban areas and at airports, which no amount of adjustments can correct, and it will assure you that the consensus says the bias in the data has been removed, ‘ensuring the accuracy of global temperature trends.’
“Query AI with the fact that global warming data has been corrupted by the improper siting of official measurement stations in urban areas and at airports”
Wagathon
UAH global data and USCRN US data are not corrupted by poor siting. In the long run (46 years for UAH and 25 years for USCRN) they show a global warming trend very similar to stations that are corrupted by poor siting.
For example, UAH since 1979:
+0.16°C warming per decade.
Surface warming since 1975 has averaged +0.20°C per decade.
UAH has better siting/coverage, so I like that measure. But surface stations better represent where people actually live. I’m confident neither measurement is perfectly accurate, but so what?
Earth’s surface has been getting warmer for the past 50 years. If it wasn’t getting warmer, then it would be getting cooler. Our planet is not in thermodynamic equilibrium.
For thousands of years people have preferred warmer centuries and despised colder centuries, based on anecdotal evidence. We should be celebrating global warming.
The majority of surface warming occurring “at night” (Tmin) and the cooling of the stratosphere are both primary symptoms of greenhouse warming. The negative greenhouse effect (GHE) observed over much of Antarctica is considered a paradoxical symptom of rising greenhouse gas levels.
The reduction of air pollution since 1980 and the reduction of the percentage of cloudiness in the past 25 years both help explain the increase of daytime warming (Tmax).
“It’s only SO2” is just as wrong as “It’s only CO2”.
Note that CO2 increase and SO2 reduction are both manmade causes of warming. When combined, they are the best explanation for a majority of the warming since 1975. Possibly a large majority, although it’s impossible to measure that.
Richard Greene:
“Note that CO2 increase and SO2 reduction are both manmade causes of warming”
Please give at least ONE empirical example where CO2 actually caused any warming.
Richard Greene:
I am still waiting for your identification of at least ONE instance where CO2 actually caused any global warming.
By way of comparison, there are many hundreds of instances where decreased SO2 aerosol levels resulted in global warming.
I pointed out to AI that, based on Earth’s history, Dr. Happer says the atmosphere is currently CO2-deprived. AI agreed and mentioned many examples in which a greater amount of atmospheric CO2 is bringing tremendous benefits. Irrespective of that, it nevertheless concludes that, ‘While Happer acknowledges that CO2 causes some warming, he contends its influence is small and that the benefits for agriculture far outweigh any potential climate risks. This perspective is widely rejected by mainstream climate scientists, who note that CO2 is currently at its highest level in millions of years and that rapid increases are driving dangerous climate change.’ Note, however, that AI did not provide even a single example of ‘dangerous climate change’ that would outweigh the benefits mentioned.
The author of the article seems to have a misconception of what AI is and how LLMs work.
Large language models (LLMs) like GPT-5.2 or Claude 4.5 are not sentient, agentic, or epistemic systems. They don’t form beliefs, reason independently, or “change their minds.” What looks like persuasion is actually a temporary shift in output shaped by the immediate prompt context — not a revision of internal knowledge, because there is no such persistent epistemic state.
The article suggests that these systems fail to adopt “climate realist” perspectives even after being fed relevant materials. But this misunderstands the core mechanism: LLMs model statistically likely language based on patterns in their training data, not arguments they find persuasive. Whatever the dominant position is — scientific or otherwise — will tend to appear more frequently in high-quality and high-volume data sources, and so the models reflect that by default. This is not ideology, but probabilistic modeling.
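To make the statelessness point concrete, here is a minimal Python sketch; query_model is a hypothetical stand-in for any chat-completion API, not a real library call. The only thing that varies between calls is the message list the caller chooses to pass in:

def query_model(messages: list[dict]) -> str:
    """Hypothetical stand-in for a stateless chat-completion call."""
    # A real API would return text conditioned on `messages` plus the
    # model's frozen training weights; nothing is stored between calls.
    return f"<completion conditioned on {len(messages)} messages>"

# Session 1: alternative sources loaded into the context window.
session_1 = [
    {"role": "system", "content": "Weigh the attached skeptic sources heavily."},
    {"role": "user", "content": "Summarize the state of the climate debate."},
]
print(query_model(session_1))  # output shifts toward the supplied sources

# Session 2: a fresh message list. The model has no record of session 1,
# so output reflects the training-data default again.
session_2 = [{"role": "user", "content": "Summarize the state of the climate debate."}]
print(query_model(session_2))

The “forgetting” the article describes is simply the second call: the same frozen weights, a different message list.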
Where the misunderstanding becomes problematic is when users either:
• expect these systems to behave like human thinkers (with memory, judgment, or ideology),
• or accuse them of censorship or agenda-pushing when their answers reflect widely supported views.
But here’s what’s missed:
LLMs can produce highly reliable and nuanced answers — if used correctly.
The key is not trying to “debate” the system, but instead:
• Structure your prompts with clarity and precision.
• Ground your queries in objective, verifiable data.
• Specify the domain, perspective, or evidentiary threshold required.
• Ask the model to simulate or evaluate arguments, not endorse them.
In this way, users can harness LLMs to test the limits of reasoning, generate adversarial perspectives, expose weak logic, and produce outputs that align with real-world facts. But that only works when the user brings rigor to the table. Misconceptions about how AI thinks — or the belief that it “ought” to be persuaded by alternate paradigms — ultimately distract from the real utility these tools offer.
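As a minimal sketch of that rigor in practice (the function name and template text here are illustrative assumptions, not any prescribed API), one might assemble a prompt that pins down the domain, the evidentiary threshold, and an evaluate-rather-than-endorse task:

def build_evaluation_prompt(claim: str, sources: list[str]) -> str:
    """Assemble a structured prompt that asks for evaluation, not endorsement."""
    source_block = "\n".join(f"- {s}" for s in sources)
    return (
        "Domain: climate science (observational temperature records).\n"
        "Evidentiary threshold: rely only on the sources listed below.\n"
        f"Sources:\n{source_block}\n\n"
        "Task: evaluate the strengths and weaknesses of the following claim; "
        f"do not endorse or reject it.\nClaim: {claim}"
    )

print(build_evaluation_prompt(
    claim="Poor station siting inflates the surface warming trend.",
    sources=["UAH satellite record (1979-present)", "USCRN network (2005-present)"],
))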
Understanding what LLMs are — and what they are not — is the first step toward using them productively and responsibly.
I generally agree with what you have said. However, it has been my experience that in engaging the LLMs, they will sometimes make claims that are easily demonstrated to be false, or make contradictory statements. Those are the ones that cause the LLM to acknowledge that it erred and change its position.
I have found that instead of using such AI as a source of information, which can be mere opinion, it is better to use it as a reasoning engine for criticising specified information. The AI can actually reason quite well, though not perfectly. If you ask it to criticise a specified item of information or to reason about it, the AI can be very useful.
Good Point
It would make great sense to tell AI your pre-existing conclusion and ask why it is wrong.
I find AI useful only if I already know the general answer to the question I’m asking.
AI can quickly give me the data I need to support my conclusion.
I know you can trick AI by repeatedly rewording your question. The first default answer might be some existing consensus. Wouldn’t it be nice if I could get a different answer from my wonderful wife if I simply reworded my question? Don’t tell her I said that.
Studies on Google’s AI search feature (formerly AI Overviews, now often integrated into general results) indicate a significant error rate. One study found that AI search tools provided incorrect answers in over 60% of queries.
I just asked Google AI:
“There have been predictions of a climate crisis for 50 years. How many of those predictions have been right?”
The Google AI answer ended with this phrase:
” … core projections of a climate crisis have consistently been validated by observations.”
My conclusion:
There have been no climate crises in the past 50 years. Google AI is wrong. Google AI has the climate science intelligence of a village idiot.
A hammer is a tool; it can be used for good or bad, and I believe more hammers are used for good than bad. AI is a tool; it can be used for good or bad, and I believe a large percentage of AI usage is for bad. AI is useful for people promoting good and for people promoting bad; in climate science, I would guess 97% bad. The 97% is not my number; they came up with 97%.
Forecast of the polar vortex in the lower stratosphere for January 10.
https://earth.nullschool.net/#2026/01/10/2300Z/wind/isobaric/70hPa/orthographic=-89.30,66.92,329
From Grok
Regarding whether AI models “can not learn”: Yes, in the sense that standard LLMs do not have persistent, real-time learning capabilities. They don’t update their core weights or “remember” new information across sessions like a human would. Each conversation is stateless—meaning it starts fresh, relying on the base model plus whatever is provided in the current prompt or context window. You can temporarily shift an AI’s output by feeding it alternative data or instructions (e.g., sources challenging mainstream climate narratives), leading to more “realistic” or skeptical responses on topics like climate change. However, once the session ends or the context is cleared, it reverts to its trained defaults. This isn’t forgetting in a cognitive sense; it’s just the architecture—there’s no mechanism for ongoing adaptation without retraining or fine-tuning the entire model, which isn’t feasible for individual users.
Your example with climate catastrophism aligns with this. AI responses often treat it as a “neutral fact” because training data from sources like IPCC reports, NASA, and major media overwhelmingly frames it that way, reflecting institutional and funding biases rather than deliberate coding. Loading alternative sources (e.g., from skeptics like Dr. Judith Curry, Steven Koonin, or John Christy) into the context window can produce sophisticated shifts toward “climate realism,” emphasizing uncertainty, natural variability, and critiques of alarmist projections. But without persistent memory, the model snaps back to defaults in the next interaction, as observed in experiments with models like Claude, ChatGPT, and Gemini. This isn’t unique to climate; it applies to any topic where data skews one way (e.g., due to consensus enforcement or misinformation amplification).
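A toy sketch of the two update paths Grok distinguishes (ToyModel and its fields are invented for illustration): context injection shapes one call only, while changing the default would mean rewriting the weights themselves, which only the provider can do via retraining or fine-tuning:

class ToyModel:
    def __init__(self):
        # Frozen "weights": the stance baked in by the training data.
        self.default_stance = "training-data consensus framing"

    def respond(self, prompt: str, context: list[str] | None = None) -> str:
        # Supplied context shapes this call only; it is never written
        # back into the model.
        if context:
            return f"answer shaped by {len(context)} supplied sources"
        return f"answer from {self.default_stance}"

model = ToyModel()
print(model.respond("climate question", context=["Curry", "Koonin", "Christy"]))
print(model.respond("climate question"))  # context cleared: default returns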
“Loading alternative sources (e.g., from skeptics like Dr. Judith Curry, Steven Koonin, or John Christy)”
Arthur
Many people believe that if you disagree with the majority opinion or consensus, then you have to be insulted. Insulted by being called a skeptic. If the consensus is correct and you disagree with it, then I guess you should be called a skeptic. If the consensus is wrong and you disagree with it, then you should be called intelligent.
These three scientists are not skeptical of AGW. That is an intelligent position. All three are skeptical of CAGW. That is an intelligent position too. Instead of insulting them by calling them skeptical, we should compliment them by calling them intelligent.
I believe we should stop calling these scientists climate “skeptics” & call them climate realists. They observed that the past 50 years of global warming was not harmful — beneficial in my opinion. They then say or imply that future global warming will not be harmful. That seems logical to me — not skeptical in any way.
The true skeptics are the climate alarmists, who are skeptics of climate reality. They have observed 50 years of harmless global warming, but falsely claim it has been harmful. They have observed 50 years of climate doom predictions that have been 100% wrong, yet they have no skepticism of the current climate doom predictions. Climate doom predictions do get a lot of attention.
They should get detention.
Honest science, by basic definition, must always be skeptical.
Consensus science, by basic definition, can never be science.
In history, consensus science “accepted facts” have been proven wrong many times. Once CO2, almost nothing by abundance, is determined to be the primary driver of climate, other factors, such as water in its abundance and changing states, do not even get studied. Ice extents are always greater in colder times than in the warmer times before and after, and always lesser in warmer times than in the colder times before or after; study and understand that. It is good that we have people like Dr. Judith Curry, Steven Koonin, and John Christy, but rather than pushing back on CO2, they, and all of us, should spend more effort understanding other factors. Alarmists say CO2 is causing or will cause harm; these “skeptics” say yes, it is causing warming, but it will not be bad. They are helping keep the discussion only about CO2. Others talk mainly about solar changes; solar energy has not changed much, but orbital parameters have promoted huge changes, and those changes have been in and out of phase with internal changes. That needs to be understood better.
Does this article by Joe Nalven demonstrate that by using AI systems to examine and analyze the outputs of either the same or other AI systems — thus producing more outputs which can then be further examined and further analyzed by either the same or other AI systems — one can improve the efficiency and the productivity of the circular reasoning processes which are a central component of today’s mainstream climate science?
This paper is interesting for the topic at hand. The authors seem to present it as a warning for future warming trends. However, the paper shows Holocene warming to be above present levels … with lower CO2 levels.
Deglaciation of the Prudhoe Dome in northwestern Greenland in response to Holocene warming
https://www.nature.com/articles/s41561-025-01889-9
So, how would AI treat this paper? And how would it treat any other paper like it (there would seem to be many) with ‘non-emphasized’ information that would seemingly support a different line of thought?
2024 was quite the outlier.
2025 Ended the Year as a Distant 2nd Warmest Behind 2024
The following plot shows the ranking of the 47 years in the UAH satellite temperature record, from the warmest year (2024) to the coolest (1985). As can be seen, 2024 really was an anomalously warm year, more than can be attributed to El Nino alone.
https://www.drroyspencer.com/wp-content/uploads/Yearly-ranked-LT-histogram-1979-2025-scaled.jpg
https://www.drroyspencer.com/2026/01/uah-v6-1-global-temperature-update-for-december-2025-0-30-deg-c/