How Dr. Frankenstein is making research sick

by Judith Curry

What I saw was a creature not unlike that made by Dr. Frankenstein and which turned onto its creator: neither traditional science nor business, as it is made from incompatible parts taken from both bodies with good intentions but not much forethought. – Yuri Lazebnik

Both the Republic of Science and Science on the Verge discussed the scientific enterprise using a market analogy. In response to these posts, Yuri Lazebnik sent me a paper that he published several years ago on the current state of the medical research enterprise. It raises some real issues and provides some fresh insights regarding research institutions, and it also speaks to issues raised by Pasteur’s Quadrant. Not to mention that I find the Frankenstein angle irresistible.

The title of the paper is: Are scientists a workforce? – Or, how Dr. Frankenstein made biomedical research sick, published in 2015 in EMBO Reports [link]. Some excerpts from the paper:

Some time ago, I was reading Science’s Careers and cringed at the title “Can NIH renovate the biomedical workforce?” The problem was the word “workforce”, since its Russian equivalent was used by the Communist Party leadership to describe other citizens of the Soviet Union— where I grew up—whom they viewed as mere cogs in a machine at the Party’s disposal.

Noting the mention of “scientific workforce” in the plan led me to ask whether the systemic flaw that felled the Soviet Union—the leadership/workforce system, with its top-down chain of command—might also be related to the systemic flaws that are taking a shot at US science.

To summarize the first stage of my differential diagnosis, I found that the relationship within the ecosystem changed from one of advisors, trainees, and colleagues to that of a workforce and its users. This change is difficult to explain solely by money shortages, but it can be explained if we assume the advisors adopted a new behavioral model, likely of corporate origin; a possibility that favored the diagnosis of businessification. I began to suspect, however, that the diagnosis could be more complex because business models are not all alike.

[This] suggests that an immediate consequence of the imbalance between funding and the number of scientists is “hypercompetition for the resources and positions”, which “suppresses the creativity, cooperation, risk-taking, and original thinking required to make fundamental discoveries [. . .] The system now favors those who can guarantee results rather than those with potentially path-breaking ideas that, by definition, cannot promise success”.

How did it happen that the self-organizing and self-maintaining system of Science, the Endless Frontier, was replaced with the chain of command? Could this change in appearances and the underlying thinking be explained by the imbalance of money and scientists, or does it reflect a wish to run scientific institutions as a business? I favored the latter explanation and proceeded to analyze the next symptom of the malaise, the prevalence of translational research.

Vannevar Bush presciently warned that: “Basic scientific research should not, therefore, be placed under an operating agency whose paramount concern is anything other than research. Research will always suffer when put in competition with operations.” From the operational perspective, a patent related to medicine can bring millions if not billions of dollars to the institution, while wondering why petunias have colored patches may appear to be a waste of much-needed resources (to note, the petunia led to the discovery of RNA interference, a breakthrough that has affected many areas of medicine, from viral infections to cancer). From the operational perspective, funding from the pharmaceutical industry is a gift from heaven, but this gift comes with an implied or explicit focus on research related to medicine. Do we need to look for other explanations for the primacy of translational research beyond those indicated by Bush in his warning?

After reviewing this symptom, I concluded that the prevalence of translational research can be easily explained by businessification. Business and basic science are complex systems that have evolved over centuries side by side in continuous and often unpredictable interaction. The heroes of innovative business and science share a knack for identifying key problems and an obsession with finding a solution, testifying to the commonality of how creative people, whatever they do, think and act.

Yet, business and basic science have operated by different rules that are determined by the primary purpose of each system. The primary purpose of a business is to generate monetary profit for its owner(s). The purpose of scientific research is to make verifiable discoveries, whether they have a commercial value or not.

Now, imagine what would happen if the captains of science decided that the primary purpose of basic research is not discovery, but profit. What would those who grew up dreaming of becoming great discoverers think, feel, and do? If reputation based on discovery is no longer the currency, then how should funding be allocated? Hence, the search for surrogates to fill the void—the number of papers published, the number of citations, citation indexes, impact factors, formulas to calculate their relative values, and all other administrative inventions to keep the system operating— with the ultimate measure being the money that scientists can bring in. If discovery is no longer the primary purpose and finding true answers to nature’s questions remains as hard as it is, would the people who accept the first convenient finding for the answer have an advantage in securing funding? Would the people who cannot trade their integrity leave science or decide not to come in?

If discovery is secondary, is it surprising that the traditional model of operation—discover something, verify, convince colleagues (including reviewers) with evidence, publish to secure your credit as the discoverer and let others know about it, use your credit to get grants, repeat—would change to something different: come up with a nice story, sell it to the reviewers and editors, use the publication as a voucher to get grants to produce more nice stories. If science is a business, why would it matter what is sold? A loss of the sense of purpose can send a person into a tailspin. The same can happen to an institution, to a part of a society, or to society as a whole.

However, I could not see the current scientific institution as a business. What I saw was a creature not unlike that made by Dr. Frankenstein and which turned onto its creator: neither traditional science nor business, as it is made from incompatible parts taken from both bodies with good intentions but not much forethought.

To avoid the word “Frankenstein”, I would call this hybrid entity pseudo-business, by analogy to pseudoscience, which is an activity that pretends to be science but does not follow its basic rules. If this hybrid has any purpose, it is to maintain and expand itself.

As systems theory suggests, and as Dr. Frankenstein belatedly learned, merging complex systems is inherently prone to produce unexpected results. As a general rule, the more different the systems are, the more likely their hybrid would have unexpected properties. The differences between business and basic science are difficult to miss, making the malaise a predictable outcome.

A correct diagnosis might help to understand what these interests are and help scientists, funders, administrators, and policymakers act accordingly, using both therapy and surgery.

The ship is titanic in size, the inertia is comparable, the captains’ quarters are comfortable, and the crew and passengers have come to assume that they have as much leverage as they would on a military ship. I also hope that the Carnegies, Stanfords, and Hopkinses of today, perhaps with some help from the government, will build new ships, perhaps smaller, but more agile and steered by crews who are not afraid to sail in uncharted waters.

JC reflections

This paper presents a novel angle on diagnosing the dysfunctional aspects of scientific research, in terms of the institutional structures that have resulted from the businessification of research.

A previous post, Pasteur’s Quadrant, addressed the conflicts between ‘use-inspired’ and ‘pure basic’ research, which relates to Lazebnik’s description of ‘pure’ versus ‘translational’ research. We need both pure and applied research, but the university environment is uniquely configured (in principle) to support pure research, whereas many government labs and the private sector are configured for translational and applied research.

With the decline of research funding from the federal government, universities are focusing more and more on innovation and spawning start-up companies (something that makes state governments who fund higher education particularly happy). This is a good thing, but pure research needs to be nurtured.

Over the past 30 years, I’ve been at a range of universities with a succession of administrations, ranging from strong faculty/weak administration, to strong but lean administration with strong faculty, to overblown administration that views faculty members as necessary troublemakers. The sweet spot is a strong but lean administration that fosters a strong and autonomous faculty – a rare combination that characterized Georgia Tech when I joined the faculty in 2002.

There is a conflict in universities (in hiring and rewarding faculty) between valuing fundamental discoveries that change the way we think about something, and rewarding ‘flashiness’ – a large number of publications, grantsmanship, and nice stories that the media likes. When I was hiring and rewarding faculty, I fought to reward faculty members who were making fundamental discoveries. Faculty members with ‘flash’ tend to be the ones that are recruited by other institutions, and retaining such faculty members was a decision made by people several pay grades above me. I also didn’t focus on funding, beyond a faculty member’s ability to bring in a baseline of funding to support a small research group. However, there is a strong push from Deans and Vice Chancellors for Research to bring in more research $$.

But the fact remains that the culture in universities, with the many demands on faculty members and the extreme competitiveness at the major research universities, makes it very difficult for a faculty member to justify, let alone find, a sustained amount of time to tackle the really difficult and important problems. Peter Higgs (of Higgs boson fame) lamented that it seems impossible in the current university environment to devote the amount of focus and time required to make such discoveries.

The key issue is fostering research that ‘sails in uncharted waters’. Apart from consensus groupthink issues, there is a growing concern that the reward system for university faculty members makes it much more advantageous to pick the low-hanging fruit — research problems that will attract funding, produce many papers and a few press releases — rather than to tackle the big/hard problems.

New structures and incentives are needed to support “smaller, but more agile and steered by crews who are not afraid to sail in uncharted waters.” Bill Gates, Charles Koch, and other industrialists are spending substantial funds to support research on specific big challenge issues – which may be addressed by either small teams or by big scientific teams. I think this is a good thing (and a very useful complement/balance to government funding).

But this still does not solve the institutional problem at universities that impedes (more or less) ground-breaking, verifiable research. One of the most interesting models for conducting scientific research that I’ve come across is IHMC, the Florida Institute for Human & Machine Cognition, a private not-for-profit research institution. Here is a link [IHMC] that describes the uniqueness of IHMC:

Ken Ford, one of IHMC’s founders and its CEO, has structured the institute so that bureaucracy doesn’t get in the way of science. On the science and technology staff, the institute has no departments, no standing committees and no administrators or middle managers, whom Ford calls ‘vacuum cleaners of passion.’

Each scientist, Ford says, is an entrepreneur who must attract funding to support his or her work. Ford says IHMC does no strategic planning, avoiding the game of trying to identify ‘hot’ research fields and then competing against the likes of MIT for scientists in those areas. Instead, IHMC gets its direction from hiring the right people and turning them loose. His recruiting efforts boil down to looking for ‘remarkable people whom we think would be wonderful colleagues.’

It’s an approach that generates serendipity. IHMC is one of only a few research efforts in Florida that are truly world class, and the only one that is breaking ground not just on the research it does, but on how it has organized itself to get that research done. It’s something for the state to keep in mind as it considers the how much and where of research funding.

I visited IHMC last fall, and have been invited to visit again next December. If there were such an institution in the environmental science field, I would join it in a heartbeat. If some philanthropist would like to seed me with the funds to start such an institution for climate science, send me an email.

http://www.reallycoolblog.com/wp-content/uploads/2013/04/Clive-Colin-Frankenstein_02.jpg


174 responses to “How Dr. Frankenstein is making research sick”

  1. In this post I read the following statement:
    “New structures and incentives are needed to support “smaller, but more agile and steered by crews who are not afraid to sail in uncharted waters.”

    It’s too late. Pandora’s box is wide open.
    Last week I noted that a small private group of scientists has begun the construction of the first completely synthetic human genome. Clearly they are not afraid to sail into uncharted waters. If there is one single endeavor of science that has the potential to radically change the future of the planet, this is it.
    Should we try to stop, encourage or influence these scientists, or should we follow your advice to promote a ‘hands off’ approach?

    My vote is let Frankenstein be Frankenstein. If they create a monster just remember, Beauty is in the eye of the beholder.

    • I’m surprised they’re going straight in with a human – would have thought they would “practice” first with maybe C. elegans, Drosophila or a mouse.

  2. I sometimes wonder if private businesses do more real research than universities. Maybe not so many papers and mostly inside the field of the business, but anyway.
    In the popular fields it seems that most papers are more or less a pot stew of very few original papers with only a few new spices. No one dares to challenge the original discoveries, they just add a little here and there.

    • I was a very senior exec at a very large Fortune 50 tech co. R&D budget was almost $4 billion. Mostly translational applied research. But we were spending about $500 million annually on ‘pure’ speculative research, including the teams that ‘scouted’ university research. Advanced physics and chemistry and biology, mostly. (OLEDs and biochips were the two ‘pure’ research areas I was responsible for.)

    • David Wojick

      The top ten firms alone spend about $100 billion a year on R&D.
      http://fortune.com/2014/11/17/top-10-research-development/

    • Private research is hybrid. Some of it is kept indoors, some is carried out in partnerships with other companies, universities, or government labs (for example Los Alamos). Papers are written discussing partial results, mostly to impress future recruits, governments, or the public. In my case I published very little, but what little I did publish was carefully trimmed to avoid revealing too much. An interesting fact: I was plagiarized twice within the organization. In one case I wrote a paper and submitted it for review, changed jobs, and about two years later I saw my paper published by the internal reviewer with very few changes.

  3. The best example of this sordid situation [i.e., “junk science, commercial interests and outright corruption among so-called scientific authorities who care more about feathering their own nests than facts”] is Julie Gerberding, the head of Merck vaccines (the largest US vaccine manufacturer) – who is the former head of the CDC. The CDC mission is to push vaccines, no matter how useless and dangerous they may be. The best career hope for CDC flunkies is to collaborate with that corrupt program and make a jump to a better-paid position with a drug company, a path blazed by vaccine huckster Gerberding.

    • It would be nice if they could find a replacement for mercury in childhood vaccines and flu shots.

      The CDC maintains there is “no relationship between Thimerosal-containing vaccines and autism rates in children,” even though the data from the CDC’s own Vaccine Safety Datalink (VSD) database shows a very high risk. There are a number of public records to back this up, including this Congressional Record from May 1, 2003. The CDC’s refusal to acknowledge thimerosal’s risks is exemplified by a leaked statement from Dr. Marie McCormick, chair of the CDC/NIH-sponsored Immunization Safety Review at IOM. Regarding vaccination, she said in 2001, “…we are not ever going to come down that it [autism] is a true side effect….” Also of note, the former director of the CDC, which purchases $4 billion worth of vaccines annually, is now president of Merck’s vaccine division.

      https://healthimpactnews.com/2014/cdc-caught-hiding-data-showing-mercury-in-vaccines-linked-to-autism/

      • Most single-dose vials and pre-filled syringes of flu shot and the nasal spray flu vaccine do not contain a preservative. That does not, however, change the facts about flu shots:

        “Public health experts are routinely misleading the public as to the strength of the science in support of its statements about vaccine effectiveness, safety, and the threat of influenza … The vaccine is not always “better than nothing.” ~Peter Doshi

  4. “The system now favors those who can guarantee results rather than those with potentially path-breaking ideas that, by definition, cannot promise success.”
    I have heard — so this is second-hand, take it with a grain of salt — that grant success is strongly related to having produced results (i.e., publishable papers) on a prior grant. Hence, a strategy has evolved in which a grant proposal involves research into areas which you’ve previously found to be productive, guaranteeing publishable papers. The remainder of the grant is then used to explore new areas, to find a foothold for the next grant. Not conducive to taking risks, unless you’re well up on the totem pole.

    • HW, have grantsmanship experience in energy storage (one of my companies was built around my insights and patents). In that area, money flows to the ‘hot topic du jour’ independent of results. Carbon nanotubes got discovered. Anybody proposing energy research based on CNT got funded, no matter how bad the idea. Hundreds. A decade of wasted money and no results, because SWCNT have chirality (and only 1 of 3 forms is electrically conducting) while MWCNT have no volumetric density due to a random packing of rods math theorem. Geim discovers graphene. Another decade of completely wasted grant funding. Now the money flows into metamaterials. Fads no different than teen girl clothes. Everybody just goes with the flow, and there is little to no results accountability.
      We did get (after 3 months of effort) one near $3 million ONR grant. The trick was to leave ~$2 million of the $3 with ONR as a grant collaborator. Some might call that a shakedown, others bribery. With our stub third, we did produce the promised goods. Proved the basic physics insights (again) and ranged the process parameters for repeatability (translational research).

      • Rud, I believe the scientific term is “Kickback”. It’s a successful business model that’s been around since the first mammal felt an itch on his back, possibly predating prostitution.

    • Curious George

      Published papers are counted as results, not publishable ones. That’s where pal review, and the suppression of inconvenient ideas, come into play.

      In addition, we now have a generation of hippie scientists.

  5. The bane of Western society is “Management”.

  6. David Wojick

    Federal funding is sufficient, as VB warned. Almost all Federal research is funded by operational agencies looking to support their existing agendas. See my http://www.cato.org/publications/working-paper/government-buying-science-or-support-framework-analysis-federal-funding. Even NSF is locked into existing research programs and paradigms. Pro-AGW climate research is just the most extreme case, with 14 agencies locked in.

    • I propose a different model: get the US government completely out of the grant business. The bureaucrats siphon off a large chunk of the money for themselves while rewarding their buddies, with the politicians basically buying votes from university faculty and grantees. The current process has virtually no checks-and-balances, as the bureaucrats do not have to explain why they made a grant to a particular party while losers have no recourse in the matter. The process is grossly inefficient and fundamentally corrupt.

      The alternative: private industry and investors in research get a full tax credit. Some firms & investors may be inclined towards a longer view of basic research, while others may opt for more applied research. In any case, the marketplace decides while government bureaucrats (the few that remain) and politicians remain at the periphery.

      • kellermfk,

        I’d go a little further perhaps. If the company claims intellectual property rights, and subsequently makes a profit from those rights, then they repay the tax credit they received.

        I believe that the capitalist system is based on entrepreneurship, along the lines of one taking risks to eventually make a profit. If your initial assumptions were wrong, you might lose everything. Tough. That’s capitalism.

        Would fewer grants and subsidies lead to lower taxes? Would taxpayers be happier knowing that companies and individuals were actually standing on their own feet, so to speak? I know I’m only dreaming, of course.

        Cheers.

      • Sounds reasonable to me. However, if the intellectual property occurred before the tax credits were secured, then the tax credits do not have to be repaid.

  7. Reblogged this on TheFlippinTruth.

  8. Corrupt science threatens the very survival of humanity. Do we not each now have a moral responsibility to help correct the situation that developed while we selfishly pursued public research funds to advance our own careers?

    • “Corrupt science threatens the very survival of humanity.”

      Consensus: Humanity has a braincloud that needs government treatment.

      Lukewarmer: Beware of doctors diagnosing brainclouds.

  9. Interesting ‘Frankenstein’ perspective, but IMO strongly colored by the research field, medicine. Doesn’t translate well to climate science, where translational solutions in energy are completely different topics than how the climate system ‘works’.
    But there are strongly related symptoms. Especially judgement of value by proxy: number of papers, citations, grants…. Those things will happen in any organization, because ‘administrators’ want quantifications to avoid making the hard qualitative judgements of the sort Judith as Chair had to make. The solution is always the same. Strong (and sound) but very lean administration, lots of slack rope for the faculty/staff/producers. Saw it in consulting (15 years, was both a big producer and admin of a global practice area – just me and two secretaries I managed to keep fully occupied). Saw it in the tech corp I was a senior exec in. $400 million in annual excess cost just in finance and HR compared to median benchmarks. HR reasoned it kept the company from unionizing. Finance reasoned it kept the company from scandal. Both bogus.
    There is a law of nature that admin always wants to expand. It always results in organizational stagnation and ‘death’ if allowed to happen.

    • Regretfully, the 1945 decision “to save the world from nuclear annihilation” by hiding the source of energy in atomic bombs was the first, seemingly well-justified step in Dr. Frankenstein’s social engineering.

      Climatology, Cosmology, Nuclear and Solar Physics were early casualties of the infection that has isolated all humanity from reality and induced worldwide social insanity.

    • Steven Mosher

      an essay about how business does pure research (kinda sorta) would be nice Rud.. judging which guys you could let free to follow their curiosity and which guys needed deliverables to perform would be something you can probably speak to with authority

      • SM, done. First, decide which ‘pure’ research areas are of corporate strategic interest. In my specific cases, displays and biochips (the then CEO decided that his father’s legacy was semiconductors, so his would be biochips). Then figure out the great unsolved basic science issues.
        For example, in OLEDs, a sufficiently intense and long-lived blue for RGB screens. We already had in the lab R and G out to sufficient intensity/lifetime. We had none of the OLED manufacturing/sealing/cost problems solved back in the mid-1990s. But without blue, there is no RGB and so nothing translational to work on. Now blue OLEDs are an interesting weird combination of quantum physics and organic chemistry. So you hire a ‘science’ recruiter to find the brightest postdocs in the most closely related fields, since almost nobody is working on blue. You vet their results, cross your fingers, and hire a small team. You put them in a ‘secret’ lab in Tempe, Arizona (so none of the usual HR and other corporate bureaucratic nonsense) and give them a budget 2x what they requested. And then you tell them you will check in once a year if they don’t communicate (more often only if they request), and that this will go on for at least five years.
        And in three years, they got blue!

        As for translational (developmental) research, all the usual ordinary management stuff works fine. Specific detailed roadmaps (we used 5 years, with quarterly deliverables), monthly/quarterly reviews, tight budgets, fire low performers annually (the euphemism is ‘made available to the competition’). No different than sales or manufacturing. Just business. Many companies in my 15 years of consulting experience were just insufficiently ruthless, and that is what got them into trouble. In operations, the motto is Yoda’s “do or do not; there is no try”. But that attitude also puts a huge onus on executives to set reasonable achievable goals for their teams and businesses. If I order my farm’s dairy cows to jump over the moon, they might try and that would result in udder destruction. My fault, not theirs. Management is an art as well as a science.

      • A footnote. The corporation also made a major investment in FED displays, which I was also intimately involved in. Not for mobile telecom, but to re-enter the TV business against Plasma and LCD. We succeeded at pilot scale (FED is just scaled-up semiconductors) but failed to recognize how LCD would progress. Invested $250 million in a pilot line plus $40 million/year operations. Wrote the whole thing off after I failed to get Sony to a JV – Sony then was smarter than we/I were.

        Turned out our OLED blue solution for mobile displays ‘infringed’ some Kodak OLED IPR having nothing to do with blue. My team tried for a year to reach a reasonable deal with Kodak. Failed again, as they wanted half the value of blue! So, as my then big company started to falter in its main businesses, we sold our blue IPR. And now you have OLED TV, with the rumored iPhone 7 also OLED. Saves battery life since emissive. Very thin. Now cheap as manufacturing is inherently simpler than LCD. Such is business life.

      • Contact them once a year unless they check in first? Double the requested budget? I gotta give mad props to a company willing to swing for the fences in such a fashion. It is the stuff of comic book heroes.

        And good job blue team!

      • Rud: Isn’t that close to the Xerox model at PARC?

      • @AK Microsoft is a really, really terrible example of innovation.
        Started with a product created by someone else, which itself was a copy.
        Monopolistic business practices to extend and consume.

      • B1815, yes. Difference mainly in better strategic coupling than was the case at Xerox PARC. PARC did a lot on computational user interfaces; Xerox wasn’t in computation. We were in mobile devices with displays.

    • ‘There is a law of nature that admin always wants to expand.
      It always results in organizational stagnation and ‘death’ if
      allowed to happen.’
      Peopling the island with Calibans. h/t Dr Frankenstein.

  10. I feel like carping about my least favourite word.

    These days, when you see the word “agile”, it means some lumbering government or corporation has finally worked out that it no longer has the will or energy to do what it was meant to do but is in no mood to get out of the way. It’s the rich old guy who buys the red sports car when he should be in a Lexus listening to James Taylor.

    In the 80s the too-big-to-die (aka “iconic”) would have opted for a new logo or vision statement. Now they say “agile” a lot. Great word for sclerotic NASA or the half-blind, lurching EU now that “innovative” has lost its magic.

    Cease to agile me. It agitates me. Let the corporate be corporate and live with its limitations rather than pretend its heart is still young and gay. “Agile” is a comb-over.

    The future is in the hands of spotty, horny, thorny, discontented young guys who are all snips ‘n snails and puppy dog tails. You can’t bottle what they’ve got.

  11. “I also hope that the Carnegies, Stanfords, and Hopkinses of today, perhaps with some help from the government, will build new ships, perhaps smaller, but more agile and steered by crews who are not afraid to sail in uncharted waters.”

    I live amongst one of those three. They are the government. They are already helping themselves.

    The ships won’t get smaller ’til they run low on money. Right now the focus is on solid copper gutters, landscaping and fresh asphalt.
    Oh yeah, and the President just went from 9 to 11 mil per trip around the star.

    • Ok I was a bit off … 15.4 mil in total compensation for fiscal year 2013.
      There’s base pay and one’s cut of the take.

  12. Dr. Curry ==> "If some philanthropist would like to seed me with the funds to start such an institution for climate science, send me an email."

    If I had the do-re-mi I’d do it in a minute.

    Come on tech billionaires, here’s your chance to make a difference.

    • Plus many. But most of them are PC, so on the other side.

      • Geoff Sherrington

        It is hard to come to grips with the tech billionaires. Much of their success came from the ability to take money out of the pockets of the many. Taxation officials do this routinely. I do not see this skill as a free pass to dictate the course of future science. Better they hand back some of the people’s money to think-tank type bodies that are experienced and structured to know where better to put funds. More chance they will have some of your spirit, Rud, giving twice the budget and once-a-year exams.
        It is easier when there is a target like blue that all can understand. It’s much harder to pick the guy or the team who wonders why the flowers are spotted.
        You can’t budget for serendipity with any precision. Luckily, these types of researchers tend to find ways to follow their noses while doing what was asked. Or ignoring what was asked, that is the spirit.

      • Except for the Koch brothers, and they started with Raschig rings in distillation columns. Had to actually measure separations of chemicals so were grounded in physical results. Wouldn’t do to take funds from them as that would nullify any science no matter how accurate. Have to wait till nature bats last and the AMO and PDO swing negative, and even temperature adjustments no longer convince the world that cAGW will doom us all.
        Scott

      • The PDO already took its periodic dip into “swing negative”. It created a slight slowdown in the upward march of the global mean surface temperature. Nothing like the old days before the ACO2 control knob started bullying natural variation.

        The AMO, even with the assistance of the vast blue Atlantic blob south of Greenland, appears incapable of going negative.

        December 2010:
        http://www.ospo.noaa.gov/data/sst/anomaly/2010/anomnight.12.30.2010.gif
        March 31, 2016:
        http://www.ospo.noaa.gov/data/sst/anomaly/2016/anomnight.3.31.2016.gif

      • ” take money out of the pockets of the many. Taxation officials do this routinely”
        Geoff Sherrington, what the blazes are you talking about? Businesses don’t “take” anything – they offer stuff that customers want more than they want the money it takes to buy it. It’s called Free to Choose. Taxation is the use of legal force to confiscate money. They aren’t remotely comparable, and your apparent ignorance of this fact does not speak well for your education or opinions.

      • > Taxation is the use of legal force to confiscate money.

        Some call taxation the price we pay for civilization. Thus, the price we pay for civilization is the legal force to confiscate money. If business is the opposite of civilization, then freedom of choice is barbarous.

        FREEEDOOOOM!

  13. Quoting Lazebnik:

    Some time ago, I was reading Science’s Careers and cringed at the title “Can NIH renovate the biomedical workforce?” The problem was the word “workforce”, since its Russian equivalent was used by the Communist Party leadership to describe other citizens of the Soviet Union— where I grew up—whom they viewed as mere cogs in a machine at the Party’s disposal.

    The word may be the same; however, there’s a big difference between a one-party sham democracy and a two-party democratic system where most of the time electoral delegates — except perhaps the super ones — hew to the will of the populist vote. One also wonders how Lysenko would have fared in a system where Executive Orders can be subject to judicial review by request of an entirely separate elected body of representative legislature. Or especially now in a system where Executives themselves are term-limited in a way that Stalin most certainly was not, nor would have for an instant tolerated.

    I’m not sure I want to read the rest of this, if indeed I could even concentrate on it with all the dog-whistles ringing in my ears at the moment.

    I can at least sufficiently force down the bile and concentrate long enough to say this much: anyone who has not at one point or another pondered the switch to “Human Resources” over the perfectly descriptive and ingenuous term “Personnel” is either

    1) too young to have entered the *workforce*
    2) never been part of the *workforce* in a large corporation
    3) fortunate enough to have always owned and operated their own business and/or only been ever part of the *workforce* in a mom-and-pop owned and operated small company.

    And I mean seriously, doing good science IS work. Hard work. I’d personally be suspicious of anyone hinting otherwise. Perhaps the “problem” here is the association with the word “force”.

    Bottom line is, I’d rather be a cog in the machine of a healthy, mostly capitalist, economy backed by world-class publicly-funded research organizations that were charted and are administered by elected representatives than any alternatives few of the complainers here are even able to cogently articulate.

    • Dr. Lazebnik’s insight surely came from his Russian heritage.

      • omanuel,

        Dr. Lazebnik’s insight surely came from his Russian heritage.

        It pretty clearly influenced his choice in metaphors. What nobody seems to have picked up on yet is that he’s equated profit-driven corporate motive with Communist Party-controlled State-funded research which he also was clearly exposed to up to and including his work as a research associate in St. Petersburg, which ended coincidentally enough in 1991. One might argue that his essay expresses a certain amount of disenchantment that the visions which prompted him to leave Mother Russia to begin with didn’t turn out as well as he’d hoped. I’m speculating of course, as are you — there’s really very little here for either of us to go on.

        It’s worth pointing out that his particular flavor of biomedical research is oncology, a field that nobody goes into NOT hoping to find a way to reliably kill cancer cells without either terminating the patient — or making them quite ill. It’s a field ripe for … corruption if you will … by the results-oriented pharmaceutical industry. I.e., not an area of study that can afford to stray too far from the path of applied research. Think about it; why would a beancounter at the NSF want to hand out limited public funds to even a well-established principal investigator to do pure research knowing full well the amount of funds industry makes available to do the applied stuff?

        I’m not completely unsympathetic to what he’s grousing about. My beef is how his piece is being used in the context of climate research. At this point, who really gives a damn about doing pure research with so much policy riding on practical results? The connection just isn’t there for me except for the silently embedded “but Lysenko” Red Scare which permeates the political opposition to urgent emissions reductions.

        I’ve not been disappointed by the usual griping about lack of funding for “alternative hypotheses” as Dr. Christy put it to Congress when he floated the idea of 5-10% of the current US climate research budget for a Red Team to write an “assessment report”. Funny how few people complain about grant-whoring rent-seekers when they’re holding themselves up as some sort of modern day Galileo who are poking holes in the Warmunist Theocracy, innit. But I digress.

        Let’s return to Dr. Lazebnik. His proposed hopeful solution to the problem of underfunded pure research in *his field* of biomedicine is: I also hope that the Carnegies, Stanfords, and Hopkinses of today, perhaps with some help from the government, will build new ships, perhaps smaller, but more agile and steered by crews who are not afraid to sail in uncharted waters.

        Setting aside wondering what pure climate research would look like as opposed to applied (because that’s really NOT what this analogy is about), let’s think about another famous industrialist/philanthropist name of Rockefeller, John D., founder of Standard Oil. The closest modern day equivalents are probably the Koch brothers.

        Kip Hansen in comments here thinks that the tech billionaires have a chance to “make a difference”. Rud Istvan laments that most of them are “on the other side” out of political correctness.

        You can’t seriously tell me that the fossil fuel industry of today doesn’t have enough nickels to rub together to find an alternative hypothesis to CO2-induced global warming … or at least enough to be earnestly looking for one. Let’s not leave the Saudi and Kuwaiti Sheikhs out of this thought experiment. And I do mean experiment, because you guys are not thinking.

        The cold-hard truth is this: nobody with the means to do what your wishful thinking is asking for is going to waste serious cash looking for alternative mechanism(s) they do not think exists to explain a phenomenon which is adequately explained by good, robust science already on the books.

        Follow the money is a common refrain in this “debate”. Very well. Look at who has it, and where they’re NOT putting it.

        That is all.

      • > [T]here’s really very little here for either of us to go on.

        Thence this post, and until a philanthropic offer shows up in Judy’s mailbag.

  14. Basic science will never be profitable. Its funding will thus always be a messy, chaotic mix of charity, politics and millionaire largesse. Centuries ago scientific research was practiced mostly by self funding millionaires. Today’s situation is more complex and untidy but in my view, better.

    • Right, because premier for-profit journals with a reputation to protect are so highly motivated to publish politically-motivated crap.

      Next.

      • You jest, surely!

        For profit journals publish for profit.

        You might have noticed that they are fiercely resisting any attempts to disseminate scientific knowledge for free!

        Of course they publish politically motivated crap! Just look at some of the papers premier journals have been reluctantly forced to retract. The crap doesn’t even have to be politically motivated. Computer generated crap, supported by the appropriate fee, gets published – supposedly peer reviewed and all.

        I don’t share your child-like faith in the scientific publication process.

        Another appeal to your opinions, by the look of it. A fact or two might be more persuasive.

        Cheers.

      • Mike,

        You might have noticed that they are fiercely resisting any attempts to disseminate scientific knowledge for free!

        I have indeed, as would I expect any organization doing what they do for profit. Even non-profit organizations have a similar sense of self-preservation. Your point is what, exactly?

        Of course they publish politically motivated crap! […] The crap doesn’t even have to politically motivated.

        Thank you for recognizing the difficulty of determining why the crap was crappy.

        Just look at some of the papers premier journals have been reluctantly forced to retract.

        I would look at their reluctance as evidence that they know habitually selling crap is not a good business model … unless perhaps one is operating as a merchant of fertilizer.

        A fact or two might be more persuasive.

        And monkeys might fly out of my arse instead of crap. Let’s not forget that the burden of proof in this subthread lies with who posted the political cartoon at the head of it.

      • brandonrgates,

        The original graphic comment contrasted the scientific method, with the Cargo Cult method practised by pseudo scientific members of the Warmist Church of Latter Day Scientism, who describe themselves as climatologists.

        Climatologists apparently devote their taxpayer funded lifestyles to the study, restudy, and studies of the studies of the averages of weather, which is what climate is. Nothing more, nothing less.

        As a result, they have achieved precisely nothing of use to humanity, albeit at vast expense. Of course, “for profit” journals publish this pseudo scientific nonsense. So would I. Large taxpayer funded fees to publish rubbish, even bigger taxpayer funded fees charged to actually read the rubbish. It’s not all rubbish of course. You actually have to use your brain to separate the gold from the dross.

        As to brains, I assume you are at least the intellectual equal of the Incredible Shrinking (Trees talk to Me) Mann, the statistically challenged mathematician Gavin Schmidt, or any number of bearded, balding, bumbling buffoons. The bar is low enough, that’s for sure.

        As to burdens of proof, climatologists apparently don’t believe reproducible testability is necessary. Unverifiable yet strident assertions suffice – climatologists call this “evidence”.

        So what part of the poster’s postulation are you actually challenging, and why? You haven’t said, have you? Just the usual Warmist Waffle – stupid and dismissive, possibly patronising and condescending, but at least laudably brief.

        I’m not sure what your attempt at schoolboy humour is intended to imply. Are you trying to say you are full of crap? Or maybe jam-packed with monkeys, figuring if they utter enough random sounds, they’ll eventually say something intelligent? How would anyone discern the difference?

        Somewhat like climatology papers, appearing in a highly profitable commercial journal after payment of high fees. Which, if any are either factual or useful?

        Obviously, my opinion is superior to yours, because my ego is bigger.

        Cheers.

      • Mike,

        The original graphic comment contrasted the scientific method, with the Cargo Cult method practised by pseudo scientific members of the Warmist Church of Latter Day Scientism, who describe themselves as climatologists.

        Who needs evidence when begging the question will suffice?

        How would anyone discern the difference?

        That you wonder at being able to tell a winged monkey from a turd could be part of the problem here.

      • Brandon
        After you have moved the goalposts and changed the rules in your favour, then it’s natural to argue earnestly for playing by the rules.
        How were the goalposts moved? Well, for one, by discarding Karl Popper’s rule of falsifiability. Granted, climate science is far from alone in this; technology and computational advances have seen inductivism gain ground over deductivism, contrary to Popper’s tenets.
        Inductive: here’s a simulation with a built-in warming effect of CO2. I run it and it shows rising CO2 warms the climate. This proves CO2 warms the climate.
        Deductive: CO2 levels in the thousands of ppm in past ages did not cause warming. Even ice ages occurred under these conditions. The palaeo record is not consistent with a significant effect of CO2 on temperature. A reverse effect – CO2 following temperature due to ocean solubility of CO2 – is possible, however.

      • ptolemy2,

        Well, for one, discarding Karl Popper’s rule of falsifiability. Granted, climate science is far from alone in this, technology and computational advances have seen inductivism gain ground over deductivism, contrary to Popper’s tenets.

        Really? Setting aside the matter that Popper is one of many science philosophers and has no special status as a final authority other than your declaration by fiat, suppose this hypothesis: we propose all swans are white on the basis that all we’ve ever observed are white swans. We recognize this is an induction.

        Show us where Popper would take issue with the falsifiability of our hypothesis.

        Inductive: here’s a simulation with a built-in warming effect of CO2. I run it and it shows rising CO2 warms the climate. This proved CO2 warms climate.

        Strawman and cherry-picking.

        Deductive: CO2 levels in the thousands ppm in past ages did not cause warming. Even ice ages occured under these conditions. The palaeo record is not consistent with a significant effect of CO2 on temperature. A reverse effect – CO2 following temperature due to ocean solubility of CO2, is possible however.

        Treats multiple physical properties of CO2 as one variable. Your model, and therefore deduction, fails. See also: PETM. Also see also: Popper wasn’t a fan of naïve falsificationism.

        Now that I’ve chased your squirrels, perhaps you can offer up some evidence that premier for-profit journals have been snookered all this time by bad science and/or are in on the scam. Thanks.

      • have been snookered all this time by bad science

        I know I’ve presented evidence to you that the measured min and max temps are not responding to a ln forcing, at least for the massive changes that have driven global temperature these last 40 or 50 years. Now, there could be a ln forcing hidden in this, but it is insignificant in comparison.

        The issue with our climate is not a loss of cooling, and what else does Co2 do?

      • not a loss of cooling

        to clarify it is better said as

        not a loss of atmospheric cooling ability
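
        For reference, the “ln forcing” in this exchange is the standard simplified logarithmic expression for CO2 radiative forcing (Myhre et al. 1998), with C the CO2 concentration and C0 a reference concentration; for a doubling of CO2 it gives roughly 3.7 W/m2:

        \Delta F \approx 5.35 \,\ln\!\left( C / C_0 \right) \; \mathrm{W\,m^{-2}}, \qquad \Delta F(2\times\mathrm{CO_2}) \approx 5.35 \,\ln 2 \approx 3.7 \; \mathrm{W\,m^{-2}}

        The disagreement in this subthread is over whether the measured min/max temperature record shows a response of this logarithmic form, not over the expression itself.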

      • micro6500,

        I know I’ve presented evidence to you, the measured min and max temp are not responding to a ln forcing, at least for the massive changes that have driven global temperature these last 40 or 50 years.

        And I know I’ve pointed out to you that your output jumps around as individual station records start and stop, suggesting that your method is overly sensitive to spatio-temporal sampling.

        Now, there could be a ln forcing hidden in this, but it is insignificant in comparison.

        You’re not calculating long term change to min/max temps, you’re calculating day-on-day differences. Measuring daily weather noise is not climatically relevant to a long term change due to external forcing.

        Having chased yet another squirrel, back to my charge against the political cartoon above: That dozens of well-established premier for-profit journals have somehow overlooked — for decades! — the glaring error you think you have found beggars rational belief.

      • And I know I’ve pointed out to you that your output jumps around as individual station records start and stop, suggesting that your method is overly sensitive to spatio-temporal sampling.

        No, you’ve waved your hands around, and I deal with both of these, spatially by processing different sized areas, and temporally by doing both daily and yearly averaging. I also don’t pollute the data by averaging it with stations from around the world, unless it is in the area that is defined.

        You’re not calculating long term change to min/max temps, you’re calculating day-on-day differences. Measuring daily weather noise is not climatically relevant to a long term change due to external forcing.

        And you’re very wrong about this, I guess I could do a 30 year running average, but that seems stupid, and I also don’t consider the daily evolution of every surface station in the US over a year as weather.

        Besides all of this, the derivative still shows that the evolution of surface temperature that has driven the climate the last 40 years, and is the source of climate change, is not a loss of cooling from CO2.
        And that is the only thing that matters.

      • You’re not calculating long term change to min/max temps, you’re calculating day-on-day differences. Measuring daily weather noise is not climatically relevant to a long term change due to external forcing.

        Wanted to go back to this. First, totally wrong: my anomaly is the day-to-day change, but from the specific station, which reduces measurement uncertainty. Second, the noise comment: if I were measuring weather noise, max temp would have the same undulations as min, and as you can see it doesn’t.
        https://micro6500blog.files.wordpress.com/2015/03/global.png

        And what’s with this “intentionally hidden by averaging together”

        You can see from the derivative that max temp is near zero, yet the changes to our climate have been driven by changes to min temp as shown above. And it isn’t global changes; they are driven by regional changes. This one graph shows it: this is the slope of the change of temperature as the sun moves through its orbit, divided by calculated solar forcing for each included station. I calculate this in latitude bands; this is N20 to N30
        https://micro6500blog.files.wordpress.com/2016/05/latband_n20-30_sensitivity.png
        The change in measured CS in the late 90’s is the cause of the “step” at the end of the 97-98 El Nino.
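
        For concreteness, here is a minimal sketch, in Python/pandas, of the kind of per-station day-on-day differencing described above, contrasted with ordinary annual means. The input layout (a table with station, date, tmin, tmax columns) and the file name are assumptions for illustration; this is not the actual code behind the plots linked above.

        import pandas as pd

        # Hypothetical input: one row per station per day, with columns station, date, tmin, tmax
        df = pd.read_csv("station_daily.csv", parse_dates=["date"])
        df = df.sort_values(["station", "date"])

        # Day-on-day differences computed within each station, so every value is
        # compared only against the same instrument before any cross-station averaging.
        df["dtmin"] = df.groupby("station")["tmin"].diff()
        df["dtmax"] = df.groupby("station")["tmax"].diff()

        # Average the per-station differences across stations for each day,
        # then summarize by year; a cumulative sum reconstructs a temperature-like series.
        daily = df.groupby("date")[["dtmin", "dtmax"]].mean()
        yearly = daily.resample("YS").mean()
        reconstructed = daily.cumsum()

        # For comparison, the conventional approach: annual means of the raw temperatures.
        annual_means = df.groupby(df["date"].dt.year)[["tmin", "tmax"]].mean()

        Whether averages of such day-on-day differences say anything about a slow, multi-decadal forcing is exactly what is being argued over in this subthread.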

      • micro6500,

        The issue with our climate is not a loss of [atmospheric] cooling, and what else does Co2 do?

        We can look at a snapshot in time of outgoing radiation as seen from orbit and compare that to the results of a radiative transfer code:

        https://1.bp.blogspot.com/-zCrdL62Jh3g/VtjfXbsZwdI/AAAAAAAAAqw/mHYDtwrOAaA/s1600/MODTRAN%2BAll%2BEmitters%2Bvs%2BNIMBUS%2B4%2BIRIS%2BW%2BTropical%2BPacific%2BCO2%2B000325%2Bppmv.png

        We can also look at observational change over time, and compare that to modelled results, e.g. Griggs and Harries (2006): The observed difference spectrum between the years 2003 and 1970 generally shows the signatures of greenhouse gas forcing, and also shows the sensitivity of the signatures to interannual variations in temperature.

        We can look at change over time in ocean heat content:

        https://www.nodc.noaa.gov/OC5/3M_HEAT_CONTENT/heat_content700m2000myr.png

        We can look at “the best data we have” according to our host:

        http://www.drroyspencer.com/wp-content/uploads/UAH_LT_1979_thru_May_2016_v6.jpg

        Multiple lines of evidence show a clear long-term secular change in energy content. Direct spectral observation show changes over time consistent with modelled results. The theoretical mechanism consistent with these changes is well-described and was on the books before observational data confirmed the prediction:

        Although the increase in atmospheric CO2 has not yet resulted in a measurable change in the earth’s climate. The concerns surrounding the possible effects of increased CO2, have been based on the predictions of models which simulate the earth’s climate. These models vary widely in the level of detail in which climate processes are treated and in the approximations used to describe the complexities of these processes. Consequently the quantitative predictions derived from the various models show considerable variation. However, over the past several years a clear scientific consensus has emerged regarding the expected climatic effects of increased atmospheric CO2. The consensus is that a doubling of atmospheric CO2, from its pre-industrial revolution value could result in average global temperature rise of (3.0 ± 1.5)°C. The uncertainty in this figure is a result of the inability of even the most elaborate models to simulate climate in a totally realistic manner. The temperature rise is predicted to be distributed nonuniformly over the earth, with above-average temperature elevations in the polar regions and relatively small increases near the equator. There is unanimous agreement in the scientific community that a temperature increase of this magnitude would bring about significant changes in the earth’s climate including rainfall distribution and alterations in the biosphere.

        […]

        In our climate research we have explored the global effects of Newell’s evaporative buffering mechanism using a simple mathematical climate model. Our findings indicate that Newell’s effect is indeed an important factor in the earth’s climate system. As Newell predicted, evaporative buffering does limit CO2-induced temperature changes in the equatorial regions. However, we find a compensatingly larger temperature increase in the polar regions, giving a global averaged temperature increase that falls well within the range of the scientific consensus. Our results are consistent with the published predictions of more complex climate models. They are also in agreement with estimates of the global temperature distribution during a certain prehistoric period when the earth was much warmer than today. In summary, the results of our research are in accord with the scientific consensus on the effect of increased atmospheric CO2 on climate. Our research appears to reconcile Newell’s observations and proposed mechanism with the consensus opinion.

        We are now ready to present our research to the scientific community through the usual mechanisms of conference presentations and publications in appropriate journals. I have enclosed a detailed plan for presenting our results.

        Multiple lines of evidence from independent investigations backed by sound theory are against you. I would suggest you deal with that instead of endlessly repeating the results of a flawed analysis.

      • We can look at a snapshot in time of outgoing radiation as seen from orbit and compare that to the results of a radiative transfer code

        “For the decade considered [2000-2010], the average imbalance is 0.6 = 340.2 − 239.7 − 99.9 Wm2 when these TOA fluxes are constrained to the best estimate ocean heat content (OHC) observations since 2005 (refs 13,14). This small imbalance is over two orders of magnitude smaller than the individual components that define it and smaller than the error of each individual flux. The combined uncertainty on the net TOA flux determined from CERES is ±4 Wm2(95% confidence) due largely to instrument calibration errors12,15. Thus the sum of current satellite-derived fluxes cannot determine the net TOA radiation imbalance with the accuracy needed to track such small imbalances associated with forced climate change11.

        https://judithcurry.com/2012/11/05/uncertainty-in-observations-of-the-earths-energy-balance/
        I remember the uncertainty being up around 14 W m-2, but I will accept this figure; at ±4 W m-2 it’s still larger than the supposed effect it’s trying to measure, plus the measured difference.

        The models are hunting for imbalances and build-ups in planetary energy. But according to the observations, the longwave (infra-red) energy coming onto the earth’s surface, the infamous back radiation, is 10–17 W/m2 higher than in the famous Trenberth diagram from 1997. So the models are trying to explain tiny residual imbalances, but the uncertainties and unknowns are larger than the target. The argument that “only the forcing from CO2 can fill the gap in the models” is not just an argument from ignorance rhetorically, but factually too.
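
        To make the arithmetic being disputed here explicit, a small sketch reproduces the 0.6 W m-2 residual from the fluxes quoted above and compares it with the ±4 W m-2 CERES uncertainty; the comparison is my own bookkeeping, not the paper’s.

        # Fluxes from the passage quoted above, in W m-2.
        solar_in = 340.2        # incoming solar at the top of the atmosphere
        longwave_out = 239.7    # outgoing longwave
        shortwave_out = 99.9    # reflected shortwave
        residual = solar_in - longwave_out - shortwave_out
        uncertainty = 4.0       # W m-2, quoted CERES net-flux uncertainty (95% confidence)
        print(f"residual imbalance:      {residual:.1f} W m-2")
        print(f"instrument uncertainty: +/-{uncertainty:.1f} W m-2")
        print(f"uncertainty / residual:  {uncertainty / residual:.1f}x")
        # The residual is several times smaller than the flux uncertainty, which is
        # why the quoted work constrains the imbalance with ocean heat content rather
        # than relying on the satellite fluxes alone.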

        Then there is how they “fixed” it:

        While one might imagine that the instantaneous impact of a perturbation on the top-of-atmosphere (TOA) radiation balance would be a good measure of its radiative forcing, early studies quickly recognized that this measure was not optimal. The temperature of the stratosphere, in particular, was not closely tied to that of the surface. For example, it warms under a positive solar forcing yet cools under a positive greenhouse gas forcing (Fels et al. 1980), therefore requiring the surface and troposphere to warm more to balance the same instantaneous TOA net flux perturbation (Hansen et al. 1997). This problem was resolved by allowing for a “stratospheric adjustment” prior to calculating the radiative forcing, which has been the standard approach at least since the first Intergovernmental Panel on Climate Change (IPCC) report (Houghton et al. 1990).

        http://journals.ametsoc.org/doi/pdf/10.1175/BAMS-D-13-00167.1
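
        As a purely numerical illustration of the adjustment described in that excerpt, here is a toy bookkeeping sketch. The numbers are rough placeholders of mine, in the ballpark of commonly cited 2xCO2 values, not figures from the quoted paper.

        # Toy bookkeeping for the "stratospheric adjustment" of a greenhouse-gas
        # forcing. All numbers are rough placeholders for illustration only.
        instantaneous_F = 2.6    # W m-2, TOA flux change before the stratosphere responds
        strat_adjustment = 1.1   # W m-2, additional forcing once the stratosphere has cooled
        adjusted_F = instantaneous_F + strat_adjustment   # ~3.7 W m-2
        lam = 1.2                # W m-2 K-1, same illustrative feedback value as earlier
        print(f"warming implied by instantaneous forcing: {instantaneous_F / lam:.1f} K")
        print(f"warming implied by adjusted forcing:      {adjusted_F / lam:.1f} K")
        # For a greenhouse-gas perturbation the adjusted forcing is the larger of the
        # two, which is the quoted point: the surface and troposphere must warm more
        # than the instantaneous TOA number alone would suggest.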

      • micro6500,

        No, you’ve waved your hands around, and I deal with both of these, spatially by processing different sized areas, and temporally by doing both daily and yearly averaging.

        And yet when you compute the global *daily* rates of change, abrupt changes in number of observations often coincide with abrupt changes in rates of change. It sticks out like a sore thumb. That’s not handwaving, that’s pointing to an obviously flawed methodology.

        And you’re very wrong about this, I guess I could do a 30 year running average, but that seems stupid, and I also don’t consider the daily evolution of every surface station in the US over a year as weather.

        Even a 30 year running average of inter-day global variability is closer to weather noise than computing a 30 year linear trend. As it stands now, all you’re measuring is high frequency static.
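
        To illustrate the distinction being argued here, consider a toy example with synthetic data; it is my own construction, not anyone’s actual station processing. Day-to-day changes are dominated by weather noise orders of magnitude larger than a plausible trend expressed per day, whereas a 30-year regression can still pull that trend out.

        import numpy as np

        rng = np.random.default_rng(0)
        # Synthetic 30 years of daily temperature anomalies: a small warming trend
        # buried in day-to-day weather noise. Entirely made-up illustrative numbers.
        n_days = 30 * 365
        days = np.arange(n_days)
        trend_per_day = 0.02 / 365.0            # 0.02 C per year
        anomaly = trend_per_day * days + rng.normal(0.0, 3.0, n_days)
        daily_changes = np.diff(anomaly)
        print(f"std of day-to-day changes: {daily_changes.std():.2f} C")   # several C of 'static'
        print(f"trend expressed per day:   {trend_per_day:.6f} C")         # ~0.00005 C
        # A 30-year linear fit, by contrast, recovers the underlying trend.
        slope_per_year = np.polyfit(days, anomaly, 1)[0] * 365.0
        print(f"OLS trend estimate: {slope_per_year:.3f} C per year (true value 0.020)")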

      • And yet when you compute the global *daily* rates of change, abrupt changes in number of observations often coincide with abrupt changes in rates of change. It sticks out like a sore thumb. That’s not handwaving, that’s pointing to an obviously flawed methodology.

        There aren’t any abrupt changes in stations, though I suppose I could throw them away or use stations from far away to make something up.

        Even a 30 year running average of inter-day global variability is closer to weather noise than computing a 30 year linear trend. As it stands now, all you’re measuring is high frequency static.

        If this is what you think, you haven’t looked at this.

        What is an average of a year of changes at a single station? It isn’t weather.
        And how is that any different than what everyone else is doing, other than I don’t smear data from 1,200km away into an answer.

        As for the previous post, it’s all irrelevant. I’m not arguing that there aren’t spectral lines; I will argue that the TOA imbalance is made up: the measurement accuracy of the satellites is 4 or 5 times larger than the signal they are trying to detect.
        Heat content is so poorly measured it could be anything, but all of this again is irrelevant. I’m not even arguing that it hasn’t warmed. What I am arguing is that there is no evidence the rate of cooling at night has decreased, that there are large regional changes in min temp that align with things like the step in temp in 1997 and 1998 but are intentionally hidden by averaging it all together, and that these changes, which do explain the temperature record, are not the fingerprint of CO2.

      • 30 year linear trend

        The trend lines for the derivative of both min and max temp, when you include measurement uncertainty, are 0.0 °F ± 0.1 °F for 1950 to 2013.

      • micro6500,

        There aren’t any abrupt changes in stations […]

        The plots I recall seeing showed abrupt changes in number of observations. How about you post your latest?

        […] though I suppose I could throw them away or use stations from far away to make something up.

        You need to do something to account for uneven spatial coverage over time. You can, and should, try randomly throwing away station data as a sensitivity analysis because spatial coverage is not constant throughout GHCND. It’s a recipe for coverage bias if you do not take steps to control for it. And it’s not like the raw data are without known, demonstrable quality issues and systemic biases.
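
        A toy version of that sensitivity test, using synthetic stations rather than GHCND data, might look like the sketch below; the anomaly field, the network bias and the thresholds are all invented for illustration.

        import numpy as np

        rng = np.random.default_rng(42)

        def true_anomaly(lat_deg):
            # A made-up field that warms more toward the poles (polar amplification).
            return 0.5 + np.abs(lat_deg) / 90.0

        # Area-uniform reference value for the "global mean" of the field.
        ref_lats = np.degrees(np.arcsin(rng.uniform(-1.0, 1.0, 100_000)))
        reference = true_anomaly(ref_lats).mean()

        # A station network biased toward northern mid-latitudes, as real networks are.
        station_lats = np.clip(rng.normal(45.0, 20.0, 2000), -90.0, 90.0)
        obs = true_anomaly(station_lats) + rng.normal(0.0, 0.2, station_lats.size)

        print(f"area-uniform reference mean: {reference:.2f} C")
        print(f"naive station average:       {obs.mean():.2f} C")

        # Sensitivity checks: change the coverage and see how much the answer moves.
        subsets = [rng.choice(obs.size, int(0.7 * obs.size), replace=False) for _ in range(200)]
        random_drop_spread = np.std([obs[idx].mean() for idx in subsets])
        tropics_only = obs[np.abs(station_lats) < 30.0].mean()
        print(f"spread from randomly dropping 30% of stations: {random_drop_spread:.3f} C")
        print(f"mean using only stations within 30 degrees of the equator: {tropics_only:.2f} C")
        # A result that moves when the coverage changes is telling you about coverage
        # bias, not about the climate.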

        What is an average of a year of changes at a single station? It isn’t weather.

        Read what I wrote: Even a 30 year running average of inter-day global variability is closer to weather noise than computing a 30 year linear trend.

        At this point, I should probably clarify that I forgot one aspect of the several different calculations you’re doing: looking at diurnal temperature range based on the daily min/max values. Then, IIRC you compute the daily differences between them and average those results together in various ways.

        That’s a long way of saying that I don’t know which “evidence” you’re talking about. Again it would be nice if you posted your most recent plots which demonstrate the argument you’re making.

        And how is that any different than what everyone else is doing, other than I don’t smear data from 1,200km away into an answer.

        HADCRUT4 doesn’t infill with interpolation and their results are quite comparable to GISTemp. Oh, and for the record, 1,200km is the furthest distance GISS uses in their PHA. That does not mean that all, or even most, station data are subject to comparison with others up to that limit.
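
        For anyone who has not seen it done, the grid-box approach being referred to is simple enough to sketch. The following is a heavily simplified schematic with synthetic stations; it is not the actual HadCRUT or GISS procedure.

        import numpy as np

        rng = np.random.default_rng(7)

        # Schematic grid-box averaging with no infilling; synthetic, made-up stations.
        n = 3000
        lats = np.clip(rng.normal(45.0, 20.0, n), -89.9, 89.9)   # NH-biased network
        lons = rng.uniform(-180.0, 180.0, n)
        anoms = 0.5 + np.abs(lats) / 90.0 + rng.normal(0.0, 0.3, n)

        # Bin stations into 5x5 degree boxes and average within each occupied box.
        lat_idx = ((lats + 90.0) // 5).astype(int)
        lon_idx = ((lons + 180.0) // 5).astype(int)
        box_sum = np.zeros((36, 72))
        box_cnt = np.zeros((36, 72))
        np.add.at(box_sum, (lat_idx, lon_idx), anoms)
        np.add.at(box_cnt, (lat_idx, lon_idx), 1)

        # Weight occupied boxes by cos(latitude of box centre); empty boxes are left out.
        centres = np.arange(-87.5, 90.0, 5.0)
        weights = np.cos(np.radians(centres))[:, None] * np.ones((1, 72))
        occupied = box_cnt > 0
        box_mean = np.where(occupied, box_sum / np.maximum(box_cnt, 1), 0.0)
        grid_mean = (box_mean * weights)[occupied].sum() / weights[occupied].sum()

        print(f"grid-box weighted mean: {grid_mean:.2f} C")
        print(f"plain station average:  {anoms.mean():.2f} C")
        # Gridding stops densely sampled regions from being over-weighted, but it says
        # nothing about boxes with no stations at all, which is where the choice
        # between infilling and leaving gaps comes in.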

        As for the previous post, it’s all irrelevant […]

        Sorry, but consilience of multiple independent lines of evidence backed by well-described physical theory is the furthest thing from irrelevance in empirical science.

        […] I’m not arguing that there aren’t spectral lines […]

        Those absorption/emission lines are trying to tell you something you would do well to not dismiss out of hand.

        […] I will argue that the TOA imbalance is made up […]

        Oh rubbish. Again, I’m supposed to believe that premier for-profit journals have for decades been accepting manufactured conclusions based on falsified data? Pull my other one.

        Why would you even argue that TOA imbalance is “made up” knowing full well what a huge heat sink the oceans are? Do you think the system would respond instantaneously to an abrupt 10% increase (or decrease) in solar output? Why would any other global external forcing work any differently?

        […] the measurement accuracy of the satellites is 4 or 5 times larger than the signal they are trying to detect.

        Come on. What’s the s/n ratio in the analysis you’re doing?

        Heat content is so poorly measured it could be anything, but all of this again is irrelevant. I’m not even arguing that it hasn’t warmed. What I am arguing is that there is no evidence the rate of cooling at night has decreased, that there are large regional changes in min temp that align with things like the step in temp in 1997 and 1998 but are intentionally hidden by averaging it all together, and that these changes, which do explain the temperature record, are not the fingerprint of CO2.

        Which temperature record?! And what’s with this “intentionally hidden by averaging it all together” crap? Do you not realize that all of the major GMST analyses provide gridded data products?

        Nobody on this end of the conversation is impressed by your unevidenced conspiracy theories about professional scientists or complicit journals intentionally hiding things for decades on a worldwide scale. Wake up.

      • brandonrgates,

        A couple of minor points. Your snapshot in time seems to be one during the day. Any warming during the day seems to be due to insolation. A snapshot at night shows cooling. No sunlight. No CO2 heating at all. Cooling, as is observed by all.

        As to the rest of your arguments, I may need to point out that correlation does not create causation.

        There is no repeatable scientific experiment demonstrating the alleged heating effect of CO2. None.

        Calls to ban coal appear to be completely stupid, unless more cost-effective ways of generating electricity, smelting iron ore, and elevating CO2 levels in the atmosphere are found.

        CO2 is an essential requirement for humanity to continue to exist. Calls to restrict the amount available in the atmosphere are tantamount to conspiracy to commit a massive crime against humanity – genocide on an unprecedented scale.

        If you disagree, producing some facts to support your contentions might be valuable. Computer model outputs are not facts. Predictions or scenarios are not facts. Consensus opinions are not facts.

        I don’t believe you have any facts, but I’m always prepared to change my mind if I’m presented with new facts. What about you?

        Cheers.

      • Mike,

        A couple of minor points. Your snapshot in time seems to be one during the day.

        I believe that is correct. It doesn’t actually matter.

        Any warming during the day seems to be due to insolation.

        Non sequitur: a snapshot in time does not show any change. That was the point of my including Griggs and Harries (2006), which you predictably ignored in your response.

        A snapshot at night shows cooling.

        Day or night, outbound LW as seen from orbit is *always* cooling. Good thing too, or we’d have been cooked eons ago.

        No sunlight. No CO2 heating at all. Cooling, as is observed by all.

        I’ve already covered how you are strawmanning the mechanism: CO2 isn’t a source of heat.

        As to the rest of your arguments, I may need to point out that correlation does not create causation.

        You don’t need to do that because I learnt it in Stats for Dummies.

        Calls to ban coal appear to be completely stupid, unless more cost-effective ways of generating electricity, smelting iron ore, and elevating CO2 levels in the atmosphere are found.

        Calls to continue to suffer the adverse health and economic effects of breathing combusted coal dust appear to be completely stupid.

        CO2 is an essential requirement for humanity to continue to exist.

        Nobody on this end of the argument is talking about taking absolute levels of CO2 to zero. Only net *emissions*. Our species evolved in conditions where atmospheric CO2 ranged between 120 and 280 ppmv — calling 400+ ppmv necessary is a bit of a stretch.

        Calls to restrict the amount available in the atmosphere are tantamount to conspiracy to commit a massive crime against humanity – genocide on an unprecedented scale.

        Spoken like a true humanitarian. Here’s your symbolic Nobel Peace Prize.

        If you disagree, producing some facts to support your contentions might be valuable.

        I do not have the burden of proof for your assertions.

        Computer model outputs are not facts.

        I provided empirical data. The modelled results were quite consistent with them. This is how science works.

        Predictions or scenarios are not facts.

        Good, then we can completely dismiss your predictions of massive crimes against humanity and genocide on an unprecedented scale.

        Consensus opinions are not facts.

        So we should trust what a minority of self-proclaimed experts scribble on the Internet (who generally don’t have a cohesive alternative explanation) over what the majority of actual domain experts have published in refereed literature. Sounds like a plan … I’ll get right on that.

        I don’t believe you have any facts, but I’m always prepared to change my mind if I’m presented with new facts.

        Prior experience would suggest otherwise.

        What about you?

        Produce a CMIP5-compatible AOGCM which better explains the instrumental record without invoking GHGs, and you’ll at least gain my attention. If those results are independently replicated, published in refereed literature, and become widely accepted, I would change my beliefs. And be quite relieved. No … I’d be ecstatic.

      • brandonrgates,

        You seem to agree that CO2 is not a source of heat. You also seem to agree that there are no net harmful effects to humanity from increasing the amount of CO2 in the atmosphere.

        Your objection to breathing combusted coal dust indicates that you are a follower of James “Death Trains” Hansen, and have been using calls to reduce CO2 emissions as a roundabout way of banning coal. Maybe you could devote your obvious enthusiasm to figuring out ways to minimise the effect of breathing combusted coal dust – a respirator should fit your needs. Or you could stridently demand that governments all over the world are forced to do your bidding, and prevent any combusted coal dust from reaching your lungs.

        I wish you well with your endeavours. You’ll need it, but let me know how you get on, won’t you?

        As to demanding that I produce a different and better computer model subject to your specifications, you are getting into the same silly mindset as Steven Mosher. The toy computer models that you love so much have proved to have precisely zero useful predictive ability. Modelling outputs from a chaotic system the size and complexity of the atmosphere has eluded the world’s finest self-styled climatologists!

        I would have to admit that I could do at least as poorly, for far less cost. I won’t, because I don’t really want to make the Warmist Wallys look any more foolish than they already appear. Just not good form.

        Keep on with the fight against the filthy dirty evil black coal, (and presumably those who burn it).

        I prefer to live in the real world, even with all the attendant costs. Maybe you could join Steven Mosher, and try to get the Chinese government to reduce the amount of particulate pollution in China.

        I figure it’s quite possible that the Chinese are as smart as Americans. Maybe they aren’t quite as incompetent as you seem to imply.

        I believe they have been building the world’s fastest supercomputers for several years. Maybe the Americans could get a few tips, rather than constantly whining about how everyone else is stealing American intellectual property. Can’t the US just steal it back, or are US authorities bumbling ineffectual complainers?

        Maybe the Whining Warmist philosophy has been adopted more widely than has been recognised to date. Complain that nothing’s your fault, people only want to find fault with your work, it’s unreasonable to have to justify anything in scientific terms, and all you need is more money, with even less accountability.

        Maybe.

        Cheers.

      • Mike,

        You seem to agree that CO2 is not a source of heat.

        No seems about it, I’ve previously been quite clear with you that CO2 is not a heat source.

        You also seem to agree that there are no net harmful effects to humanity from increasing the amount of CO2 in the atmosphere.

        No. It’s an open question in my mind whether the net effects *to date* are harmful or beneficial because of the inherent difficulty in such calculations.

        Your objection to breathing combusted coal dust indicates that you are a follower of James “Death Trains” Hansen, and have been using calls to reduce CO2 emissions as a roundabout way of banning coal.

        I’m quite up front about CO2 being a future risk best avoided. I don’t see why it should be an issue of Integrity that I have multiple reasons for prioritizing which fossil fuels should be replaced sooner rather than later.

        Maybe you could devote your obvious enthusiasm to figuring out ways to minimise the effect of breathing combusted coal dust – a respirator should fit your needs. Or you could stridently demand that governments all over the world are forced to do your bidding, and prevent any combusted coal dust from reaching your lungs.

        I don’t have a problem exercising political force, Mike. It’s how the real, adult world works.

        I wish you well with your endeavours. You’ll need it, but let me know how you get on, won’t you?

        You’ll read about it in the papers. I doubt you’ll see my name mentioned.

        As to demanding that I produce a different and better computer model subject to your specifications, you are getting into the same silly mindset as Steven Mosher.

        The kind of model I proposed is not the kind of thing one person is going to develop in their spare time. You’re reading me too literally.

        The toy computer models that you love so much have proved to have precisely zero useful predictive ability.

        Perhaps I’m reading your hyperbole too literally, but I’d dearly love to see your calculations which arrive at “precisely zero”. Teh modulz are of course wrong, as all models are by definition. The way I judge models is by relative skill.

        Modelling outputs from a chaotic system the size and complexity of the atmosphere has eluded the world’s finest self-styled climatologists!

        Climatologists are very explicit that teh modulz are not long-term weather forecasting tools. The stated goal is to colour within the boundaries of the attractor.

        I prefer to live in the real world, even with all the attendant costs.

        We didn’t stop using cheap stone tools due to a scarcity of rocks.

        Maybe you could join Steven Mosher, and try to get the Chinese government to reduce the amount of particulate pollution in China.

        My primary focus is on the policies of my own country.

        I figure it’s quite possible that the Chinese are as smart as Americans. Maybe they aren’t quite as incompetent as you seem to imply.

        I’m of the mind that all “races” are comparably intelligent. This 2015 article from Bloomberg shows the Chinese outspending the US on renewables by a factor of two in absolute terms and by over a factor of three relative to nominal GDP. As well, 23 of the 67 nuclear reactors under construction worldwide are being built in China, or roughly a third.

        I’d call that relatively smart.

        I believe they have been building the world’s fastest supercomputers for several years. Maybe the Americans could get a few tips, rather than constantly whining about how everyone else is stealing American intellectual property. Can’t the US just steal it back, or are US authorities bumbling ineffectual complainers?

        You’re really all over the place today. I’d say the biggest problem in US politics at the moment is our polarized and thus gridlocked and unproductive Federal legislative branch, which does affect more than just climate policy.

        Maybe the Whining Warmist philosophy has been adopted more widely than has been recognised to date.

        I’m not sure that Whining Warmist is even a common term outside the anglophone world (appropriately translated of course). Certainly the mainstream scientific view is more broadly *accepted* in non-English-speaking countries. As far as adoption goes — which entails action over words — only N. America and Europe have negative emissions growth from 2000-2011, -0.33% and -0.38% per year respectively (based on linear trend, data from the World Bank).
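
        For what it is worth, one common way to turn a series into a “% per year, based on linear trend” figure is to regress on year and express the slope relative to the period mean; the sketch below uses invented placeholder numbers, and I do not know whether this is exactly the convention behind the figures above.

        import numpy as np

        # Invented, roughly declining emissions series for 2000-2011 (placeholder data).
        years = np.arange(2000, 2012)
        emissions = np.array([6.80, 6.75, 6.70, 6.72, 6.65, 6.60,
                              6.58, 6.55, 6.50, 6.45, 6.48, 6.44])
        slope = np.polyfit(years, emissions, 1)[0]
        growth_pct_per_year = 100.0 * slope / emissions.mean()
        print(f"linear-trend growth rate: {growth_pct_per_year:+.2f}% per year")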

        Complain that nothing’s your fault, people only want to find fault with your work, it’s unreasonable to have to justify anything in scientific terms, and all you need is more money, with even less accountability.

        Every IPCC assessment report is chock full of scientific justification. Wake me up when you’ve got an alternative explanation for temperature trends since the Industrial Revolution.

      • brandonrgates,

        Maybe you’re conflicted. You agree CO2 has no heating properties, but seem to claim it is responsible for increasing the Earth’s temperature. That fits my definition of heating, so you claim that CO2 heats something by merely surrounding it. Of course, this is specious nonsense.

        You wrote –

        Every IPCC assessment report is chock full of scientific justification. Wake me up when you’ve got an alternative explanation for temperature trends since the Industrial Revolution.

        I’d replace the term “scientific” with “pseudo-scientific”. The IPCC assessment reports are chock full of nonsense, interspersed with reluctantly inserted truth. They eventually acknowledged –

        “The climate system is a coupled non-linear chaotic system, and therefore the long-term prediction of future climate states is not possible.”

        Then they proceed to claim they can predict the climate anyway, and even propose working towards ways of verifying the predictive models that they say can’t work!

        It only needs more money, of course.

        And you really believe this crap?

        Many of the volunteers providing input to the IPCC were obviously second raters with too much spare time, and their employers realise this, by not giving them something worthwhile to think about.

        A motley collection of tree whisperers, coal haters, incompetent statisticians and so on. Unfortunately, some real scientists got caught up in the madness, at the same time. You’ll notice that the more thoughtful IPCC contributors have changed their minds to various degrees.

        As to your Mosher-like demand for an alternative explanation, Professor Curry has recently provided a link to a peer reviewed paper providing exactly that. The science is reproducible, the authors even expressed surprise at their findings, but accepted them, albeit finding them somewhat counter intuitive. That’s probably as a result of being exposed to rampant Warmist Scientism for a decade or two.

        I’ve commented on the paper previously. If you are interested, you might care to read it, and no doubt dismiss it out of hand (that’s purely assumption, of course).

        My assumption is that you are opposed to burning coal, just like James “Death Trains” Hansen. I agree there are health costs associated with using coal. I agree it looks unpleasant, and coal miners emerge from mines looking black and filthy. I like electricity, steel, cement, plastic and chemical feedstock, and all the other benefits which accrue from using coal. I put up with the costs.

        I rarely descend to analogies, but I also use electricity, in spite of the ever present danger of electrocution. I drive a car, in spite of the ever present danger of serious personal injury or death. I’ve reluctantly gone to hospital on occasion, in the full knowledge that many people die as a result of avoidable medical mistakes, or as the result of becoming infected by hospital dwelling highly resistant microorganisms.

        I support actions you might take to reduce pollution of various types. Hydrocarbon combustion results in at least two components – carbon dioxide and water. I regard neither of these as pollutants in atmospheric amounts up to at least double the present levels.

        You may, of course.

        I’ll let you have the last word.

        Cheers.

      • BrandonG: “Produce a CMIP5-compatible AOGCM which better explains the instrumental record without invoking GHGs, and you’ll at least gain my attention.”
        .
        Simple: discard the unfounded assumption of zero centennial and millennial scale variability, or of the infallibility of a record that has been more homogenized and adjusted than a block of aged Swiss with a lower back problem.

    • bile jones,

      In the land of Oz, there’s another step for climatology. When your research job is declared redundant because you claimed the science was settled, you suddenly discover it’s not.

      Gather your soon-to-be-jobless colleagues, dress in your “signature white coats” and hand out leaflets door-to-door, claiming the Government is “anti science”.

      Seriously. Whining Warmists!

      Cheers.

  15. The answer may well be to “game” the system from the opposite direction – that is, if your research funding depends on citations in published literature, establish a journal of failed research. Publish “dead-end” research, stuff that didn’t work the way you thought it would. Hard to get started, but once established, a valuable resource for new research and also a way to get your citations higher, even when what you do doesn’t lead to positive results. Have to be careful how you name it and have to ensure that you have a wide support base from major universities to get that “seed” database in place.
    As it stands, most of the stuff you could publish in this sort of journal never sees the light of day and dies with the team that did the research. If science/research is about making discoveries, then discovering you were wrong surely counts; having a list of what doesn’t work is more valuable than what does work when 99% doesn’t work and only 1% does, especially when everyone is clamoring for $ and the funders want results – why re-invent a square wheel 50 times?

  16. Oh – and you could use papers published in that sort of journal as a way to “measure” creativity for the administrators!

  17. There seem to be basically two kinds of Science: Private and Public.

    Private Science is done mainly by large corporations behind closed doors. The knowledge gained is not shared in the peer reviewed journals; rather it is used to create products that can be monetized. Private Science is done for the sole purpose of exploiting unwitting consumers by making them buy products they don’t need while causing high external costs through consumption of natural resources, damage to the environment, and exploitation of indigenous peoples.

    Public Science, on the other hand, is done for the benefit of all humanity. It is funded by government grants, rather than the exploitation of citizens. Its goals include guiding and supporting government policies and informing and educating the public so that they understand and support government action on a wide range of economic and social issues. Public science is conducted at elite institutions by professors and doctors with impressive titles unfettered by the politics of greed which dominates private science. Their findings and results are clearly written and published in respected peer reviewed journals for all to read. These papers are read intently by journalists for newspapers and television who then breathlessly report the important new facts to the public. Other scientists eagerly reproduce and confirm the important findings so that they may become consensus, the law of the land.

    It is clear that Private Science is broken and must be fixed. This will certainly require government action in the form of regulations, taxes and penalties for the recovery of external costs. Other countries such as Cuba, Venezuela and North Korea are far ahead of us in this regard and their companies are held to high social standards. It is my hope that the next administration will continue and build on the achievements of the Obama Administration and take our country bravely forward into the 21st century, with no exclamation points required here, I hope.

    • Geoff Sherrington

      KenW
      Perhaps you are confusing ‘politics of greed’ with ‘motivation for profit’.
      Two significant differences between your two groups (with which classification I disagree) are: 1. In private research you have the incentive of being able to get very rich. 2. In public research you seldom have to account for your performance. These differences can make the two groups poles apart.
      Having said that, most of my career relied dominantly on private research, but I was grateful for public research in quite a few instances where public researchers were hired, or came to us for part of their funding. It was quite successful, but it seems to be a resisted concept at present.

      • Well Geoff, I also think that public and private science need to be brought closer together. A good place to start would be to make private science more inclusive. Many R+D departments are severely lacking in diversity. Title IX oversight needs to be expanded to include all entities that do business with any government agency, including the IRS.

      • Geoff Sherrington

        KenW,
        With experience, you should lose some of the urge to invite others to control or regulate you.
        Geoff.

      • Geoff, it’s not about me, it’s about the younger generation that is just coming into the workplace. The government has made great strides in ensuring that our education system provides a positive and unthreatening life experience and gives our young people the critical facts they need to rationalize their expectations.

        I don’t think that the private sector, if left to itself, is capable of providing the caring and nurturing environment that our young people have come to depend upon. The government will need to step in before corporate insensitivity leads to disillusionment and unnecessary sadness.

        We will certainly see increasing demands for such intervention in the near future.

    • KenW. What a complete crock. So the government is going to “fix” private science? Great idea, make it grossly inefficient, not accountable to anyone and a source of money and power for just elitist leftists – just like “public” science.

      • It’s a joke, son.

      • Willard! I was expecting a little more support from you than this. Some here think that government supported research is ineffective. I think we’ve got exactly what we created and I see no signs that we’re going to change course soon so let’s finish the job and get on with it.

      • Your expectations were way too low, KenW.

        I duly submit that we should embrace globalization and outsource research where its output is optimized. Some would suggest China, but I think we can get even more half-baked ideas for our buck if we go directly to Russia. Their salaries are becoming lower than Foxconn employees’, and if they can hack Killary’s emails, they sure could hack teh stoopid climate modulz.

        There is no reason why we should timeshare university facilities with students. Companies that produce engineer-level formal derivations would certainly be more efficient if they could use all the computing power universities can provide them, and YouTube, Coursera, and MIT already provide all the content we need. In fact, our education model is in dire need of being transformed into an internship practice where jobs would be granted to the highest bidders.

        Only then will we see something that remotely resembles a real GRRROWTH gimmick.

        Thank you.

      • Willard, we’ve got some common ground here! We need to think bigger.

        In order to really get things on track we need more effective coordination on the global level. We need to strengthen our supranational institutions like the WTO, IMF, EU and especially the UN.

        Our government in Washington works because they have money*. With money you can distribute largesse. Largesse is what makes the country work the way the managers in Washington know it should. Title IX would be a toothless tiger if it weren’t for the government’s ability to giveth and taketh away. I don’t know where it’s covered in the constitution, but most of our federal agencies would be just so much alphabet soup if they didn’t have largesse.

        (* They don’t really even have to have money, they just say they do and everybody believes them because we’re so used to it)

        The problem with the UN is that they have no money. They’re always begging. They can barely afford the small cadre of elite managers they employ now. What they really need is hands-on purse strings! That’s why we need a global tax on the international banking system. The elimination of cash and global oversight of all computerized banking will soon make it possible.

        When the UN can get their hands on the largesse they need to reward and punish countries at will, then all the countries of the world will work in harmony towards our common goals. The Instruments of International Order, such as the Paris Agreement, will then be worth the paper they’re written on, and GRRROWTH(!) will preserve the lifestyle of that much of humanity that most deserves it.

        Help us to achieve that goal, Willard. Then, thanks to green power, you and I may get the chance to meet in a place where darkness never happens. See you there!

    • Curious George

      KenW, did a Public Science give us a transistor?

      • George, “did a Public Science give us a transistor?”
        No, Shockley did, a man whose abhorrent views have no place in today’s society and must be expunged. So there!

        Now we have cutting edge studies that tell us the things we need to know to recognize and avoid problematic behavior in our modern world!

        Read all about it!
        http://phys.org/news/2016-06-disney-princess-culture-magnifies-stereotypes.html?utm_source=menu&utm_medium=link&utm_campaign=item-menu

      • ” George, “did a Public Science give us a transistor?”
        No, Shockley did, a man whose abhorrent views have no place in today’s society and must be expunged. So there!

        Now we have cutting edge studies that tell us the things we need to know to recognize and avoid problematic behavior in our modern world!”
        Lol, well this is a perfect example of what’s wrong with America, and I’m not talking about Shockley.

      • What’s wrong micro? The results are shocking. Do you think she had to fudge her p-values to get that result? Or messed up her excel sheet? I’m inclined to believe that this study is absolutely reproducible.

        many times over in fact:

        ” Coyne has authored more than 80 studies on media influences, gender, aggression and developmental psychology in top peer-reviewed publications. Her work on how profanity in the media increases teen aggression appeared in Pediatrics and another study on how video games can be good for girls was published in the Journal of Adolescent Health.”

      • […] I’m inclined to believe that this study is absolutely reproducible.

        I don’t doubt it; boys are not the same as girls. Since when are profiling and stereotyping allowed? That’s what this whole thing is; I thought that was sexist. And then when boys don’t follow her stupid hypothesis, she makes up gibberish to wave it away.

        “We know that girls who strongly adhere to female gender stereotypes feel like they can’t do some things,” Coyne said. “They’re not as confident that they can do well in math and science. They don’t like getting dirty, so they’re less likely to try and experiment with things.”

        Greater female stereotypical behavior isn’t worrisome for boys because the boys in the study who engaged with Disney Princess media had better body esteem and were more helpful to others. These beneficial effects suggest that princesses provide a needed counterbalance to the hyper-masculine superhero media that’s traditionally presented to boys.

        Read more at: http://phys.org/news/2016-06-disney-princess-culture-magnifies-stereotypes.html#jCp

      • Erase the original government prize used to start Bell Labs, the government reasons why Bell was such a dominant player in the industry, and the scientific burst created by the government and its WW2 initiatives in science, and how big would Bell’s private lab have been after WW2?

      • That’s right JCH. The war was the mother of many things, including the highly developed government complex we have today. Of course, back then we were focused as a nation. Really focused. We had a plan and we knew it. That made it a lot easier to get things done and nobody asked stupid questions on the internet. Everybody knew that the government was on our side and would always tell us the truth. We’ve been drifting ever since. I don’t know what it will take to get us focused like that again, but I’m guessing nothing short of aliens or zombies will do the trick.

      • People then knew the government would not always tell the truth. In the months after Iwo Jima my father, a witness of the 1st flag raising on Mt. Suribachi, wrote scathing letters in which he called the Rosenthal photograph a fake, and he ordered his family to not purchase the war bonds that were promoted by the Rosenthal photograph. So a son writes a book about his father’s participation in the 2nd flag event and the war bond drives, and Clint Eastwood turns it into a movie, and now the son, James Bradley, says his father is most likely not in the Rosenthal photograph.

      • ==> Everybody knew that the government was on our side and would always tell us the truth. ==>

        Wow.

    • Well…

      It is pretty clear that federal agencies should be outright banned from funding policy related research. The government has gotten so good at policy supporting research it is frightening.

      Perhaps the solution is to allow policy related research but require twice as much to be spent on research that presumes the desired policy is wrong.

      Balanced research is cheaper than bad policy.

      For example we should be spending $5 Billion on research to prove CO2 is beneficial, CAGW is impossible or extremely unlikely.

      • I’m in line with you here; I think it is appropriate to duplicate a comment I made at the bottom of this thread:

        It occurs to me that scrutiny is a good thing within science – simply because ideas are corroborated by the severity of the scrutiny they have been exposed to and survived. If an idea does not survive scrutiny – that is a good thing too – simply because that is how we might get rid of flawed ideas.

        This makes me wonder – who gets any social, professional or economic benefits from scrutinizing others work? And who gets government grants to scrutinize whatever might deserve close scrutiny?

      • Guys, I don’t see what’s wrong with the system we have. You write a paper, publish it and if the press and the NGOs and rich and famous people like it, you get written up in the papers and blogs, do interviews on TV and get invited to conferences and cool stuff. If your paper is “problematic” on the other hand, they investigate you and your associates to see if they find any Conflicts of Interest like bad corporate funding or belief in the supernatural or such. If they find something, then you are “Debunked”. Once you’ve been debunked everybody ignores you and won’t shake your hand or anything. And it‘s really hard to get your bunk back, so most people just pack up and do something else. The system regulates itself. We’re doing fine.

      • Exclamation mark!

      • Steven Mosher

        “For example we should be spending $5 Billion on research to prove CO2 is beneficial, CAGW is impossible or extremely unlikely.”

        there are not enough skeptical scientists smart enough to do the work.
        the red team has no bench and can’t even field a team.
        crap… 6 or 7 of them can’t even publish a report on temperature adjustments after a year of work.

      • Steven, the weakest position on the red team is grant applications. Most of them graduated long before it was a requirement. PA has a good idea though, kind of like revenue sharing.

      • or, every grant requires an anti-grant

      • KenW | June 23, 2016 at 1:56 pm |
        or, every grant requires an anti-grant

        With grant equalization for every research there will be an equal and opposite research.

      • A bidding system between grant and anti-grant seekers could be profitable.

        Make money by promising money. Become the middleman who cuts all middlemen. No inventory, no capital investment.

        Let’s call it Über Research.

        No, not catchy enough. We need a proper name, a name that could be branded.

        How about KenW Research?

      • Oh, Willard! Do try to think. What we have now is über Research – way too much and not half of it good. What would it mean to a grant seeker if he/she knew that there would be an opposite grant available if somebody wanted to prove them wrong?

      • > What would it mean to a grant seeker if he/she knew that there would be an opposite grant available if somebody wanted to prove them wrong?

        It would be grants and anti-grants all the way down to the path of GRRRROWTH, KenW.

        The bidding system I had in mind would be researchers bidding for or against a grant. Say there’s a grant (let’s call it G) that a random Gavin (let’s call him G*) would like to have. You or anyone else could bid on the anti-grant so that G* would not have G. G* could bet on G, after which you could counter-bet, etc. At the end of the bidding, we know if we get to give G to G* or not. In any case, a middleman makes profit.

        We could even imagine a system where G is priced dynamically.

        Besides, is G* honest?

      • Not bidders, Willard. Bounty hunters.

      • Gavin’s bounty hunters – I like that, KenW.

        Another market for bounty hunting could be provided by the Freedom Fighters themselves. Think of how costly DavidW’s K-12 program has been for the Heartland Institute. I’m sure we could find content writers to do it for half the price and 50% more “freedom.”

        At this pace, our innovations are progressing as fast as GRRROWTH itself.

      • You’re getting the hang of it Willard. Always look on the bright side even if it’s dark where you are. But I’d have that GRRROWTH looked at, it might be congenital.

      • When you stare into GRRROWTH, KenW, GRRROWTH stares right back at you.

  18. The founders of BlackBerry founded and funded the Perimeter Institute for Theoretical Physics in Waterloo, Ontario. It is exactly the type of institute Dr Curry is discussing for Climate Science, just for theoretical physics instead. The Institute is very successful and has attracted some of the best theoretical physicists in the world. https://www.perimeterinstitute.ca/

    https://en.m.wikipedia.org/wiki/Perimeter_Institute_for_Theoretical_Physics

  19. The key issue is fostering research that ‘sails in uncharted waters’.

    For me, I can’t help but put things together in my head; I always have. Some I work on, others just get put to the side. As a lot of the technology we work with becomes more advanced, the older stuff tends to have spare time or gets cheaper, or you meet someone who does. I’ve been working on surface data for like 8 years now, because I have something I can do it with.
    I agree it does slow the process down a lot though.

  20. There are many straw man arguments in this topic, too numerous to count so I won’t bother.

    In the end, research funding is primarily about “time”, and time is money. Funding agencies want to know “how long will this commitment last…for me?”

    The answer is surprisingly simple. Look to see who is the top administrator. If that administrator is by training or temperament an accountant, then the time interval to do the research is short. This is true in business and in most endeavors of research. The accountant is cutting in the name of “efficiency” which becomes the over-riding consideration. The researcher then lives or leaves within those time parameters set by the administrator.

    “Productivity” is frequently viewed as the number of published papers in significant journals, the number of national and international committee appointments, and a few other measures that become institution specific. Productivity, as in working out a sequence of events that moves understanding another step forward, takes time, and usually lots of time. Again, time is money.

    The patience of the administrator determines how far along a particular scientific thrust proceeds. Then the scientist calibrates how far to take the idea, or quits at a convenient place, publishes some paper, and moves on to something else.

    This all boils down to trust. Does the administrator trust the researcher to do what he/she says they are/will be doing, including listening sympathetically when told: “I’m at a blind end”?

    This also boils down to courage, the courage to fail and start all over again. The courage to recommit to the research process where failure is an everyday event.

    • David Wojick

      What “funding agencies” are you talking about? With US Government funding agencies these issues generally do not arise because timescales are specified before proposals are even made.

  21. RiH, there is much wisdom in what you say. My own life experiences (unfortunately many) suggest a spectrum. On one end is blue sky whatever whenever stuff. Maybe in university labs, not usually corporate labs. Heck, Bell Labs was looking to replace unreliable vacuum tubes when it invented the transistor. It was trying to size switching requirements when Shannon invented information theory. On the other end is ‘this product needs to be in production with minimum whatever functionality by [date]’. Usual stuff. The operational end of R&D is easy to manage. Posted above. Been there, done that. The hard part is closer to the other ‘blue sky’ end. Not whatever whenever (think Darwin’s formulation of evolution), but still ‘hard’ mainly unsolved science problems like blue OLEDs. As you astutely point out, the working answer is to lose the accountants at the beginning-heck, make them write the whole thing off now. (That bluff usually works to scare them off.) Make sure the multiyear budget is so generous that it will never pinch so you never see finance again after the initial battle is decided (sometimes bloodily at BoD level). (My rule of thumb evidenced above was 2x.) And then, as admin, use the lightest touch possible. For true creative scientists, this actually spurs them on. The anxiety about maybe failing and not being called on it (wasting years of their possible production) is enormously motivating for the rare few who are Secretariats and not plow horses.
    And as you point out, willingness to listen and realize it just is not happening is essential. Hardest thing, cause we all have hopes and like to succeed. Harder for admin than producers. Cause finance will be back in your face the next time. When you do that, how you do that, and consequences are among the most important management art forms. Stuff they try but fail to teach at HBS, because no matter how well written and taught the case, it isn’t you in your emotional gut in real time.

    • ristvan

      Funny you mention HBS. The thinking goes: “If I am elite, then whatever I touch turns to gold.” Kinda like Midas and his (researcher) daughter.

      • RiHo08,

        I wonder if it also applies to “I must be smart because I receive so much money.”

        This would solve many problems. If something novel is needed, just pay someone enough money that their intelligence would be raised to the point that they could invent what was necessary! They wouldn’t even need education – they’d automatically become an auto-didact.

        Of course, it’s nonsensical. Widely believed in capitalist economies, where executives in loss-making companies get even bigger remuneration packages. I assume this is to make them intelligent enough to stop making the losses, which they were responsible for earlier, due to not being paid enough.

        Pay peanuts, you get monkeys. Pay millions, they’re still monkeys. Napoleon said every foot soldier carries a field marshal’s baton in his knapsack. The problem is that you never know which one is needed in any given conflict situation.

        Look at what happened to Napoleon. He just happened to be carrying the wrong baton at the time.

        Cheers.

      • This is just One view of life today…

        Jeremiah 5:28 They are waxen fat, they shine: yea, they overpass the deeds of the wicked: they judge not the cause, the cause of the fatherless, yet they prosper; and the right of the needy do they not judge.

        Reads like it was written for all of us today, but really it was written way back in the day. The people move around some more and here we are again.

      • Unbelief.

      • Arch Stanton,

        Sounds like the King James authorised version. Interesting to note that all the different versions give different expressions to what is notionally the word of God (to Protestant Christians, of course).

        I prefer the King James Version myself, but if I may be permitted a small pun, God alone knows why!

        I intend no offense – your faith is your own. Consider me a harmless heretic, or an ignorant heathen, if you wish.

        Cheers.

      • Poke & A Hope.

      • QED it is, the power of 1 over 0 after all…

  22. 4TimesAYear

    Reblogged this on 4timesayear's Blog.

  23. It occurs to me that scrutiny is a good thing within science – simply because ideas are corroborated by the severity of the scrutiny they have been exposed to and survived. If an idea does not survive scrutiny – that is a good thing too – simply because that is how we might get rid of flawed ideas.

    This makes me wonder – who gets any social, professional or economic benefits from scrutinizing others work? And who gets government grants to scrutinize whatever might deserve close scrutiny?

    • Steven Mosher

      expect congress to call you to testify, what a breakthrough!

      • Not even “Post-Normal” science. Just simple communication:

        “A picture is worth a thousand words.”

        But if our hostess, or anybody else, wants to use it (testimony or anything else), notice the little “copyleft” symbol in the corner.

      • Anastasios Tsonis, distinguished professor at the University of Wisconsin – Milwaukee, believes the pause will last much longer than that. He points to repeated periods of warming and cooling in the 20th century.

        ‘I know that the models are not adequate … they don’t agree with reality.’

        – Anastasios Tsonis, distinguished professor at the University of Wisconsin – Milwaukee

        “Each one of those regimes lasts about 30 years … I would assume something like another 15 years of leveling off or cooling,” he told Fox News.

        Chaos dawg bites master.

      • Chaos dawg bites master.

        All from one year of warming?

        Think how warmists insisted that a 5 year, no 7 year, no 11 year, no whatever year pause didn’t make any difference to their “science”.

        Anyway, if this period doesn’t last 30 years, that’s just input to the science. How these periods interact with the effects of GHG’s remains an open question.

      • Not to mention that if you actually read his paper, rather than a (probably) cherry-picked statement from a biased news agency, you’ll find this:

        Using a new measure of coupling strength, this update shows that these climate modes have recently synchronized, with synchronization peaking in the year 2001/02. This synchronization has been followed by an increase in coupling. This suggests that the climate system may well have shifted again, with a consequent break in the global mean temperature trend from the post 1976/77 warming to a new period (indeterminate length) of roughly constant global mean temperature. [my bolds]

        Anybody who understands science understands that such predictions are highly tentative, whether organizations like the IPCC admit it or not.

        Furthermore, that interview was as of 2013, and his issue was that the IPCC refused to include his perfectly good science in their agenda-driven pseudo-science. Whether or not it turns out to be correct, it was consistent with observation at that time, and deserved to be included in what was supposedly the “best science we have” at that time.
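
        For readers wondering what “coupling” and “synchronization” of climate modes mean operationally, here is a crude toy proxy: mean absolute pairwise correlation in a sliding window, applied to synthetic index series. It is emphatically not the network measure Tsonis and colleagues actually use; it only conveys the flavour of the idea.

        import numpy as np

        rng = np.random.default_rng(1)

        # Toy proxy for "coupling" between climate modes, on synthetic index series.
        # Not the measure used by Tsonis et al.
        n_months, n_modes = 600, 4
        shared = np.cumsum(rng.normal(0.0, 1.0, n_months))   # common slow signal
        indices = np.array([0.3 * shared + rng.normal(0.0, 1.0, n_months)
                            for _ in range(n_modes)])

        def mean_abs_corr(window):
            corr = np.corrcoef(window)
            return np.abs(corr[np.triu_indices(n_modes, k=1)]).mean()

        win = 120   # ten-year windows of monthly data
        coupling = [mean_abs_corr(indices[:, i:i + win]) for i in range(n_months - win)]
        print(f"windowed coupling ranges from {min(coupling):.2f} to {max(coupling):.2f}")
        # Peaks in a statistic like this are, loosely, what "the modes have synchronized"
        # refers to; the published analysis is far more involved.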

      • AK,

        “A picture is worth a thousand words.”

        An ignorantly wrong picture is worth a thousand nothings. FAR WGI Chapter 2:

        It is recognised that the emissions of a number of trace gases, including NOx, carbon monoxide, methane and other hydrocarbons, have the potential to influence the distribution of tropospheric ozone. It is not straightforward to estimate the greenhouse warming potential of these indirect effects because changes in tropospheric ozone depend, in a complex and non-linear manner, on the concentrations of a range of species. The limited spatial resolution in current tropospheric chemistry models means that estimates of increased tropospheric ozone production are highly model-dependent. Furthermore, the radiative impacts of tropospheric ozone changes depend markedly on their spatial distribution.

        […]

        In addition to the effects from other forcings that oppose or reinforce the greenhouse gas forcing, there are also decadal-scale climate changes that can occur without any changes in the radiative forcing. Non-linear interactions in the Earth-ocean-atmosphere system can result in unforced internal climatic variability (see e.g. Section 6.5.2). As a result of the combined effects of forced and unforced effects on climate, a range of unpredictable variations of either sign will be superimposed on a trend of rising temperature.

        TAR WGI Chapter 2:

        But even without changes in external forcing, the climate may vary naturally, because, in a system of components with very different response times and non-linear interactions, the components are never in equilibrium and are constantly varying. An example of such internal climate variation is the El Niño-Southern Oscillation (ENSO), resulting from the interaction between atmosphere and ocean in the tropical Pacific.

        Feedbacks and non-linearities

        The response of the climate to the internal variability of the climate system and to external forcings is further complicated by feedbacks and non-linear responses of the components.

        […]

        Many processes and interactions in the climate system are non-linear. That means that there is no simple proportional relation between cause and effect. A complex, non-linear system may display what is technically called chaotic behaviour. This means that the behaviour of the system is critically dependent on very small changes of the initial conditions. This does not imply, however, that the behaviour of non-linear chaotic systems is entirely unpredictable, contrary to what is meant by “chaotic” in colloquial language. It has, however, consequences for the nature of its variability and the predictability of its variations. The daily weather is a good example. The evolution of weather systems responsible for the daily weather is governed by such non-linear chaotic dynamics. This does not preclude successful weather prediction, but its predictability is limited to a period of at most two weeks. Similarly, although the climate system is highly nonlinear, the quasi-linear response of many models to present and predicted levels of external radiative forcing suggests that the large-scale aspects of human-induced climate change may be predictable, although as discussed in Section 1.3.2 below, unpredictable behaviour of non-linear systems can never be ruled out.

        […]

        The effect of aerosols

        The effect of the increasing amount of aerosols on the radiative forcing is complex and not yet well known. The direct effect is the scattering of part of the incoming solar radiation back into space. This causes a negative radiative forcing which may partly, and locally even completely, offset the enhanced greenhouse effect. However, due to their short atmospheric lifetime, the radiative forcing is very inhomogeneous in space and in time. This complicates their effect on the highly non-linear climate system. Some aerosols, such as soot, absorb solar radiation directly, leading to local heating of the atmosphere, or absorb and emit infrared radiation, adding to the enhanced greenhouse effect.

        […]

        Climate response

        The increase in greenhouse gas and aerosol concentrations in the atmosphere and also land-use change produces a radiative forcing or affects processes and feedbacks in the climate system. As discussed in Chapter 7, the response of the climate to these human-induced forcings is complicated by such feedbacks, by the strong non-linearity of many processes and by the fact that the various coupled components of the climate system have very different response times to perturbations. Qualitatively, an increase of atmospheric greenhouse gas concentrations leads to an average increase of the temperature of the surface-troposphere system. The response of the stratosphere is entirely different. The stratosphere is characterised by a radiative balance between absorption of solar radiation, mainly by ozone, and emission of infrared radiation mainly by carbon dioxide. An increase in the carbon dioxide concentration therefore leads to an increase of the emission and thus to a cooling of the stratosphere. The only means available to quantify the non-linear climate response is by using numerical models of the climate system based on well-established physical, chemical and biological principles, possibly combined with empirical and statistical methods.

        […]

        The nucleus of the most complex atmosphere and ocean models, called General Circulation Models (Atmospheric General Circulation Models (AGCMs) and Ocean General Circulation Models (OGCMs)) is based upon physical laws describing the dynamics of atmosphere and ocean, expressed by mathematical equations. Since these equations are non-linear, they need to be solved numerically by means of well-established mathematical techniques.

        […]

        Predictability, global and regional

        In trying to quantify climate change, there is a fundamental question to be answered: is the evolution of the state of the climate system predictable? Since the pioneering work by Lorenz in the 1960s, it is well known that complex non-linear systems have limited predictability, even though the mathematical equations describing the time evolution of the system are perfectly deterministic.

        The climate system is, as we have seen, such a non-linear complex system with many inherent time scales. Its predictability may depend on the type of climate event considered, the time and space scales involved and whether internal variability of the system or variability from changes in external forcing is involved. Internal variations caused by the chaotic dynamics of the climate system may be predictable to some extent. Recent experience has shown that the ENSO phenomenon may possess a fair degree of predictability for several months or even a year ahead. The same may be true for other events dominated by the long oceanic timescales, such as perhaps the NAO. On the other hand, it is not known, for example, whether the rapid climate changes observed during the last glacial period are at all predictable or are unpredictable consequences of small changes resulting in major climatic shifts.

        Plenty more where that came from, but this should about do it. Point being, I think it’s fair to say they’ve read their Lorenz and are up to speed on deterministic non-linear chaos and that your cartoon is bullcrap.

      • brandonrgates,

        You wrote –

        “Point being, I think it’s fair to say they’ve read their Lorenz and are up to speed on deterministic non-linear chaos . . . “

        You are correct.

        The IPCC wrote –

        “The climate system is a coupled non-linear chaotic system, and therefore the long-term prediction of future climate states is not possible.”

        They then go on to ignore the reality which they have acknowledged, and burst into a perfect explosion of predictions, forecasts, scenarios, outcomes and the like.

        None of which are any more reliable than a 12 year old child with a pencil, a ruler, and historical data.

        Notice the IPCC –

        “. . . the long-term prediction of future climate states is not possible.” No hedging, or silly “very likely” assessments. Not possible.

        Go on. Tell me that the IPCC only put that in to confound skeptics. Tell me it’s unimportant, that the rest of the report needs to be considered. Tell me they didn’t really mean what they wrote. Maybe it was Warmist truthiness.

        Warmist version of chaos – “I see it, but I don’t believe it. I’ll just ignore it, and pretend it doesn’t exist. If that fails, I’ll just deny it, divert the discussion to another subject, or confuse everyone by redefining the term.”

        Fat headed fools might be an apt description, but fools might take offense at being lumped in with climatologists.

        Cheers.

      • Plenty more where that came from, but this should about do it. Point being, I think it’s fair to say they’ve read their Lorenz and are up to speed on deterministic non-linear chaos and that your cartoon is bullcrap.

        That’s like saying a modern evolutionary biologist is “up to speed” because he’s “read his Darwin”.

        Simple challenge: compare the SPM from the 1st report (“FAR”) and the 4th (“AR4”). Notice how the supposed certainty has increased, despite the fact that the science has not advanced.

        Another challenge: find me a reference to spatio-temporal chaos in the AR4/WGI report. (I’m not saying there isn’t one, but I’m not going to look for one then be accused of straw-man arguments when I examine context and tear it apart.)

        And BTW, don’t hold me, or any skeptic, responsible for Flynn. He’s been outed.

    • I like it. Better than spaghetti.

      • Thx. Notice how the line for Real Climate Science abruptly gets thinner after the founding of the IPCC.

        This communicates my notion that the IPCC “hijacked” science for its own purposes, with most of the grants being channeled to running GCM’s in support of its “increasing certainty”.

        At the expense of real science.

    • AK,

      That’s like saying a modern evolutionary biologist is “up to speed” because he’s “read his Darwin”.

      Accompanied by the excerpted passages from the IPCC itself, I think it’s a suitable response to a cartoon which might mislead the incidentally ignorant to believe that the IPCC thinks climate responses to external forcing are strictly linear. I can’t do much about willful ignorance.

      Simple challenge: compare the SPM from the 1st report (“FAR”) and the 4th (“AR4”). Notice how the supposed certainty has increased, despite the fact that the science has not advanced.

      The “science has not advanced” is rather vague. Providing direct quotes with citations to the source is a good practice.

      Another challenge: find me a reference to spatio-temporal chaos in the AR4/WGI report.

      Searching on all three terms is rather limiting, most of the hits are in TAR. Chapter 2 of the WGI technical summary contains all three terms, albeit not in the same block of text:

      Changes have occurred in several aspects of the atmosphere and surface that alter the global energy budget of the Earth and can therefore cause the climate to change. Among these are increases in greenhouse gas concentrations that act primarily to increase the atmospheric absorption of outgoing radiation, and increases in aerosols (microscopic airborne particles or droplets) that act to reflect and absorb incoming solar radiation and change cloud radiative properties. Such changes cause a radiative forcing of the climate system.[1] Forcing agents can differ considerably from one another in terms of the magnitudes of forcing, as well as spatial and temporal features. Positive and negative radiative forcings contribute to increases and decreases, respectively, in global average surface temperature. This section updates the understanding of estimated anthropogenic and natural radiative forcings.

      […]

      Uncertainties can be classified in several different ways according to their origin. Two primary types are ‘value uncertainties’ and ‘structural uncertainties’. Value uncertainties arise from the incomplete determination of particular values or results, for example, when data are inaccurate or not fully representative of the phenomenon of interest. Structural uncertainties arise from an incomplete understanding of the processes that control particular values or results, for example, when the conceptual framework or model used for analysis does not include all the relevant processes or relationships. Value uncertainties are generally estimated using statistical techniques and expressed probabilistically. Structural uncertainties are generally described by giving the authors’ collective judgment of their confidence in the correctness of a result. In both cases, estimating uncertainties is intrinsically about describing the limits to knowledge and for this reason involves expert judgment about the state of that knowledge. A different type of uncertainty arises in systems that are either chaotic or not fully deterministic in nature and this also limits our ability to project all aspects of climate change.

      Searching on “spatial temporal” (w/o quotes) and limiting to AR4 generates 114 results. Similarly “chaos” for AR4 generates 7 results, “chaotic” 19. All result lists include dupes due to Russian and French versions.

      Annoyingly, their search engine doesn’t give an option for AR5 only. Must I really go through the brain damage of a URL-specific Google search to find the relevant terms?

      I promise you that guys like Kevin Trenberth haven’t forgotten about the chaotic and non-linear nature of weather, and thence climate … Lorenz was his doctoral advisor. Not that I’d make too much ado about that, mind … after all, John Christy did his PhD under Trenberth.

      (I’m not saying there isn’t one, but I’m not going to look for one then be accused of straw-man arguments when I examine context and tear it apart.)

      Again, providing direct quotes with links to the original document helps establish context. Paraphrasing without citations is about the best way to get drilled for strawmanning save for simply making crap up.

      And BTW, don’t hold me, or any skeptic, responsible for [Mike F.]. He’s been outed.

      I try, but do not always succeed, to not lump you people into the same bucket …. :-P

      The thought had occurred that he’s a Poe, but actually outing him would minimally require dox I’d think.

      • I think it’s a suitable response to a cartoon which might mislead the incidentally ignorant to believe that the IPCC thinks climate responses to external forcing are strictly linear.

        I’d say the misleading is yours. Such “incidentally ignorant” probably wouldn’t know, or care, about the distinction between thinking “climate responses to external forcing are strictly linear” and not understanding the implications of the non-linearity in the global system.

        Either way, the IPCC’s “climate science” is nothing but pseudo-scientific claptrap, due to that lack of understanding.

        How many of the IPCC “functionaries” (“Coordinating Lead Authors and Lead Authors”) would do more than stare at you blankly if you asked them to discuss the difference between temporal chaos and spatio-temporal chaos?

        Of those who even know there’s a distinction, how many could define it? Of those, how many actually have some idea of the implications?

        The “science has not advanced” is rather vague. Providing direct quotes with citations to the source is a good practice.

        Waste of time.

        Anybody who’s been around here for very long knows that the IPCC science has failed to narrow the PDF for the mythical “equilibrium climate sensitivity” for 2½ decades.

        And you’ve proven you don’t follow links anyway: you didn’t follow the link I gave above, and you didn’t follow the one I gave you earlier. How do I know?:

        Another challenge: find me a reference to spatio-temporal chaos in the AR4/WGI report.

        Searching on all three terms is rather limiting, most of the hits are in TAR. Chapter 2 of the WGI technical summary contains all three terms, albeit not in the same block of text:

        I can imagine Jim D or Steven Mosher *face:desk*. Your ignorance is astounding, Mr. “I’m not going to do your homework for you.”

        I promise you that guys like Kevin Trenberth haven’t forgotten about the chaotic and non-linear nature of weather, and thence climate … Lorenz was his doctoral advisor.

        Perhaps. At least he understands he has to do some arm-waving to dismiss it.

        Paraphrasing without citations is about the best way to get drilled for strawmanning save for simply making crap up.

        So am I making things up when I suggest that the IPCC didn’t even bother with an arm-wave to dismiss spatio-temporal chaos? That they figured they could just pretend a whole field of science doesn’t exist?

        The thought had occurred that he’s a Poe, but actually outing him would minimally require dox I’d think.

        He’s either a false-flagger pretending to be a nut, or he’s the real thing. “The earth has been cooling for 5 billion years.” Pfui!

      • lMAO. Bad climate scientists. Stewpidd climate scientists. You’re a pointless Kartoon.

      • Spatio-temporal chaos by Tomas Milanovic

        There are scientists who equate chaos to randomness. I’d put that category at 90%.

        There are scientists who equate chaos with Lorenz. They have seen the butterfly attractor picture one day or the other. They know that chaos is not randomness but not much more. I’d put that category at 9%.

        There are then scientists who know what is chaos and really understand it. I’d put that category at 1% and much less for the climate scientists.

      • Did you hear the one where Steven gets the memo, goes to the debate, and there was no one there who cared anymore?

  24. micro6500,

    [posted out of sequence for scroll]

    Apparently your June 23, 2016 at 9:55 am post went to moderation, or else I plain missed it. Either way, I didn’t see it until this morning. Thanks for the plots.

    First totally wrong, my anomaly is the day to day change, but from the specific station, which reduces measurement uncertainty […]

    I don’t see where I’m wrong; I wrote, “you’re calculating day-on-day differences”. Since measurement uncertainty exists at the station level, I also don’t see how you’re doing anything to reduce it.

    […] second the noise comment, if I was measuring weather noise max temp would have the same undulations as min, as you can see it doesn’t.

    https://micro6500blog.files.wordpress.com/2015/03/global.png

    Why does the min anomaly show so much more variability than the max anomaly? If that were real, wouldn’t you expect the average anomaly to show much more variability than the max anomaly?

    These are the two plots I’m remembering from prior discussions:

    http://content.science20.com/files/images/SampleSize_1.jpg

    http://content.science20.com/files/images/GB%20Mn%20Mx%20Diff_1.png

    From 1967-1975 there are abrupt changes in the number of observations, corresponding to big swings in DIFF. The big spike in MXDIFF also looks spurious all on its own. Both features scream artifacts of processing. As in not real world phenomena.

    In your more recent plot shown above, the Min Temp Anomaly curve shows most of its variability right through the same time interval when number of obs is skipping around. My “weather noise” argument is probably more correctly stated as coverage (sampling) bias. When the mix of stations changes — especially abruptly — you are in essence giving more (or less) weight to the local weather where those stations were added (or taken away).

    I’ll give you an extreme example. According to Wikipedia, mean diurnal temperature ranges from 13°C to 20°C depending on time of year. In the Amazon, which is at a similar near-equatorial latitude (albeit south instead of north), the mean DTR is between 2°C and 5°C. Location matters. A cluster of observations in a particular region is very likely going to bias global results unless you control for it. Gridding is a popular method. Area weighting grids to compute a regional or global average is considered essential.
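
    A minimal sketch of what I mean by cosine-latitude area weighting, in Python; the 36 x 72 grid and the random anomalies are made up purely for illustration, not taken from any real product:

    import numpy as np

    def area_weighted_mean(anom, lats):
        # anom: (nlat, nlon) array of anomalies, NaN where a cell has no data
        # lats: (nlat,) array of cell-centre latitudes in degrees
        w = np.cos(np.radians(lats))[:, None] * np.ones_like(anom)  # cell weight ~ cos(latitude)
        have_data = ~np.isnan(anom)
        return np.nansum(anom * w) / np.sum(w[have_data])

    lats = np.linspace(-87.5, 87.5, 36)             # 5-degree cells, toy grid
    anom = np.random.normal(0.5, 1.0, (36, 72))     # made-up anomalies
    print(round(area_weighted_mean(anom, lats), 3))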

    […] it isn’t global changes, they are driven by regional changes […]

    There’s little dispute in literature over the existence of regional variability. There’s also little dispute in literature that the system has gained energy, particularly since the 1960s as manifest by the OHC estimates you so casually dismiss. Here it is down to 2 km with error estimates:

    https://www.nodc.noaa.gov/OC5/3M_HEAT_CONTENT/heat_content2000mwerrpent.png

    Secular trend well outside the uncertainty envelope. Here are some calcs based on that data over the entire interval:

    5.47E21 J/yr / 3.16E07 s/yr = 1.73E14 J/s (W)
    1.73E14 W / 5.10E14 m^2 = 0.34 W/m^2

    The lower bound for surface imbalance in Stephens et al. (2012) is 0.43 W/m^2. Assuming the IPCC is correct and that > 90% of heat is going into the oceans, 0.34 / 0.9 = 0.38 W/m^2. Also noting that the upper 2 km of the ocean is only about half of the oceans’ total mass, being within 0.05 W/m^2 of Stephens’ lower bound with just this crude estimate does give me some additional confidence that the whole thing isn’t made up.
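
    Spelled out in a few lines of Python, same arithmetic and same constants as above (nothing here is re-derived from the NODC data itself):

    SECONDS_PER_YEAR = 3.16e7      # ~365.25 * 86400
    EARTH_SURFACE_M2 = 5.10e14     # total surface area of the Earth
    OCEAN_FRACTION = 0.9           # IPCC: > 90% of added heat goes into the oceans

    ohc_trend = 5.47e21            # 0-2000 m OHC trend, J/yr, as quoted above

    watts = ohc_trend / SECONDS_PER_YEAR      # ~1.73e14 W
    flux = watts / EARTH_SURFACE_M2           # ~0.34 W/m^2 over the whole planet
    implied_total = flux / OCEAN_FRACTION     # ~0.38 W/m^2 total surface imbalance
    print(round(flux, 2), round(implied_total, 2))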

    The trend lines for the derivative on both min and max temp when you include measurement uncertainty is 0.0F +/-0.1F for 1950 to 2013

    Mmmhmm. A zero linear trend over a derivative plot would only tell us that the rate of change is constant over the entire interval. That tells us nothing about the rate of change itself … it could be negative, positive or exactly zero.

    Let’s consider the theoretical forcing due to CO2 over roughly the same interval, May 1958 through May 2016, according to Keeling’s Mauna Loa curve. Using 5.35 W/m^2 * ln(CO2/280) I get a linear trend over the entire interval of 2.3E-02 W/(m^2 yr). The slope of the month on month change is 4.1E-05 W/(m^2 yr^2).

    For May 1980 through May 2016, the figures are 2.6E-02 W/(m^2 yr) and 6.1E-05 W/(m^2 yr^2). So yes, some very slight acceleration in theoretical forcing due to CO2. Considering everything else going on in the system, you’re probably not going to find that acceleration looking solely at surface temperatures, and certainly not how you’re doing it.
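
    For reference, this is roughly how I compute those numbers; co2_ppm is assumed to be the monthly Mauna Loa series (loading it is left out here, so the last two lines are just indicative):

    import numpy as np

    def forcing(co2_ppm):
        # simplified expression F = 5.35 * ln(C / 280), in W/m^2
        return 5.35 * np.log(np.asarray(co2_ppm) / 280.0)

    def linear_trend_per_year(y, samples_per_year=12):
        # least-squares slope of a monthly series, in units of y per year
        t = np.arange(len(y)) / samples_per_year
        return np.polyfit(t, y, 1)[0]

    # trend = linear_trend_per_year(forcing(co2_ppm))                # W/(m^2 yr)
    # accel = linear_trend_per_year(np.diff(forcing(co2_ppm)) * 12)  # W/(m^2 yr^2)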

    What you might have better luck reconciling is this:

    5.35 W/m^2 * ln(402.3 / 315.5) * 0.8 K m^2/W = 1.0 K. According to HADCRUT4 (trailing decadal means), the GMST change has been 0.63 K. Stephens et al. (2012) give the current energy balance as 0.6 W/m^2 as their central estimate, multiply that by 0.8 K m^2/W = 0.48 K. 1.0 K – 0.48 K = 0.52 K, or 0.11 K less than observed. Or you could use Stephens’ lower bound of 0.43 W/m^2, the answer there works out to 0.66 K or 0.03 K higher than observed.
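
    In code form, with the same inputs (note that I rounded the ~1.04 K down to 1.0 K above before subtracting, so this comes out a few hundredths of a degree higher):

    from math import log

    LAMBDA = 0.8                                     # K per W/m^2, assumed transient response
    committed = 5.35 * log(402.3 / 315.5) * LAMBDA   # ~1.04 K for the CO2 rise since 1958

    for imbalance in (0.6, 0.43):                 # Stephens (2012) central / lower bound, W/m^2
        unrealized = imbalance * LAMBDA           # warming still "in the pipeline"
        print(imbalance, round(committed - unrealized, 2), "K expected vs ~0.63 K observed")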

    That really should be the end of the discussion.

    • Erratum: According to Wikipedia, mean diurnal temperature ranges from 13°C to 20°C in the Sahara Desert …

    • Hmmm, you totally misunderstand most of it. If these are issues, I have to rewrite how I explain what I’m doing and why everything you mentioned does not diminish the fact that CO2 has an insignificant effect on the temperature record in question.

    • I don’t see where I’m wrong; I wrote, “you’re calculating day-on-day differences”. Since measurement uncertainty exists at the station level, I also don’t see how you’re doing anything to reduce it.

      Yes I calculate the day to day difference, just as everyone else calculates a daily anomaly. For the yearly data product I then average them together by year, just like how everyone else gets a yearly value. I also make a daily product, but what I use that for is to get the slope of the change in temp as length of day changes; I take the linear trend of that slope. While you can look at the data as an anomaly, I think it’s more revealing as a derivative, where it shows the atm changes temperature extremely quickly, on all time scales.
      And you don’t see the error reduction because you’re not thinking about it. I wrote my code to allow me to quantify how many samples a station had to include; I have done runs from 240 days per year and up, and in general the signal is the same, the averages are slightly different. These are 360 days, but almost every station that collects 360 days collects the full year.
      I am basically creating an anomaly based on that station’s change, Tmin day2 – Tmin day1; it’s also a derivative, and can be considered as either.
      Now Tmin day 1 is, according to NOAA/NCDC, +/-0.1. So the uncertainty in the first Min Diff is 2x +/-0.1, right?
      Then the next one is 2x +/-0.1, but I’m not doing a bunch of pairs. I’m doing a correlated string of numbers: while day 1 is +/-0.1 and day 3 is +/-0.1, day 2 can not be both +0.1 and -0.1, so that uncertainty is divided in 2. The uncertainty for a string of 365 days is something like 2x +/-0.1 + +/-0.1/364.
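
      A quick Monte Carlo sketch of that cancellation (treating the quoted +/-0.1 as a 1-sigma random error is an assumption, purely for illustration): each intermediate day enters two consecutive differences with opposite sign, so the sum of day-to-day differences telescopes to last-minus-first and only the endpoint errors survive.

      import numpy as np

      rng = np.random.default_rng(0)
      sigma, n_days, n_trials = 0.1, 365, 10_000    # +/-0.1 treated as 1-sigma (assumption)

      true_t = 15 + 10 * np.sin(2 * np.pi * np.arange(n_days) / 365)   # toy seasonal cycle
      measured = true_t + rng.normal(0, sigma, (n_trials, n_days))     # independent daily errors

      sum_of_diffs = np.diff(measured, axis=1).sum(axis=1)   # telescopes to day365 - day1
      print("spread of summed diffs:", round(sum_of_diffs.std(), 3),
            "vs sqrt(2)*sigma =", round(np.sqrt(2) * sigma, 3))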

      From 1967-1975 there are abrupt changes in the number of observations, corresponding to big swings in DIFF. The big spike in MXDIFF also looks spurious all on its own. Both features scream artifacts of processing. As in not real world phenomena.

      That’s the data NOAA has for those years. It’s the global data; do you not think that the other data products suffer the same increase in uncertainty?
      So yes, I could adjust it, but do you learn more about the data when you see the real data, or when they make it better? Also, the whole area weighting: I could do that, but it does hide how little data some areas actually have. I do break the world into small parts that could be used as an area weighted value, but I figure if an area has only 1 station and the next one has 10, we know the value of the second area far better than the first with a single station.
      It is an intentional design choice; I wanted to see the measurements, not the made up junk.

      In your more recent plot shown above, the Min Temp Anomaly curve shows most of its variability right through the same time interval when number of obs is skipping around. My “weather noise” argument is probably more correctly stated as coverage (sampling) bias. When the mix of stations changes — especially abruptly — you are in essence giving more (or less) weight to the local weather where those stations were added (or taken away).

      Since in the yearly product a station is only included if it has a full year, that is not weather. And the areas that are well sampled have less uncertainty, which is basically the US. The rest of the planet gets worse and worse, and you’re right, before about 1980 the coverage is poor; it’s the data that exists. But you don’t make up data to compensate for it.

      I’ll give you an extreme example. According to Wikipedia, mean diurnal temperature ranges from 13°C to 20°C depending

      Another stupid parameter: it’s the range starting at midnight to midnight. WTF is that? It’s half of yesterday and half of tomorrow; it’s backwards.
      That’s why I span from min to min for a day, from when the sun comes up and it starts to warm, to when it comes up the following day, so I see how much warming the Sun caused, and whether it all cools off or not. And surprise, surprise: as the day gets longer, it doesn’t cool all the way back down, and it warms up; then as the days start to grow shorter, the ground cools a little more. It isn’t a matter of air temps, which can drop well over 5 degrees an hour, and yet the ground (dirt, concrete, asphalt) is still warm after 12 hours of clear-sky cooling. The average daily rising temp is ~18F; it’s been a little higher, and a little lower. I think my page does have a chart from the US SW Desert stations BTW. Also, these charts represent a small fraction of the data that is generated in my code; many of your complaints would be answered if you actually did more than look at the pictures with your preconceptions.

      Mmmhmm. A zero linear trend over a derivative plot would only tell us that the rate of change is constant over the entire interval. That tells us nothing about the rate of change itself … it could be negative, positive or exactly zero.

      What? Slope is the linear trend: a positive number is a rising slope, a slope of 0 means there is no trend, and what the derivative is used for is to determine the temperature slope for that station or collection of stations. I sum or average (I’ve done both) the daily positive slopes (based on Trise) with all of the negative slopes (Tfall), or I take this remainder for about 4-5 months, as it changes as the length of day changes for both warming and falling, and measure that slope.

      Lastly, I’m not really trying to measure the change in “GAT”; there’s a bunch of people already doing that. What I have shown is that the atm is capable of very high rates of cooling, that the ground (as listed above) and clouds have a forcing that is 10 to 20 times the forcing from CO2, that come winter any residual heat from CO2 has been released to space, and that the changes in surface temperature being used as evidence of CO2 warming aren’t from a slight trend; they’re from large swings in Min temp affecting daily mean temp and then polluting the GAT record.
      CO2 can’t explain that even if there is a slight CS; it’s so far in the weeds compared to what’s actually affecting temps.
      What is driving the surface record is tropically warmed water vapor moving poleward to cool and warming the surface as it goes. I think this is being driven by something like Judith’s Stadium wave, ocean currents moving warm water around.

      Oh, and OHC: the surface record is not very good, but OHC is far, far worse, i.e. it’s made up.

  25. Just in case you’re sure that peer reviewed papers in prestigious journals are always useful.

    From Nature –

    “A leading scientific publisher has retracted 64 articles in 10 journals, after an internal investigation discovered fabricated peer-review reports linked to the articles’ publication.”

    These are only a handful. The whole peer review system is riddled with defects. One wonders how the thousands of papers retracted – some reluctantly – by reputable journals ever managed to get published in the first place.

    Could the overriding imperative to maintain the billions in profits generated have an impact? Maybe crass commercial considerations are influencing Dr Frankenstein these days.

    Cheers.

    • Just in case you’re sure that peer reviewed papers in prestigious journals are always useful.

      Putting words in my mouth isn’t honest behaviour, Mike. I never once said that peer review catches everything, or that every article published in premier journals is *always* useful. What I *am* arguing is that for-profit journals with a reputation to protect are motivated to not habitually publish crap: their entire business model depends on it.

      From Nature –

      “A leading scientific publisher has retracted 64 articles in 10 journals, after an internal investigation discovered fabricated peer-review reports linked to the articles’ publication.”

      Full article is here, thanks ever so much for a proper citation.

      These are only a handful.

      Yup. Springer’s statement about the retractions gives no indication of the publication timespan of the retracted articles. The percentage of retractions vs. published articles over that period of time would be an informative metric.

      So would the fields of study.

      The whole peer review system is riddled with defects.

      Human beings are riddled with defects, that’s why mechanisms like peer review are essential.

      One wonders how the thousands of papers retracted – some reluctantly – by reputable journals ever managed to get published in the first place.

      One supposes that even honest referees overlook things.

      Could the overriding imperative to maintain the billions in profits generated have an impact?

      You’re already complaining about the publicly-funded research itself. Surely you don’t expect everyone involved in the pursuit of science to work for free.

      Maybe crass commercial considerations are influencing Dr Frankenstein these days.

      And maybe you’re just endlessly speculating because you know that you don’t have any hard evidence to make an accusation that will stick.

      If Springer et al. are so powerfully and universally corrupt, why is it that you’re even reading these articles — published by the journals themselves — to begin with, hmm?

      Why could not crass commercial consideration lead to competing major journals exposing their competitors’ malfeasance, hmm?

      Why don’t teh modulz better fit the falsified observational records, hmm?

      • brandonrgates,

        You may have overlooked the fact that my comment wasn’t for you.

        Its position in the threading, and the fact that I didn’t refer to you, might have been clues.

        However, you make a couple of points that I might need to comment on. You wrote –

        “Surely you don’t expect everyone involved in the pursuit of science to work for free.” Eh? The big publishers charge the authors substantial fees to publish, expect reviewers to work for free, charge readers substantial fees to read the content that the publisher got for nothing, and profit hugely therefrom. Publishers working for free? Really? And don’t call me Shirley! (That’s a joke, by the way).

        Why do I read journal articles? Because I choose to. I even read a paper submitted by Steven Mosher to a predatory journal, out of curiosity. On occasion, unlike many Warmists, I change my mind after reading about new research. What about you?

        I presume “teh modulz” is Warmese for “the models”, and the answer to your question (with or without the information-free “hmm”) is that the models are completely useless. They won’t fit unfalsified records either. Give it a try. You obviously don’t acknowledge that the IPCC said that climate prediction is impossible. Deny, divert, confuse.

        Still no CO2 induced heating, is there? None at all. What’s the song – “Wishing, and a’hoping . . .”

        Cheers.

  26. Mike,

    [posted out of sequence for scroll]

    They then go on to ignore the reality which they have acknowledged, and burst into a perfect explosion of predictions, forecasts, scenarios, outcomes and the like.

    Your habit of not making proper citations is annoying. Here’s the full block of text:

    Improve methods to quantify uncertainties of climate projections and scenarios, including development and exploration of long-term ensemble simulations using complex models. The climate system is a coupled non-linear chaotic system, and therefore the long-term prediction of future climate states is not possible. Rather the focus must be upon the prediction of the probability distribution of the system’s future possible states by the generation of ensembles of model solutions. Addressing adequately the statistical nature of climate is computationally intensive and requires the application of new methods of model diagnosis, but such statistical information is essential.

    Section 14.2.2.2 Balancing the need for finer scales and the need for ensemble, makes a similar statement with slightly different surrounding text:

    In sum, a strategy must recognise what is possible. In climate research and modelling, we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible. The most we can expect to achieve is the prediction of the probability distribution of the system’s future possible states by the generation of ensembles of model solutions. This reduces climate change to the discernment of significant differences in the statistics of such ensembles. The generation of such model ensembles will require the dedication of greatly increased computer resources and the application of new methods of model diagnosis. Addressing adequately the statistical nature of climate is computationally intensive, but such statistical information is essential.

    Weather forecasters have the same issue, thus their models generate probabilistic results, e.g., a 50% chance of rain tomorrow evening. Reasonable people understand that bounded estimates from *projections* (NOT predictions) based on *plausible* future emissions scenarios are better than “derrrrrp, let’s wait and see how bad it might get before doing anything to moot the potential issues.”

    And you really believe this crap?

    I evaluate my beliefs based on what they actually write as opposed to your creative interpretations of their text. And yeah, I do believe it will continue to get warmer *IF* net emissions are not brought to near zero. How much? Depends on future emissions and those dang non-linearities. An honest and intelligent person would realize that uncertainty isn’t one-sided — it could be worse than the central estimate, it could be relatively benign. I’d rather not find out empirically either way.

    As to your Mosher-like demand for an alternative explanation, Professor Curry has recently provided a link to a peer reviewed paper providing exactly that.

    Looking for a CMIP5-compatible model which puts an alternative mechanism to the test, not another Stadium Wave paper or the like. I can curve-fit to modes of internal variability too, and my results are a darn sight better than CMIP5 for GMST. It looks like this:

    http://3.bp.blogspot.com/-5NsdtYi0Ifg/VqQqEq8O-BI/AAAAAAAAAkU/BzdI7Q7-Gsk/s1600/HADCRUT4%2Bvs%2BCO2%2Bmonthly%2B2015-12.png

    Problem is, like CMIP5, I can’t predict ENSO, AMO, TSI, volcanoes, emissions, etc., so my model only gets brilliant fits when I know those things in advance. Like Stadium Wave proposes as one possible mechanism, I use Earth’s rotational anomaly:

    http://3.bp.blogspot.com/-vniapHOw-Po/VqQqEXqI2tI/AAAAAAAAAkQ/387eopVqVoM/s1600/HADCRUT4%2Bnon-CO2%2Bmonthly%2Bcontributions%2B2015-12.png

    The magenta curve, LOD (length of day), is the one. It really makes 1890-1930 make more sense than the traditional external forcings + internal variability indices alone would get me. Sad to say (yes, really), none of those things explain the secular trend from 1880-present as well as CO2 all by its lonesome does:

    http://3.bp.blogspot.com/-fRV6ymzCb7A/VQCAQnsE63I/AAAAAAAAAYQ/SDw5lXjU4GA/s1600/CMIP5%2Bto%2BGISS%2BNov%2B2014%2B12mo%2BMA.png

    I’ll let you have the last word.

    Yeah sure, until the next time you trot out the same arguments you did last month. I suppose I shouldn’t be too harsh on that point; I myself have my own supply of boilerplate.

    • Rather the focus must be upon the prediction of the probability distribution of the system’s future possible states by the generation of ensembles of model solutions.

      What’s wrong with this is that it represents an infinite number of model runs, and that’s for only one single set of parameters.

    • brandonrgates,

      I intended to let you have the last word. Oh well, nobody’s perfect.

      As I said, just deny it was said, pretend it doesn’t matter, and point to something else. Perfect Warmist justifications.

      As the IPCC said, “. . . the long-term prediction of future climate states is not possible.” What part do you disagree with?

      You said you wanted an alternative explanation for the rises recorded by thermometers and remote sensing equipment. I provided the information. It’s peer reviewed, realistic, and doesn’t depend on modelling or the greenhouse effect. Its conclusions are in accord with normal physics. Nothing to do with Stadium Waves at all. No curve fitting. Just the application of physics which, not surprisingly, agrees with observation.

      If you choose to reject it, I won’t interfere. Just what I would expect a Warmist to do.

      Cheers.

  27. Ron Graf,

    [posted out of sequence for scroll]

    Simple, discard the unfounded assumption of zero centennial and millennial scale variability or infallibility of a record that has been more homogenized and adjusted than a block of aged Swiss with a lower back problem.

    All that tells me is why you reject the mainstream consensus opinion. What explains the millennial scale variability? Who is saying the instrumental record is infallible? How do you even know about millennial scale variability if it’s been ignored and/or assumed to be zero?

    Again: Where is your CMIP5-compatible model which beats the present state of the art at its own game?

    • brandonrgates,

      Good grief. Demanding one useless toy model with zero predictive ability be supplanted with an equally useless toy computer model?

      You may claim that climate models can usefully predict the future, in spite of the IPCC saying it’s impossible, but of course you’re making a claim based on nothing more than wishful thinking.

      More Woeful Warmist Waffle. Consult an astrologer if you want to know the future. Climatologists are completely useless, by comparison.

      Have fun reading the tree rings,

      Cheers.

    • Again: Where is your CMIP5-compatible model which beats the present state of the art at its own game?

      The Russian INMCM4 is arguably the most accurate or one of the most accurate in terms of predictive trend.

      The INMCM4 model predicts little future warming.

      Question asked and answered.

      • Considering how many warmists are admirers of Russia’s political model it’s ironic they are not fans of the Ruskie climate model.

      • PA,

        The Russian INMCM4 is arguably the most accurate or one of the most accurate in terms of predictive trend. The INMCM4 model predicts little future warming.

        Comparing to HADCRUT4 annual means over 1880-2005:

        Model                 RMSE   GMST 2100  2.0 C Year
        -------------  ----------- ----------- -----------
        CESM1-CAM5          0.0093        5.10        2043
        CNRM-CM5            0.0104        4.52        2047
        inmcm4              0.0121        3.60        2057
        GISS-E2-H           0.0123        3.95        2043
        MIROC-ESM-CHEM      0.0126        6.36        2030
        
        
        Model            Slope Dev   GMST 2100  2.0 C Year
        -------------  ----------- ----------- -----------
        MIROC-ESM-CHEM      0.0036        6.36        2030
        inmcm4              0.0070        3.60        2057
        FGOALS-g2           0.0222        4.15        2046
        CNRM-CM5            0.0299        4.52        2047
        GFDL-ESM2M          0.0463        3.54        2051

        “Slope Dev” is the absolute deviation of the model vs. observed change in C/century. Projections are from RCP8.5 for all models using a “pre-industrial” temperature baseline of 1861-1890. There is practically zero correlation between implied future sensitivity and either hindcast trend or hindcast skill as judged by root mean squared error.
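
        For what it’s worth, the two metrics are computed along these lines, given already-aligned annual series over 1880-2005 (whether this matches the exact normalisation behind my numbers isn’t guaranteed, so treat it as a sketch):

        import numpy as np

        def rmse(model, obs):
            # root mean squared error between model and observed annual means
            model, obs = np.asarray(model, float), np.asarray(obs, float)
            return np.sqrt(np.mean((model - obs) ** 2))

        def slope_dev(model, obs, years):
            # absolute difference of least-squares trends, in C per century
            m = np.polyfit(years, model, 1)[0] * 100.0
            o = np.polyfit(years, obs, 1)[0] * 100.0
            return abs(m - o)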

        Question asked and answered.

        Not really. According to Mike, ECS shouldn’t just be low — it should be zero.

        Ron Graf,

        Considering how many warmists are admirers of Russia’s political model it’s ironic they are not fans of the Ruskie climate model.

        It’s funny that you think I admire Russian politics, and that four of the five coolest models by 2100 in RCP8.5 are American …

        Model            GMST 2050   GMST 2100  2.0 C Year
        -------------  ----------- ----------- -----------
        GFDL-ESM2M            1.89        3.54        2051
        inmcm4                1.68        3.60        2057
        GFDL-ESM2G            1.89        3.64        2056
        GISS-E2-R             2.08        3.69        2048
        GISS-E2-H             2.31        3.95        2043

        … and ironic that two of those are from GISS.

      • brandonrgates,

        Seeing as how there is precisely no scientific definition of ECS, you’re just bumping your gums. Climate is the average of weather, no more, no less. If you are reassigning a Warmist meaning to the word climate, it might be beneficial if you gave the Warmist definition.

        If you are implying that there is a quantifiable heating effect due to CO2 surrounding an object, I might point out that extraordinary claims deserve extraordinary evidence. Maybe a repeatable scientific experiment?

        For example, use argon as a control. It’s supposed to be a non greenhouse gas, so if it warms under the same conditions as CO2, the greenhouse effect is not the cause.

        Comparing pointless toy computer models based on nothing more than climatological fantasy physics may be fun, but it has no utility. The past does not predict the future, and even the IPCC states that future climate states are impossible to predict.

        A stumbling pack of blundering buffoons. The mentally deranged leading the mentally bereft, perhaps?

        Unless you are a Creationist, and believe the surface was never molten, and the interior of the Earth is the same temperature as the surface. If the interior is hotter than the surface, you don’t need me to tell you which direction the heat flows. And if the surface is hotter than outer space, at a nominal 3K or so, once again the net energy goes in one direction.

        I’m sure you can produce a computer model which shows temperatures rise as internal energy reduces. Or you can just as easily write a program which demonstrates the energy consumption of flying pigs. Or the number of Nobel Prizes won by Michael Mann.

        Just ignore reality, and off you go!

        Cheers.

      • AK,

        I can see you haven’t found Steven Mosher’s missing clue. I can give you one if you like. I have a few to spare, all based on fact. You might not be able to handle them, though.

        Maybe you could gather some facts of your own. There are many clues available. Nobody has to remain clueless, if they don’t want to. Warmists seem to prefer cluelessness. It probably makes life easier, avoiding reality.

        Particularly if you are a parasite, bleeding the taxpayers dry, and providing nothing useful in return.

        Cheers.

      • brandonrgates | June 25, 2016 at 9:42 pm |
        inmcm4 1.68 3.60 2057

        Really?

        http://www.drroyspencer.com/wp-content/uploads/CMIP5-73-models-vs-obs-20N-20S-MT-5-yr-means1.png

        https://andymaypetrophysicist.files.wordpress.com/2015/12/christy_models_reality_nov_2015.png

        Most plots show INMCM4 at the bottom of the pack. I notice you don’t link to support your claims.

      • brandonrgates,

        If 73 models all show different results, which one, if any, is correct?

        Of course, the answer is none of them. As the IPCC says, predicting future climate states is not possible. Here’s your chance to redefine climate states, and say that predictions are possible.

        The only question is, does averaging 72 incorrect answers provide the correct answer? If it does today, what about tomorrow?

        Maybe examining a bowl of entrails might be cheaper, and probably more accurate. I don’t think your ability to predict the future is any better than mine. Care to bet?

        Cheers.

      • PA,

        inmcm4 1.68 3.60 2057

        Really?

        Yes, really.

        Most plots show INMCM4 at the bottom of the pack.

        That’s because it is known to have among the lowest implied ECS in the CMIP5 ensemble … I didn’t say any different. Here’s the full list ranked by lowest to highest temperature obtained by 2100 under RCP8.5:

        Model            GMST 2050   GMST 2100  2.0 C Year
        -------------  ----------- ----------- -----------
        GFDL-ESM2M            1.89        3.54        2051
        inmcm4                1.68        3.60        2057
        GFDL-ESM2G            1.89        3.64        2056
        GISS-E2-R             2.08        3.69        2048
        GISS-E2-H             2.31        3.95        2043
        NorESM1-M             2.10        4.13        2049
        FGOALS-g2             2.13        4.15        2046
        MIROC5                2.18        4.22        2043
        NorESM1-ME            2.19        4.39        2044
        bcc-csm1-1            2.18        4.44        2046
        MRI-CGCM3             2.02        4.46        2048
        CNRM-CM5              2.17        4.52        2047
        CCSM4                 2.29        4.56        2043
        IPSL-CM5B-LR          2.25        4.56        2044
        CESM1-BGC             2.34        4.57        2039
        EC-EARTH              2.24        4.59        2044
        FIO-ESM               2.01        4.59        2050
        MPI-ESM-MR            2.22        4.62        2044
        MPI-ESM-LR            2.32        4.74        2041
        ACCESS1-3             2.86        5.10        2040
        CESM1-CAM5            2.42        5.10        2043
        ACCESS1-0             2.57        5.19        2040
        HadGEM2-AO            2.21        5.23        2043
        CSIRO-Mk3-6-0         2.39        5.23        2041
        CMCC-CM               2.34        5.46        2038
        IPSL-CM5A-LR          2.68        5.54        2036
        CMCC-CMS              2.52        5.59        2036
        BNU-ESM               2.79        5.60        2031
        IPSL-CM5A-MR          2.64        5.62        2036
        CanESM2               2.97        5.71        2031
        HadGEM2-ES            2.93        5.75        2034
        GFDL-CM3              2.88        5.84        2029
        HadGEM2-CC            2.88        5.85        2031
        MIROC-ESM             2.82        6.05        2033
        MIROC-ESM-CHEM        3.11        6.36        2030

        By that metric, it’s statistically tied for first place with 4 other models. The spread by 2100 is on the order of 2.5 C. Again, the absolute values of the GMST anomalies are relative to a “pre-industrial” baseline over 1861-1890. Here’s a pretty picture with the five coolest models shown for comparison:

        https://3.bp.blogspot.com/-fmu79bd8r3c/V3BJitV-dmI/AAAAAAAAA9s/pqKgSAzeF48zek6LQkYAOJ6V7MivD572ACLcB/s1600/HADCRUT4%2Bvs%2BCMIP5%2BRCP8.5%2Bfive%2Bcoolest%2Bmodels.png

        Everything in CMIP5 responds to CO2 forcing, even the ones which run cooler. They don’t qualify for the challenge I’ve issued Mike to produce a more skillful GCM which doesn’t invoke the radiative forcing he claims is unphysical.

        I notice you don’t link to support your claims.

        Data for HADCRUT4 and the model runs were obtained from KNMI Climate Exploder and baselined to 1986-2005. Then the whole mess was shifted up 0.6 C, which is the amount that brings the HADCRUT4 mean to zero over the “pre-industrial” interval of 1861-1890. When a given model contributed more than one run to the ensemble, I averaged the multiple runs into a single series.

        That should be enough information for you to replicate my results. This Excel file contains the rankings and raw data, though the plot formatting got stomped on when I saved the file down from Gnumeric.
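
        In sketch form the processing looks like this (loading from the KNMI site is left out; each series is assumed to be a pandas Series of annual anomalies indexed by year):

        import pandas as pd

        def rebaseline(s, start=1986, end=2005):
            # re-express a series as anomalies from its 1986-2005 mean
            return s - s.loc[start:end].mean()

        def preindustrial_offset(hadcrut4, start=1861, end=1890):
            # shift that brings the 1861-1890 HADCRUT4 mean to zero (~0.6 C here)
            return -hadcrut4.loc[start:end].mean()

        def model_mean(runs):
            # average several runs of one model into a single series
            return pd.concat(runs, axis=1).mean(axis=1)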

      • Mike,

        If 73 models all show different results, which one, if any, is correct?

        Of course, the answer is none of them.

        As all models are. Even if they all showed the same thing, they would still all be wrong.

        As the IPCC says, predicting future climate states is not possible.

        Suppose solar output increased 10% tomorrow and remained there until 2100. Due to uncertainty in the modelled predictions, we wouldn’t know within two decimal places how much the planet would warm; that doesn’t mean we aren’t almost certain that warming would happen.

        The high degree of uncertainty, or inherent unpredictability, in the response to external forcing is about the best argument I can think of for not making significant changes to the system ourselves if we can help it.

      • brandonrgates,

        You wrote –

        As all models are. Even if they all showed the same thing, they would still all be wrong.

        You’re dodging the question. Maybe I should have said “completely useless”, rather than just wrong. So if, at minimum, 72 model outputs out of 73 are completely useless, why waste money on any?

        As I mentioned, even the IPCC stated categorically that future climate states are not predictable. You attempt to deny, divert, and obscure, by plunging into “what if . . . ” fantasy. Really?

        Your presumption that any changes to climate occurring as a consequence of chaos will be negative is typical of Warmists. It could be summarised as “Woe, woe, thrice woe!”. You haven’t the faintest idea of what the future holds. Man’s influence on the weather of the future may be insignificant or substantial. It may be beneficial, or possibly not.

        Anybody with half a brain can see that the Earth has a vast range of weather and hence, climate. Ever changing, inconstant, and punctuated by periods of unpredictable terror for people. Add plagues, earthquakes, conflict, and a few other things, and the future might seem gloomy.

        Yet life goes on.

        Keep trying to ban coal. I hope you fail, but my hope is worth about as much as your wishful thinking. All the wishful thinking in the world, plus five dollars, will probably get you a cup of coffee.

        Cheers.

      • PA,

        Two more plots for you:

        https://1.bp.blogspot.com/-_4Sa587FpR8/V3B5lo37LVI/AAAAAAAAA-Y/j1iF0WU9ey4V_ab2TC6fCrZyfSYu3CtLACLcB/s1600/CMIP5%2BRCP8.5%2BModels%2BRanked%2Bby%2BProjected%2BTemperature%2Bin%2B2100.png

        https://3.bp.blogspot.com/-5WnyRjvKPP0/V3B5lmFQABI/AAAAAAAAA-c/xDoJYF4qD6oIUhyv0o52KONNmkGGxhCVQCLcB/s1600/CMIP5%2BRCP8.5%2BModels%2Branked%2Bby%2Btrend%2Bover%2B1861-2005.png

        As I mentioned previously, trend over the hindcast runs is not a great predictor of trend over the forward looking projections. Counter-intuitively, the four coolest models over the hindcast produce amongst the warmest projections.

        That said, it is true that models which produce projections in the middle of the pack tend to have the best fidelity to the historical record as measured by trend. I would not go so far as to say that this means those models are the best indicators of what will happen in the future … though I admit, it is tempting to do so.

      • Well, I plotted the model mean and INMCM4 over at KNMI
        Mean

        https://i.imgur.com/jXo8mSh.png

        Nice (INMCM4)

        https://i.imgur.com/S1M2wf0.png

        INMCM4 is below average. Now, this is RCP8.5, a nonsensical model run.

        My next trick is RCP4.5 for INMCM4, which is a more realistic version of actual CO2 trends (if a bit high).

        https://i.imgur.com/Cng9cfM.png

      • PA,

        INMCM4 is below average.

        I don’t dispute that INMCM4 is near the bottom of the pack for ECS:

        http://www.nature.com/nature/journal/v505/n7481/images/nature12829-st1.jpg

        Now, this is RCP8.5, a nonsensical model run. My next trick is RCP4.5 for INMCM4, which is a more realistic version of actual CO2 trends (if a bit high).

        I’d dearly love the RCP8.5 scenario to be nonsense, and it may well be. I would have used RCP6.0 if INMCM4 had contributed any runs to that ensemble, but it didn’t. As for RCP4.5, it seems a reasonable enough BAU scenario out to peak emissions in 2040 …

             Emissions      CO2
        Year    GtC/yr     ppmv
        ----     -----   ------
        2015      9.87   399.97
        2040     11.54   460.84
        2080      4.19   531.14
        2100      4.25   538.36

        … after which emissions tail off to about a third of the peak by 2080 and roughly stabilize through 2100. I would dearly love for that scenario to NOT be nonsense.

        Data from here.

        Here are the five coolest models in RCP4.5 ranked by projected temperature in 2100:

        https://4.bp.blogspot.com/-OdkdAKTGDXo/V3BKaRzFJ2I/AAAAAAAAA94/s4HJONDyv1Axq4or5b6LTZGkpDnlk1PUACLcB/s1600/HADCRUT4%2Bvs%2BCMIP5%2BRCP4.5%2Bfive%2Bcoolest%2Bmodels.png

        As you can see, plenty of models run hotter. Several of those have comparable hindcast performance to INMCM4.

  28. Vannevar Bush presciently warned that: “Basic scientific research should not, therefore, be placed under an operating agency whose paramount concern is anything other than research. Research will always suffer when put in competition with operations.”

    This of course explains the problem with business-funded research. Business’s paramount concern is profit, and its research effort will always be skewed to help it maximise profit.

    Of much more relevance here though, is that it also explains why government-funded climate research is inherently dysfunctional. Government’s paramount concern is political power, and its climate research results will always be skewed to help it maximise that power.

  29. Pingback: Weekly Climate and Energy News Roundup #230 | Watts Up With That?

  30. Pingback: “The Tangled Web of Global Warming Activism” – An Outsider's Sojourn II

  31. Judith – word omitted: ” raises some really ?? issues”.

  32. In Canada, outraged health researchers demand end to peer-review changes

    Critics, however, were skeptical. And they say their fears were confirmed after the council held its first competition for Foundation grants in 2015. Many blue-chip researchers received “tiny” awards, whereas the success rates and award levels for both female and younger researchers were significantly lower than for male, more elderly counterparts, says geneticist Janet Rossant of The Hospital for Sick Children and the University of Toronto in Canada, and a former member of CIHR’s governing council. “The criteria, I think, became very much skewed not toward outstanding science, but outstanding leadership,” which put many investigators at a disadvantage, says Rossant, a winner in the competition.