Too big to know

by Judith Curry

Update: see new cartoon by Josh at the bottom of the post

The massive amounts of data necessary to deal with complex phenomena exceed any single brain’s ability to grasp, yet networked science rolls on.

David Weinberger has published a new book Too Big to Know: Rethinking Knowledge Now That the Facts Aren’t the Facts, Experts Are Everywhere, and the Smartest Person in the Room Is the Room.

I haven’t read the book, but David Weinberger gives a perspective on the book in an essay he wrote for the Atlantic entitled “To Know But Not Understand:  David Weinberger on Big Data.”  Some excerpts:

In 1963, Bernard K. Forscher of the Mayo Clinic complained in a now famous letter printed in the prestigious journal Science that scientists were generating too many facts. Titled Chaos in the Brickyard, the letter warned that the new generation of scientists was too busy churning out bricks — facts — without regard to how they go together. Brickmaking, Forscher feared, had become an end in itself. “And so it happened that the land became flooded with bricks. … It became difficult to find the proper bricks for a task because one had to hunt among so many. … It became difficult to complete a useful edifice because, as soon as the foundations were discernible, they were buried under an avalanche of random bricks.”

There are three basic reasons scientific data has increased to the point that the brickyard metaphor now looks 19th century. First, the economics of deletion have changed. We used to throw out most of the photos we took with our pathetic old film cameras because, even though they were far more expensive to create than today’s digital images, photo albums were expensive, took up space, and required us to invest considerable time in deciding which photos would make the cut. Now, it’s often less expensive to store them all on our hard drive (or at some website) than it is to weed through them.

Second, the economics of sharing have changed. The Library of Congress has tens of millions of items in storage because physics makes it hard to display and preserve, much less to share, physical objects. The Internet makes it far easier to share what’s in our digital basements. When the datasets are so large that they become unwieldy even for the Internet, innovators are spurred to invent new forms of sharing.  The ability to access and share over the Net further enhances the new economics of deletion; data that otherwise would not have been worth storing have new potential value because people can find and share them.

Third, computers have become exponentially smarter. John Wilbanks, vice president for Science at Creative Commons (formerly called Science Commons), notes that “[i]t used to take a year to map a gene. Now you can do thirty thousand on your desktop computer in a day.”

The brickyard has grown to galactic size, but the news gets even worse for Dr. Forscher. It’s not simply that there are too many brick-facts and not enough edifice-theories. Rather, the creation of data galaxies has led us to science that sometimes is too rich and complex for reduction into theories. As science has gotten too big to know, we’ve adopted different ideas about what it means to know at all.

The problem — or at least the change — is that we humans cannot understand systems even as complex as that of a simple cell. It’s not that we’re awaiting some elegant theory that will snap all the details into place. The theory is well established already: Cellular systems consist of a set of detailed interactions that can be thought of as signals and responses. But those interactions surpass in quantity and complexity the human brain’s ability to comprehend them. The science of such systems requires computers to store all the details and to see how they interact. Systems biologists build computer models that replicate in software what happens when the millions of pieces interact. It’s a bit like predicting the weather, but with far more dependency on particular events and fewer general principles.

Models this complex — whether of cellular biology, the weather, the economy, even highway traffic — often fail us, because the world is more complex than our models can capture. But sometimes they can predict accurately how the system will behave. At their most complex these are sciences of emergence and complexity, studying properties of systems that cannot be seen by looking only at the parts, and cannot be well predicted except by looking at what happens.

Aiming at universals is a simplifying tactic within our broader traditional strategy for dealing with a world that is too big to know by reducing knowledge to what our brains and our technology enable us to deal with.

With the new database-based science, there is often no moment when the complex becomes simple enough for us to understand it. The model does not reduce to an equation that lets us then throw away the model. You have to run the simulation to see what emerges. We can model these and perhaps know how they work without understanding them. They are so complex that only our artificial brains can manage the amount of data and the number of interactions involved.

No one says that having an answer that humans cannot understand is very satisfying. The world’s complexity may simply outrun our brains’ capacity to understand it.

Model-based knowing has many well-documented difficulties, especially when we are attempting to predict real-world events subject to the vagaries of history; a Cretaceous-era model of that era’s ecology would not have included the arrival of a giant asteroid in its data, and no one expects a black swan. Nevertheless, models can have the predictive power demanded of scientific hypotheses. We have a new form of knowing.

This new knowledge requires not just giant computers but a network to connect them, to feed them, and to make their work accessible. It exists at the network level, not in the heads of individual human beings.

JC comment:  I’ve mentioned some of these general ideas in previous posts in the context of the complexity of climate science, but I think this article explains the challenge very well.  The climate community has responded to this challenge in two ways: building ever more complex models, and trying to reduce the system to something that an individual can understand (simple energy balance models and feedback analysis).  Expending more intellectual effort on the epistemology of “too big to know”, where much of our knowledge resides in complex computer models, is needed IMO to better assess our knowledge level of the climate system and confidence in conclusions drawn from enormous data sets and complex system models.


249 responses to “Too big to know”

  1. This is an unusually provocative post. I would say it plays into an important insight into the nature of science. Science is not the accumulation of “bricks” of knowledge; it advances only when a huge batch of mortar comes along in the form of a new theory that tells us how to arrange the bricks. Most such theories are very mathematical, for example, those of Maxwell (electromagnetics), Newton (gravity and motion), and Einstein (relativity).

    Another excellent point concerns computer simulations. I view this reliance on computer simulations as a potential death knell of science. Simulations of complex systems are usually badly wrong. The science of analyzing the errors in simulations is a field of utmost importance. The fundamental problem here is asymptotics. The information/cost ratio for almost all computational simulations is terrible; effectively there will never be a computer big enough to achieve meaningful accuracy without a revolution in methods. Higher order finite element methods are a possibility.

    Most people are shocked when it is pointed out that most structural analysis finite element simulations are in error in the energy norm by 20%. Yet these simulations are used to design virtually all our engineering structures. Asymptotic analysis is the key to the future of simulation, and simulation is a vast hole in modern science that few have any idea how to deal with.

    • Nature is more simple than many of the mathematical models used to describe it. E.g., every atom (including its nucleus) can be explained as a combination of two forms of one fundamental particle:

      n H (p+ and e-)

      Variations in rest masses of these combinations show if the interactions between fundamental particles are:

      a.) Attractive (and reduce rest mass), or
      b.) Repulsive (and increase rest mass).

      Instead of using experimental facts to find reliable new sources of energy, governments squandered huge sums of public funds trying to:

      c.) Find the hypothetical “God” particle – the Higgs boson, and
      d.) Build H-fusion reactors to mimic the SSM model of the Sun.

      • The two-headed arrow typed between n and H did not print above.

        The two forms of one fundamental particle are the neutron (n) and the hydrogen atom (H or p+ and e-).

        Under low pressure, n => H
        For high pressures, H => n

      • Two-headed arrow ↔ ↔
        Alt-29
        You’re welcome.

    • Is it possible to construct a “model” without built-in crippling assumptions and crucial omissions? That is, is a model limited to ultimately “illustrating” the POV of its creators? Cornell Creative Machines Lab has constructed, and made available, a kind of meta-model package, “Eureqa“, which purports to build its patterns and rules and equations from the raw data, no prompting required. Maybe this is the way it will have to go.
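
      A toy illustration of the general idea (not Eureqa’s actual interface, which I haven’t used) is to fit a family of candidate functional forms to the raw data and keep whichever balances fit against complexity:

        import numpy as np
        from scipy.optimize import curve_fit

        # Hypothetical "raw data": pretend we don't know it follows y = 2x + sin(x)
        x = np.linspace(0, 10, 200)
        y = 2 * x + np.sin(x) + np.random.normal(0, 0.1, x.size)

        # Candidate model forms the search is allowed to try
        candidates = {
            "a*x":            lambda x, a: a * x,
            "a*x + b*sin(x)": lambda x, a, b: a * x + b * np.sin(x),
            "a*x**2":         lambda x, a: a * x ** 2,
        }

        for name, f in candidates.items():
            params, _ = curve_fit(f, x, y)
            mse = np.mean((y - f(x, *params)) ** 2)
            # penalize extra parameters so simpler forms win when fits are comparable
            score = mse + 0.01 * len(params)
            print(f"{name:18s} mse={mse:.4f} score={score:.4f}")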

  2. Research agencies encourage scientists to generate data, but they discourage them from interpreting the data in ways that might violate mainstream models.

    Important breakthroughs are usually made by individual investigators. To avoid that dangerous possibility, federal research agencies have largely stopped funding individual scientists.

    • Michael Larkin

      Oliver,

      Re: your first para, You really ought to read Rupert Sheldrake’s new book, “The Science Delusion”. Whether or not one has any sympathy with his theory of Morphic Resonance, his analysis of the modern science establishment is quite devastating. There’s also a good streaming video covering this, in part, here: http://videocenter.cst.edu/videos/video/411/in/channel/50/

      I’d start after the introductions, at around 22:32. Sheldrake questions 10 major assumptions of materialist philosophy, which is at the bottom of the scientific world view. It’s not so much that we have so many bricks, but that they have to be forced into a certain predefined structure that is essentially metaphysical (although most scientists don’t realise that – they take it as self-evident truth).

      It’s Sheldrake’s belief, and mine too, that we have reached the end of the significant scientific achievement possible under the present scientific world view – Karl Popper’s “promissory materialism”.

      Sheldrake’s book and the video explore the philosophical origins of the currently predominant scientific world view, and why that is currently constraining scientific progress. Among other things, it’s not only become sterile, but is profoundly boring and uninspiring.

      Sheldrake is, in my view, one of the cleverest people on the planet, and one of the very few scientists with the guts, integrity, intelligence and objectivity to openly try to get to the bottom of the malaise in modern science.

    • Oliver, I could not agree more. Several times I have run into papers where they produce excellent data and then screw up the paper by using an interpretation that is dead wrong. In one case I even wrote to the authors and told them what they had but they did not want to listen. In the end, I benefited from this when I published a paper using their data as a starting point. And the authors were big shots. As to your point about nature being simpler than models used to describe it, you are on the mark there too. Wrong models can easily complicate a situation by their formalism. Just think of the difference that changing from Roman numerals to Arabic numerals made. Arno

  3. 20% error, combined with more than 20% safety margin can be ok.
    Some safety margins are huge.

    • Yea, but just think of the product performance that is left on the table. As we design more optimal structures, these issues will become more important. In climate science, just think what a 20% error in the energy norm means. It means that virtually all the signal is damped out with time.

    • What meaning does a safety margin have outside of controlled testing, and how in hell do we run a controlled test on the climate?

    • I remember in freshman calculus when 52% was a passing grade on an exam, and it came to me why engineers are so conservative in their designs.

      • The passing grade could have been made much lower or higher by adjusting the difficulty and number of the questions. Any grading system worth its salt attempts to present a challenge tuned to allow those with barely adequate mastery (in the opinion of the examiner/teacher) to score around that level, and also to require excellent mastery and a fast brain to attain 100%. That is, it attempts to maximize the information obtained by conducting the examination.

        Hence the fundamental inanity of “pass-fail” or grading by “effort” or just about any “curved” marks. They are designed to discard and suppress such information.

  4. When a complex model predicts a future that is hugely different from the past with the major change being a fraction of a trace gas, the model forecast has a huge chance of being totally wrong. When you have ten years or more (1998-2012) in which dire forecasts of major warming did not happen, the model forecast has a huge chance of being totally wrong.
    When the models show ice volume is decreasing and Leap Second Data shows the ice volume is increasing, the model forecast has a huge chance of being totally wrong.
    The problem with computers is that people start believing the numbers that come out and they quit thinking.
    Look at the data. There is no actual data that is out of range of the last ten thousand years. Earth temperature is less than 1 degree C above the average of the past ten thousand years and it is not rising.

    • Herman, I love the way you think and write. Too many people, including yourself, have already made the point that occurred to me as I read the article our hostess selected. The ONLY thing that matters is the observed data. Until this becomes completely clear for climate science, then we are very likely to get the wrong answer; as many people already have. And far from having too MUCH observed data; we have too LITTLE.

      CAGW is a hoax.

      • How differently can people emphasize the issues of knowledge.

        You say that the only thing that matters is observational data, while many have expressed the view that the observations get a meaning only through some theory, perhaps only a seriously lacking one, but a theory anyway.

        I agree with this second view: Observational data is a set of meaningless numbers until put into a theory context. There’s also enough theory context to provide meaning for the climate related observations.

      • Jim and Pekka

        The ONLY thing that matters is the observed data.

        As far as corroborating a scientific hypothesis is concerned, Jim is correct. It can ONLY be validated by observed data, whether from physical observations or reproducible experimentation. It CANNOT be validated by computer models, as these represent virtual (rather than observed) data and cannot be any better than the information that has been programmed in.

        Pekka, I would agree with you that the “observed data” alone do not tell us much, unless we tie these together with a theory, which can then be challenged – and if it survives repeated attempts at being challenged – can be considered to be validated.

        The premise of CAGW has not yet passed this threshold, as there are no empirical data to corroborate it.

        What’s more, the past several years of no observed warming of the atmosphere or upper ocean despite CO2 concentrations reaching record levels have presented an embarrassing challenge to the CAGW theory, whereby human CO2 emissions should cause a potentially disastrous impact on global warming.

        So the hypothesis (or theory) is only as good as the observed data.

        I know that the “mainstream consensus” group like to find rationalizations to explain when the observed data do not support their hypothesis (Chinese aerosol emissions, natural variability, etc.), while eagerly accepting the observed data when they do, but IMO these are feeble and transparent attempts to torture the “observed data” into fitting the preconceived hypothesis.

        In a nutshell, that is the inherent weakness of the CAGW hypothesis.

        Max


      • Max,

        You still miss the point of our created science to that of real world science.
        No point is EVER actually the same. Every parameter slightly changes from second to second.

      • I would not say that the ONLY thing that matters is the observed data. The observed data is the most important, but must be interpreted in the correct context. Theories interpreting the data are important. However, only those theories which continue to be supported by the data should be pursued. Too many times in science, a scientist will steadfastly stick to their own theory until it ultimately collapses under the weight of the contradicting evidence.
        Science should be fluid, constantly molded by new ideas and data. Those that do not move with the flow, will be left behind. I would not say it is a hoax, but rather there are those who have not moved on with the science.

      • Dan,

        This neglects looking for the parameters that created the data, in favour of strictly following numbers with no principles for understanding the planet and all the differing interactions.

      • As far as I can judge, Jim Cripwell made his statement as an absolute statement in order to invalidate climate science. That’s a totally wrong approach. Every science is formed as a combination of observational knowledge and theories. How these components are put together varies from case to case, but both are always essential.

        We have seen in very many messages in many different threads that people with little understanding of what science is about make absolute statements on what it should be like and how climate science fails fundamentally. That approach is empty of value.

        It’s fully legitimate to discuss the strength of evidence that science can provide for some particular results, but the answer can be given only when both the experimental evidence related directly to the issue and the evidence based less directly on theory and other observations are taken into account. All evidence must be of high quality and well enough understood to count, but as long as that’s true the less direct evidence may be as valuable as the direct one.

      • Pekka writes “As far as I can judge Jim Cripwell made his statement as an absolute statement in order to invalidate climate science”

        My apologies. Pekka has every right to interpret what I wrote in that light. I made an absolute statement, when I ought to have written what Max wrote. It is, of course, obvious that data without a theory to put it in context is little more than a bunch of meaningless numbers. However, an idea is merely a hypothesis until it is supported by observed data.

      • The point is that practically no scientific ideas are presented without supporting data. Essentially nothing can be dismissed on that criterion alone. I doubt that there’s a single scientific idea about climate without some supporting data; even the obviously wrong ones have some such support, while they are contradicted by strong data and therefore obviously wrong.

        The common claims by many skeptics are in direct conflict with what I write above (and what I believe to be true). In many cases they may be right in that the scientific claims lack sufficient justification, but the arguments used to “prove” that are not valid. With this approach correct scientific claims can be contradicted as easily as weak ones. Therefore that type of argumentation is of little value.

      • Pekka,
        Yes, most theories started with some observable data that resulted in an hypothesis to explain the data. Further testing is then needed to (in)validate the theory. Some of what we are seeing are people using a subset of the available to support their own theory, while dismissing that which does not. This has been done repeatedly on both sides of the climate debate.
        For example, Manacker mentioned “several years of no observed warming.” By itself, this data does not invalidate the CAGW theory. On the other side, there are those who dismiss this data, and claim that temperatures have continued to rise. The prudent scientists will modify his theory to account for this new data.

      • Pekka you write “I doubt that there’s a single scientific idea about climate without some supporting data.”

        That is not the issue. The question is whether CAGW has ENOUGH supporting observed data. That is the issue which I would dearly love to discuss in detail. Is there enough supporting observed data to warrant the IPCC conclusion that it is “very likely” that CO2 is going to be responsible for CAGW? My answer is a very decided NO!!!

      • Using the expression “CAGW theory” is one of the problems, because such a theory does not really exist.

        By that I don’t mean that there would not be people who paint a gloomy picture of CAGW, but in science the catastrophic outcomes are only a possibility and there’s no separate theory of AGW.

        The science is climate science and the theories are theories of the atmosphere and the Earth system. Those theories allow for various possible futures. Many scientists see very bad futures as possible, and some even as highly likely, but these outcomes are projections based on climate science and assumptions about human actions, not a separate theory.

      • That is not the issue. The question is whether CAGW has ENOUGH supporting observed data.

        I agree, but I have seen so many comments that do not make any sense when this is accepted. They don’t present any specific arguments to support the notion that there is too little or enough observed data; they rather state categorically that the data is not there (or is sufficient).

        It’s of little interest to hear that some denizen has a particular view on that, when he presents no relevant support for his view or when he presents evidence that is totally lacking and one-sided.

        To decide, what is enough, more is needed than unjustified statements.

      • “Using the expression ‘CAGW theory’ is one of the problems, because such a theory does not really exist.

        By that I don’t mean that there would not be people who paint a gloomy picture of CAGW, but in science the catastrophic outcomes are only a possibility and there’s no separate theory of AGW.”

        This is a bait and switch used by CAGW activists and lukewarmers alike.

        Who cares if there is a “theory” called CAGW?

        The central issue in the climate debate, the one that generates numerous climate blogs with millions of views and hundreds of thousands of comments, is the policy issue of decarbonization. That is the holy grail sought by the CAGW activists, including most “climate scientists.”

        When CAGWers stop fighting so hard for massive taxes, idiotic cap and tax schemes, and other centralized control of the economy, we’ll stop referring to CAGW. It is the C – catastrophe (that Pekka dismisses as a “gloomy picture”) – that is used as the justification for massive government control of the energy economy (and thereby of the entire economy).

      • Dan H. – OHC has continued to rise from 0 to 2000 meters, and warming has been found below 2000 meters.

        What some are doing is repeatedly claiming OHC has not risen, when that is true only in the layer from 0 to 700 meters, and that is a misrepresentation of OHC. I believe you just did it again.

      • The central issue in the climate debate, the one that generates numerous climate blogs with millions of views and hundreds of thousands of comments, is the policy issue of decarbonization. That is the holy grail sought by the CAGW activists, including most “climate scientists.”

        Where do all these blogs lead? Most of them do not lead anywhere, because the issues are discussed in such a way that nobody is learning anything from what the others are saying. Fighting something that does not exist is one way of writing irrelevant messages.

      • “Where do all these blogs lead? Most of them do not lead anywhere, because the issues are discussed in such a way that nobody is learning anything from what the others are saying. Fighting something that does not exist is one way of writing irrelevant messages.”

        “Something that does not exist?” Are you kidding me? The world does not exist in a laboratory or computer model. Out in that real world, there are real world politicians, with real world “climate scientists” assisting them, who have been trying very hard, for a very long time, to implement their progressive policy goals using CAGW, euphemistically known as “climate science,” as a means to that end.

        CAGW is KyotoCopenhagenDurban; it is impending EPA decarbonization regulation in the US; it is cap and trade and trade-war-starting fuel taxes in the EU; it is billions wasted on windmills in Spain…. CAGW is very real, and already having a severely damaging effect wherever its proponents have full control of the government. So forgive us skeptics for not blithely changing the terms of the debate to AGW on blogs, while the CAGW activists continue fighting on every political front to impose their dreams of economic central planning.

        So go ahead seeking some nonexistent middle ground, and castigating all those who are opposing the blind policies that will ruin the US, EU and world economies with decarbonization.

        All you middle-of-the-roaders should stop trying to pretend that there is any mutuality in the debate between CAGW and skepticism. It’s like telling the Poles to stop being so belligerent and learn to get along with the Germans in 1938. (And that’s not a Nazi reference, it’s a strong military defense vs. pacifism reference.)

      • Michael Larkin

        Pekka,

        It’s more that observational data is given meaning by a theoretical framework, and that that framework is always wrong – the degree to which varies over time. The problem is that there’s a tendency in the scientific community, for a period (until the current paradigm is overthrown), to assert it is wholly right. That isn’t at all scientific – it’s just a sign of human arrogance, often expressed in sociological ways. Scientists often behave really badly as human beings and have been responsible for delay in human progress despite how much they have advanced it.

        It’s as if they provide the bricks to build a house, but insist it should be a certain shape, e.g. cylindrical, and then persecute anyone who thinks box-shaped houses might have some merit.

    • Harold H Doiron

      Alex,
      Extremely well-said!! A better model would make much more sense of all the data available. Why doesn’t some enterprising climate researcher just simplify the problem a bit, look at that 10,000 years of data before humans had any impact on the climate, and develop a model that explains all of those 10,000 years of stable oscillations in the global average temperature? Once they have that model working and explaining all of those little wiggles in the data, then they could add the complexity of human-related activity and see to what extent that has a measurable effect on the little wiggle we currently live in.

  5. What is amazing is that most scientists are totally unaware of what the numerics for their models are and how large the errors are. It is kind of frightening in a way.

  6. A minor point of pedantry – “computers have become exponentially smarter”. Presumably he means ‘have become enormously faster’. ‘Exponentially’ doesn’t necessarily mean a large amount. It might pass muster in a newspaper – though it would annoy most writers – but in a book aimed at a scientific audience?

    Dr Curry, I like your expression “Expending more intellectual effort on the epistemology of too big to know”, although it isn’t particularly clear to me how one would go about such intellectual efforts.

    I don’t have a great deal of experience with models – especially complex climate models – so my thinking might be unnecessarily luddite or suspicious, but I wonder if it is strictly true that “Nevertheless, models can have the predictive power demanded of scientific hypotheses.” Does that just mean that they can be wrong?

    I’m reminded of Gavin Schmidt being asked a question on how warm it would get by 2070 – his answer was framed in the language of model outputs – “The kind of projections people have been looking at…”.
    The fact that the numbers were ludicrous (4-6C over land, more at northern latitudes) didn’t register because, well, it was what the models suggested….And that reminds me of the phenomenon of early users of hand-held calculators who relied on the ‘numbers’ without having a sense that,say, a decimal point had been completely omitted.

    Presumably it was models “with the predictive power demanded of scientific hypotheses” that the IPCC used for the FAR prediction of 0.3C per decade of warming in 1990. They certainly had the capacity to be wrong, but then so would a random number generator…..

    Hm…. It’s very late here in England so I might be unduly cantankerous..

    • Anteros, The asymptotics are so bad that most people are shocked. Consider a second order method (the error goes down like the second power of the grid spacing). In 3D, this means that to reduce the error by a factor of 4, you need 8 times as many points. Now, all real world problems involve singularities in the solution. In this case, to get the factor of 4, you need 64 times as much grid and probably a 128 times larger computer. Asymptotics are terrible and most modelers don’t understand this because their rigorous training has been neglected. I have been unable to determine if even Gavin Schmidt understands it and that means in all probability he doesn’t. It’s not hard; it’s in all finite element textbooks. The problem is that your average modeler hasn’t advanced beyond the 1950’s.
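
      For concreteness, the scaling in that argument (a minimal sketch; treating the drop to roughly first-order accuracy near a singularity as the assumption being illustrated, not a measured number):

        # Grid growth needed to cut the error by a factor R with a method of
        # formal order p in d dimensions: error ~ h**p, points ~ h**-d,
        # so the point count must grow by R**(d/p).
        def grid_growth(R, p, d=3):
            return R ** (d / p)

        print(grid_growth(4, p=2))  # smooth solution, 2nd order: 8x the points
        print(grid_growth(4, p=1))  # singularity drops effective order to ~1: 64x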

      • What a load of BS. That includes all previous statements made by you in this thread, and all subsequent similar ludicrous statements that you might still make.

        No real world problems involve singularities.

        Only mathematical assumptions involve singularities.

        “The problem is that your average modeler hasn’t advanced beyond the 1950′s.”

        Yeah right, making such an asinine statement proves that you are the only real world singularity, in your own mind that is.

        Or perhaps you could show us all the 1950’s textbooks and publications that are still in exclusive use today, while at the same time excluding all post 1950’s textbooks and publications, that ANY numerical modeler uses today.

      • Junior, Learn a little mathematics. Everything I say is amply documented in the literature. Search for Demkowicz and at least learn the most elementary mathematics. Most real world problems involve singularities. Any structures problem with a corner involves a singularity in the derivative. You are pathetic and ignorant.

      • I would recommend an elementary book: “Analysis of the Finite Element Method” by Strang and Fix. It has all the details but I fear it would take you a decade to master the rigorous proofs. Let me know when you know what the H1 norm is. Being a jerk is easy in the world of the internet.
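
        For reference, the standard definitions (as given in texts like Strang and Fix):

          % Sobolev H^1 norm on a domain \Omega, and the energy norm induced by
          % the bilinear form a(.,.) of the variational problem being solved.
          \[
            \|u\|_{H^1(\Omega)}^2 = \int_\Omega \left( |u|^2 + |\nabla u|^2 \right)\,dx,
            \qquad
            \|u\|_E = \sqrt{a(u,u)}.
          \]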

      • I would suggest instead that being a concern troll is easy on the internet.

        Curry did mention the idea of energy balance, which can reduce many problems to first-order physics. Same can be said for the application of statistical mechanics to reduce the complexity and states of a system.

        So the question is how your asymptotic analysis fits in with energy balance, statistical mechanics with constraints, and growth-limiting feedbacks. Techniques such as maximum entropy can fit the bill here.

        EFS has a point, and it supports the suggestion at the end of the top-level post.

      • WebHubTelescope: I would suggest instead that being a concern troll is easy on the internet.

        What is a “concern troll”?

      • D.Young is a concern troll, someone that lectures against possibilities. The cure is to ignore them and go with your ideas. That’s what I do, and what research is all about.

        Look up concern troll on Wikipedia.

        The false flag concern here is that unless one has complete knowledge of FE methods and nonlinear dynamics, it is hopeless to proceed. Yet, we all know that science can advance incrementally, and so you go forward to the best of your abilities.

      • WHT –

        The cure is to ignore them and go with your ideas.

        I don’t agree. I think there is no “cure.” They will persist no matter. Better, IMO, is to describe the flaws in their analysis. And that’s why I read your posts.

      • Web, Energy balance is a very weak constraint on a complex system. You also need to conserve mass and momentum to get any information about the dynamics. BTW, I don’t want people to drop all other lines of research. I am saying that modelers can do a lot better by using modern concepts of adaptivity and error control.

      • Energy balance goes a long way to explaining the 33 degree C warming above the black body steady state. And it also explains the baseline BB SD.

        Can’t beat that for a starting point — energy balance applied to statistical mechanics generating the negative feedback cooling effect as the planet warms. Add the GHG and you have a first-order physics model.
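
        For concreteness, the zero-dimensional arithmetic behind that 33 C figure (a minimal sketch; standard textbook values are assumed: solar constant about 1361 W/m^2, planetary albedo about 0.3, observed mean surface temperature about 288 K):

          # Effective radiating temperature of the Earth from simple energy balance:
          # absorbed solar = emitted infrared (Stefan-Boltzmann law).
          sigma = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
          S = 1361.0        # solar constant, W m^-2
          albedo = 0.3      # planetary albedo

          T_eff = (S * (1 - albedo) / (4 * sigma)) ** 0.25
          print(round(T_eff))         # ~255 K
          print(round(288 - T_eff))   # ~33 K attributed to the greenhouse effect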

      • Web, The problem is that I don’t think such models explain past climate changes like the ice ages. You need knowledge about distributions of forcings and the dynamics.

    • Anteros you write “A minor point of pedantry – computers have become exponentially smarter. Presumably he means ‘have become enormously faster’.”

      I suspect you are right and wrong at the same time. It is quite true that ordinary supercomputers have little or no capability to produce what might be described as “intelligence”. But there are a few computers AND programs which border on intelligence; both my examples come from IBM.

      Big Blue has taken all the fun out of chess grand masters playing against a computer. Big Blue wins every time. Watson had an impressive debut on the TV game Jeopardy. Now, I understand, the medical profession is looking at the possibility of using Watson technology to give ordinary doctors the diagnostic skills of the finest physicians in the world.

      We should not underestimate the human ability to turn technology into something that resembles intelligence.

      • I suspect at this point there are computers that would pass the Turing test if administered by Anteros.

      • That wasn’t meant to single Anteros out. Presumably, they’d pass the test irrespective of who was administering it.

      • I believe you mean Deep(er) Blue. BigBlue/DeepBlue lost the first match. The second iteration, nicknamed DeeperBlue, required alterations between games of the second match. Kasparov asked for a rematch, but IBM declined and dismantled DeeperBlue. http://en.wikipedia.org/wiki/Deep_Blue_(chess_computer)

        Even stronger were subsequent creations like Hydra, Fruit, Houdini, Rybka, Ivanhoe.
        http://en.wikipedia.org/wiki/Hydra_(chess)

        On the contrary it is very fun to lose to a better opponent, as it identifies weaknesses and helps me become a better player.

        Don’t take my word for it. Download “Arena” and a few free chess engines and have a go. http://www.playwitharena.com

    • “I don’t have a great deal of experience with models – especially complex climate models – so my thinking might be unnecessarily luddite or suspicious, but I wonder if it is strictly true that ‘Nevertheless, models can have the predictive power demanded of scientific hypotheses.’ Does that just mean that they can be wrong?”

      I have explained many times why models cannot be used for prediction. A model reproduces reality but a set of scientific hypotheses describes natural regularities and can be used to explain and predict future occurrences that are instances of the natural regularities. You cannot model the future because you cannot reproduce what does not yet exist. People who take the contrary position do not realize that their models amount to very elaborate toys for extrapolating new graphs, presumably about the future, from old graphs.

      On the other topic of complexity, please give me a break. Any Law of Science (well confirmed hypothesis) implies infinite observations. For example, Kepler’s First Law, which states that all planets (all objects meeting the condition of being a planet) of our Sun (all objects meeting the condition of being our Sun) travel in an elliptical path with the Sun at one focus of the ellipse, holds for all points in space-time.

      The facts are organized (made understandable to our minds) by the well confirmed hypotheses which describe the natural regularities that consist of all the facts. We need the hypotheses and we need to make predictions from them so that we can test the hypotheses against the facts. But we do not need and do not want all the facts that make up some object of scientific inquiry such as a cell. All the intellectual achievement is contained in the hypotheses and none of it is contained in the facts. All of the understanding is contained in the hypotheses and not in the facts.

      The behavior of storing all the facts about some object is similar to the behavior of an obsessive who never discards anything he touches but takes it to one of those storage centers where you can rent a “shed” and fill it with whatever you want as long as you pay the rent. Given the evolving behavior of Americans, investing in one of those storage centers as a business is probably a good idea.

  7. randomengineer

    Good heavens. This sounds like the irreducible complexity argument, where some stuff is held as too complex to reduce to the knowable. In the hands of the ID crowd this notion is laughed at, but not here. Hmmm. There’s a lot of unintended interplay with the last climate etc. topic.

    • randomengineer
      Re “too complex to reduce to the knowable”
      In climate change there is an attempt to infer an anthropogenic cause without, and in spite of, knowledge of the natural variations.

      See Humlum et al. (2011)
      Ole Humlum et al., “Identifying natural contributions to late Holocene climate change,” Global and Planetary Change 79(1–2), October–November 2011, pp. 145–156.

      Humlum et al. give a very important 4000 year temperature graph. Humlum et al. show the Minoan Warm Period, Roman Warm Period, Medieval Warm Period, all higher than the current and projected Modern Warm Period.

      See also: Scafetta on his latest paper: Harmonic climate model versus the IPCC general circulation climate models

      Scafetta posts Humlum’s figure

      It is the lack of understanding of these natural harmonic oscillations, combined with Gaia/nature worship, that led the IPCC astray.

      This higher/larger perspective on the major causes in the natural world, together with a good understanding of the uncertainties involved, is essential to putting claims of “catastrophic anthropogenic climate change” in perspective. The IPCC underestimates the uncertainties, and ignores these larger natural variations.

      (PS You have the ID argument backwards. It is because of what is known of stochastic processes in chemistry and biochemistry compared to the results of intelligent agents that there is an inference to an intelligent cause behind the detailed purposeful “factories” that are observed in biochemistry. See Behe The Edge of Evolution: The Search for the Limits of Darwinism, for current evidence. See especially page 146
      The probability of a double CCC is one “in 10^40. There have likely been fewer than 10^40 cells in the world in the last four billion years. . .”
      and “I Nanobot” in the appendix.)

  8. Look at the NOAA section on climate modeling and see the Warming Statement from their article that I included here. You can look on this page and search for this statement or just scroll down to it.

    http://www.research.noaa.gov/climate/t_modeling.html

    Greenhouse Warming
    On NOAA’s website they say:
    “Climate models are the only means to estimate the effects of increasing greenhouse gases on future global climate. The Administration’s climate program will use model studies to examine the impacts of technological mitigation scenarios on reducing the impacts of climate change.”
    If the Climate Models are wrong, they don’t have any means or proof.
    They have no data to support their estimate of the effects of increasing greenhouse gases on future global climate, only model output. Those are their words. Look at the link to their words.

  9. Climate science isn’t complicated. All you have to know is “sensitivity to a doubling of CO2,” and all the rest follows – sea levels, glacier melt, hurricanes, droughts….

    I know ’cause the consensus tells me so.

    • And the consensus is going to have to rely on models. Michael Mann at least, thinks observational data is consistent with a climate sensitivity of anywhere between 1.5 and 9 degrees C.

      So I guess it’s a case of ‘keep on making it up’

  10. “computers have become exponentially smarter”

    No! Computers are not smart.
    Computers do what we program them to do.
    It matters if we are smart or not. And if we are smart or not,
    they can churn out valid results or churn out junk that gives the answer we tweaked it to churn out.

    • Maybe your computer does what you tell it to do, but my computer does what it wants to do. And when Norton finally decides that it’ll let me have a few CPU cycles, I come here.

      • randomengineer

        Easy fix. Kill Norton. Turn off activeX plugins and javascript. No need for AV software. No viruses. No malware. ActiveX / javascript is how bad stuff happens.

      • Computers do what some person[s] programmed them to do.
        There are some problems with that.

      • P.E.

        Maybe your computer does what you tell it to do, but my computer does what it wants to do.

        Same thing’s true of my computer. It’s amazing – every time something happens on my computer that I didn’t expect, it’s because my computer decided to do something on its own.

    • No! Computers are not smart.

      If you perform a Turing test (on a computer) and you can’t tell if your interlocutor is a computer or a human, who’s “smarter,” you or the computer?

  11. Seems like a dance around the related but more important problem: as facts and knowledge accumulate, people become so specialized that nobody sees the big picture. The volumes of data are just an annoyance. The problem with expertise too narrow to be of value even in a multidisciplinary team is a much bigger problem.

    • This is true too, but the question is how do you do multidisciplinary science? The historical answer is rigor and mathematics. In my experience, the problem is that every discipline says “you are overlooking something critical that only I know about.” Generally, that is not true. The trick is to find models that are “good enough” and “validated” and use those. Very detailed models, that every specialist champions, are useless because of the curse of asymptotics.

      • Two very important points here that I think need repeating:
        1-‘validated’
        2-‘good enough’

        You don’t NEED to be able to predict every single facet of a system, you just need a ‘good enough’ working knowledge of it and the proper validation in place to be confident in any predictions you make off it.

        It of course depends wholly on your end goal, but it’s completely acceptable to have a massive unknown in a system- so long as you ‘know’ how that unknown operates under given circumstances- i.e. the black box principle.

        It’s validation all the way down!

      • Lab,

        Many principles are to build it first and find the flaws later.
        Negated around the area of scientific understanding.
        Example is sports…Add in velocity principles of the orb and all parameters around makes it a very fascinating game of angles and motion principles(not the current motion theories but actual parameters).

  12. Basically an exercise in what epistemologists might in their more reflective moments call a “redefinition” of the word “know”.

    Designed to sell books I’d say.

  13. Interesting post. Our cognitive function is not based on binary type data, but the statement “To know but not understand” is valid. Steven Pinker says we have an innate sense or programming to help us understand and learn language; maybe this innate programming has limits. Could this be some sort of evolutionary maximum that restricts our ability to understand complex systems? I think we spend too much time trying to mimic what we see, without really understanding what we are looking at.
    Everyone talks about climate as simple physics, energy budget, sensitivity, feedback and forcing but it clearly is more. What is going to be the theoretical tipping point to bring all this climate data into a simple yet elegant explanation of Terra climate? Computer models are not the answer.

  14. As one who struggles with the plethora of “knowledge” in neuroscience as related to psychiatry, I’m all too aware of the difficulties integrating what we think we know. However, I’m not sure if people are aware of the limitations inherent in the data set itself. All else aside, people tend to publish studies which yield positive findings. Negative findings just aren’t all that exciting and yet negative findings can be just as important as positive findings. Moreover, few look at the validity of findings and ask whether we are measuring what we think we are measuring.

    In climate science, iconoclastic sites such as WUWT delight in unearthing examples in which events said to be consistent with adverse effects of warming are in fact attributable to some other phenomenon. Love it or hate it, WUWT thereby performs a useful function. Personally, I am especially uncomfortable with “multiple converging lines of evidence” approaches which allow broad brushing of inconvenient facts or misattributions, particularly when dealing with systems of exquisite complexity. I would not automatically reject “converging lines of evidence”, though I would treat them with the same level of reserve as “circumstantial evidence” receives in the court room.

    A little bit of humility is vital – a quality often lacking among highly intelligent folk who forego potentially very high salaries in the corporate world to pursue research in the intensely competitive world of science. You have to have a thick skin to handle criticism from colleagues and intense drive to chase the grant money all the while maintaining a keen political sense within academic hierarchies with a heightened instinct for what line of research is likely to be “sexy.” Indeed, you have to be somewhat narcissistic (though many of us would know scientists who don’t fit this profile).

    Given these imperatives, admission of ignorance or lack of understanding is challenging indeed. I recall that when I first embarked on my field, I thought that age and experience would bring confident mastery of patient care. Thirty years down the track, I find patients seem more and more complicated and challenging. I know my colleagues say much the same. It’s said that capacity to accommodate ambiguity is an essential quality for psychiatrists. Awareness of ambiguity increases exponentially with knowledge and experience.

    Socrates as Plato would have it once said, “I am wiser than this man, for neither of us appears to know anything great and good; but he fancies he knows something, although he knows nothing; whereas I, as I do not know anything, so I do not fancy I do. In this trifling particular, then, I appear to be wiser than he, because I do not fancy I know what I do not know.”

    Sometimes this is abbreviated to the pithy, “I know one thing, that I know nothing.” Not a comfortable starting point for anyone aspiring to unravel the intricacies of climate science. Even less comfortable if you are making the transition from climate scientist to climate advocate. Hence, a stance that very few adopt. And of course, the reality is that you do know something and indeed very likely a lot more than most people about your specific area and related disciplines. Perhaps 75% of what you know is accurate and unassailable (a generous estimate in some fields of endeavour). The problem is, you often don’t know quite which 75% is accurate and which 25% represents incomplete or frankly erroneous knowledge – a very messy situation!

    • chris 1958 –
      Well said.
      I would add that there is a whole layer of problems that exist above the level of the science, and the scientists. The fact that the problem is (allegedly) serious and potentially catastrophic means there is a vast mechanism for advocacy. Even those scientists with some natural humility and reticence will find their tentative correlations treated as definitive proof by a journalist, an activist or a politician.

      To be fair, if the alarmists are right, climate science doesn’t have two or three hundred years to quietly mature and discover what it can (and can’t) know, in the way that, perhaps astronomy or astrophysics has. Certainty is required by tomorrow afternoon! So the pressures are enormous – and the whole fabric of the field of climate science (especially) is not particularly well-equipped to adapt itself to this forced circumstance.

    • Very well said.

    • Yes, well said. There are far fewer bricks than that metaphor suggests; put differently there are too many crappy bricks and the peer-reviewed literature doesn’t help you tell strong bricks from crap bricks.

      Some areas of quantitative inquiry are, at their core, observational rather than experimental: Epidemiology, Economics, Climate Science and large precincts of Genetics come to mind. This is not to say experiments don’t contribute to these: They do. I am an experimental economist so I don’t want to make a case for my own total irrelevance. But in the end, the thing to be explained is the field observations.

      In his article “Why Most Published Research Findings Are False,” Ioannidis writes from the perspective of epidemiology and genetics, but many of his points apply equally to all of the ‘sciences’ that are stuck with a fundamentally observational empiricism. This is a great, sobering statistical argument of poverty:

      http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0020124

      This always goes on my syllabus for statistics and experimental design.
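
      The core of his argument is easy to reproduce (a sketch of the paper’s basic formula with the bias term dropped; the example numbers are made up for illustration, not taken from the paper):

        # Post-study probability that a "significant" finding is true (Ioannidis 2005),
        # ignoring bias: PPV = (1 - beta) * R / (R - beta * R + alpha),
        # where R is the pre-study odds that a probed relationship is real.
        def ppv(R, power, alpha=0.05):
            beta = 1 - power
            return power * R / (R - beta * R + alpha)

        print(round(ppv(R=0.25, power=0.8), 2))   # well-powered, plausible field: ~0.8
        print(round(ppv(R=0.10, power=0.2), 2))   # underpowered, exploratory: ~0.29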

  15. Too many bricks? Hilarious! Total bunk!

    Look just at land surface temps. We have, what, 200 years’ worth of bricks, starting at extremely light coverage for about ¼ of the land surface 200 years ago and reaching modest coverage of very roughly ¾ of the land surface today. We employ this data in an effort to understand phenomena that cycle at multiple periodicities ranging up to thousands of times the length of our observation period.

    What we have is endless discussion about a minimal number of bricks. Each time a rare new brick is produced, we spend years twiddling with the exact placement of the brick, analyzing the validity and benefits of its manufacturing method, arguing over the figure that goes in the nth decimal place of its dimensional measurements, devising schemes to record its exact color, et cetera, et cetera.

    Models are a great example of this hypo-brick approach. Models consume very few facts and produce none, yet they’re one of the primary investigative tools in climate science. They’re built entirely on an edifice of postulation that’s hardly changed in two decades, despite the reams of supposedly ever more reliable predictions.

    What we need for climate are paleoclimatological data, and we need that data produced at about 100x the current rate of data production, while we need about 1/100th of the current rate of model analysis.

  16. It’s a very interesting concept, but I can’t help thinking that this is just yet another ‘wall’ being placed in front of those trying to understand the climate. It seems that only the ‘experts’ are able to draw any conclusions from it….

    The climate is exceptionally complex and while the cell analogy is apt, it is not the whole story either. With cells we can predict what will happen with protein x if we add factor y. We know this because we’ve modelled it, tested it, again and again, and have in effect ‘validated’ the outcome.

    This simply is not possible in climate science because we don’t even know what we don’t know yet.

    It’s a fascinating and rapidly moving field- enough to pull this humble micro/biochemist in, but the notion that a system can be too complex to understand completely is, in my experience, hogwash.

    Now of course, you’re never going to fully understand every single interaction, every single combination and outcome- but you can get a very good general overview and further, with this overview you can make good predictions if the ‘base’ knowledge is sound.

    There is of course a danger that one can become lost in the details- but without all these details you can never hope to get the full picture.

    No, imo the danger is not in having too much information, but in not having enough- that and the experience to know how to USE that data. My fear is that if one piece of evidence comes along that changes the way the current ‘climate’ picture looks (as is often the case in science), it will be ignored because people are far too entrenched.

    • Labmunkey, I suspect you underestimate the complexity of events even at the single cell level – for example, the functioning of simple cell receptors depends on the configuration of protein structures whose function in turn depends on events at the atomic level.

      Of course, studying an isolated cell is easier in some respects insofar as you can maintain more constant parameters. However, once you move to an organ like the human brain containing some 80 to 120 billion neurons all widely divergent in structure and function interacting with one another via multiple neurotransmitters, neuromodulators (and a vast array of astrocytes, oligodendrocytes, etc.) within a supportive matrix nourished by circulating blood which may carry varying levels of nutrients and toxins, you start to wonder whether data based on the behaviour of an isolated neuron validly and reliably guide us on the functioning of the system as a whole.

      It’s even harder once you try studying the behaviour and interaction of brain structures in the near infinitely variable range of circumstances confronted by living organisms.

      We do the best we can with what information we have bearing in mind that much of our understanding of how an organ like the brain functions remains rudimentary despite the wealth of data.

      I suspect climate science is a bit like that.

      I don’t propose for a minute that we discard information. However, climate science rather like neuroscience yet has to stumble upon an organising principle such as the periodic table in chemistry, Newton’s laws of motion, relativity, particle physics, and the like. Even here, major gaps in knowledge and understanding remain.

      Nevertheless, kudos to all who keep trying so long as they don’t fool themselves into believing that they really understand it all.

      • Ah, I’m not for a second suggesting that all the interactions etc. are known and easy for one person to fully understand- I was trying to show how a pragmatic view of ‘what works’ would be sufficient as a ‘high level’ understanding.

        You’re not always going to need to know every single interaction, their effects and how they work; it’s good to have that knowledge and it can be important (especially in troubleshooting), but it is not necessary for understanding the whole matter. That was my poor explanation – apologies.

        Or to put it another way, you do not need to understand how electrical impulses travel across nerves to make muscles contract in order to understand that the heart pumps blood around the body.

        I guess what I’m saying is that it is not the amount of knowledge or data available that is the problem, but rather the individual’s (or scientist’s) ability to understand what is relevant and what isn’t (to the particular facet they are looking at).

    • “This simply is not possible in climate science because we don’t even know waht we don’t know yet.

      It’s a fascinating and rapidly moving field- enough to pull this humble micro/biochemist in, but the notion that a system can be too complex to understand completely is, in my experience, hogwash”

      Your second statement does not follow from the first. Unlike micro/biochemistry, the Earth’s climate is not a sentinel system; it is part of an astrophysical system, some of which we know about and the rest of which we cannot fully measure.
      I agree with your last statement: it’s not too much information precluding conceptualization, it’s too little.

    • Labmunkey: With Cells we can predict what will happen with protein x if we add factor y.

      Actually, for most cells, most x, and most y, no one can make that prediction.

      Labmunkey: Or to put it another way, you do not need to understand how electrical impulses travel across nerves to make muscles contract, to understand that the heart -pumps blood around the body.

      That’s a fair statement, but you are changing your ground. One of the goals of climate science is to predict changes in temperatures and rainfall a few decades in advance for at least a few regions of the world. One approach is more like your heart/blood analogy (though not exactly like it) — estimating trends and extrapolating them (a toy sketch of this follows below). The other approach is more like predicting protein interactions: GCMs that model detailed processes at local levels and make predictions by running simulations.

      Right now, neither approach has been shown to have any predictive power for multidecadal forecasts. The large “omics” research program is like the new knowledge described in the feature article.
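
      To make the trend-and-extrapolate approach concrete, here is a toy sketch in Python (entirely made-up numbers, not a real forecast); the point is only that the method stands or falls with the assumption that the fitted trend persists:

          import numpy as np

          years = np.arange(1980, 2012)
          # a synthetic "temperature anomaly" series: a modest linear trend plus noise
          anomaly = 0.015 * (years - 1980) + np.random.default_rng(1).normal(0.0, 0.1, years.size)

          slope, intercept = np.polyfit(years, anomaly, 1)   # ordinary least-squares line
          forecast_2040 = slope * 2040 + intercept
          print(f"trend: {slope * 10:.2f} C/decade, extrapolated 2040 anomaly: {forecast_2040:.2f} C")
          # The extrapolation is only as good as the assumption that the fitted trend persists --
          # which is exactly what multidecadal verification would have to test.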

  17. “Too many bricks? Hilarious! Total bunk!” Thanks Jimmer – I was going to make a similar point.

    It is well known that the number of meteorological stations reporting temperature and precipitation has fallen from a peak in the 1980s. We are developing ever more sophisticated models on the basis of less and less data.

    What is more, a lot of the data that are out there are not readily available. Since the 19th century cloud cover has been measured – either by observers reporting ‘oktas’ (eighths of sky covered) or using Campbell–Stokes sunshine recorders. Wet-bulb temperature – and hence relative humidity – and wind speed have been measured for as long. These are all parameters which climate models simulate, but in the absence of such data we have little idea of how well the models do.

    To a certain extent data have been replaced by factoids. Hurricane Katrina (though not its impact) was one data point in a long sequence of tropical storm data. SCUBA cabinet meetings in the Maldives make a point about rising sea levels, but there is very little real data from tide gauges.

    The impacts of climate change and/or of combating those impacts are going to cost trillions. Why not spend a fraction of that on putting together some real data?

  18. From the point of view of scientific understanding there are many different types of complexity.

    One type of complexity is temporary, related to the momentary state of science, and is resolved by a new and better theory.

    Another type is due to the large number of individual elements, each expected to follow known laws of nature. Furthermore, this type is characterized by the limited role that each individual element has and by the uniformity of the different parts of the system. This combination typically leads to a situation where statistical mechanics or similar approaches can describe the large-scale properties of the system well.

    But then we have the inherently and irreducibly complex systems, where neither a new fundamental theory nor statistical methods can resolve the whole. There are perhaps many subsystems well described by statistical mechanics, but these subsystems form interacting structures at all scales from molecular to macroscopic, which may extend to global or even larger scales. For these systems no resolving theory is likely to ever appear, and no macroscopic variables are likely to be fully controlled by simple statistical laws. It’s possible to learn ever more about these systems and understand their subsystems and other details better and better, but they can never be fully understood in the same spirit the two above types can be understood. The Earth system is certainly one of these inherently and irresolvably complex systems.

    When complex systems are studied, different data mining techniques are used more and more. It is not always noticed that the methods used belong to the class of data mining. Data mining methods are problematic because they have a tendency to produce spurious results. Using the results of such methods in forecasting is risky, and total failures are common.
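
    As a toy illustration (my own, using pure noise rather than any climate dataset) of why unguided mining tends to mislead: screen enough unrelated series against a short target record and some of them will correlate strongly by chance alone.

        import numpy as np

        rng = np.random.default_rng(0)
        target = rng.standard_normal(30)              # a short "observed" record, 30 points
        candidates = rng.standard_normal((1000, 30))  # 1000 unrelated candidate predictors

        corrs = np.array([np.corrcoef(c, target)[0, 1] for c in candidates])
        print("strongest correlation found by mining:", round(float(np.abs(corrs).max()), 2))
        print("candidates with |r| > 0.4:", int((np.abs(corrs) > 0.4).sum()))
        # Every series here is noise, yet the search reliably "discovers" predictors
        # that would look impressive if presented in isolation.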

    For inherently complex systems, complex models are often the only known way to gain more comprehensive understanding. As complex models are also prone to produce spurious results, much effort must be put into understanding the models. Studying models is an essential part of the research, and only a fraction of that is in the form of comparison with data. Singularities were mentioned above in some messages. It’s true that real singularities are rare or totally absent in the real world, but they are very common in models, both in relatively simple models that are idealizations of the real physics and in complex models, where they appear typically in the dynamics. One important part of the research on the models concerns these singularities. Non-physical singularities must be removed or circumvented, but in a way that does not make the model all too smooth and stable.

    The fact that complex models are needed in describing the real system can be and must be accepted, but the problems created by their use must also be acknowledged. Finding the right balance on this point is one of the main sources of controversy in climate science.

  19. “”What we have is endless discussion about a minimal number of bricks. Each time a rare new brick is produced, we spend years twiddling with the exact placement of the brick, analyzing the validity and benefits of its manufacturing method, arguing over the figure that goes in the nth decimal place of its dimensional measurements, devising schemes to record its exact color, and cetera and cetera.

    Models are a great example of this hypo-brick approach. Models consume very few facts and produce none, yet they’re one of the primary investigative tools in climate science. They’re built entirely on an edifice of postulation that’s hardly changed in two decades, despite the reams of supposedly ever more reliable predictions.””

    Any bricklayer knows that even if you figure out how to place every brick to complete a wall, without reinforcement it will fall over in seismic activity.

    The brick structure produced so far by GCMs has no reinforcement in observed reality; it is crumbling in the face of skepticism currently registering 8.0 on the Richter scale.

  20. No point in whingeing.
    Just concentrate on the essentials:
    http://www.vukcevic.talktalk.net/CET-SW.htm

  21. Judith Curry

    Based on the cited book review (haven’t read the book yet) David Weinberger’s ”Too Big to Know” summarizes the dilemma in all sciences of having too much (often conflicting) data, or “brick-facts”, as the author calls them, and the role computers are playing in generating this data overload. There are ”too many ‘brick-facts’ and not enough ‘edifice-theories’”.

    One reason cited for this is,”computers have become exponentially smarter”. IMO this is a common misconception. They have not become ”smarter” at all. They are just ”able to perform a greater number of different functions at an increasing speed” – but they are still no ”smarter” than the people who are programming them.

    Weinberger is writing about science, in general, but a practical counterargument to the problem of too much data, which is specifically related to climate and weather, can be read in Mike Smith’s ”Warnings: The True Story of how Science Tamed the Weather”.

    Instead of a brickyard full of haphazard piles of largely useless data, Smith’s book describes the step-by-step search for specific knowledge about tornadoes, which could be used to save lives and property by providing early warning.

    Here we had a real down-to-Earth problem in search of information.

    In today’s climate science we have mountains of scattered information in search of a postulated problem.

    The tornado deaths to be avoided were real – and the facts that were painstakingly gathered helped reduce this threat.

    The contrast with today’s brickyard full of data being selectively used to create an imaginary future threat couldn’t be greater.

    The added notion that we can actually change our planet’s climate at will and with it, its weather, is a step further from reality.

    You point out the challenge that we need to increase our efforts to ”better assess our knowledge level of the climate system and confidence in conclusions drawn from enormous data sets and complex system models”. This is certainly correct and it will involve gathering several more piles of bricks as well as analyzing those we already have.
    But we need to stop fooling ourselves into thinking we can predict (or even alter) the future climate of our planet. Those tasks are still a number too big for us despite our brickyard full of “brick-facts”.

    Max

    • I think Mike Smith’s book should be in every library in the world. It’s a handbook for adaptation.
      =========

      • kim

        I actually got Smith’s book after you recommended it on another thread. It is a great read and a “handbook for adaptation” (as you say).

        For me the big contrast with today’s “mainstream consensus” climate science is that Smith describes the search for data to help solve a real problem rather than using selected piles of data to create a virtual future problem.

        I also liked the author’s modest assessment of himself and his knowledge – a breath of fresh air when compared to the arrogance of a Trenberth or a Hansen.

        Thanks for the tip.

        Max

      • So humble be he
        In the face of tornados.
        So curious, too.
        ==========

    • Apparently the brickyard is short of mortar.

  22. Labmunkey @ 5.23 am (I can’t quite access the reply button to your comments)

    Or to put it another way, you do not need to understand how electrical impulses travel across nerves to make muscles contract in order to understand that the heart pumps blood around the body.

    I guess what I’m saying is that it is not the amount of knowledge or data available that is the problem, but rather the individual’s (or scientist’s) ability to understand what is relevant and what isn’t (to the particular facet they are looking at).

    Very true as far as it goes, and good enough much of the time. On the other hand, taking your specific analogy further, it does help to know that heart muscle is only partly controlled by the nervous system, that it has its own electrical conduction system, which can be damaged following a heart attack leading to arrhythmia, is only partly overridden by the autonomic nervous system, and goes kaput if you happen to have too much potassium in your bloodstream. All this is vital information if you happen to be a cardiologist or even a humble family physician. If on the other hand you are part of an execution team (bad-taste analogy, I know), the bit about potassium is the one single important fact – that’s how you stop the heart beating when you administer a lethal injection.

    In short, the necessary complexity and depth of understanding of a topic depends upon the context. The problem for climate science lies in the reality that the decisions that need to be made lie in the hands of politicians (few know much about science – most come from liberal arts/business/economics/political science/labour union backgrounds) who have to communicate to a public, including those with an even slenderer knowledge base, all motivated by short-term gains (re-election and the hip pocket).

    Governments worldwide consequently resort to arguments that are “clearer than truth.” Sometimes they get away with this. Otherwise, they leave it to the next government to clean up the mess. This is but human nature.

    Scientists unfortunately are also human and equally tempted to frame their perspectives in language “clearer than truth” if they see their work as having ramifications for the well-being of the human race. Stephen Schneider captured this well in his famous quote about the need to strive to be both truthful and effective when engaging in advocacy.

    On the other hand, when speaking of advocacy, it’s well worth remembering that the best trial lawyer arguing his brief must maintain powerful awareness of the weaknesses of his case.

    • chris1958: In short, the necessary complexity and depth of understanding of a topic depends upon the context.

      I have rather enjoyed your interchange with LabMunkey.

      In the neurosciences/behavior field: if you want to know about animal learning, you do experiments on whole animals with learning tasks; to understand the neural basis of learning, you need experiments on components of the neurons, and simulations of neural networks such as those by Eugene Izhikevich and many others. The latter are not quite capable of predicting the former, but the latter are like the knowledge systems described in the feature article.

  23. Powerful computers give a false sense of security to any modellers, including those who have constructed nonsensical models, or erroneous models of complex systems which are beyond the ken of the modellers.

    One can’t help but suspect that the models began by trying to measure the impact of GHGs on temperature, created some equations (2*CO2 = 3C) and then added X million lines of code around them to deal with natural variation and feedback. Instant confirmation bias!

    And then the computer models become too complex to understand! Hansen attributed a presumed mistake in ocean mixing shown by all models to “some shared ancestry in their code”.
    ( http://pielkeclimatesci.wordpress.com/2011/10/27/candid-comments-from-global-warming-climate-scientists/ ). IT types everywhere are familiar with dubious coding modules being ‘cut-n-pasted’ around different systems.

    Meanwhile, the banks spent a fortune showering money on physicists and mathematicians to model the world, and were therefore convinced that everything that has happened financially in the last 5 years was impossible.

    In short, complex systems are beyond computers unless the major parameters are already pinned down; computer models are themselves often beyond the grasp of the human mind.

    Unfortunately, at the end of the process, scientists (or economists or bankers) look at the pretty graphics that the computer produces and say ‘Now that can’t possibly be wrong, can it?’

    False…sense..of..security……….

    • “One can’t help but suspect that the models began by trying to measure the impact of GHGs on temperature, created some equations (2*CO2 = 3C) and then added X million lines of code around them to deal with natural variation and feedback. Instant confirmation bias!”

      Clear demonstration of your own confirmation bias.

      • Louise,

        Bias is not understanding the science in the first place, and putting faith in a theory that the experts tell us should not be questioned.

    • cui bono said, “One can’t help but suspect that the models began by trying to measure the impact of GHGs on temperature, created some equations (2*CO2 = 3C) and then added X million lines of code around them to deal with natural variation and feedback. Instant confirmation bias!”

      There is nothing wrong with starting a model based on an initial estimate. If confirmation bias exists, it shows itself in how the modelers deal with the “AH HAs” and the “AW Sh$ts”. It is impossible for the initial estimate to be right. Observations will fall outside of the range; that is the purpose of the baseline estimate, to see what needs to be fixed or modified. The confirmation bias is when the modelers assume that their predetermined unknowns are the only significant true unknowns and adjust those to make the fix.

      In the initial model development, clouds, aerosols and natural variability were the main unknowns. The modelers focused mainly on clouds and aerosols, neglecting natural variability. That is confirmation bias, since a greater impact of natural variability is an “AW SH$T” when they are looking for “AH HAs”. :)

  24. Tomas Milanovic

    It’s true that real singularities are rare or totally absent in the real world, but they are very common in models, both in relatively simple models that are idealizations of the real physics and in complex models, where they appear typically in the dynamics.

    Not so. Singularities abound both in the real world and in models. Specifically, complex systems are often organised by/around singularities.

    One typical example in classical physics is the non-steady-state heat equation in a rectangular radiating wall.
    The derivatives are not continuous at the corners, and this often looks like a “detail” to a layman because, as there are just 4 corners, these 4 annoying points can’t be a big deal among the infinity of other friendly points in the wall.
    Yet it is a real problem for numerical models, and one often has to adjust the solving methods by hand to get rid of these 4 singularities and obtain reasonable solutions.
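
    A minimal finite-difference sketch (my own toy, with arbitrary numbers) of the kind of hand adjustment involved: at the four corner nodes two different boundary conditions meet, and the code has to pick one.

        import numpy as np

        nx, ny, dx, alpha = 60, 40, 1.0, 1.0
        dt = 0.2 * dx**2 / alpha        # inside the explicit 2-D stability limit dx^2/(4*alpha)
        T = np.zeros((ny, nx))          # temperature field

        for _ in range(5000):
            # explicit update of the interior with the standard five-point Laplacian
            T[1:-1, 1:-1] += alpha * dt / dx**2 * (
                T[1:-1, 2:] + T[1:-1, :-2] + T[2:, 1:-1] + T[:-2, 1:-1] - 4.0 * T[1:-1, 1:-1])
            T[:, 0], T[:, -1] = 100.0, 0.0                     # fixed-temperature left/right edges
            T[0, 1:-1], T[-1, 1:-1] = T[1, 1:-1], T[-2, 1:-1]  # insulated top/bottom (zero gradient)
            # At the 4 corners the two conditions disagree; the ordering above silently lets the
            # fixed-temperature columns win -- an arbitrary choice made exactly where the
            # derivative is discontinuous.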

    A second, more telling example is the Rayleigh–Taylor flow. Here the problem is the opposite. There are no singularities in the model, and it comes to the surprising conclusion that a Rayleigh–Taylor flow doesn’t exist.
    Yet we observe that not only does it exist, it is a very complex example of a spatio-temporally chaotic flow.
    The reason for this macroscopic flow is microscopic instabilities that happen at scales far below the usual Navier–Stokes resolution.
    They happen at a scale where the real world is singular everywhere, because matter can no longer be considered continuous.
    This flow is also a good example where both the macroscopic representation fails (because the continuity requirement fails) and the statistical representation fails (because there is no kind of invariant pdf).

    One of the frequent features of complex (chaotic) systems is approximate self-similarity. The usual scientific method, consisting of reducing a complex problem to simple independent problems, stops working because there is no characteristic time-space scale where the most important things happen.
    The smallest scales interact with the largest scales, and neither can be neglected as “irrelevant”.
    In the already mentioned brain example, individual microscopic neurons interact with macroscopic thoughts and inversely.
    In the climate, the local configuration of the fields interacts with large-scale structures (e.g. oceanic oscillations) and inversely.

    I am not so pessimistic as to believe that these problems will never be solved.
    But I believe indeed that the brickyard image is amusing and probably relevant.
    For this kind of complex system we don’t really need megatons of data fed into inadequate or simply wrong models.
    We need the right paradigms that can deal with systems that are neither reducible to elementary individual elements nor stochastic.
    As this kind of system was discovered only recently, it is normal that we do not (yet) have a clue how to handle them correctly. But there are ideas – fractal geometry, ergodic theory, Hilbert spaces.
    We will surely find one day one that works, but it is true this work should take priority over piling mountains of bricks in the yard.

    • Tomas,
      There are near-singularities in the real world, but those are usually, if not always, damped by some phenomenon when studied accurately enough.

      Models with singularities are often very useful and good descriptions of the real system, but only when used far enough from the singularity of the model.

      • Pekka,

        You’re right.
        Every area has to have its own model in order to understand the mechanics and influences in each individual area.
        Our current approach is to slap together one massive model with a single calculation. This misses many areas that are not considered and are therefore ignored as of no significance.

  25. Judith,

    In the current science forum of the single calculation, much science is not considered.
    So it is ignored as of no consequence to the temperature data, even though it influences the circulation of temperatures.

    Why should I contribute?
    There are many physical areas of mechanics that defy current scientists’ theories, as scientists are not trained in the area of mechanics.
    I created many different items that would be of great value to our current society. A perpendicular compass is far superior to our regular compass, which has restrictions due to its flaws. Power generation by individual energy is also far superior to bulk harvesting and losing vast amounts of energy.

    Do you know why an atomic bomb is impressive on the planet surface and a dud in space? NASA knows this but the public does not.
    Still in the areas of understanding planetary mechanics.
    These all go to the same area of understanding the limitations of individualized fields that have boundaries.

  26. Larry Osterman (a developer at Microsoft for 26 years) talks about the size and complexity of Windows beginning at minute 22 (approx.). “Don’t underestimate the value of the network, the personal network.” (minute 27 approx.)

    http://channel9.msdn.com/Shows/Checking-In-with-Erik-Meijer/Checking-In-Larry-Osterman-26-Years-of-Programming-at-Microsoft

    • It didn’t help that Windows was built around DOS, which Microsoft purchased from someone else back in the day without adequate documentation. They don’t completely understand the inner kernel of their flagship product.

      Is there an analogy here for climate science?

      • If you had clicked on the link provided, you would have learned the following:

        The challenge is that at some point a code base becomes too large to be comprehended … When I was working on NT3.1 it was possible, hard but possible, for someone to truly know all of Windows … one individual could understand all of the interactions, all of the components … I know of at least four different people on the kernel team that knew every aspect of the memory manager, the object manager, the file system, the I/O subsystem … it was possible for an individual to actually understand that.
        Nowadays … it can’t be done.

        You asked, “Is there an analogy here for climate science?”
        From Weinberger quote:

        The problem — or at least the change — is that we humans cannot understand systems even as complex as that of a simple cell … Cellular systems consist of a set of detailed interactions that can be thought of as signals and responses. But those interactions surpass in quantity and complexity the human brains ability to comprehend them.

        We have developed over many years software systems that are quite complex — so complex that one human brain can’t build, change, maintain or control them. Perhaps there are lessons that climate science can learn from computer science and software engineering in order to understand their complex systems.

      • The difference is that the software people realize that this is the nature of the beast, and are grappling with how to keep a project under control that nobody understands fully. And yet, Windows has some serious competition from open-source software, which has even less central coordination. At some point, to build devices that function, we have to abandon the top-down paradigm where there’s somebody who understands the big picture, and learn to work with the evolutionary paradigm, where stuff just happens.

        Open source projects usually have a benevolent-dictator-for-life who has the big picture, so it’s really a hybrid of the top-down and self-organizing paradigms, but I’m sure Linus Torvalds has no idea how the USB driver for the Brother 2040 printer works. He doesn’t need to.

        But as these things get bigger and bigger, even the BDFL won’t know what he ideally should know. We’re just going to have to get used to doing business that way. That means things are going to look less and less like they were intelligently designed, and more and more like they evolved from the ooze.

        Science is going to be like this, too. It has to.

      • randomengineer

        The difference is that the software people realize that this is the nature of the beast, and are grappling with how to keep a project under control that nobody understands fully.

        I see this as somewhat misleading.

        I routinely work on 1M+ LOC projects that are big enough that sections of code I created 36 months back are black boxes, in that I have no recollection of the details. If someone needs to know something, I too have to refer to the documentation; there is no instant recall of minutiae. In that sense it’s also too big for me, and I’m the writer. This is common.

        On the other hand this is not a bug, but a feature. Ever since the dawn of time the premise has been that modules (or objects, if OOP) are perfected for the smallish task they are responsible for, and all that is necessary for their use is a descriptor of I/O and parameters/limits. Thus unless there’s a problem suspected in a base object, it’s not necessary to know how e.g. the FFT calculation object works in order to use it. A lot of software “big picture” meetings have stuff broken into functional blocks (and these are comprised of other functional blocks if you want to look inside) to convey the big picture. So yeah, Torvalds has no idea how the USB driver works and never needs to, unless it is shown that this particular function block/object is breaking something.

        But as these things get bigger and bigger, even the BDFL won’t know what he ideally should know.

        I can’t agree with this. The purpose of modularisation (functional blocks within functional blocks) is to be able to adjust picture size.

        If I may, I think the problem with the climate sciences, re the article premise, is that there aren’t really many bricks at all; in other words there are too few reliable low-level function blocks, such that a GCM subsection would include e.g. a known-to-be-perfect (or perfect enough) functional block of [pick something which is now called “parameterisation”] whose reliable output feeds into other functional blocks. Anything that is parameterised isn’t a reliable brick at all. If it were understood there wouldn’t be parameterisation (i.e. guesses). A crude sketch of the distinction follows below.
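
        A crude sketch of the distinction (my own caricature, not real GCM code): a validated block exposes only its I/O contract, while a parameterised block is a tuned guess wearing the same interface.

            def radiative_transfer(column_state):
                # stand-in for a well-validated functional block: physics that has been tested
                # against measurement and is trusted through its I/O contract alone
                return sum(column_state) * 0.01

            def cloud_fraction(humidity, tunable_constant=0.7):
                # stand-in for a parameterisation: the functional form and the constant are
                # guesses, adjusted until the large-scale output "looks right"
                return min(1.0, tunable_constant * humidity)

            def model_step(column_state, humidity):
                # the caller cannot tell from the interfaces which block is a brick and
                # which is a guess -- which is the point being made above
                return radiative_transfer(column_state) * (1.0 - 0.5 * cloud_fraction(humidity))

            print(model_step([280.0, 275.0, 260.0], humidity=0.8))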

      • Random, that’s all well and good as long as you’re 100% sure that module xx that you’ve been using successfully for three years is really 100% functional. What creates the problems is when you learn after three years that it really isn’t, because it wasn’t ever tested fully.

        I can still make Windows give me the BSOD. Just had one a few days ago. Why is that?

      • randomengineer

        I can still make Windows give me the BSOD. Just had one a few days ago. Why is that?

        In many (most?) cases it’s because a driver was provided by a mfg that didn’t follow the rules. In my company we have separate bits of code to get around crap video drivers (well known companies!) because when the customer sees the app go belly up he thinks it’s you because M$ word doesn’t do this — not realising that M$ word doesn’t tickle that particular API function that seemed to fail. You won’t win trying to convince the user that uber popular brand X video card that sells umpteen million boards has a crap driver, but that IS the reality.

        And yeah there can be module bugs no matter how much you try to avoid it. As when you are calculating in mph and the module you feed to wants kph, whooooops, goodbye super expensive mars probe.

  27. Big computers and big databases are useful tools to science (and other enterprises), but *only* that, they are not the science itself. And lately people seem to be throwing piles of the bricks they generate into the public domain to support this, that, or the other, as though these bricks were inarguable science. My humble opinion is that that like kids hooked on games we’re playing these big tools too much, and should row back a little to concentrate more on fundamental theory, informed and assisted but not dominated by these tools.

    It seems to me that very simple laws can result in massive complexity (e.g. constructal theory, natural selection), and this is likely to be universal (once they finally drill down to the ‘most fundamental’ physics :) It isn’t necessary to know all the details unless one wants to build something or predict something within such a system, at which point computers are useful. It seems to me unlikely that climate will be an exception – just a case of many simple things combined, for which we don’t yet know the recipe or indeed some of the ingredients.

    Incidentally, after 32 years in the computer industry I’m also pretty certain that computers have not gotten exponentially ‘smarter’. Not only that, but in the last 5 years or so, as fundamental limits in the thermal management of silicon have begun to bite, most performance increases have essentially been achieved through an increase in parallelism. This means that if your task or problem cannot be beneficially distributed across more parallelism, you aren’t going to make further purchase on it unless or until someone makes a new breakthrough in computing technology. This issue is why you may have noticed Intel and other silicon vendors giving you 2, then 4, and now moving on to 8 and 16 etc. processor cores lately. It isn’t because they have developed an uncharacteristic generosity regarding processor cores; it is because they are simply unable to provide performance increases any other way, at least without your laptop or desktop catching fire, or needing a fan the size of your desk to cool the machine :)
    Andy

  28. Reading through this thread, I didn’t see the word “insight” – that peculiar thought that makes a whole out of disparate information in a single “aha.” I would guess most of us have had many aha moments. Aha moments, or insight as I label it, come when old information – experience and previous learning – mixes with new information. The mixing of old with new usually occurs in layers of thought processes. When I go to bed and awaken with an insightful moment, saying I “slept on it” belies the multiple layers that are at work, ongoing, all the time, briefly interrupted by conscious thoughts. It’s kind of like playing three-dimensional chess, or watching a mountain river flow around rocks and shores with its swirls and eddies.

    The sights and sounds and the feel of wind on your face as you watch the water flow sometimes delay the construct of fluid dynamics because of the details the senses are providing at the moment. Standing on the banks of that mountain river, it would appear that there is too much information – emotionally overwhelming. Yet, having observed the river flow, gone to bed exhausted from the hike and mountain air, one gets up in the morning feeling elated, the emotional equivalent of insight. One comes away with a greater “appreciation” for the mountain river flowing. It is not until some time later, when confronted with another problem dealing with non-linear flow – let’s say airflow – that the mountain river turbulence and air turbulence and clouds and weather and oscillations can all be related in some way: turbulence and turbulent flow. One takes some of the old, mixes it with selected portions of the new, and appreciates that there are connections among many areas of nature.

    Studying one part of nature may be helped by using some of the tools or ideas from another part of a similar process. I know that crossing boundaries frequently requires treaties and all (don’t bite me if I don’t say it just the way you say it), but I’m just rummaging around in your toolbox to see what I might be able to use to work on my project. Oh, by the way, feel free to take a look in my toolbox – I know it’s a mess, but just ask if you need something and I’ll help you look. More information is good, just as long as you trust the information. If you can trust the information and you combine it with your old information, there is a chance for insight.

  29. Computers are not smart. They operate software that may be clever at doing what it was designed to do. They have become vastly faster at operating software, but they are still no smarter than they were in the early days of computing, back at the dawn of the Atomic Age in WW2.
    The software is very clever now, but clever should not be confused with smart.
    Think of computers like cockroaches: even if you get millions of them into a room, and the combined weight of their neural systems outweighs the human brain, no one could honestly say they are smarter than a human. You would just have a room full of roaches that badly needs cleaning.

    • randomengineer

      The software is very clever now, but clever should not be confused with smart.

      This is no longer quite true. Due to the implementation of fuzzy logic and other bits and pieces of quasi-AI (e.g. fuzzy cascades, similar to the information cascades discussed in another thread), some software is capable of delivering output that is smarter than what humans produce, and not necessarily predictable output. Fuzzy logic allows the machine to do something other than IF/THEN branching, and IF/THEN is the nominal limiter in most such discussions. Fuzzy logic allows for interim decisions that “lean in a direction” as opposed to a simpler branching tree (a toy sketch of the contrast follows below).

      At my company, fuzzy logic is what allows a user to design stuff using a notion of importance ranking, and to deliver an output that is not necessarily what said user would have derived. Being able to duplicate the user’s desire is clever; being able to deliver beyond what the user is thinking is smart (relatively speaking).
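
      A minimal sketch of the contrast (a toy of my own, not our product’s code): hard branching jumps at a threshold, while overlapping membership functions let the output lean in a direction.

          def cold(t):   # membership in "cold": 1 at or below 15 C, 0 at or above 25 C
              return max(0.0, min(1.0, (25.0 - t) / 10.0))

          def hot(t):    # membership in "hot": 0 at or below 15 C, 1 at or above 25 C
              return max(0.0, min(1.0, (t - 15.0) / 10.0))

          def crisp_fan(t):
              # classic IF/THEN branching: an all-or-nothing jump at the threshold
              return 100 if t > 25 else 0

          def fuzzy_fan(t):
              # weighted blend of two rules ("if hot, fan=100"; "if cold, fan=0"),
              # a crude centroid defuzzification (these memberships always sum to 1)
              return 100.0 * hot(t) / (hot(t) + cold(t))

          for t in (16, 20, 23, 26):
              print(t, crisp_fan(t), round(fuzzy_fan(t)))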

      • randomengineer,
        Good points. As a disciple of Vernor Vinge, one should not write off the potential of advancing technology with regard to human performance.
        I do wonder how bounded the fuzziness is, but surprises that are not simply malfunctions are a good milestone for a distinction between clever and smart.

      • In the end, fuzzy logic is a calculation. It’s deterministic. It’s no smarter than Boolean logic, just more subtle. It’s a useful tool. It’s not smart.

      • randomengineer

        P.E., read Wolfram’s book A New Kind of Science, especially re emergent properties, and then look up Vinge and the discussions re the singularity.

        Simply put, a *single* fuzzy routine can be accused of being “deterministic” but cascades a half dozen levels deep are not.

        Atoms are deterministic re valence electrons etc, but then combine these and then the combinations of combinations and keep going; at some point “deterministic” atoms create life as an emergent property, and this isn’t deterministic.

      • That’s why pseudo-random number generators are pseudo. It’s impossible to make a digital computer non-deterministic. The best you can do is either fake it with a pseudo-random algorithm or use an analog measurement of physical noise. You can model chaos, but for any starting seed, it’ll always play out the same way.

        Does it matter whether the result is truly random or only as unpredictable to the programmer and the user as a real random outcome?

        It’s also common to seed the pseudorandom number generator with something external to the program, like the precise moment when the seeding occurs, at microsecond accuracy. That makes the starting point truly random in practice, while all generated random numbers are then determined by this single random event. This is different from real random numbers, but still the result is random.
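
        A small sketch of both points, using only the standard library: the generator replays exactly for a fixed seed, and a clock-based seed makes the starting point unpredictable in practice even though everything after it is determined.

            import random, time

            random.seed(42)
            first = [random.random() for _ in range(3)]
            random.seed(42)
            second = [random.random() for _ in range(3)]
            print(first == second)        # True: same seed, identical "random" sequence

            random.seed(time.time_ns())   # seed from the clock at nanosecond resolution
            print([random.random() for _ in range(3)])   # unrepeatable in practice, fully determined thereafter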

      • Pekka, the point was about fuzzy logic. I can imagine fuzzy logic systems behaving chaotically, but not randomly.

        In actuality, it doesn’t matter in the biosphere, because organisms always respond to their surroundings, so no two ever behave the same way, even though they may be more-or-less deterministic.

      • P.E.,

        My comment was on your reference to pseudo-random numbers.

        I haven’t been following developments in the use of fuzzy logic, but based on what I learned years ago I would agree that it’s not an approach that’s likely to lead to creative intelligence. The calculational rules of basic fuzzy logic are an ineffective way of combining uncertainties. The rules used are computationally light, but in many cases seriously inferior to convolution-type solutions of probability calculations.

  30. I don’t see a well formulated epistemic problem here, just a lot of scary noise about complexity. Our collective knowledge has been greater than our individual knowledge ever since there were two people, so that is nothing new. Having billions of people knowing stuff is just a scale change, not a new problem.

    As for computers, we passed this point decades ago. My first journal article was entitled “The Engineering Computer Revolution, Over or Just Beginning?” That was in 1972 because we had pretty well finished computerizing everything we did in my branch of civil engineering. But the revolution was not over because we were facing new computerized stuff that was beyond human ability to compute, beginning with finite element analysis. We had to learn when to trust the computer, because we could not check its work. And we did learn.

    Opaque computer systems and models are now everywhere, in science, in engineering and in human life. It is not a problem per se, any more than cars are a problem, or fire for that matter. Most importantly the issues in the climate debate do not flow from the increasing complexity of the systems that support human life on earth, including science. The climate issues have to do with the specifics of certain systems, such as climate models, statistical models and satellites. By analogy, the problem is not with the car per se, but with the car they are trying to sell us.

  31. The people who initially started developing GCMs appear to have been working on models for purely scientific studies. The model outputs pointed to issues that they believed to be a concern. It was said that the models may not be that accurate in near term predictions, but over the very long term it was believed that they would provide fairly accurate representations of the future climate. The models were never initially developed to be used for the formation of government policy development.

    The next step became more interesting and more political than scientific. Certain “climate scientists” saw the outputs of these models and thought that they should be used to advocate a change in behavior that met their preconceived political bias. They worked quite hard to hide or minimize discussions on the weaknesses of the models or their unreliability. They proposed “solutions” to the “problem” that they highlighted that advocated more developed nations giving funds to less developed nations. This idea was readily accepted by the leadership of the less developed nations, because human nature being what it is, we all like to get something for nothing. The idea of this transfer of wealth certainly impacted the initial general support of the concept of cAGW by many in currently less developed countries.

    The current GCMs are not really restricted by computing speed, although running the models does take vast computing power. It is just that other people have other, more valuable uses for the same computing resources. The models are really limited because the software is currently wrong and does not and cannot accurately predict the future climate with acceptable, repeatable accuracy. There are more variables than we understand, and we do not understand the different weights that should be applied, or when.
    Imo, the last 10 years have been largely wasted in trying to “refine” or “improve” or develop new GCMs vs. developing better regional models that are designed to predict climate or weather conditions accurately enough for the next 20 to 30 years so that they can be used for government policy decisions. That is more the timeframe relevant to government policies, and it would then be much more possible to have a model that could be properly evaluated for accuracy.

    • Absolutely right Rob.

    • When the models consistently exhibited high climate sensitivity you bet people started realizing the implications of that had a bearing on policy.

      I fail to see how the models can be dismissed. They are overwhelmingly showing a significant amount of change in climate as CO2 doubles. The details are largely irrelevant to that general point. It’s enough to base policy on. People today who take the information we have and turn out wrong will look a lot less foolish than people who dismiss the information we have today if it turns out right.

      If the Earth does warm 2C by 2100 a lot of people are going to look back at the archived blogs from the early 21st century and wonder why the hell so many people were dismissing the best information we had available.

      • Lolwot

        Warming is not necessarily a bad thing. The “fear” is based on what would happen to other issues that impact the human condition as a result of the warming. The largest of these fears is probably the one related to changes in rainfall. GCMs predict that certain regions of the planet will be drier as a result of the warming. The problem is that these same GCMs have been shown to be highly unreliable in making accurate predictions when compared to actual observed rainfall amounts.

        So the question to you is: why should a model (or in this case a bunch of models whose results were averaged) be used as a basis for policy-making decisions when its outputs have been shown to be unreliable and inaccurate?

        Would you want to fly in an airplane if the flight control computer had been shown to crash the plane 50% of the time?

        So don’t use the GCMs for the detailed predictions. Use them for the broad one: that we are about to face a very large and fast change, unprecedented for millions of years.

        Then just consult a list like this of what *could* go wrong
        http://www.numberwatch.co.uk/warmlist.htm

        and try to convince me that out of all of those it’s likely none of them will.

      • Lolwot

        Sorry, but you really seem to be a bit of a silly alarmist nitwit with that link. Please look at the link you posted again and be realistic. Go through that list and try to highlight ANY items that are a valid concern, especially if proper infrastructure were constructed to support humans in that area.

        LOL, your list linked more human prostitution to potential global warming, and you wish to be taken seriously?

      • Rob & Lolwot –

      It’s interesting. Both warmists and sceptics use that list. The latter because it is a hilarious parody of fear-mongering and the kind of utter nonsense that gets peddled as ‘research’. The former because they don’t realise the list was compiled for satirical purposes and only includes ridiculous claims.

      • lolwot,

        gotta love the one about the Minneapolis bridge collapse being possibly due to climate change.

        I for one am willing to go on record and say I’m not worried about that one.

    • Re: “regional”: I once had passing contact with efforts to trace (compute) heat flow through a nuclear pressure vessel. This was an important issue, since the results were to guide real-world design.
      A new and more powerful computer was installed. The engineers cut the grid spacing in half. The computational load increased eight-fold.
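
      That factor follows directly from the geometry, assuming the time step was left alone: halving the spacing in each of three dimensions multiplies the number of grid cells by two cubed.

          refinement, spatial_dimensions = 2, 3
          print(refinement ** spatial_dimensions)   # 8: the eight-fold jump in computational load
          # With an explicit time-stepping scheme the time step usually has to shrink as well,
          # so in practice the cost often grows even faster than this.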

  32. Jeez

    When I was a kid my Mom used to say, “don’t criticize a farmer with your belly full of food.”

    I can’t believe some of you numbskulls are typing that computers are not smart. That’s some seriously bad karma.

    Kim is a bot.

    • Computers will never be as smart as kim. Probably not even as smart as a simple cell.

      • You mean like winning Jeopardy or beating the world’s best chess champ? That is some cell you got.

        But cells think while computers do not, not yet anyway.

    • I can’t tell whether you’re being sarcastic or for real here (no /sarc), but if the latter, maybe you are reading a bit too much into such comments. ‘Smart’ is a relative term after all, and besides, not being full of smarts does not mean they are not tremendously useful, for instance in facilitating blogs. But after 32 years of working with them and on them every day in my career, I’d say they are not real smart at this point. Intellectual performance per watt is pretty atrocious compared to a brain. Intellectual performance for any amount of watts is not great. But some narrow functions of course are wildly better than in people. And also, for computers as for people, a well-integrated bunch of them (e.g. for people a canny company or good research team) is a lot smarter than just one :)

      • steven mosher

        if you guess sarcastic with me you will be right more often than wrong.

        However, our experience with computers has forced us to redefine how we define intelligence and being human. Our experience with animals has forced us to redefine it.

        We can’t have pigs and dogs and CPUs exhibiting intelligence.

        For my part I think of man as homo ludens, so I’m not too invested in reserving intelligence as a defining human characteristic.

        Heck, if we did that, Robert and Joshua wouldn’t be human.

      • The problem with IQ is that we haven’t defined intelligence. I doubt that we will have a definition that everybody can agree with any time soon. Sorta like pron…

        Well, intelligence is a quantity, not a threshold, and not all cultures in all historic eras decided that pigs and dogs had so little of it. When comparing with CPUs it becomes more obvious that it has different types as well. I guess comparing with savant-syndrome individuals raised similar issues long before computers ever existed. Myself, I agree with ‘even by our own aspirations, we are but half-men’, which means the real intelligence is held by multiple higher-level structures, for instance memeplexes. I hope this perspective allows your two exiles to rejoin the human race :)

      • Mosher – toolmaking is used by some as evidence of being human. Would this mean that the parents of Joshua, Louise, Robert etc. are/were?

      • steven mosher

        Andy, that’s such a wonderful way of defining man, I think I will let Joshua and Robert back into the human tribe.

        Willard and kim… hmm not so sure

      • This TSR, Terminate and Stay Resident? Does this make sense?
        Turn & burn, it is the American way today.
        What a bummer, Right you guys?
        Anything for the cause…
        Say it ain’t so Joe.

    • Aw c’mon, Steven – don’t be so modest.

      It’s not the multi-jillion dollar computer that has the smarts.

      It’s (some of) the guys that are feeding it.

      • steven mosher

        I got started in programming when I was in a Phd program in English Lit.
        I was fascinated by two problems:

        1. The notion that one could read through a text to the author’s intention. As if there was a ghost in the machine that acted freely. From my standpoint I saw the production of a text as the output of a system of rules.
        You had rules for how letters combine to make words, rules for how words combine to make sentences, rules for how sentences combine to make ideas, and rules for how ideas combine to narratives, stories, poems.
        So, to prove that, I started to write NLG ( natural language generation)
        The first little gem was a program (back in the 80s) that generated poems. They went on my office door. Some were fooled. I’ve fiddled around with this thing since then, teaching it to write limericks and sonnets. I call that bot kim. Many of you think of him as a commenter; in truth he is just a set of rules.

        2. Ways in which we can measure the “rule breaking” in a text automatically. Basically, I came to a point where I thought my idea in number 1 was a load of crap, and that what made texts interesting was the way they broke rules and upset expectations (think Shannon entropy). This kinda led to the notion that in the creative arts and sciences one cannot distinguish between originality and a mistake… history decides that.

        I’ve been of two minds on AI.

      • steven –

        in truth he is just a set of rules.

        Would you mind elaborating on the set of rules that led kim to “connect the dots” to identify Obama’s “Muslim sympathies?”

      • steven mosher

        every program has a bug

      • Kim is me and ‘I’ is kim.

        Mosher is just an algorithm I taught to wear clothes

        ===========

      • So Kim is a bot that can be overridden

      • every program has a bug

        Which is why hackers hack.

      • So Mosher is just a ‘dummy’ that can be overdressed
        =========

      • Uh-oh. Project ‘Kim’ is getting away from Mosher. Or maybe the other way around. Remember Nomad.

      • Or the WOPR…

        Joshua, can you save us?

      • steven mosher

        kim, please check your code. my closet is full of black.

      • OK boss –
        I’m outa here
        ============

      • steven mosher

        Nomad, P.E?

        hehe,

        http://en.wikipedia.org/wiki/Creative_NOMAD

        someday I will tell you the real story about how the ipod was born

      • steven mosher

        kim was written as a TSR. Fricken thing won’t go away.

      • randomengineer

        TSR? Kim is *old* in computer years. It’s been 25 human years since I wrote one of those.

      • I actually have an old 6GB Nomad somewhere. Apple never invented anything, they just pretend that they did.

        I remember calling Creative support when the damn thing sorted all the tracks alphabetically. The guy there said they never imagined that anybody would actually rip CDs, let alone classical, where order matters.

    • steven,
      In no way am I disrespecting the amazing advances in computing.
      I am simply trying to point out that we are not at ‘smart’ at this point.
      The excellence that computer engineers, programmers and scientists have developed through their hard work is where the smartness lies. Not in the computers.
      Computers are excellent, clever tools which very smart people make do amazing things.
      I respect and admire good tools. I appreciate, respect and admire the maker of the good tool even more.

      • steven mosher

        I think it’s interesting the way intelligence gets redefined.

        if a dog could count would you call it smart?

        if a monkey could play chess and beat you would you call it smart?

        Fun questions. I have no answers, but generally I try not to type things that would piss off my computer. It’s got a memory.

      • Oh, really. My computer knows who’s boss. Every time it gives me crap, I threaten to reformat it and load Linux. Works like a charm.

      • steven,
        Great points.
        P.E.
        My son took a laptop and for a summer project completely reloaded it with Linux. He claimed it worked better. We only use Mozilla instead of IE on the home comp. Maybe that means we already have the comps intimidated enough?
        Here is why I am a bit shy about computers and smarts, by the way:
        http://mindstalk.net/vinge/vinge-sing.html

      • I used to use Linux on my main box. Then they “improved” the kernel and lost RAID support. That, and there are a couple of Windows-only apps that I need. You’d be amazed what you can get under unix, though. For example, a completely free schematic editor and PCB CAD package. Who needs training wheels?

      • Well the number one son never stopped taking tough challenges. He pursued his computer obsessions and got a good comp engineering degree from a tough school and is now pursuing dreams of conquest in the Bay Area. When lucky, I can follow what he is doing on a conceptual level, lol.

      • steven mosher

        PE.

        You might be interested in a company I once started

        http://en.qi-hardware.com/wiki/Main_Page

        First project was fun

        http://en.qi-hardware.com/wiki/Ben_NanoNote

      • steven mosher

        if you like to tinker PE

        http://en.qi-hardware.com/planet/

  33. –> “…facts — without regard to how they go together. Brickmaking…”

    If only that were true. In reality, the AGW believers have been fabricating facts and putting them together for purposes of fearmongering.

  34. –> “Third, computers have become exponentially smarter.”

    Even so, there is not enough computing power in the world to resolve a climate model containing all of the relationships between all of the variables.

  35. –> “Models this complex — whether of cellular biology, the weather, the economy, even highway traffic — often fail us, because the world is more complex than our models can capture.”

    It is as if it would burn the lips, sear the brain and turn the body of the writer into a pillar of salt if he were to utter the word ‘holistic.’

  36. –> “Aiming at universals is a simplifying tactic within our broader traditional strategy for dealing with a world that is too big to know by reducing knowledge to what our brains and our technology enable us to deal with.”

    This is why a socio-political culture founded on personal freedom, individual liberty, capitalism and personal responsibility is such a stunningly successful existential survival strategy.

  37. –> “No one says that having an answer that humans cannot understand is very satisfying. The world’s complexity may simply outrun our brains capacity to understand it.”

    Hucksters prey on human ignorance; however, as honest Abe understood, no hoax lasts forever–i.e., “You can fool all the people some of the time, and some of the people all the time, but you cannot fool all the people all the time.”

  38. –> “It exists at the network level, not in the heads of individual human beings.”

    Complete and total evidence of a complete ignorance of human history and of good and evil – or is this a simple example of self-defeating nihilism? Your choice.

  39. where much of our knowledge resides in complex computer models,

    Let me offer the opinion that knowledge “resides” in no such place.

    Does the knowledge to build a safe automobile reside in the Pinto, Jaguar, or Lincoln you see on the road? Computer programs are tools, built by teams of programmers that employ SOME of their knowledge, and OMIT by design a lot of their other knowledge to render something useful. There is much to learn about what was put in, what was left out, and what was done badly.

    • Perhaps people are confusing knowledge with information?

      • I think it is hard to argue that the differential form of Maxwell’s equations is about as pure a form of knowledge as can be found. Of course, it is useless without the additional knowledge of what a “curl” or “divergence” is, as well as “B”, “H”, “E”.

        Knowledge can be employed in a digital computer program. But the compromises that must be made to turn an analog process of field effects into digital computer representations mean that the “knowledge” must be compromised. Hopefully, the stuff you must leave out is insignificant and unimportant. But how do we, or any third party, KNOW that the implementation of that knowledge has not been corrupted?
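
        For reference, the macroscopic differential form being referred to, written out in LaTeX (the standard textbook statement, with D the displacement field alongside the E, B and H mentioned above):

            \nabla \cdot \mathbf{D} = \rho_f, \qquad
            \nabla \cdot \mathbf{B} = 0, \qquad
            \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
            \nabla \times \mathbf{H} = \mathbf{J}_f + \frac{\partial \mathbf{D}}{\partial t}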

  40. David Weinberger: We have a new form of knowing.

    That’s a good way to put it. Supplementing systems of equations and tabulations of data (e.g. the periodic table, the Balmer series), we have computer programs.

    David Young: I view this reliance on computer simulations as a potential death knell of science.

    It’s hard to argue with a claim modified by “potential”: CO2 accumulation has the potential to end human civilization, as does the immense debt of the EU, US, Japan, etc. Nevertheless, I would argue that the reliance on computer simulations has no potential to ring in the end of science. The computer program is a formal assembly of many pieces of knowledge, some theoretical and some empirical measurements, about how something in nature works. When the model is complete enough and the measures are accurate enough, the computer output is reliable enough to use as a guide in planning and designing. Computer simulations have been used in the design of aircraft and the planning of space missions; but they were subjected to much step-by-step testing before they were relied upon.

    What is lacking so far in climate science is sufficient evidence that the models are complete enough and the parameter estimates accurate enough to be reliable guides to the future. Quoting Trenberth’s published and private expressions, the models do not account for the gross flows of energy. Taking published model outputs, no model has yet correctly predicted the course of the spatio-temporally averaged global temperature 20 years in advance, much less 50. There seems to be widespread, perhaps not universal, agreement that the models cannot predict what effect additional CO2 will have on cloud cover, either in the aggregate or for any particular places. Taken all together, the evidence is that the models are either incomplete, inaccurate, or both.

    JC Comment: Expending more intellectual effort on the epistemology of too big to know, where much of our knowledge resides in complex computer models, is needed IMO to better assess our knowledge level of the climate system and confidence in conclusions drawn from enormous data sets and complex system models.

    That strikes me as too vague to be of any use. For the climate system, what’s needed are the traditional scientific tools, on a large scale. For example, testing theories (and parameter estimates) by using them to make novel predictions, and judging the goodness of the knowledge from the accuracy of the predictions. That element of epistemology isn’t new. Also public debate about all empirical and theoretical claims is not new. Also, identification of gaps in the knowledge and creating methods to investigate those gaps — that isn’t new epistemology, but it requires that at least someone has the psychological ability to recognize and admit to ignorance.
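
    To make that element of epistemology concrete, here is a minimal sketch of the kind of out-of-sample scoring being described; every number in it is an invented placeholder, not real model output or observations:

        import math

        # Score a forecast against observations and against a trivial "no change"
        # baseline. All numbers below are invented placeholders, not real data.
        observed = [0.12, 0.18, 0.15, 0.22, 0.30]   # e.g. annual anomalies (made up)
        forecast = [0.10, 0.20, 0.25, 0.28, 0.35]   # prediction issued in advance (made up)
        baseline = [observed[0]] * len(observed)    # naive persistence forecast

        def rmse(pred, obs):
            return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

        skill = 1.0 - rmse(forecast, observed) / rmse(baseline, observed)
        print(f"forecast RMSE : {rmse(forecast, observed):.3f}")
        print(f"baseline RMSE : {rmse(baseline, observed):.3f}")
        print(f"skill score   : {skill:.2f}  (> 0 means the forecast beats the naive baseline)")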

  41. When I read the extensive citation from David Weinberger, my intuitive reaction was “he can’t be a scientist”. I then looked up his biography, and it turns out he is a “philosopher by training”.

    I knew it! This is exactly what we get when a philosopher is let out of his cage for too long – a litany of abstractions that soar above the real world so high that they see reality from far too great a distance to discern the details. In the citation, the abstractions include “complexity”, “theories”, “interaction”, and “systems”, among others.

    Abstract concepts are a critical element of human (and electronic) thinking, but to exploit their value requires us to constantly compare their logic with real world observations to determine whether they are generating new insights or simply leading us astray. In the language of semantics, we must constantly both ascend and descend the abstraction ladder to test the consequences of our abstract thinking. Failing to do this can give us conclusions that are completely logical, but… well, here’s an example.

    In Weinberger’s narrative, the decision that reality shouldn’t interfere with philosophizing comes early. He introduces his perspective by referring to a 1963 letter from medical scientist Bernard Forscher lamenting the emergence of “too many facts”. (These apparently included the inconvenient data on HIV and AIDS, which would never have transpired in a world less encumbered by facts). Forscher and Weinberger are right, of course – biomedical information has exploded well beyond the capacity of individuals to comprehend it all. At the same time, human health has improved, life expectancy has increased, and our ability to prevent and treat heart disease, cancer, and infectious diseases (including AIDS) has grown dramatically, because we have been able to exploit the new information to formulate principles of treatment and prevention that actually work. When they are applied, people end up healthier.

    In the realm of climate science, abstract concepts that include irreducible complexity, uncertainty, epistemic error, and the like have value – but not unless one descends the ladder of abstraction to determine their relationship to observational data. To what extent do complexity and uncertainty determine the error margins in our ability to relate upper tropospheric specific humidity to an amplification of the temperature effects from CO2 forcing? How much error is related to sampling inadequacies in ARGO data? At what timescales do chaotic fluctuations dominate and when are these outweighed by long term trends?

    The list is almost endless. I don’t want to suggest answers to those and other questions, except to say that when we don’t try to descend the abstraction ladder, the abstractions can be used to justify almost any conclusion one wishes on the state of climate knowledge – this is one reason for interminable arguments in this and other blogs on how well we can understand, interpret, and generalize from available data.

    The problems Weinberger attributes to knowledge explosion are real, but they are problems we should welcome as we welcome any embarrassment of riches. I don’t dispute the need to address them, but I would be troubled if, in discussions here, we remain in the abstraction stratosphere too long without asking how well specific pieces of information can be used, despite complexity, to advance climate health in the manner that complex biomedical information too vast for easy comprehension has advanced human health.

    • Fred

      You complaining about long winded posts with little actual content is a bit funny given what you posted.

      • The main point is, however, stated clearly in Fred’s message and is valid: trying to reach conclusions through philosophical considerations, without an understanding of actual scientific work and processes, cannot tell us what is possible and what is not.

        Some early threads on this site discussed the scientific process and what has been written about it. Those threads mentioned books based on following what scientists really do and how they reach their conclusions; such work has added very essential points to the purely philosophical attempts to define the scientific process. Without such real-world data almost any conclusion can be drawn, and Weinberger’s text seems to fall into that trap.

      • Pekka

        You effectively summarized what he was trying to state in less than a third of the time.

      • The reverse is also true. Trying to do science without understanding philosophy results in a lot of self-deception. I see that as the more common error.

    • I knew it! This is exactly what we get when a philosopher is let out of his cage for too long –

      Kolmogorov and Gnedenko suggested that the philosophy and logic of the “British statisticians” were firmly rooted in the 17th century. One could extrapolate this to the western hemisphere.

      Nalimov, who became AK’s assistant director (after a decade in a Yamal gulag for arguing that sociological behaviour is irrational), was remarkably prescient in his 1980 paper, e.g.:

      One of the principal scientific tasks is to explain observational results according to the underlying factors (or mechanisms). The vast data accumulated up to the present show that this task in its general formulation cannot be unambiguously fulfilled either by the methods of “numerical spectroscopy” or by those of “logical spectroscopy.” Computers have only made the task more difficult. But now a new way seems to emerge: reducing information by means of unfolding it, i.e., presenting it through a set of models. However, this may be another illusion.

      A few words must be said about another approach to simulating complex systems by computers, which does not refer to “mathematical spectroscopy.” I have in mind the grandiose program of simulating five ecosystems: the desert, coniferous and foliage forests, the tundra, and the prairie, carried out in the United States in 1969-1974. Expenses for the research of only the three latter systems exceeded 22 million dollars, 8.6 million of which were allotted directly for the simulation, synthesis, and control of the whole project; 700 researchers and postgraduates from 600 U.S. scientific institutions participated in the project; 500 papers were published by 1974, though the final report is not yet ready. Mathematical language was used in the project to give an immediate (not reduced) description of the observed phenomena. A lot of different models were used which were divided into blocks with an extremely great number of parameters (their total number reaching 1,000), and yet it was emphasized that the models described the system under study in an approximate and simplified manner. The researchers had to give up any experimental verification (or falsification) of models; instead they used “validification,” which means that a model is accepted if it satisfies the customer and is particularly favorably evaluated if bought by a firm. At present, all these activities are being evaluated. Mitchell et al. (1976) conducted a thorough analysis of the material and evaluated the simulation of the three ecosystems in an extremely unfavorable way. From a general methodological standpoint, the following feature is important to emphasize: the language of mathematics, for the first time, is allowed to unify different biological trends, and this has happened without a generally novel or profound understanding of ecology. Mathematics was used not to reduce complexity but to give a detailed, immediate description. This is a new tendency in science. But where will it lead? The time is not yet ripe for a final conclusion, but scepticism is quite in order.

      At present there seems no reason to reject his final conclusions. Of course objectivity is open to question; maybe, Fred, you can point us to the “greatest hits” of 21st-century climatology. The top 5 will suffice.

      • I appreciate discussions, but I generally decline invitations simply to argue. Your last statement struck me as an example, simply because any answer could be counted on to elicit an argument. The excerpt you quoted, on the other hand, if I understand it correctly, illustrates my point that one can derive or infer almost any conclusion by talking in generalities. The critical task is to test those generalities against specific facts. In the case of climate, this includes evaluation of model simulations, but is not limited to them, a reality that I don’t think is sufficiently appreciated.

        This is why I find posts here that address detailed data and their interpretation to be more useful than posts that discuss philosophy without relating it to observational details in a quantitative manner.

    • Fred,
      This is an excellent post by you.
      Abstraction is good, but it is only great if we can apply it to the real world.

      • hunter –
        I agree. Long-winded, true, but along the right lines.

        However, I would say that the idea of ‘climate health’ worries me. It seems like a creeping extension of the ‘out of balance/disrupted’ meme that results from too much anthropomorphising by enviro-loonies. Pretty soon the climate will be ‘unhappy’ and looking for ‘revenge’….

      • Yeah, and if I were Mosher’s bot, I’d be worried about one of those lightning bolts.

      • P.E. –
        If you were Mosher’s bot?
        I hope you’re not serious ;)

      • kimi,
        I like Fred, no matter how long winded he can get. He is like an ethical Polonius.
        As to the anthropomorphization of environment and climate, you are spot on. How many papers and shows either deliberately or by default humanize the birds, the beasts and the Earth itself? Think of Lovelock and Gaia.

    • Fred, as a philosopher (of science) who has been out of his cage for a long time I find your insults telling. You probably lead the pack here in conceptual confusion. Abstraction escapes you, but it is the heart of science, and the soul of mathematics.

      • David – If you can contain your injured feelings for a moment, you should go back to read what I wrote. You won’t find any disparagement of abstraction.

  42. I found this reference just now.

    http://www.sciencebits.com/IPCC_nowarming

    It is extremely relevant to what I write. Let me frame the discussion we have been having with Pekka from a different point of view. When the proponents of CAGW started their investigation 30/40 years ago, the first question they ought to have asked (following in the ways of physics established by Galileo and Newton) was: “Is there enough observed data to support the hypothesis of CAGW?”. The answer to that question, then as today, is a very decided NO.

    The second question they should have asked is “Can we devise means to go out and get the necessary data?”. The answer is exactly the same; Mother Nature lets us have the data on her terms.

    These questions and answers should have been written up in the peer reviewed literature, so that they were clearly established. Then, and only then, should the third question have been asked: “Is there enough information from estimations, models, simulations etc. to support the hypothesis of CAGW?”. That is the issue we should have been debating.

    Instead, by general underhanded tactics, the supporters of CAGW set out to establish that the science was settled, with the people who hold the purse strings. And they succeeded extremely well. Now we skeptics/deniers are faced with the uphill battle of trying to get away from postnormal science (whatever that is), back to proper physics established centuries ago. And it is a very difficult battle to fight.

    I would dearly love to engage in the debate as to whether there is enough non-observed data to support CAGW, given that it is clearly established that there is not nearly enough observed data.

    • ceteris non paribus


      Instead, by general underhanded tactics, the supporters of CAGW set out to establish that the science was settled, with the people who hold the purse strings. And they succeeded extremely well. Now we skeptics/deniers are faced with the uphill battle of trying to get away from postnormal science (whatever that is), back to proper physics established centuries ago. And it is a very difficult battle to fight.

      Post-normal science is what happens on blogs.
      For example – You get to say things like “back to proper physics established centuries ago” with a straight face.

      The success of classical physics up until 1900 or so represents the place where science and epistemology should have stopped and declared themselves “settled”, I guess.

      Everything that came afterwards just brings on the ire of those who know that, down deep, everything is made of little perfectly elastic billiard balls, and that Heisenberg and Pauli were nefarious cranks.

  43. Rather, the creation of data galaxies has led us to science that sometimes is too rich and complex for reduction into theories. As science has gotten too big to know, we’ve adopted different ideas about what it means to know at all.

    The problem — or at least the change — is that we humans cannot understand systems even as complex as that of a simple cell.

    Why is this a change? New knowledge necessarily reaches out into the area of unknowns, and requires new theories to offer explanations.

    Same as it ever was. The absolute determination of increased complexity is actually a relative increase in complexity.

  44. chris1958

    In reference to “advocacy” you mention “trial lawyers” (“avocats” in French).

    A trial lawyer is paid by his client (or the state) NOT to find the “truth”, but to win the trial for his/her client (or the state).

    It is up to the judge or the jury to “find the truth”.

    A climate scientist is paid (usually) by the taxpayer to find the “truth” about what makes our climate behave as it does, NOT to provide the “proof” for a preconceived premise he/she happens to support.

    If he/she does the latter, then this is no longer “science”, it is “advocacy”.
    See: http://judithcurry.com/2011/10/13/advocacy-science-and-decision-making/#more-5268

    This should not be financed by the taxpayer.

    Max

    • Nor should it be misrepresented to the public. The general public assumes that the mission of scientists is to dispassionately find facts. This is wrong of course, but most people still believe it.

    • “A trial lawyer is paid by his client (or the state) NOT to find the “truth”, but to win the trial for his/her client (or the state).”

      True of some lawyers, not all. An ethical trial lawyer is paid to put on the best case he can, consistent with the rules of ethics, to “win the trial” for his client. Of course lawyers, like climate scientists or any other profession, are not all ethical.

  45. Mydogsgotnonose

    We have to be brutally honest about the models. They are wrong not because the GCMs are wrong, but because they are a vehicle for incorrect science; a propaganda exercise. It worked in the fast-warming 1990s, but now that that has stopped they’re in real trouble. As I investigate further I see how these people are trying to erase past mistakes as well.

    I entered the problem when I saw that, after all the modelling using as heat source the ludicrous back-radiation perpetual motion machine, unique to climate science, they correct a 3-5 times overestimate of post-industrial warming by using cooling from clouds. Firstly they use double the real optical depth; then, to get the finer detail, they assume aerosol pollution gives further cooling.

    Hansen is presently trying to explain the absence of present heating by imagining that the latter correction has just jumped ~50% and that there has been extra ocean cooling. He could of course be correct, but the justification of the extra polluted-cloud cooling is a modelling exercise from MSU which claims it is 3-6 times higher than experimental data from competent physicists, who are generally honest because that discipline is self-checking.

    What I and others have realised is that those satellite data are wrong because the aerosol optical physics fails to include a second optical process. In reality, the satellite data are probably reversed in sign. In 2004, NASA published fake physics to keep this scam afloat. It got into AR4 but not Copenhagen, and it convinced from authority the poorly educated [in physics] climate scientists. It didn’t fool me or an increasing number of insiders who are saying this fraud has to stop.

    • ceteris non paribus

      “the satellite data are probably reversed in sign”
      “fake physics”
      “didn’t fool me”

      Oh goody – You are smarter than the average climate scientist.
      And brutally honest too.

      Now, please stop beating around the bush and publish these great revelations.

  46. http://judithcurry.com/2012/01/09/too-big-to-know/#comment-157588
    http://judithcurry.com/2012/01/09/too-big-to-know/#comment-157589

    David “curse of asymptotes” Young,

    Please define “corner” in the context of an idealized perfect corner, of which none exist in the real world. Once you’ve done that (unsuccessfully, I’ll add a priori), please show us all the stress/strain modelling ASSUMPTIONS used for these “so-called” perfect corners. Please then go on to expand into the field of fracture mechanics as it applies to the tensile side of reinforced concrete structures, and the modeling thereof.

    When a polymath, such as I, talks down to a monomath, such as you, we do so for very good reasons.

    Further, if we are to believe ANYTHING you say here in the future, please explain why all structures built to date, using any design methods, have not all failed, because all such structures have “corners”, and if one were to obsess only about “corners” then no structural engineer would ever have got anything off the ground in the first place.

    So much for the conjectured “curse of asymptotes”.

    I have done laboratory work in hydrodynamics as well as numerical work in hydrodynamics. Now, in the laboratory there is no such thing as a “perfect flat bottom” BUT it is very easy to assign such in a numerical model (too easy, in fact). Now when I come upon a numerical modeler who implicitly ASSUMES perfect corners or perfectly flat bottoms (and who also appears to be ignorant of the fact that such things do not, in fact, exist), I simply ask them to replicate the real world BC’s and IC’s that actually existed in the laboratory (and/or real world) study. That means, in the limit, or the asymptotic behavior, model EXACTLY what exists in the real world (atoms and quarks and what not). Because in the limit, you bump your head up against scaling issues, from the very small to the very large.

    This is a real world research engineer having to school the applied mathematician on their own modeling ASSUMPTIONS.

    I’m still waiting on your asinine list of today’s numerical modelers (circa 2010) who, for some rather odd reason, only picked up numerical methods dating from the 1950s or before.

    Since you can’t possibly defend this statement, it is rather obvious why you have avoided directly addressing such an asinine statement. I on the other hand, having directly addressed your asinine statement, have quite obviously, put you in your place, so to speak. Nothing you can possibly say will ever negate that one basic fact.

    Me personally, I’ve taken two courses in FEA, I’ve developed my own FEA codes in hydrodynamics. If however, I were to seek the advice from an FEA SME, you would be the last person on Earth that I would seek for any such advice.

    As usual, it is my only intention here, to single out people who say stupid stuff. In this thread, I have chosen you, for having said the most stupid thing (so far). As they say, easy pickings.

    If I were to address all the stupid stuff said in this thread, well now, that would occupy a book (or two). If I were to address all the stupid stuff said on this blog, well now, I’d need an infinite amount of monkeys …

    There seems to be an awful lot of monkeys on this blog, for some not-so-very-odd reason.

    • ceteris non paribus

      Actually, the infinite monkey theorem requires only one (immortal) monkey – but infinite patience.

      But I suppose if your key-slapping monkeys are mortal, it would help to have an infinity of them.

      I can’t help but wonder whether Cantor’s diagonalisation argument would apply to primate numbers…

    • John Carpenter

      Hey God…. is that you?

      • John – I agree. The gratuitous insults aside, EFS Junior hasn’t provided enough documentation for me to judge how well he knows these topics, although he gives the impression of being prone to overstatement, and so that probably applies to his criticisms of others. I don’t always agree with David Young, but David’s concerns about error correction do need to be taken seriously, even if they may not be as much of a problem as David claims.

    • I will not stoop to responding in kind.

      Two points:
      1. Details about numerical methods and their date of origin were handled by me on a previous thread. Also see Paul Williams’ video at the Isaac Newton Institute.
      2. Most structures are conservatively designed, which is a good thing. Testing is used to “adjust” models for particular applications. That’s why there are not more widespread bad consequences. The good engineers are actually really first rate and keep modeling results in their “place.” The problem comes in areas like climate science where there aren’t experienced engineers to blow the whistle.

    • With regard to corners: Most engineering structures have corners or relatively sharp edges. Even if they are rounded, they still present areas where large gradients happen and require non-uniform refinement of the grid. There is also the matter of fasteners and the holes for them. These are notoriously difficult to model and usually they are handled empirically with testing, and the models are “corrected” for this effect. Complete structural models are rather rare in engineering practice and require tremendous engineering experience to interpret correctly. I won’t go into the horror stories, because I can’t do so, but engineers continue to earn very high respect from me for their instincts concerning what will actually work and what is a modeling artifact. The problem is when you CAN’T do testing to validate your models. It gets very dicey very fast.
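
      As a concrete (toy) picture of the “non-uniform refinement of the grid” mentioned here, a minimal one-dimensional sketch; the first-cell size and growth ratio are illustrative assumptions only:

          def graded_grid(length=1.0, first_cell=1e-4, ratio=1.2):
              """Node positions from 0 to length, with cell sizes growing by
              `ratio` away from the corner at x = 0 (where gradients are largest)."""
              nodes, x, h = [0.0], 0.0, first_cell
              while x < length:
                  x = min(x + h, length)
                  nodes.append(x)
                  h *= ratio
              return nodes

          grid = graded_grid()
          print(f"{len(grid) - 1} cells; first cell {grid[1] - grid[0]:.1e}, "
                f"last cell {grid[-1] - grid[-2]:.1e}")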

  47. The author makes a fair point. The ‘Brick Pile’ enables anyone, SIF, NGO or national government, to pick through the pile and find what they need to further their current cause. Bees are growing/shrinking/expanding their range or simply becoming extinct, depending upon which brick you fish out of the pile to promote your particular agenda.

    The CO2 debate will probably be taught in Schools everywhere by 2100 as a classic example of a sensible debate that should have taken place but was rapidly destroyed by an overdose of bricks.

  48. For some reason that Josh picture of the room full of Gavins reminds me of this.

    http://1.bp.blogspot.com/-9x0iSiZpERo/TbV5ov2S-pI/AAAAAAAAAho/kpMw2vdmWm8/s1600/elbonia.jpg

  49. Judith,

    Expending more intellectual effort on the epistemology of too big to know, where much of our knowledge resides in complex computer models, is needed IMO to better assess our knowledge level of the climate system and confidence in conclusions drawn from enormous data sets and complex system models.

    This is more of you chasing after the new shiny thing, rather than understanding and accepting that what you already know is sufficient to deal with the questions before you. A short while ago, it was “black swans”. Before that, “post normal blah, blah, blah.” Now it is “too big to know”. It is all very interesting, in a very “beauty of the minutiae” sort of way, but it is trivial to the point of being wholly irrelevant to your stated area of interest.

    In your call to “expend more intellectual effort” on this latest academic novelty, you seem oblivious to the fact that each time you seize upon the most recently articulated “insightful way of looking at things”, you are doing nothing more than succumbing to the very intellectual overload that Weinberger is warning you about. These are your bricks.

    The interesting thing about Weinberger’s essay is the conclusion he draws, which we already know:

    “Nevertheless, models can have the predictive power demanded of scientific hypotheses.”

    That is correct. Models can have that power. But they frequently do not, and they are truthful and useful only to the extent that they do.

    The problem with “climate science”, to use Weinberger’s analogy, is not that it has too many bricks from too many brick makers, and not enough houses from too few house builders. It is that it has too many castles built from clouds in the air by a handful of storytellers who can’t even make a decent brick, but fancy themselves to be great architects.

    The problem with models, and climate models in particular, is that they are not being offered as hypotheses to be tested via an assessment of their predictive power. Instead they are asserted as unassailable truth machines, against which hypotheses may be tested. We see paper after paper endorsing this odd and unscientific behaviour. We don’t need “too big to know”, or whatever, to recognize this problem. And wasting our time idly pondering the endless chain of momentary pop-culture science gurus only distracts us from the task of fixing such problems.

  50. Interesting factoid:
    “Forscher” is the German word for RESEARCHER.

    Hal

  51. “Third, computers have become exponentially smarter.”

    It’s truly appalling how fundamentally ignorant a supposed intellectual can be and how many people can be impressed by it.

    Pointman

  52. One of the ways people deal with too much data/information is to make a simplifying story. A good simplifying story would be Newton’s theory of gravity, which in its simplest form ignores friction and other minor factors. But in other cases, people make up a simplifying story that is not consistent with itself or with the facts. This is done, for example, to help people deal with other cultures by making generalizations about Chinese or Indians that are only true in a vague way (or even false but reflect some singular personal experience) and ignore individual differences. In primitive times unexplained events were due to witchcraft or demons. In the climate change debate, the story is that climate change will be bad: this enables one to explain all the bad things that happen, like floods and hurricanes. Over-application of this story line leads to people “explaining” earthquakes as due to climate change, or to contradictory “explanations” like it being colder due to it getting warmer. Story lines like this lead quite comfortably to political action, but not to effective action, because magnitudes are lost, costs and benefits are ignored, etc.

    • Was Newton creating a simplifying story, or teasing apart the elements? I tend to look at it as the latter. You have to know what all the instruments are playing before you can put an orchestra together.

      • The genius of Newton wrt the planets and other bodies was simplifying them to point masses. If you try to model the dynamics of an asteroid field or all the stars in the galaxy, however, even knowing that Newton’s law exists, you start having difficulties due to the complexity.
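
        The difficulty can be seen directly in code: Newton’s law for each pair is trivial, but a direct-summation step touches every pair, so the work grows as N^2. A minimal sketch (the masses, positions, and body count are arbitrary placeholders):

            import random

            G = 6.674e-11  # gravitational constant, SI units

            def accelerations(masses, positions):
                """Naive direct-summation gravity: O(N^2) pair interactions."""
                n = len(masses)
                acc = [[0.0, 0.0, 0.0] for _ in range(n)]
                for i in range(n):
                    for j in range(n):
                        if i == j:
                            continue
                        dx = [positions[j][k] - positions[i][k] for k in range(3)]
                        r2 = sum(d * d for d in dx) + 1e6   # softening to avoid blow-ups
                        inv_r3 = r2 ** -1.5
                        for k in range(3):
                            acc[i][k] += G * masses[j] * dx[k] * inv_r3
                return acc

            n = 500  # a toy count; real asteroid belts or galaxies are vastly larger
            masses = [1e20 * random.random() for _ in range(n)]
            positions = [[random.uniform(-1e12, 1e12) for _ in range(3)] for _ in range(n)]
            _ = accelerations(masses, positions)
            print(f"{n} bodies -> {n * (n - 1)} directed pair interactions per time step")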

  53. In Ersilia, to establish the relationships that sustain the city’s life, inhabitants stretch strings from the corners of the houses, white or black or gray or black-and-white according to whether they mark a relationship of blood, of trade, authority, agency. When the strings become so numerous that you can no longer pass among them, the inhabitants leave: the houses are dismantled.

    From a mountainside, camping with their household goods, Ersilia’s refugees look at the labyrinth of taut strings and poles that rise in the plain. That is the city of Ersilia still, and they are nothing.

    They rebuild Ersilia elsewhere. They weave a similar pattern of strings which they would like to be more complex and at the same time more regular than the other. Then they abandon it and take themselves and their houses still farther away.

    Thus, when traveling in the territory of Ersilia, you come upon the ruins of abandoned cities, without the walls which do not last, without the bones of the dead which the wind rolls away: spiderwebs of intricate relationships seeking a form.

    Invisible Cities by Italo Calvino

  54. What a conundrum!
    Models too complex to comprehend, yet KISS don’t cut it.
    Yikes.

  55. With regard to systems biology, the dawn has not augured well. It so far has proven to be on par with reading chicken entrails; in both cases the chicken dies.

  56. I work as a development engineer in the auto industry. A good friend and colleague of mine is a talented full-vehicle modeller. He has a sign in his office that says: “All models are wrong. Some models are useful.”

  57. The basics of climate don’t need more than an energy balance model. The forcing (even independent of feedback details) defines the climate. From this, we know the Cretaceous was warmer because of more atmospheric GHG forcing, the LIA was colder because of less solar forcing, the ice ages were colder because of albedo and GHG forcing, and the future climate will be warmer again because of GHG forcing increasing again towards Cretaceous values. In fact, just looking at forcing, doubling CO2 is seven times more change than the estimated solar change since the LIA. It does not need a GCM to understand that this is a large forcing change. GCMs only fill in the details without affecting the bottom line of the energy balance that has to be satisfied. (A back-of-envelope version of this forcing comparison is sketched after the replies below.)

    • In fact, I am fairly sure that MEP would be consistent with such a model, where heating by the sun warms the warmer surface and the earth radiates the same amount of heat from the colder atmosphere, resulting in a net entropy gain of the system despite the net energy balance.

      • Or maybe not quite, but the conversion of solar photons to more infrared photons by the earth system, would be an entropy gain. I guess the MEP connection needs a bit more thinking through.

      • Markus Fitzhenry

        You talking to yourself like that made me laugh Jim D, you’re funny.
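
    As flagged above, a back-of-envelope version of the forcing comparison. The CO2 expression is the commonly used simplified fit dF = 5.35 ln(C/C0); the solar-change figure and the sensitivity parameter are illustrative assumptions, not settled numbers:

        import math

        def co2_forcing(c_ppm, c0_ppm=280.0):
            """Simplified CO2 radiative forcing, dF = 5.35 * ln(C/C0), in W/m^2."""
            return 5.35 * math.log(c_ppm / c0_ppm)

        dF_2xCO2 = co2_forcing(560.0)   # doubling from an assumed 280 ppm baseline
        dF_solar_since_LIA = 0.5        # assumed W/m^2, roughly what the comment implies
        sensitivity = 0.8               # assumed K per (W/m^2); not a settled value

        print(f"2xCO2 forcing          : {dF_2xCO2:.2f} W/m^2")
        print(f"ratio to solar change  : {dF_2xCO2 / dF_solar_since_LIA:.1f}x")
        print(f"implied equilibrium dT : {sensitivity * dF_2xCO2:.1f} K (under the assumed sensitivity)")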

  58. Judith,

    To continue with Forscher’s brick analogy:

    When a “team/company” manages and controls what the quality of the brick ingredients and firing temperatures should be, and what facings the bricks should have, it’s done to industry standards.

    At the brickyards, bricks used to be classified as “A” brick, used for facings; “B” brick, used for interior walls and not usually seen on exteriors; and “C” or cull brick that was usually over- or under-fired or in some way physically or visually inferior.

    If the folks drawing the kilns or loading the banding machines clapped two bricks together and they had a sustained “ring” and looked good, they were “A” bricks.

    Those with not so much “ring” were “B”s.

    When they clapped two together and got a “thud”, those were “C” bricks and were normally sent out for fill or to the bat crusher and reground/recycled within the plant.

    The brickyard management “team” never came up with their own sizes or quality specifications without having a special order for such bricks in hand.

    The brickyards and the contractors were never in control of the overall “industry” standards. That was always the function of commercial standards boards, contract specification writers, the architects and a myriad of local building inspectors.

    Such has not been the case for bricks made by “climate science” since the 1970s.

    The folks making the modern science bricks have also determined what the final use of their “bricks” will be, made up their own quality standards, and even engaged in determining who can make climate bricks and lobbied for the consumers to accept their “B” or “C” bricks as the best there can be.

    The folks running this particular and peculiar brickyard have lobbied and tried to convince the contractors and consuming public that their brick designs, specifications and final use plans are the only ones worth using.

    However, an edifice built with “B” or “C” brick can almost never stand long against use, weathering, or even light shaking.

    They almost never pass detailed inspections, unless the inspector has ties to the brickyard “team”.

    We won’t even go into the problems found in buildings built on sand or with poor foundations.

  59. The ‘bricks’ argument is fallacious, IMO. During the Victorian era, tens (maybe hundreds) of thousands of amateur, semi-amateur and professional people collected data, conducted experiments etc. Nobody then or now would be able to understand it all or even know what the scope of ‘fact collecting’ was.

    Even in those days, the amount of data around about butterflies or orchids was vast. It was often inaccessible, but in fact many private and public institutions (like the Royal Society) were set up for precisely this purpose.

    There is nothing new about ‘too much data’ in the history of modern science. Today, most of it is like the Victorian hobbyist recording and collecting butterflies in his backyard for 10 years – just noise.

    It has always been a question of following lines of research which are beneficial or interesting. Scientists always have potentially unlimited areas of exploration.

    The idea that the world can be explained by ‘bricks’ (and presumably, mortar) owes more to suburban concepts of building and renovation than to science.

  60. Who needs another Royal Society, when we have NASA and other notable scientists on the job doing their real important stuff 24/7?
    All on your dime. 10:10, just blows their minds you know?

    http://www.wanttoknow.info/ufos/apollo_lunar_landing_videotapes

    Makes one wonder just how much grant money Queen Victoria was dolling out in her day? A neo-penny for your thoughts…

    • Well, Queen Vic. was cute enough in her day, but I doubt she was dolling it up! As for grants, the Royal Navy and other gov’t agencies doled out for numerous projects.

  61. Steven Wright joke: “I have a map of the United States… Actual size. It says, ‘Scale: 1 mile = 1 mile.’ I spent last summer folding it. I also have a full-size map of the world. I hardly ever unroll it.”

    Instead of getting simpler and more elegant, climate science is going the other way, it seems to me. And especially troublesome when computer science tells us that you can make a computer give any answer you want.

  62. Computer models are just that: models that you give parameters to, and they do the sums for you. You can vary millions of things and get billions of answers, maybe probabilistic statements about how likely things are if only…; imagine that on your Casio. It still depends on the parameters you put in. However, if something is real it tends to jump out and smack you in the face upon even just a cursory look at the data. If it doesn’t, and some fancy model tells you it should, then it’s possibly rather just fanciful. The more complex things tend to be, the more belief it seems the human being has in our ways of dealing with them: ‘they must be geniuses’ and ‘we must not doubt their word’. By association the parrot gains self-appointed self-esteem; they feel better about their lives and a purpose is born.
    If one looks at the trend of temperature data up to around 1970, there are some cyclical trends easily observable (30 years of cooling and then 30 years of warming, ish): the smack-in-the-face moment. Back in 1970 some predicted a warming; anyone could, really, since we were at a low and it was clear what the next 30 years would bring. Tinkering with the data has probably downgraded estimates pre-1970 and upgraded events post-1970, again ish. I did see somewhere the level of tinkering NOAA had done in just the last few years to the historical temperature records. Even just a few years of believing they knew better than history could result in a delta of around +0.15C over time, so maybe the temperature warming that was on the cards has been exaggerated, and just maybe the flat-lining now is a decline that’s being hidden by data chicanery (it might be unintentional, but in most fields one would have to justify even the smallest change in just one datapoint). I would, with my fancy brain-based model, predict that the temperatures over the next 20 or so years will decline, but I’m not so sure the longitudinal temperature record is in any fit shape to detect it. The human being is often fuelled by ambition; my time in academia, and in industry after that, has taught me that everything in some people’s world is directed at accumulation of kudos and its attendant rewards. It’s smooth-corner world in academia and regulated world in industry (thank heavens).
    Billions of small pieces (I like ‘bricks’) of information are conveniently and non-randomly gathered, and a publication-bias house of cards is built on the need for alarm. There’s no scientific mortar to hold them together, and if you wish to huff and puff then you’ll have to turn science completely around and prove something is not happening. Prove there is no needle in that haystack, boy; otherwise you must bow to the needle. The alternative has become the null, and that rather unscientific bone is not being given up without bared teeth.
    I listened to David Spiegelhalter on my car radio this morning talking about uncertainty. The radio interviewers were asking questions and, of course, like the good guy he is, he gave indicators of likely problematic factors affecting the chance of everyday events. Time was running short and the final question was: how likely is someone else to have the same birthday as me; is it 1/365? Yes was the answer, but there was no time for the very informative buts. What a shame.
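
    For what it is worth, the “informative buts” can be made concrete: the chance that one particular person matches my birthday is 1/365, but the group-level questions behave very differently. A minimal sketch (the group sizes are arbitrary examples):

        def p_shares_my_birthday(n):
            """P(at least one of n other people has my birthday), ignoring leap years."""
            return 1 - (364 / 365) ** n

        def p_any_shared_birthday(n):
            """Classic birthday problem: P(some pair among n people shares a birthday)."""
            p_all_distinct = 1.0
            for k in range(n):
                p_all_distinct *= (365 - k) / 365
            return 1 - p_all_distinct

        for n in (23, 50, 253):   # arbitrary example group sizes
            print(f"n = {n:>3}: shares my birthday = {p_shares_my_birthday(n):.2f}, "
                  f"any shared pair = {p_any_shared_birthday(n):.2f}")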
