
Agnotology, Agnoiology and Cognitronics

by Judith Curry

I’ve just come across three really interesting words that I have somehow missed up to this point in my studies on uncertainty and ignorance:  agnotology, agnoiology, and cognitronics.

While cruising my blogroll last night, I spotted a post on Michael Smithson‘s blog Ignorance and Uncertainty entitled “Writing on Agnotology, Uncertainty, and Ignorance,” which led me to the Wikipedia page on agnotology, which in turn introduced me to agnoiology and cognitronics.

Wikipedia definitions

From the Wikipedia:

Agnotology (formerly agnatology) is the study of culturally-induced ignorance or doubt, particularly the publication of inaccurate or misleading scientific data. The neologism was coined by Robert N. Proctor, a Stanford University professor specializing in the history of science and technology. Its name derives from the Neoclassical Greek word ἄγνωσις, agnōsis, “not knowing” (confer Attic Greek ἄγνωτος “unknown”), and -λογία, -logia. More generally, the term also highlights the increasingly common condition where more knowledge of a subject leaves one more uncertain than before.

A similar word from the same Greek roots, agnoiology, meaning “the science or study of ignorance, which determines its quality and conditions” or “the doctrine concerning those things of which we are necessarily ignorant” describes a branch of philosophy studied by James Frederick Ferrier in the 19th century.

Cognitronics aims (a) at explicating the distortions in the perception of the world caused by the information society and globalization and (b) at coping with these distortions in different fields… Cognitronics studies and looks for ways of improving the cognitive mechanisms of processing information and of developing the emotional sphere of the personality – ways aimed at compensating for the three mentioned shifts in systems of values and, as an indirect consequence, at developing learners’ symbolic information processing skills, linguistic mechanisms, associative and reasoning abilities, and broad mental outlook, these being important preconditions for successful work in practically every sphere of professional activity in the information society.

Proctor on agnotology

From an interesting interview with Robert Proctor, who coined the term “agnotology”:

When it comes to many contentious subjects, our usual relationship to information is reversed: Ignorance increases.

[Proctor] has developed a word inspired by this trend: agnotology. Derived from the Greek root agnosis, it is “the study of culturally constructed ignorance.”

As Proctor argues, when society doesn’t know something, it’s often because special interests work hard to create confusion. Anti-Obama groups likely spent millions insisting he’s a Muslim; church groups have shelled out even more pushing creationism. The oil and auto industries carefully seed doubt about the causes of global warming. And when the dust settles, society knows less than it did before.

“People always assume that if someone doesn’t know something, it’s because they haven’t paid attention or haven’t yet figured it out,” Proctor says. “But ignorance also comes from people literally suppressing truth—or drowning it out—or trying to make it so confusing that people stop caring about what’s true and what’s not.”  

Maybe the Internet itself has inherently agnotological side effects. People graze all day on information tailored to their existing worldview. And when bloggers or talking heads actually engage in debate, it often consists of pelting one another with mutually contradictory studies they’ve Googled: “Greenland’s ice shield is melting 10 years ahead of schedule!” vs. “The sun is cooling down and Earth is getting colder!”

As Farhad Manjoo notes in True Enough: Learning to Live in a Post-Fact Society, if we argue about what a fact means, we’re having a debate. If we argue about what the facts are, it’s agnotological Armageddon, where reality dies screaming.

Can we fight off these attempts to foster ignorance? Despite his fears about the Internet’s combative culture, Proctor is optimistic. During last year’s election, campaign-trail lies were quickly exposed via YouTube and transcripts. The Web makes secrets harder to keep.

We need to fashion information tools that are designed to combat agnotological rot. Like Wikipedia: It encourages users to build real knowledge through consensus, and the result manages to (mostly) satisfy even people who hate each other’s guts. Because the most important thing these days might just be knowing what we know.

Proctor also has a book, “Agnotology: The Making and Unmaking of Ignorance.”  From the blurb:

What don’t we know, and why don’t we know it? What keeps ignorance alive, or allows it to be used as a political instrument? Agnotology—the study of ignorance—provides a new theoretical perspective to broaden traditional questions about “how we know” to ask: Why don’t we know what we don’t know? The essays assembled in Agnotology show that ignorance is often more than just an absence of knowledge; it can also be the outcome of cultural and political struggles. Ignorance has a history and a political geography, but there are also things people don’t want you to know (“Doubt is our product” is the tobacco industry slogan). Individual chapters treat examples from the realms of global climate change, military secrecy, female orgasm, environmental denialism, Native American paleontology, theoretical archaeology, racial ignorance, and more. The goal of this volume is to better understand how and why various forms of knowing do not come to be, or have disappeared, or have become invisible. 

Michael Smithson on agnotology

From Smithson’s blog post:

“Agnotology” is the study of ignorance (from the Greek “agnosis”). “Ignorance,” “uncertainty,” and related terms refer variously to the absence of knowledge, doubt, and false belief. This topic has a long history in Western philosophy, rooted in the Socratic tradition. It has a considerably shorter and, until recently, sporadic treatment in the human sciences. This entry focuses on relatively recent developments within and exchanges between both domains.

A key starting-point is that anyone attributing ignorance cannot avoid making claims to know something about who is ignorant of what: A is ignorant from B’s viewpoint if A fails to agree with or show awareness of ideas which B defines as actually or potentially valid. A and B can be identical, so that A self-attributes ignorance. Numerous scholars thereby have noted the distinction between conscious ignorance (known unknowns, learned ignorance) and meta-ignorance (unknown unknowns, ignorance squared).

The topic has been beset with terminological difficulties, due to the scarcity and negative cast of terms referring to unknowns. Several scholars have constructed typologies of unknowns, in attempts to make explicit their most important properties. Smithson’s book, Ignorance and Uncertainty: Emerging Paradigms, pointed out the distinction between being ignorant of something and ignoring something, the latter being akin to treating something as irrelevant or taboo. Knorr-Cetina coined the term “negative knowledge” to describe knowledge about the limits of the knowable. Various authors have tried to distinguish reducible from irreducible unknowns.

Two fundamental concerns have been at the forefront of philosophical and social scientific approaches to unknowns. The first of these is judgment, learning and decision making in the absence of complete information. Prescriptive frameworks advise how this ought to be done, and descriptive frameworks describe how humans (or other species) do so. A dominant prescriptive framework since the second half of the 20th century is subjective expected utility theory (SEU), whose central tenet is that decisional outcomes are to be evaluated by their expected utility, i.e., the product of their probability and their utility (e.g., monetary value, although utility may be based on subjective appraisals). According to SEU, a rational decision maker chooses the option that maximizes her/his expected utility. Several descriptive theories in psychology and behavioral economics (e.g., Prospect Theory and Rank-Dependent Expected Utility Theory) have amended SEU to render it more descriptively accurate while retaining some of its “rational” properties.
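To make the SEU tenet concrete, here is a minimal sketch in Python (my own toy options and payoffs, not an example from Smithson’s post): each option is scored by the sum of probability times utility, and the “rational” choice is whichever option maximizes that score.

```python
# Minimal sketch of subjective expected utility (SEU) maximization.
# The options and dollar payoffs below are illustrative assumptions.

def expected_utility(outcomes):
    """Expected utility = sum over outcomes of probability * utility."""
    return sum(p * u for p, u in outcomes)

options = {
    "sure_thing": [(1.0, 50)],             # $50 for certain
    "gamble":     [(0.5, 120), (0.5, 0)],  # 50% chance of $120, else nothing
}

for name, outcomes in options.items():
    print(name, expected_utility(outcomes))   # sure_thing 50, gamble 60.0

best = max(options, key=lambda name: expected_utility(options[name]))
print("SEU choice:", best)  # the gamble, since 60 > 50
```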

The second concern is the nature and genesis of unknowns. While many scholars have treated unknowns as arising from limits to human experience and cognitive capacity, increasing attention has been paid recently to the thesis that unknowns are socially constructed, many of them intentionally so. 

In philosophy and mathematics the dominant formal framework for dealing with unknowns has been one or another theory of probability. However, Max Black’s ground-breaking 1937 paper proposed that vagueness and ambiguity are distinguishable from each other, from probability, and also from what he called “generality.” The 1960’s and 70’s saw a proliferation of mathematical and philosophical frameworks purporting to encompass non-probabilistic unknowns, such as fuzzy set theory, rough sets, fuzzy logic, belief functions, and imprecise probabilities. Debates continue to this day over whether any of these alternatives are necessary, whether all unknowns can be reduced to some form of probability, and whether there are rational accounts of how to deal with non-probabilistic unknowns. The chief contenders currently include generalized probability frameworks (including imprecise probabilities, credal sets, belief functions), robust Bayesian techniques, and hybrid fuzzy logic techniques.
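As a hedged illustration of how one of these generalized-probability contenders departs from SEU, the sketch below (again my own toy example, with made-up numbers) applies a Gamma-maximin rule to an interval probability: instead of a single P(rain), only bounds are assumed known, and each act is scored by its worst-case expected utility over that interval.

```python
# Toy sketch of a decision rule for imprecise probabilities (Gamma-maximin):
# P(rain) is only known to lie in an interval, and each act is scored by its
# worst-case expected utility over all admissible values of P(rain).

P_RAIN_LOW, P_RAIN_HIGH = 0.2, 0.5   # assumed interval for P(rain)

acts = {
    "outdoor_event": {"rain": -50, "dry": 100},
    "indoor_event":  {"rain": 40,  "dry": 40},
}

def gamma_maximin(act, n_grid=101):
    """Worst-case expected utility over a grid of P(rain) values in the interval."""
    grid = [P_RAIN_LOW + (P_RAIN_HIGH - P_RAIN_LOW) * i / (n_grid - 1)
            for i in range(n_grid)]
    return min(p * act["rain"] + (1 - p) * act["dry"] for p in grid)

for name, act in acts.items():
    print(name, round(gamma_maximin(act), 1))
# outdoor_event 25.0, indoor_event 40.0: the cautious rule picks the indoor event,
# whereas SEU with a point estimate P(rain) = 0.3 would pick the outdoor event (EU 55).
```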

In the social sciences, during the early 1920’s Keynes distinguished between evidentiary “strength” and “weight,” while Knight similarly separated “risk” (probabilities are known precisely) from “uncertainty” (probabilities are not known). Ellsberg’s classic 1961 experiments demonstrated that people’s choices can be influenced by how imprecisely probabilities are known (i.e., “ambiguity”), and his results have been replicated and extended by numerous studies. Smithson’s 1989 book proposed a taxonomy of unknowns and his 1999 experiments showed that choices also are influenced by uncertainty arising from conflict (disagreeing evidence from equally credible sources); those results also have been replicated.
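A quick worked version of the classic Ellsberg setup (the standard textbook three-color urn; the $100 payoff is my own choice) shows why the result is read as an effect of ambiguity rather than ordinary risk: no single precise probability for the ambiguous color makes the typical pair of choices consistent with SEU.

```python
# Worked check of the Ellsberg pattern: an urn holds 30 red balls and 60 balls
# that are black or yellow in unknown proportion. Most people bet on red over
# black, yet on black-or-yellow over red-or-yellow. The search below finds no
# precise P(black) that rationalizes both choices under SEU.

P_RED = 30 / 90

def bet_eu(colors, p_black):
    """Expected utility of a $100 bet that the drawn ball's color is in `colors`."""
    p = {"red": P_RED, "black": p_black, "yellow": 1 - P_RED - p_black}
    return 100 * sum(p[c] for c in colors)

consistent = [
    round(p_black, 3)
    for p_black in (i / 300 for i in range(201))                           # P(black) in [0, 2/3]
    if bet_eu(["red"], p_black) > bet_eu(["black"], p_black)               # choice 1: red over black
    and bet_eu(["black", "yellow"], p_black) > bet_eu(["red", "yellow"], p_black)  # choice 2
]
print(consistent)  # [] -- choice 1 requires P(black) < 1/3, choice 2 requires P(black) > 1/3
```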

More recent empirical research on how humans process unknowns has utilized brain imaging methods. Several studies have suggested that Knightian uncertainty (ambiguity) and risk differentially activate the ventral systems that evaluate potential rewards (the so-called “reward center”) and the prefrontal and parietal regions, with the latter two becoming more active under ambiguity. Other kinds of unknowns have yet to be widely studied in this fashion but research on them is emerging. Nevertheless, the evidence thus far suggests that the human brain treats unknowns as if there are different kinds.

Agnoiology

There is an agnoiology blog, www.agnoiology.com, which gives the following definition:  agnoiology: n.  the study of human stupidity.

From an article by Lehrer entitled “Social consensus and rational agnoiology“:

A person may reasonably accept some experimental report, hypothesis or theory because there is a consensus among an appropriate reference group of experts.  It may be unreasonable, moreover, for a person to accept such statements when there is a consensus against such acceptance.  A person may, however, conclude on the basis of careful study that the experts are in error.  Having concluded thus, he may reasonably dissent from the experts, refusing to accept what they do, or accepting what they do not.  For such a man, dissensus is reasonable and conformity counterproductive.  When is it reasonable for a person to conform to a consensus and when is it reasonable for him to dissent?

We shall answer the question in terms of an intellectual concern of science and rational inquiry.  Succinctly stated, the concern is to obtain truth and avoid error.  We shall argue that consensus among a reference group of experts thus concerned is relevant only if agreement is not sought.  If a consensus arises unsought in the search for truth and the avoidance of error, such consensus provides grounds which, though they may be overridden, suffice for concluding that conformity is reasonable and dissent is not.  If, however, consensus is aimed at by the members of the reference group and arrived at by intent, it becomes conspiratorial and irrelevant to our intellectual concern. (JC emphasis)

Unfortunately, the full article is not available online; I would love to read the rest of this paper.

Cognitronics

As for cognitronics, I have not been able to find any useful info online.  The Wikipedia link goes nowhere, and the primary reference seems to be this paper, which is unavailable:

Barlow, H.B., 1985. “Cognitronics: methods for acquiring and holding cognitive knowledge,” unpublished manuscript.

I would certainly be interested in learning more about this interesting concept.

JC comments:  Agnotology is often used in a “merchants of doubt” context, but Smithson’s broader definition formalizes many of the concerns that I have had in the context of the climate debate.  I find the idea of cognitronics very intriguing, if unfortunately vague.  The first page of Lehrer’s agnoiology paper provides a brilliant insight, IMO:

If, however, consensus is aimed at by the members of the reference group and arrived at by intent, it becomes conspiratorial and irrelevant to our intellectual concern.
