by Judith Curry
Groupthink: A pattern of thought characterized by self-deception, forced manufacture of consent, and conformity to group values and ethics.
Groupthink: Collective Delusions in Organizations and Markets, by Roland Benabou, published in the Review of Economic Studies. Benabou also has a talk (ppt slides) on this subject.
First, a definition of groupthink (from the ppt slides):
Janis (1972)’s eight symptoms [of groupthink]:
- illusion of invulnerability
- collective rationalization
- belief in inherent morality
- stereotyped views of out-groups
- direct pressure on dissenters
- self-censorship
- illusion of unanimity
- self-appointed mind guards
Sound like any groups that we know? If you are on different ‘sides’ of the AGW debate, you may be evaluating the IPCC and anthropowarmists against these criteria, or you may be evaluating the opposition against these criteria. While both groups seem to be subject to the first 4 symptoms, I would say that the IPCC and anthropowarmists have a lock on the last 4 symptoms.
Excerpts from the paper:
To analyze these issues, I develop a model of (individually rational) collective denial and willful blindness. Agents are engaged in a joint enterprise where their final payoff will be determined by their own action and those of others, all affected by a common productivity shock. To distinguish groupthink from standard mechanisms, there are no complementarities in payoffs, nor any private signals that could give rise to herding or social learning. Each agent derives anticipatory utility from his future prospects, and consequently faces a tradeoff: he can accept the grim implications of negative public signals about the project’s value (realism) and act accordingly, or maintain hopeful beliefs by discounting, ignoring or forgetting such data (denial), at the risk of making overoptimistic decisions.
The key observation is that this tradeoff is shaped by how others deal with bad news, creating cognitive linkages. When an agent benefits from others’ over-optimism, his improved prospects make him more accepting of the bad news which they ignore. Conversely, when he is made worse off by others’ blindness to adverse signals, the increased loss attached to such news pushes him toward denial, which is then contagious. Thinking styles thus become strategic substitutes or complements, depending on the sign of externalities (not cross-partials) in the interaction payoffs. When interdependence among participants is high enough, this Mutually Assured Delusion (MAD) principle can give rise to multiple equilibria with different ‘social cognitions’ of the same reality. The same principle also implies that, in organizations where some agents have a greater impact on others’ welfare than the reverse (e.g., managers on workers), strategies of realism or denial will ‘trickle down’ the hierarchy, so that subordinates will in effect take their beliefs from the leader.
JC note: This last sentence highlights one of the problems of AGW advocacy statements by professional societies in terms of amplifying groupthink.
The underlying insight is quite general and, in particular, does not depend on the assumptions of anticipatory utility and malleable memory or awareness. To demonstrate this point, I analyze a variant of the model in which both are replaced by Kreps-Porteus (1978) preferences for late resolution of uncertainty. This also serves, importantly, to address collective willful ignorance (ex-ante avoidance of information) in the same way as the benchmark model addresses collective denial (ex-post distortion of beliefs). In line with the MAD principle, I show that if an agent’s remaining uninformed about the state of the world leads him to increase the risks borne by others, this pushes them toward also delaying becoming informed; as a result, ignorance becomes contagious and risk spreads through the organization. Conversely, when information avoidance has beneficial hedging spillovers, it is self-dampening.
The model’s welfare analysis makes clear what factors distinguish valuable group morale from harmful groupthink, irrespective of anticipatory payoffs, which average out across states of the world. It furthermore explains why organizations and societies find it desirable to set up ex-ante commitment mechanisms protecting and encouraging dissent (constitutional guarantees of free speech, whistle-blower protections, devil’s advocates, etc.), even when ex-post everyone would unanimously want to ignore or ‘kill’ the messengers of bad news.
In the remainder of this section, I provide empirical evidence on both types of cognitive distortions (ex-ante and ex-post) considered in the model. On the theoretical side, the paper relates to two broad literatures: (i) self-deception, anticipatory preferences and attitudes toward information; (ii) social conformity, herding and bubbles.
Besides the vast literature on over-confidence and over-optimism, there is a long-standing body of work more specifically documenting people’s tendency to selectively process, interpret and recall data in ways that lead to more favorable beliefs about their own traits or future prospects. While earlier studies relied on self-reports rather than incentivized choices, several recent papers offer rigorous confirmations of a differential response to good and bad news.
The curse of Cassandra. Consider a denial equilibrium. Suppose now that an individual or subgroup attempts to bring the bad news back to everyone’s attention. If this occurs after agents have sunk their investments, it simply amounts to defeating expectations, so they will refuse to listen, or may even try to ‘kill the messenger’ (pay a new cost to forget). Anticipating that others will behave in this way, in turn, allows everyone to more confidently invest in denial at t = 0. To avoid this deleterious outcome, organizations and societies will find it desirable to set up ex-ante guarantees such as whistle-blower protections, devil’s advocates, constitutional rights to free speech, independence of the press, etc. These will ensure that bad news will most likely resurface ex-post in a way that is hard to ignore, thus lowering the ex-ante return of investing in denial. Similar results apply if the dissenter comes at an interim stage, after people have censored but before investments are made. They should welcome the opportunity to correct course, but in practice this can be hard to achieve, requiring full coordination. With payoff heterogeneity, dissenters’ motives may also be suspect. Things are even starker for people who strongly value hope and dislike anxiety. Facing the truth now lowers everyone’s utility, generating a universal unwillingness to listen – the curse of Cassandra. Free-speech guarantees, anonymity and similar protections nonetheless remain desirable ex-ante, as they avoid welfare losses and, on average, save the organization or society from wasting resources on denial and repression.
JC comment: people on both sides of the debate can lay claim to Cassandra’s curse. But in the case of AGW, we have a scientific debate/disagreement about a highly uncertain and complex system. Acknowledging the complexity and uncertainty is key to generating a willingness to listen to different ‘prophecies’ of what the future might hold.
The intuition for what I shall term the ‘Mutually Assured Delusion’ (MAD) principle is simple. If others’ blindness to bad news leads them to act in a way that is better for an agent than if they were well informed, it makes the news not as bad, thus reducing his own incentive to engage in denial. But if their avoidance of reality makes things worse than if they reacted appropriately to the true state of affairs, future prospects become even more ominous, increasing the incentive to look the other way and take refuge in wishful thinking. In the first case, individuals’ ways of thinking are strategic substitutes, in the second they are strategic complements. It is worth emphasizing that this ‘psychological multiplier’, less than 1 in the first case and greater in the second, arises even though agents’ payoffs are separable and there is no scope for social learning.
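The substitutes-vs-complements logic can be made concrete with a toy calculation. The sketch below is my own illustration, not Benabou’s formal model: the function, parameter names, and numeric values are all invented for exposition. An agent weighs the anticipatory cost of accepting bad news against a fixed cost of denial, and others’ denial shifts his payoff through a spillover term whose sign determines whether denial incentives fall (substitutes) or rise (complements) with the share of deniers.

```python
# Toy illustration of the MAD principle (hypothetical numbers, not
# taken from the paper). Accepting bad news costs `bad_news_loss` in
# anticipatory utility; denial costs `denial_cost`. Others' denial
# adds a spillover: positive spillover cushions the bad news,
# negative spillover makes it worse.

def denial_incentive(share_denying, spillover,
                     bad_news_loss=1.0, denial_cost=0.6):
    # Effective loss from facing the bad news, given how many
    # others are ignoring it. The agent denies iff this exceeds
    # the cost of denial (i.e., the return is positive).
    effective_loss = bad_news_loss - spillover * share_denying
    return effective_loss - denial_cost

# Negative spillover (e.g., high-stakes projects where others'
# blindness deepens everyone's losses): denial is contagious.
print(round(denial_incentive(0.0, spillover=-0.5), 2))  # 0.4
print(round(denial_incentive(1.0, spillover=-0.5), 2))  # 0.9

# Positive spillover (e.g., limited-stakes public goods where
# others' over-optimism still benefits the agent): denial is
# self-limiting, and realism prevails when others deny.
print(round(denial_incentive(1.0, spillover=0.5), 2))   # -0.1
```

With a negative spillover the return to denial grows as more of the group denies (strategic complements, multiplier greater than 1); with a positive spillover it shrinks (strategic substitutes, multiplier less than 1), matching the two cases in the passage above.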
Proposition 1 shows that the scope for contagion hinges on whether over-optimism has positive or negative spillovers. Examples of both types of interaction are provided below, using financial institutions as the main illustration.
Limited-stakes projects, public goods: The first scenario characterizes activities with limited downside risk, in the sense that pursuing them remains socially desirable for the organization even in the low state where the private return falls short of the cost.
High-stakes projects: The second scenario corresponds to ventures in which the downside is severe enough that persisting has negative social value for the organization. In such contexts, the greater is other players’ tendency to ignore danger signals about ‘tail risk’ and forge ahead with the strategy — accumulating yet more subprime loans and CDOs on the balance sheet, increasing leverage, setting up new off-the-books partnerships — the deeper and more widespread the losses will be if the scheme was flawed, the assets ‘toxic’, or the accounting fraudulent. Therefore, when red flags start mounting, the greater is the temptation for everyone whose future is tied to the firm’s fate to also look the other way, engage in rationalization, and ‘not think about it’.
The proposition’s second result shows how cognitive interdependencies (of both types) are amplified, the more closely tied an individual’s welfare is to the actions of others.
Groupthink is thus most important for closed, cohesive groups whose members perceive that they largely share a common fate and have few exit options. This is in line with Janis’ (1972) findings, but with a more operational notion of ‘cohesiveness’. Such vesting can be exogenous or arise from a prior choice to join the group, in which case wishful beliefs about its future prospects also correspond to ex-post rationalizations of a sunk decision.
A first alternative source of group error is social pressure to conform. For instance, if agents are heard or seen by both a powerful principal (boss, group leader, government) and third parties whom he wants to influence, they may just toe the line for fear of retaliation.
Self-censorship should also not occur when agents can communicate separately with the boss, who should then want to hear both good and bad news. There are nonetheless many instances where deliberately confidential and highly credible warnings were flatly ignored, with disastrous consequences for the decision-maker.
A second important source of conformity is signaling or career concerns. Thus, when the quality of their information is unknown, agents whose opinion is at odds with most already expressed may keep it to themselves, for fear of appearing incompetent or lazy. Significant mistakes in group decisions can result in contexts where differential information is important, if anonymous communication or voting is not feasible.
This paper developed a model of how wishful thinking and reality denial spread through organizations and markets. In settings where others’ ignorance of bad news imposes negative externalities (lower expected payoffs, increased risk), it makes such news even worse and thus harder to accept, resulting in a contagion of willful blindness. Conversely, where over-optimism has beneficial spillovers (thus dampening the impact of adverse signals), ex-ante avoidance and ex-post distortion of information tend to be self-limiting. This mechanism of social cognition does not rely on complementarities in technology or preferences, agents herding on a subset of private signals, or exogenous biases in inference; it is also quite robust. The Mutually Assured Delusion (MAD) principle is thus broadly applicable, helping to explain corporate cultures characterized by dysfunctional groupthink or valuable group morale, why willful ignorance and delusions flow down hierarchies, and the emergence of market manias sustained by new-era thinking, followed by deep crashes.
Patterns of Denial
The paper has an Appendix D: Patterns of Denial, listing 7 patterns of denial and illustrating with examples from Space Shuttle disasters and financial crises. Here I discuss these in context of the IPCC:
1. Preposterous probabilities. The 95% confidence level is arguably an example of this, although it is not exactly clear how to interpret the 95% in context of probabilities.
2. New paradigms: this time is different, we are smarter and have better tools. Every case also displays the typical pattern of hubris, based on claims of superior talent or human capital. The ‘we are smarter and have better tools’ is reflected in the extensive reliance on climate models, and labeling of anyone who disagrees as a ‘denier.’
3. Escalation, failure to diversify, divest or hedge. Wishful beliefs show up not only in words but also in deeds. The most vivid current example seems to be President Obama’s ramping up of a climate program in the U.S.
4. Information avoidance, repainting red flags green and overriding alarms. The ‘pause’, and its dismissal in the AR5 is a prime example of this one.
5. Normalization of deviance, changing standards and rationales. How do organizations react when what was not supposed to happen does, with increasing frequency and severity? An example of this is the changing goal posts for the pause. A few years ago, periods of pause/cooling longer than 10-15 yrs were not expected, which was recently bumped to 17 years by Santer et al. The start date for the pause seems to be moving towards 2001 – away from the big El Nino of 1998.
6. Reversing the burden of proof. See my essay on Reversing the Null Hypothesis for a discussion of this issue.
7. Malleable memories: forgetting the lessons of history. This one is particularly true re arguments linking AGW and extreme weather. Often ‘remembering’ back to the 1950’s or the 1930’s is all that is required.
JC comments: I find Benabou’s analysis to be very insightful. Awareness of these symptoms and patterns is the first step towards inoculating against groupthink. Encouraging dissent is key to not falling into the groupthink trap.
While the examples provided are markets and public and private sector disasters, these ideas are broadly applicable to the different social ‘realities’ surrounding anthropogenic climate change. I’ve tried to find an analogous set of examples for the ‘denial’ of say U.S. Republicans and some oil companies, but could only come up with examples for patterns 3, 4, and 5 of the ‘patterns of denial’. Sort of changes which foot the ‘denier’ shoe fits best.