by Judith Curry
Pathological altruism can be conceived as behavior in which attempts to promote the welfare of another, or others, result instead in harm that an external observer would conclude was reasonably foreseeable.
Concepts and implications of altruism bias and pathological altruism
The profound benefits of altruism in modern society are self-evident. However, the potential hurtful aspects of altruism have gone largely unrecognized in scientific inquiry. This is despite the fact that virtually all forms of altruism are associated with tradeoffs—some of enormous importance and sensitivity—and notwithstanding that examples of pathologies of altruism abound. Presented here are the mechanistic bases and potential ramifications of pathological altruism, that is, altruism in which attempts to promote the welfare of others instead result in unanticipated harm. A basic conceptual approach toward the quantification of altruism bias is presented. Guardian systems and their overarching importance in the evolution of cooperation are also discussed. Concepts of pathological altruism, altruism bias, and guardian systems may help open many new, potentially useful lines of inquiry and provide a framework to begin moving toward a more mature, scientifically informed understanding of altruism and cooperative behavior.
Published 2013 in PNAS [link]
Pathological altruism can be conceived as behavior in which attempts to promote the welfare of another, or others, result instead in harm that an external observer would conclude was reasonably foreseeable. More precisely, this paper defines pathological altruism as an observable behavior or personal tendency in which the explicit or implicit subjective motivation is intentionally to promote the welfare of another, but instead of overall beneficial outcomes the altruism instead has unreasonable (from the relative perspective of an outside observer) negative consequences to the other or even to the self. This definition does not suggest that there are absolutes but instead suggests that, within a particular context, pathological altruism is the situation in which intended outcomes and actual outcomes (within the framework of how the relative values of “negative” and “positive” are conceptualized) do not mesh.
A working definition of a pathological altruist then might be a person who sincerely engages in what he or she intends to be altruistic acts but who (in a fashion that can be reasonably anticipated) harms the very person or group he or she is trying to help; or a person who, in the course of helping one person or group, inflicts reasonably foreseeable harm to others beyond the person or group being helped; or a person who, in a reasonably anticipatory way, becomes a victim of his or her own altruistic actions. The attempted altruism, in other words, results in objectively foreseeable and unreasonable harm to the self, to the target of the altruism, or to others beyond the target.
There are broader implications related to these issues, particularly regarding the policy aspects of the scientific enterprise. Good government is a foundation of large-scale societies; government programs are designed to minimize a variety of social problems. Although virtually every program has its critics, well-designed programs can be effective in bettering people’s lives with few negative tradeoffs. From a scientifically based perspective, however, some programs are deeply problematic, often as a result of superficial notions on the part of program designers or implementers about what is genuinely beneficial for others, coupled with a lack of accountability for ensuing programmatic failures. In these pathologically altruistic enterprises, confirmation bias, discounting, motivated reasoning, and egocentric certitude that our approach is the best—in short, the usual biases that underlie pathologies of altruism—appear to play important roles.
Well-meaning but unscientific approaches toward altruistic helping can have the unwitting effect of ensuring that the benefits of science and the scientific method are kept away from those most in need of help. In the final analysis, it is clear that when altruistic efforts in science are presented as being beyond reproach, it becomes all too easy to silence rational criticism. Few wish to run the gauntlet of criticizing poorly conducted, highly subjective “science” which is purported to help, or indeed, of daring to question the basis of problematic scientific paradigms that arise in part from good intentions.
To object to a scientific theory is one thing, but to object to a scientific theory that connects, however tenuously, to feelings of morality is quite another. Once morality plays a role, even at the most subliminal level, the formidable cognitive biases of altruism and its pathologies can swing into play. Perhaps for that reason different academic disciplines and specific topics within those disciplines show differing requirements for rigor. In disciplines related to helping people (which can encompass a surprisingly broad swathe of even hard-science topics), scientists’ differing treatment of research findings that elicit altruism bias can skew the findings of seemingly objective science. As Robert Trivers has noted: “It seems manifest that the greater the social content of a discipline, especially human, the greater will be the biases due to self-deception and the greater the retardation of the field compared with less social disciplines”.
One of the most valuable characteristics of science is that, despite the obvious imperfection of biases in ostensibly objective scientists, it provides a potential mechanism for overcoming those biases. At the same time, altruism bias may be one of the most pernicious, hard-to-eradicate biases in science, because it involves even-handed examination of what groups of seemingly objective rational scientists subliminally have come to regard as sacred.
Potential Steps to Address Altruism Bias in Academic Disciplines and the Scientific Enterprise. There are active steps that could be taken to prevent the potential for altruism bias within the scientific enterprise. In all-important journal review processes, for example, mixed panels of reviewers (e.g., cognitive psychologists and neuroscientists reviewing social psychological papers) could become standard practice (105). Doctoral programs can place heavier emphasis on the scientific method and careful use of statistics so that graduate students, who are themselves future journal reviewers, can learn to spot problematic submissions more easily and perhaps be less likely to conduct problematic research themselves. The many aspects of altruism bias and the problems as well as benefits of empathy can be much more broadly discussed and emphasized in textbooks, beginning even in high school and the early years of college. Disciplines heavily involved in social advocacy, whose primary goal involves truly benefitting others, should be among the first to take interest in incorporating these concepts and approaches into research and training programs, editorial efforts, and textbooks.
JC comments: Pathological altruism is an interesting concept (with a catchy name). To date, the arguments for climate mitigation/adaptation policies have been mostly economic, although there are very substantial uncertainties in such assessments. There have also been ‘ethical’ arguments based upon concerns over future generations and the populations that are currently the most vulnerable (e.g., Bangladeshis); it is such arguments for climate action that could be characterized as altruistic.
As defined here, I certainly see evidence of pathological altruism in climate policy espoused by politicians, advocacy groups and even climate scientists. I am particularly concerned by altruism bias in climate science and among some climate scientists, and by attempts being made to silence rational criticism; any scientist who uses the word ‘denier’ is likely to suffer from altruism bias.
I look forward to your comments in fleshing out this concept in context of climate science and policy.