The doctrine known as “effective altruism” (EA) is very recent. It was introduced around 2009 at Princeton University by Matt Wage, a student of the controversial moral philosopher Peter Singer.
The ideas behind EA spread like wildfire, arriving at Oxford University in 2011, where Will MacAskill co-founded the Centre for Effective Altruism.
What is “effective altruism,” how is it different from ordinary philanthropic giving, and why has it caused such a stir?
Philanthropies have long been aware of the need for self-policing. As a result, a number of philanthropic NGO “evaluators”—such as Charity Navigator, founded in 2001—were created well before EA appeared on the scene.
The specific appeal of EA is that it does not just promise to do a better job of pursuing the same old goals, such as reducing overhead. Rather, it claims to apply strict reason and quantitative analysis to ascertain which goals one ought to pursue in the first place, judged by the standard of doing the most good one can.
While EA found quick acceptance within the larger world of international philanthropy, it only became known to the general public beyond the academic and philanthropic communities in November 2022, with the financial collapse of the cryptocurrency exchange known as FTX.
FTX was the brainchild of the politically connected cryptocurrency entrepreneur Sam Bankman-Fried (SBF), who was also a prominent backer of EA. SBF’s personal fortune (estimated at its peak at some $26 billion) was wiped out overnight by the bankruptcy of FTX and related ventures that he owned.
These startling events were followed in quick succession by SBF’s indictment on December 13, 2022, on a variety of criminal charges, including securities fraud, wire fraud, money laundering, and campaign finance violations. Later, witness tampering was added to the list.
SBF was arrested and initially spent 10 days in a Bahamian jail. He was then released upon posting an enormous $250 million bond, co-signed by his parents and secured in part against their home (SBF himself now being destitute).
SBF—who is still only 31 years old—lived under house arrest in his parents’ home for about eight months, until August 11, 2023. Since then, he has been held in a cell in the Metropolitan Detention Center in Brooklyn, New York, awaiting trial.
Now, many readers will no doubt be wondering what SBF’s personal travails have to do with EA.
There are two main reasons for us to consider these sordid facts. First, SBF had been a major contributor to the Democratic Party and “progressive” political causes, generally. It is claimed that he contributed some $40 million to the Democrats during the 2022 election cycle and, before falling from grace, planned to donate a cool billion to them during the 2024 presidential election season.
The other reason why SBF is important is that he has become, fairly or not, something of a poster boy for EA, and his personal defects are being attributed by some critics to other EA proponents—the overwhelming majority of whom, it must be said, are (like SBF) young, white, male, well-educated, and rich.
Yet another criticism of the movement has nothing to do with money or politics. Rather, it involves that third major motivator of human (especially male) misbehavior—sex.
A large number of accusations of sexual impropriety have been made over the past several years by women involved with the EA movement.
For example, Time magazine writer Charlotte Alter interviewed at least seven women who have accused EA activists of sexual harassment and abuse.[1] In addition, Alter describes some eye-opening practices of EA proponents.
One woman interviewed by Alter recounts how, upon joining a Bay Area EA community, she was repeatedly pressured by male colleagues to join their “polycule,” slang for a network of people joined by overlapping romantic and sexual relationships (in this case, people who also lived together).
EA proponents regard this practice—known as “polyamory”—as rational and enlightened, whereas monogamy, being traditional, is in their view archaic and irrational. Alter reports that up to 30% of EA proponents have been estimated to favor polyamory.
One of the women Alter interviewed explained how male EA supporters are adept at rationalizing their bad behavior. She reports one man telling her, “It’s not a hookup, it’s a poly relationship.” This interviewee concluded by saying the gaslighting she was subjected to by the EA group she wished to join made her feel like she “was being sucked into a cult.”
Another woman Alter interviewed tried to put herself in the shoes of a male EA proponent, saying: “You’re used to overriding these gut feelings because they’re not rational.” She then went on to observe: “Under the guise of intellectuality, you can cover up a lot of injustice.”
Interviews recorded in a similar article by Bloomberg journalist Ellen Huet, published in March 2023, corroborate the picture drawn by Alter.[2] Several of Huet’s subjects stress the faddishness and cliquishness of EA.
For example, Huet reports that one of the women she interviewed explained why she was attracted to the group despite the sexual harassment. She said it was because being accepted by these highly intelligent and powerful men was like being “in the cool kids group.”
Huet’s interviewees also stress the high opinion the EA members of their acquaintance had of themselves. They basically saw themselves as a small group of Nietzschean supermen who were both smarter and morally better than anyone else, and so “beyond” ordinary moral demands. They referred to non–group members as “normies” or “NPCs,” that is, “non-player characters”—meaning extras in a “role-playing game” (RPG).
Now, one might push back against this narrative by saying it is nothing more than an ad hominem attack on the entire EA movement on the basis of the misbehavior of a few low-ranking members. And that, one might think, is quite unfair.
The natural reply to this defense is to point out that the movement’s ideology seems to stimulate delusions of grandeur and the warping of ordinary moral intuitions. As we shall see below, transgressing traditional moral norms is in a sense the whole point of utilitarianism, of which EA is an example.
Moreover, the leaders of the EA movement do not inspire much more confidence than the members of the Bay-area polycule.
First and foremost, there is Peter Singer, who is most famous for arguing that the killing of babies even after their birth can be morally permissible. He vehemently denies that human life is sacred and teaches that animals often have a greater claim on our moral consideration than human beings do.
You have to hand it to Singer: He probably takes the prize for arrogance (against some stiff competition). For example, he has said that people who disagree with him are not capable of leading “a minimally acceptable ethical life.”[3]
Then there is William MacAskill, after Singer probably the most visible and influential member of the EA movement. Both a mentor to SBF and an adviser to the latter’s FTX Future Fund, MacAskill has advocated that all carnivorous species be exterminated, thus one-upping your friendly neighborhood conservationist who goes around shooting cats to save birds.
A third prominent exponent of EA—Oxford philosopher Nick Bostrom—has abandoned charitable giving to living people in favor of supporting research into the possibility that someday the human species may be wiped out by rebellious robots—a strong candidate for the Nobel Prize in Philosophical Fantasy.
Another charming idea of Bostrom’s is “preventive policing,” by which he means 24/7 surveillance of all human beings, everywhere on earth.[4]
Of course, none of this will cut any ice with EA’s supporters. However, I suspect—and hope—that most people will feel there is real reason for concern about an intellectual movement whose leading lights hold such lunatic views.
Common sense, after all, lies at the heart of all moral thought. All we really have to go on in ethics, or in philosophy generally, is how things seem to us.
And unless our minds have been warped by ideology, we can just see that it is wrong to kill babies and to purposely exterminate entire species of animals—and that with all the things there are in life to worry about, the robot apocalypse belongs pretty far down the list.
It may be worthwhile to pause here for a moment to look more closely at this last bizarre idea of Bostrom’s, which has been hugely influential. Indeed, it helped give birth to a brand-new offshoot of EA, the intellectual movement known as “longtermism.”
What is longtermism?
Philosopher Émile P. Torres, interviewed by Nathan J. Robinson in the online version of the bimonthly magazine Current Affairs, explains that the basic idea underlying longtermism is extremely simple.[5]
Start with the fundamental concept behind utilitarianism, in general, and Peter Singer’s work, in particular—namely, that location in space is (or ought to be) irrelevant to one’s moral responsibilities.
That is, on this view, a hungry child on the other side of the world ought to be morally equivalent in your eyes to your own hungry child. This means that any unknown child has the same claim on your concern and your care as a child you know, love, and are morally responsible for.
In other words, Singer’s ethics requires that you take no account of the fact that you have a special relationship to your own child. For him, ethics is a matter of objective reason and universality, not subjective feeling and your own embodied placement within the world and the web of social relationships.
So, the fundamental principle underlying general EA is the moral equivalence of all locations in space. Longtermism merely takes the next logical step, positing the moral equivalence of all locations in time.
What does that even mean?
Well, it cannot be taken at face value since we have no power to influence the past. For this reason, there cannot be true moral equivalence between all locations in time. We have no choice but to privilege the present, and—according to longtermism—the future, over the past.
Setting that consideration aside, proponents of longtermism reason as follows:
EA requires that each of us do the most good that he can with his personal resources. However, the good each of us can do by focusing our concern on ordinary giving is limited by the number of people presently in existence (a mere eight billion or so).
By contrast, the good each of us can do by focusing our concern on contributing to the well-being of future people is for all practical purposes unlimited. The number of people who may live between now and the far future, when humanity will have spread throughout the universe, could easily run to hundreds of billions or trillions of individuals, or more.
If one subscribes to EA, then, it is obvious to which group of people one ought to contribute one’s time and treasure. After all, the earth’s present population’s moral claim on our attention pales into insignificance in comparison with the claims of future people.
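The arithmetic driving this conclusion can be made explicit. Here is a rough sketch in expected-value terms; the symbols and population figures are illustrative assumptions of mine, not numbers drawn from any longtermist text:

\[
\mathbb{E}[\text{good}] = p \times N \times v
\]

where \(p\) is the probability that one’s donation actually helps, \(N\) is the number of people it would affect, and \(v\) is the benefit to each. With \(N_{\text{present}} \approx 8 \times 10^{9}\) but \(N_{\text{future}} \geq 10^{12}\), a future-directed cause can be over a hundred times less likely to succeed and still win the calculation, since \(10^{12} \div (8 \times 10^{9}) = 125\). This is how speculative projects such as averting a robot apocalypse come to outrank mosquito nets.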
What should we make of all of this?
It is difficult to take such ideas seriously. One’s first impulse is to meet them with what the late philosopher David Lewis liked to call the “incredulous stare.”
However, we should not give in to this impulse. EA supporters will reply that the mere fact that a proposition violates common sense does not logically refute it.
I submit that most of the madness that permeates the EA movement is a magnification of the insanity it inherits as an offshoot of utilitarianism.
Utilitarianism itself is already a scientistic ideology.
What do I mean by this?
I mean that utilitarianism marches beneath the banner of “science” and “reason,” while having nothing to do with science and being manifestly irrational.
EA takes this fundamentally scientistic stance and inflates it by orders of magnitude with its own explicit claims to being super-scientific and ultra-rational.
For this reason, no inquiry into EA would be complete that did not also raise the question of the moral status of utilitarianism. For without its utilitarian foundations, the entire EA edifice crumbles.
So, how successful is utilitarianism as an account of human morality?
The fundamental problem with utilitarianism is its failure to offer a persuasive account of the nature of the good. The formula “the greatest good for the greatest number” is obviously vacuous until one has identified what is to count as “good.”
What distinguishes utilitarianism from other forms of consequentialism (the view that the morality of an action reduces to its consequences) is its claim to be able to specify the morally correct action in any situation. That is, utilitarianism essentially claims to have reduced moral decision-making to an algorithmic formula.
This is undoubtedly why utilitarianism is so attractive to the proponents of EA. The latter merely takes the formula “the greatest good for the greatest number” and updates it with modern analytical and statistical tools.
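To see how literally the formula can be read as an algorithm, consider a minimal sketch in Python. The causes, numbers, and function names are hypothetical illustrations of mine, not the actual methodology of any EA evaluator:

```python
# A minimal sketch of the utilitarian calculus read as a literal algorithm.
# All causes, numbers, and names here are hypothetical illustrations.

def total_utility(action):
    """Sum the 'units of pleasure' an action produces across everyone affected."""
    return sum(action["utilities"])

def best_action(actions):
    """'The greatest good for the greatest number' as a literal argmax."""
    return max(actions, key=total_utility)

candidates = [
    {"name": "fund mosquito nets", "utilities": [0.8] * 1000},  # many people, modest gain each
    {"name": "fund a local opera", "utilities": [30.0] * 10},   # few people, large gain each
]

print(best_action(candidates)["name"])  # -> fund mosquito nets (800 vs. 300 units)
```

The point of the sketch is the shape of the procedure: a single scalar of “good” per person, summed and maximized. Everything that follows turns on whether the human good can really be squeezed into that one scalar.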
Now, utilitarianism defines the “good” as “units” of “pleasure” or “happiness.” In this guise—identifying the good with pleasure—utilitarianism is a form of hedonism.
For the most part, EA echoes utilitarianism’s emphasis on “pleasure” as the good that it is trying to maximize. Or, rather, EA proponents mostly say they want to minimize pain, which is fair enough.
But nowhere, to my knowledge, do the main works of the EA movement raise the question: Is the human good really reducible to pleasure (or the mere absence of pain), as utilitarians hold?
In fact, utilitarianism’s narrow focus on pleasure is itself prima facie evidence of its inadequacy as a moral theory.
How is that? Because utilitarianism’s failure to interrogate the nature of the good means that it is blind to a range of other moral considerations which may conflict with pleasure.
One of the early proponents of utilitarianism, John Stuart Mill, recognized this problem, which critics of his day expressed harshly by deriding the theory as a “doctrine of swine.”[6]
In response, Mill revised utilitarianism to incorporate a much broader view of “happiness” into the calculation of “the greatest good for the greatest number.” Namely, he acknowledged the higher moral attributes of human nature, such as love, loyalty, honor, reverence, and respect for the true and the beautiful.
However, once you recognize the multiplicity of the human good, you have effectively abandoned the idea that the determination of “happiness” can be reduced to an algorithmic procedure. For this reason, it is far from clear that Mill’s revision saved utilitarianism, as is usually held.
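The difficulty can be made concrete. Continuing the earlier Python sketch (again with hypothetical numbers of my own), give each action a score for one of Mill’s higher goods alongside its pleasure score. Vectors of incommensurable goods have no natural total ordering, and the argmax breaks down:

```python
# Continuing the earlier sketch: once the good has more than one dimension,
# no action need be best overall. The numbers are hypothetical.

action_a = {"pleasure": 10.0, "honor": 2.0}
action_b = {"pleasure": 3.0, "honor": 9.0}

def dominates(x, y):
    """True if x is at least as good as y in every dimension and strictly better in one."""
    return all(x[k] >= y[k] for k in x) and any(x[k] > y[k] for k in x)

print(dominates(action_a, action_b))  # False: a wins on pleasure but loses on honor
print(dominates(action_b, action_a))  # False: neither action dominates the other

# To rank them anyway, one must fix an exchange rate between honor and
# pleasure, and that rate is itself a moral judgment the "algorithm"
# cannot supply.
```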
Because the plausibility of utilitarianism (and of EA) rests almost entirely on the identification of the good with pleasure (or happiness)—which any detached observer must find an impoverished and highly partial view of human nature—it seems more likely that Mill’s concessions, properly considered, will be seen to have driven the last nails into utilitarianism’s coffin.
Beyond utilitarianism’s blindness to non-hedonic values, the theory has other problems which are best expressed by introducing insights from the other major moral theories.
I am thinking primarily of deontology—that is, the Kantian system—which holds that there are unconditional moral demands which we may not violate for any reason.
This is not the place to go into the intricacies of Kant’s various formulations of the categorical imperative. However, it is important to emphasize that utilitarianism refuses to recognize any boundaries to its own formula of “the greatest good for the greatest number.” If slitting the throats of newborn babies would bring about the greatest good (according to their lights) for the greatest number, then consistent utilitarians must man up and sharpen their knives.
Finally—and, to me, this is decisive—there is the problem of utilitarians’ impact on the objects of their charitable attentions.
Apparently, calculating “the greatest good for the greatest number” does not extend to worrying about what the recipients of charity will do after the donors get bored and move on to other problems—like the robot apocalypse.
However, from an Aristotelian perspective—or even just a traditional, commonsense perspective—the most good we can do for someone is to help him to become more virtuous, meaning more self-disciplined, and hence more able to subordinate his passing desires to the demands of higher goods, such as love, loyalty, honor, or duty.
Once you realize that education in virtue is what will really produce the greatest good for the greatest number, the demands of charity take on an entirely new appearance.
By the way, this way of looking at the problem need not violate our ordinary moral intuitions. I may still give a homeless person money without subjecting him to a lecture about the evils of alcohol.
However, if we ask the question, “What is the most good that we can do with our charitable donations to the Global South?,” then the answer must be something along these lines:
“Give to those organizations that best help people to help themselves.”
For example, instead of giving money to a charity to send mosquito nets to an African country, use that money to help an African entrepreneur in the country to set up a factory to make mosquito nets locally.
Not only will this approach help reduce the incidence of malaria, it will also give many people employment and perhaps provide still others with an example of virtues worthy of emulation.
From this perspective, one might even argue that we would do the most good of all by getting involved in politics and working for the election of public officials who share the traditional, commonsense view of morality. There is not much doubt that public policies based on this viewpoint would do more to lift the Global South out of poverty than individual donations could ever do.
In summary, it is not so much that EA is offensive in and of itself. After all, “Let a hundred flowers bloom,” as Mao Zedong once said.
Rather, what sticks in the craw is the arrogance and holier-than-thou attitude of EA’s proponents: their conviction, not merely that they are the best people in the world, but that anyone who disagrees with them is morally defective, even as there are many reasons to think their own moral viewpoint seriously deficient.
———NOTES———
1. Charlotte Alter, “Effective Altruism Promises to Do Good Better. These Women Say It Has a Toxic Culture of Sexual Harassment and Abuse,” Time, February 3, 2023.
2. Ellen Huet, “The Real-Life Consequences of Silicon Valley’s AI Obsession,” Bloomberg, March 7, 2023.
3. Peter Singer, “The Logic of Effective Altruism,” Boston Review, July 1, 2015.
4. Nick Bostrom, “The Vulnerable World Hypothesis,” Global Policy 10 (2019): 455–476.
5. Émile P. Torres, interview with Nathan J. Robinson, “Why Effective Altruism and ‘Longtermism’ Are Toxic Ideologies,” currentaffairs.org, May 7, 2023.
6. John Stuart Mill, Utilitarianism. London: Parker, Son, and Bourn, 1863. (Originally published in three issues of Fraser’s Magazine in 1861.)