Long and rough notes on Effective Altruism (EA). Written to help me get to the bottom of several questions: what do I like and think is important about EA? Why do I find the mindset so foreign? Why am I not an EA? And to start me thinking about: what do alternatives to EA look like? The notes are not aimed at effective altruists, though they may perhaps be of interest to EA-adjacent people. Thoughtful, informed comments and corrections welcome (especially detailed, specific corrections!) - see the comment area at the bottom.
"Using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis": that's the idea at the foundation of the Effective Altruism (EA) ideology and movement1. Over the past two decades it has gone from being an idea batted about by a few moral philosophers to being a core part of the life philosophy of thousands or tens of thousands of people, including several of the world's most powerful and wealthy individuals. These are my rough working notes on EA. The notes are long and quickly written: disorganized rough thinking, not a polished essay.
I wrote the notes for a few reasons. One is purely social: many of my friends have strong opinions on EA (some pro-, some anti-, others more neutral). Another is a sense that EA is important as a social movement and (perhaps) as a set of ideas. It's significant that so many smart, idealistic teenagers and 20-somethings respond so strongly to EA. Many report radically changing their lives: changing careers; changing their day-to-day behavior; committing to give a large proportion of their income to charities they describe as "effective". EAs[2] also share a lot of unusual language and ways of seeing the world, much of it adapted from welfare economics and moral philosophy.
It's tempting to dismiss this as all "just" fashion, or as a consequence of the (meteoric) rise in funding to EA. But I don't buy it. Many EAs are extraordinarily sincere, and have found tremendous conviction and meaning in EA. It's doing something very important for them, something well beyond being a niche fashion.
When I first learned about EA, my instinctive and poorly thought through response was fairly negative. I've often half-joked that I'm an ineffective altruist, or a chaotic altruist. I like to describe myself as a mutilitarian, using the Zen Buddhist "mu" as my utility function (i.e., a denial of the idea). And yet upon deeper inspection these are cheap dismissals.
In 2011 an EA friend of mine went under the knife, donating a kidney to a stranger. He explained to me that he:
came across some stats on how safe it was to donate and it totally changed my picture. I thought, 1/3,000 risk of death in surgery is like sacrificing yourself to save 3,000 people. I want to be the kind of person who'd do that, and you just have to follow these few steps.
I have EA friends who donate a large fraction of their income to charitable causes. In some cases it's all their income above some fairly low (by rich developed world standards) threshold, say $30k. In some cases it seems plausible that their personal donations are responsible for saving dozens of lives, helping lift many people out of poverty, and preventing many debilitating diseases, often in some of the poorest and most underserved parts of the world. Some of those friends have directly helped save many lives. That's a simple sentence, but an extraordinary one, so I'll repeat it: they've directly helped save many lives.
I'm in stunned awe of all this. And feel a little sheepish about my jokes about ineffective altruism, and grateful that EA friends put up with me. I've tried to live a life matching my personal skills and interests to things which are good for the world. I hope I've done some genuine good, while also enjoying my life. But I've never directly saved a life, as far as I know. I do not think I could donate a kidney: it would violate my sense of somatic integrity too much. At a personal level, I love the sincerity and genuine goodness of my EA and near-EA friends. I simply feel more wholesome after spending time with them. I'm often more honest; I'm sometimes kinder or more open-minded. These are all very good things.
What follows, then, is a collection of observations about EA. It's in part an appreciation: to critique EA you must also understand some of what is good about it. And there is a great deal that can be learned from it by other ideologies. But I'll also dig down and try to understand what bothers me about EA, what I think is wrong, and how I think EA might fruitfully be modified.
Something that's missing from the notes: a direct, first-person account of the good that EA does. I have some reflected sense of this from friends, but wish I knew more. It's impossible to genuinely appreciate EA without it. Malaria bed nets, direct cash transfers, de-worming, and so on aren't abstractions: they are, in fact, an enormous real-world event, making a huge difference in the lives of many people. And that's missing here, just due to my ignorance. Try to keep this fact in mind as you read; I've tried to do so as I write.
A caution: I make a lot of generalizations about "what EA does". But EA is not monolithic. This makes it difficult to write without inserting lots of qualifiers. I could do it by saying "Most EAs believe", or quoting leading EAs, and so on. I've instead (mostly) opted to use the general language, with the implicit understanding that there are often EAs who disagree with that particular piece. However, I've tried to note when there is widespread disagreement about a point among the EA community.
I began the notes with a widely-used description of EA, taken from philosopher Will MacAskill, one of the founders of EA: "Using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis". In practice, I've often heard this abbreviated along the lines of: "Using evidence and reason to do the most good possible". I'll usually use the latter as a shorthand for what EA is about, while keeping in mind the longer description. One caveat about both these descriptions: note that they are inherently maximizing, "benefit others as much as possible", "do the most good possible". In fact, many EAs advocate backing away quite a bit from this maximizing frame. As a result, it makes sense to think of different "strengths" of EA, according to how much a person accepts this maximization approach (or not). We'll return to this, as it's a significant issue not settled by the EA community. And when I use "the most good" framing, it's with the implicit caveat that many EAs back off from "most" in practice.
I mentioned above my friend who donated a kidney in 2011. The moral philosopher Peter Singer, one of the originators of many ideas in EA, describes his amazement[4] at learning (in 2004) the story of Zell Kravinsky, a wealthy real estate investor who had donated almost his entire fortune of $45 million, living on $60,000 a year. But there's something even more remarkable. At first sight it will look very similar to my friend's kidney donation story above. But it's different in an important way:
He still did not think he had done enough to help others, so he arranged with a nearby hospital to donate a kidney to a stranger… Quoting scientific studies that show the risk of dying as a result of making a kidney donation to be only 1 in 4,000, he says that not making the donation would have meant he valued his life at 4,000 times that of a stranger, a valuation he finds totally unjustified.
As extraordinary as my friend's generosity was, there is something further still going on here. Kravinsky's act is one of moral imagination, to even consider donating a kidney, and then of moral conviction, to follow through. This is an astonishing act of moral invention: someone (presumably Kravinsky) was the first to both imagine doing this, and then to actually do it. That moral invention then inspired others to do the same. It actually expanded the range of human moral experience, which others can learn from and then emulate. In this sense a person like Kravinsky can be thought of as a moral pioneer or moral psychonaut[5], inventing new forms of moral experience.
Of course, such moral pioneers don't only come from EA. Far from it! They are at the foundation of our civilization. Many of my personal heroes are moral pioneers, including the author of the Sermon on the Mount[6], the abolitionist movement, the suffragettes and feminist movement, Martin Luther King and other leaders in the civil rights movement. All these (and many more) engaged in acts of moral imagination that expanded the range of moral experience available to the rest of us to emulate. We may not always agree with them: I don't know, for instance, that I agree with Peter Singer's views on the rights of animals. Singer may be wrong on that. But it's valuable nonetheless as an act of moral invention expanding our potential range of moral experience.
One of the things that's interesting about EA is that it has encouraged many moral pioneers: people willing to rethink fundamental moral questions, and (sometimes) to expand the range of our moral experience. Questions they've seriously asked (and in some cases, acted on the answers): "What if animal lives truly mattered?" "What if a life on the other side of the world mattered just as much as that of a child drowning before your eyes?" "What if an intelligent machine's 'life' mattered just as much as a human being's?" "How should we value the life of a human being living a million years from now?" By taking these questions seriously they can expand our moral horizons.
There's a dark flipside to moral pioneering, memorably pointed out by the political philosopher Hannah Arendt in Eichmann in Jerusalem, her account of the trial of the Nazi war criminal Adolf Eichmann. In Arendt's account, the Nazis were (in some sense) also moral pioneers, inventing new kinds of crime, which then expanded the likely range of future crime:
Nothing is more pernicious to an understanding of these new crimes, or stands more in the way of the emergence of an international penal code that could take care of them, than the common illusion that the crime of murder and the crime of genocide are essentially the same, and that the latter therefore is “no new crime properly speaking.” The point of the latter is that an altogether different order is broken and an altogether different community is violated… It is in the very nature of things human that every act that has once made its appearance and has been recorded in the history of mankind stays with mankind as a potentiality long after its actuality has become a thing of the past. No punishment has ever possessed enough power of deterrence to prevent the commission of crimes. On the contrary, whatever the punishment, once a specific crime has appeared for the first time, its reappearance is more likely than its initial emergence could ever have been.
Moral reasoning, if taken seriously and acted upon, is of the utmost concern, in part because there is a danger of terrible mistakes. The Nazi example is overly dramatic: for one thing, I find it hard to believe that the originators of Nazi ideas didn't realize that these were deeply evil acts. But a more everyday example, and one which should give any ideology pause, is overly self-righteous people, acting in what they "know" is a good cause, but in fact doing harm. I'm cautiously enthusiastic about EA's moral pioneering. But it is potentially a minefield, something to also be cautious about.
One of the most common lines of "attack" on EA is to disagree with common EA notions of what it means to do the most good. "Are you an EA?" "Oh, those are the people who think you need to give money for malaria bed nets [or AI safety, or de-worming, etc etc], but that's wrong because […]." Or: "Will MacAskill says that EAs should consider earning-to-give, but that's wrong because […]". Or: "Science and social justice and creativity and [etc etc etc] are much harder to measure than things like QALYs, so EAs tend to undervalue or ignore them." Or: "EAs are rather credulous[7] about the value of RCTs and meta-analysis, you should instead […]". Or: "Look, you can directly increase QALYs all you want, it won't shift you from a low-growth to a high-growth economy. The two are at different levels of causal abstraction".
These statements may or may not be true. Regardless, none of them is a fundamental critique of EA. Rather, they're examples of EA thinking: you're actually participating in the EA project when you make such comments. EAs argue vociferously all the time about what it means to do the most good. What unites them is that they agree they should "use evidence and reason to figure out how to do the most good"; if you disagree with prevailing EA notions of most good, and have evidence to contribute, then you're providing grist for the mill driving improvement in EA understanding of what is good.
In any case, this kind of "critique" accounts for at least half – probably more – of the external criticism of EA I've heard. Most external critics who think they're critiquing EA are critiquing a mirage. In this sense, EA has a huge surface area which can only be improved by critique, not weakened. I think of the pattern as EA judo. And you see it often in discussions with "EA critics". A pleasant, informative example is EA Rob Wiblin interviewing Russ Roberts, who presents himself as disagreeing with EA. But through (most of) the interview, Roberts tacitly accepts the basic ideas of EA, while disagreeing with particular instantiations. And Wiblin practices EA judo, over and over, turning it into a very typical EA-type debate over how to do the most good. It's very interesting and both participants are very thoughtful, but it's not really a debate about the merits of EA.
This is, to me, one of the most attractive and powerful features of EA. It makes it very different to most ideologies, which are often rather static. EA is, in some sense, an attempt to do for the question "what is the good?" what science has done for the question "how does the world work?". Instead of providing an answer it is developing a community that aims to continually improve the answer[8].
Because of this, it's worth separating EA-in-practice (a social movement) from EA-the-intellectual-project. If you wish to get at fundamental issues, you ultimately need to focus on the latter, not just the former. As I said: many critiques of EA-in-practice are just part of the core engine of improvement. This does not mean, however, that it's not worth spending time critiquing the surface area of EA-in-practice. "By their fruits ye shall know them" holds for intellectual principles, not just people. If a set of principles throws off a lot of rotten fruit, it's a sign of something wrong with the principles, a reductio ad absurdum. You've probably heard communists and libertarians defend failed communist and free-market experiments by saying: "It wasn't a true communist / free-market experiment". Sometimes they have a point. But if the pattern persists, if the fundamental principles aren't resilient or need a lot of special pleading, it means those principles have something badly wrong with them.
Put another way: when EA judo is practiced too much, it's worth looking for more fundamental problems. The basic form of EA judo is: "Look, disagreement over what is good does nothing directly to touch EA. Indeed, such disagreement is the engine driving improvement in our notion of what is good." This is perhaps true in some God's-eye, omniscient, in-principle philosopher's sense. But the EA community and EA organizations are subject to fashion and power games and shortcomings and biases, just like every other community and organization. Good intentions alone aren't enough to ensure effective decisions about effectiveness[9]. And the reason many people are bothered by EA is not that they think it's a bad idea to "do good better", but rather that they doubt the ability of EA institutions and the EA community to live up to those aspirations.
These critiques can come from many directions. From people interested in identity politics I've heard: "Look, many of these EA organizations are being run by powerful white men, reproducing existing power structures, biased toward technocratic capitalism and the status quo, and ignoring many of the things which really matter." From libertarians I've heard: "Look, EA is just leftist collective utilitarianism. It centralizes decision-making too much, and ignores both price signals and the immense power that comes from having lots of people working in their own self-interest, albeit inside a system designed so that self-interest (often) helps everyone collectively[10]." From startup people and inventors I've heard: "Aren't EAs just working on public goods? If you want to do the most good, why not work on a startup instead? We can just invent and scale new technology (or new ideas) to improve the world![11]" From people familiar with the pathologies of aging organizations and communities, I've heard: "Look, any movement which grows rapidly will also start to decay. It will become dominated by ambitious careerists and principal agent problems, and lose the sincerity and agility that characterized the pioneers and early adopters[12]"[13].
All these critiques have some truth; they also have significant issues. Without getting into those weeds, the immediate point is that they all look like "merely" practical problems, for which EA judo may be practiced: "If we're not doing that right, we shall improve, we simply need you to provide evidence and a better alternative". But the organizational patterns are so strong that these criticisms seem more in-principle to me. Again: if your social movement "works in principle" but practical implementation has too many problems, then it's not really working in principle, either. The quality "we are able to do this effectively in practice" is an important (implicit) in-principle quality.
Let's go back to the EA principle: "Using evidence and careful reasoning to do the most good possible". It's a very attractive principle in many ways. It's extremely clear. It's highly orienting and meaning-giving, especially if embedded in a social and organizational context that makes convincing recommendations about how to do the most good. Those recommendations don't need to be perfect: they need merely be better than you expect you could do in most other community contexts.
Part of the attraction of the principle is that it takes away choice. One great achievement of modernity is to give people more and more choice, until they get to choose (seemingly) everything[14]. But vast choice is also bewildering and challenging. Much of the power of EA (and of many ideologies) is to take away much of that choice, saying: no, you have a duty[15] to do the most good you can in the world. Furthermore, EA provides institutions and a community which helps guide how you do that good. It thus provides orientation and meaning and a narrative for why you're doing what you're doing.
On Twitter, ex-EA Nick Cammarata made the following comment, which I've heard echoed in one-on-one conversations with many EAs and ex-EAs:
my inner voice in early 2016 would automatically convert all money I spent (eg on dinner) to a fractional “death counter” of lives in expectation I could have saved if I’d donated it to good charities. Most EAs I mentioned that to at the time were like ah yeah seems reasonable
Or consider the following remarkable exchange on Twitter, between a non-EA and an EA:
"the optimal amount of optimal charity is not 100%"
"But good EAs take this into account"
"Yes but bad EAs get caught in a misery trap"
"True, but that's not a flaw of EA, that's a flaw of those people."
Or consider the following passage in Peter Singer's book "The Most Good You Can Do":
When [pioneering EA] Julia [Wise] was young she felt so strongly that her choice to donate or not donate meant the difference between someone else living or dying that she decided it would be immoral for her to have children. They would take too much of her time and money. She told her father of her decision, and he replied, “It doesn’t sound like this lifestyle is going to make you happy,” to which she responded, “My happiness is not the point.” Later, when she was with [her husband] Jeff, she realized that her father was right. Her decision not to have a child was making her miserable. She talked to Jeff, and they decided they could afford to raise a child and still give plenty. The fact that Julia could look forward to being a parent renewed her sense of excitement about the future. She suspects that her satisfaction with her life makes her of more use to the world than she would be if she were “a broken-down altruist.”
Everyone has boundaries. If you find yourself doing something that makes you bitter, it is time to reconsider. Is it possible for you to become more positive about it? If not, is it really for the best, all things considered?
…
Julia admits to making mistakes. When shopping, she would constantly ask herself, “Do I need this ice cream as much as a woman living in poverty elsewhere in the world needs to get her child vaccinated?” That made grocery shopping a maddening experience, so she and Jeff made a decision about what they would give away over the next six months and then drew up a budget based on what was left. Within that budget, they regarded the money as theirs, to spend on themselves. Now Julia doesn’t scrimp on ice cream because, as she told the class, “Ice cream is really important to my happiness.”
…
Julia’s and Jeff’s decision to have a child shows that they drew a line beyond which they would not let the goal of maximizing their giving prevent them from having something very important to them. Bernadette Young, Toby Ord’s partner, has described their decision to have a child in a similar way: “I’m happy donating 50 percent of my income over my life, but if I also chose not to have a child simply to raise that amount to 55 percent, then that final 5 percent would cost me more than all the rest. … I’m deciding to meet a major psychological need and to plan a life I can continue to live in the long term.” Neither Julia nor Bernadette is unusual in experiencing the inability to have a child—for whatever reason—as deeply distressing. Having a child undoubtedly takes both money and time, but against this, Bernadette points out, effective altruists can reasonably hope that having a child will benefit the world. Both cognitive abilities and characteristics like empathy have a significant inherited component, and we can also expect that children will be influenced by the values their parents hold and practice in their daily lives. Although there can be no certainty that the children of effective altruists will, over their lifetimes, do more good than harm, there is a reasonable probability that they will, and this helps to offset the extra costs of raising them. We can put it another way: If all those who are concerned to do the most good decide not to have children, while those who do not care about anyone else continue to have children, can we really expect that, a few generations on, the world will be a better place than it would have been if those who care about others had had children?
There is a related attitude toward the arts common in EA. Singer is blunt about it, saying you can't really justify the arts:
Can promoting the arts be part of “the most good you can do”?
In a world that had overcome extreme poverty and other major problems that face us now, promoting the arts would be a worthy goal. In the world in which we live, however, for reasons that will be explored in chapter 11, donating to opera houses and museums isn’t likely to be doing the most good you can.
I've heard several EAs say they know multiple EAs who get very down or even depressed because they feel they're not having enough impact on the world. As a purely intellectual project it's fascinating to start from a principle like "use reason and evidence to figure out how to do the most good in the world" and try to derive things like "care for children" or "enjoy eating ice cream" or "engage in or support the arts"[16] as special cases of the overarching principle. But while that's intellectually interesting, as a direct guide to living it's a terrible mistake. The reason to care for children (etc) isn't because it helps you do the most good. It's because we're absolutely supposed to care for our children. The reason art and music and ice cream matter isn't that they help you do the most good. It's because we're human beings – not soulless automatons – who respond in ways we don't entirely understand to things whose impact on our selves we do not and cannot fully apprehend.
Now, the pattern EA has chosen is to insert escape clauses. Many talk about having a "warm fuzzies" budget for "ineffective" giving that simply makes them feel good. And they carve out ad hoc extension clauses like the one about having children or setting aside an ice cream budget or a dinner budget, and so on[17]. It all seems to me like special pleading at a frequency which suggests something amiss. You've started from a single overarching principle that seems tremendously attractive. But now you've either got to accept all the consequences, and make yourself miserable, or you have to start, as an individual, grafting on ad hoc extension clauses. And that turns out to be terribly stressful in its own right. You have thoughtful people like Nick Cammarata in a spin over their dinner. It's not the dinner that's the problem: it's the fact that Cammarata is in a spin. Or Julia Wise, deciding whether to have ice cream – or children.
And it's not surprising: on one side you have a very clear, very powerful principle and superhuman entities (EA organizations + the collective community) sending extremely clear and compelling messages about how to do the most good. But it's at the individual level that people must try to discover and set boundaries. It's no wonder it's stressful.
This is a really big problem for EA. When you have people taking seriously such an overarching principle, you end up with stressed, nervous people, people anxious that they are living wrongly. The correct critique of this situation isn't the one Singer makes: that it prevents them from doing the most good. The critique is that it is the wrong way to live. They need a different foundation for their life. It may be that it includes some variation of that principle, as a small part of a much larger and very well developed life philosophy. But it must be sharply tempered by some other principle or principles; those principles must have the same kind of clarity and force; it must be apparent how all the parts fit together, so the "most good" principle is firmly bounded by the other principles. And it may well be that the balancing needs to be (in part) delegated to superhuman institutions, that it's too much to ask of most individuals without causing them tremendous stress. But if "the most good" is used as the foundation tentpole for a life philosophy, onto which you graft ad hoc additional clauses, that seems to me a recipe for problems.
An alternate solution, and the one that has, I believe, been adopted by many EAs, has been a form of weak-EA. Strong-EA takes "do the most good you can do" extremely seriously as a central aspect of a life philosophy. Weak-EA uses that principle more as guidance. Donate 1% of your income. Donate 10% of your income, provided that doesn't cause you hardship. Be thoughtful about the impact your work has on the world, and consult many different sources. These are all good things to do! The critique of this form is that it's fine and good, but also hard to distinguish from the common pre-existing notion many people have, "live well, and try to do some good in the world". As Amia Srinivasan puts it[18]:
But the more uncertain the figures, the less useful the calculation, and the more we end up relying on a commonsense understanding of what’s worth doing. Do we really need a sophisticated model to tell us that we shouldn’t deal in subprime mortgages [ed: yes], or that the American prison system needs fixing, or that it might be worthwhile going into electoral politics if you can be confident you aren’t doing it solely out of self-interest? The more complex the problem effective altruism tries to address – that is, the more deeply it engages with the world as a political entity – the less distinctive its contribution becomes. Effective altruists, like everyone else, come up against the fact that the world is messy, and like everyone else who wants to make it better they must do what strikes them as best, without any final sense of what that might be or any guarantee that they’re getting it right.
More worrying than the model’s inability to tell us anything very useful once we move outside the circumscribed realm of controlled intervention is its susceptibility to being used to tell us exactly what we want to hear.
…
Effective altruism takes up the spirit of Singer’s argument but shields us from the full blast of its conclusion… Instead of downgrading our lives to subsistence levels, we are encouraged to start with the traditional tithe of 10 per cent, then do a bit more each year. Thus effective altruism dodges one of the standard objections to utilitarianism: that it asks too much of us. But it isn’t clear how the dodge is supposed to work. MacAskill tells us that effective altruists – like utilitarians – are committed to doing the most good possible, but he also tells us that it’s OK to enjoy a ‘cushy lifestyle’, so long as you’re donating a lot to charity. Either effective altruism, like utilitarianism, demands that we do the most good possible, or it asks merely that we try to make things better. The first thought is genuinely radical, requiring us to overhaul our daily lives in ways unimaginable to most. (Singer repeats his call for precisely such an overhaul in his recent book The Most Good You Can Do, and Larissa MacFarquhar’s Strangers Drowning is a set of portraits of ‘extreme altruists’ who have answered the call.) The second thought – that we try to make things better – is shared by every plausible moral system and every decent person. If effective altruism is simply in the business of getting us to be more effective when we try to help others, then it’s hard to object to it. But in that case it’s also hard to see what it’s offering in the way of fresh moral insight, still less how it could be the last social movement we’ll ever need.
There's much I agree with in that excerpt. But I think there's a pretty good retort to Srinivasan's final comment: "in that case it's also hard to see what [EA is] offering in the way of fresh moral insight, still less how it could be the last social movement we'll ever need." If it were a purely intellectual argument, I'd agree with her. But the EAs have actually gone and done it: they've created institutions that are actually centered around the idea. And that's both valuable and an innovation.
Let's return again to the EA principle: "Effective altruism means using evidence and reason to do the most good possible in the world." I've discussed some in-practice symptoms of implicit issues with this principle; I've also discussed problems setting boundaries on the principle. Let's shift to directly critique the principle itself.
Many of the issues are just the standard ones people use to attack moral utilitarianism. Unfortunately, I am far from an expert on these arguments. So I'll just very briefly state my own sense: "good" isn't fungible, and so any quantification is an oversimplification. Indeed, not just an oversimplification: it is sometimes downright wrong and badly misleading. Certainly, such quantification is often a practical convenience when making tradeoffs; it may also be useful for making suggestive (but not dispositive) moral arguments. But it has no fundamental status. As a result, notions like "increasing good" or "most good" are useful conveniences, but it's a bad mistake to treat them as fundamental. Furthermore, the notion of a single "the" good is also suspect. There are many plural goods, which are fundamentally immeasurable and incommensurate and cannot be combined.
I find these attacks compelling. As a practical convenience and as a generative tool, utilitarianism is useful. But I don't accept utilitarianism as a fundamental fact about the world.
(Tangentially: it is interesting to ponder what truth there is in past UN Secretary-General Dag Hammarskjöld's statement that: "It is more noble to give yourself completely to one individual than to labor diligently for the salvation of the masses." This is, to put it mildly, not an EA point of view. And yet I believe it has some considerable truth to it.)
Less centrally, the part of the principle about "using evidence and reason" is striking. There is ongoing change in what humanity means by "evidence and reason", with occasional sharp jumps. Indeed, many of humanity's greatest achievements have been radical changes in what we mean by evidence and reason. 11th century standards of evidence and reason are very different from today's; I expect 31st century standards will be very different again. Of course, this point can be addressed with some minor patching: change the principle to "using our best current standards of evidence and reasoning to do the most good possible in the world", to emphasize awareness of the fact that these standards do change.
There are four subjects I'd like to treat at length, but which I've decided to leave outside the scope of the current notes. I want to just mention them here, at the risk of confusing the issue with a too-easily-misinterpreted brief account. All four subjects really need a lengthy account:
Illegibility: A common argument against EA is that it undervalues illegible activity. The typical EA response is another form of EA judo, the bureaucrat's war cry: let's make it legible[19]! We'll just figure out how much good early-stage science / a children's birthday party / new types of sculpture really do. And yet the more forms of activity we make legible, the more the penumbra of illegibility changes and grows: and much of the deepest creative work and the most transformative life changes are made by people in that penumbra[20]. In many types of work, when the outcomes you get are the outcomes you want – indeed, when they're outcomes you can even understand – you've missed an enormous opportunity. "Evidence and reason" begin to break down, by definition, in the penumbra of illegibility. I also suspect that as a basic personality trait I am happiest in that illegible penumbra, and this is why I've had so much trouble grokking EA: it feels like a foreign language, where there's some starting assumption I just don't get. Conversely, when I talk about illegibility with EAs, they often look at me like I've grown an extra head. They view illegibility as something to be conquered and minimized; I view it as a fundamental, immovable fact about the way the world works. Indeed, the more illegibility you conquer, the more illegibility springs up, and the greater the need for such work.
"EA-is-a-cult / EA-is-a-religion:" These are common statements, usually used as part of critical attacks. I believe they're often used either thoughtlessly or disingenuously, relying on the perjorative connotations of "cult". It's true, EA-the-movement does have some overlapping features with cults; so does mountaineering, appreciation of the music of Bob Dylan, and many other activities. The substantive part that is worth paying attention to is: as with any strong, attractive, growing movement, EA may attract charismatic scoundrels looking to take advantage of others. That is a genuine issue. And it's well worth guarding against. But I don't think EA is unusually prone to it when compared to any other strong ideology.
Long-termism / x-risk / AI safety: This requires a set of notes of its own. I'm broadly positive about work on x-risk in general; I admire, for instance, Toby Ord's recent book about it. I think little of most work being done on AI safety, although there are a few people doing good work, and adjacent work (on fairness, interpretability, explainability, etc) that is very valuable.
Vibe and aesthetics: A friend points out that EA has a very particular and quite unusual vibe, very different from many other cultures. This seems both true and interesting. I'm not sure what to make of it. The same holds for aesthetics: EA tends toward a very particular, instrumental aesthetic. This is interesting to consider in the frame of art: historically, primarily instrumental approaches to art have nearly always resulted in bad art. It'd be lovely to see an EA arts movement that sprang from something non-instrumental!
EA is an inspiring meaning-giving life philosophy. It invites people to strongly connect with some notion of a greater good, to contribute to that greater good, and to make it central in their life. EA-in-practice has done a remarkable amount of direct good in the world, making people's lives better. It's excellent to have the conversational frame of "how to do the most good" readily available and presumptively of value. EA-in-practice also provides a strong community and sense of belonging and shared values for many people. As moral pioneers, EAs are providing a remarkable set of new public goods.
All this makes EA attractive as a life philosophy, providing orientation and meaning and a clear and powerful core, with supporting institutions. Unfortunately, strong-EA is a poor life philosophy: it has poor boundaries that may cause people great distress, and it underserves core needs. EA-in-practice is too centralized, too focused on absolute advantage; the market often does a far better job of providing certain kinds of private (or privatizable) good. However, EA-in-practice likely does a better job of providing certain kinds of public good than do many existing institutions. EA relies overmuch on online charisma: flashy but insubstantial discussion of topics like the simulation argument and x-risk and AI safety has a tendency to dominate conversation, rather than more substantial work. (This does not mean there aren't good discussions of such topics.) EA-in-practice is too allied with existing systems of power, and does little to question or change them. Appropriating the term "effective" is clever marketing and movement-building, but intellectually disingenuous. EA views illegibility as a problem to be solved, not as a fundamental condition. Because of this it does poorly on certain kinds of creative and aesthetic work. Moral utilitarianism is a useful but limited practical tool; it mistakes quantification that is useful for making tradeoffs for a fundamental fact about the world.
I've strongly criticized EA in these notes. But I haven't provided a clearly and forcefully articulated alternative. It amounts to saying that someone's diet of ice cream and chocolate bars isn't ideal, without providing better food; it may be correct, but isn't immediately actionable. Given the tremendous emotional need people have for a powerful meaning-giving system, I don't expect these criticisms to have much impact on those who feel that need. It's too easy to arm-wave the issues away, or ignore them as things which can be resolved by grafting some exception clauses on. But writing the notes helped me better understand both why I'm not an EA, and why I think the EA principle would, with very considerable modification, make a valuable part of some larger life philosophy. I don't yet understand, though, what that life philosophy is.
For further reading, I suggest looking at "criticism of effective altruism" and "four categories of EA critiques". After I finished the first draft of these notes, a competition was announced for critiques of EA; I'll be curious to see the entries. The design of the competition is, perhaps unfortunately, built around pre-existing EA ideas.
Thanks to many people for conversations that have changed or informed how I think about EA, including: Marc Andreessen, Nadia Asparouhova, Alexander Berger, David Chapman, Patrick Collison, Julia Galef, Anastasia Gamick, Danny Goroff, Katja Grace, Spencer Greenberg, Robin Hanson, David Krakauer, Rob Long, Andy Matuschak, Luke Muehlhauser, Chris Olah, Catherine Olsson, Toby Ord, Kanjun Qiu, and Jacob Trefethen. Any good ideas here are due in large part to them. Of course, they're entirely responsible for all the mistakes :-P! My especial thanks to Alexander Berger, Anastasia Gamick, Katja Grace, Rob Long, Catherine Olsson, and Toby Ord: conversations with whom directly inspired these notes. I expect many of them would, however, disagree strongly with much that is in the notes! And thanks to Nadia Asparouhova and David Chapman for providing feedback on a draft of the notes. Thanks to Keller Scholl for pointing out an error in the initial release of the essay.
[1] Helen Toner has a thoughtful rebuttal of the idea that EA is an ideology, arguing that most ideologies aim to provide answers, whereas EA is mostly about asking a question ("how to do the most good?"). The essay is very good, but ultimately I'm comfortable using "ideology" to describe EA. There is a strong presumption in EA that you should aim to do the most good, using your best judgement based upon available information and opportunity. In that sense, it is providing an answer. However, as I'll discuss later, one of the most attractive features of EA – and one unusual among ideologies – is that a large chunk of the answer is changing and constantly being renegotiated.
[2] I will frequently refer to people who "are" EAs. Of course, the question of identity is a tricky one. There are many people – myself included – who are adjacent to the EA community, but not quite in it. (I certainly do not think of myself as an EA.) One of my favorite jokes about the (also EA-adjacent) rationality community is that their membership cry is "I'm not a rationalist, but…" It's not quite as true of EAs, but there's some truth there, too.
[3] The ideas here are due to conversation with Catherine Olsson and Rob Long.
[4] Peter Singer, "The Most Good You Can Do" (2015).
[5] "Moral psychonaut" was suggested to me by Catherine Olsson.
[6] In these and other examples it's unclear who the original moral pioneer was. Certainly, the author of the Sermon didn't "discover" the ideas alone, they came out of some tradition, some act of collective discovery. It's also true that just because someone is a moral pioneer doesn't make them a good person in an unqualified way! In that sense the term "hero" is perhaps inappropriate.
[7] And wow, are they ever. This is a personal bugbear, something where I think EAs are way off the reservation. Newton, Darwin, and Einstein didn't arrive at their huge breakthroughs using RCTs and meta-analyses. Nor did Picasso learn to paint that way. RCTs and meta-analysis are a tiny part of the arsenal of science, not the pinnacle. Indeed, methodology in that sense is never the pinnacle.
[8] I like to amuse myself with the notion that it's a Popperian approach to "the good". Moral conjectures and refutations, the logic of ethical discovery. Incidentally, you might say that "what is the good?" is rightly the purview of ethics and moral philosophy. EA arguably adds (imperfect) real-world experimental and applied components to those subjects.
[9] Perhaps we need a Center for Effective Effective Altruism? Or Givewellwell, evaluating the effectiveness of effectiveness-rating charities.
[10] It's interesting to conceive of EA principally as a means of providing public goods which are undersupplied by the market. A slightly deeper critique here is that the market provides a very powerful set of signals which aggregate decentralized knowledge, and help people act on their comparative advantage. EA, by comparison, is relatively centralized, and focused on absolute advantage. That tends to centralize people's actions, and compounds mistakes. It's also likely a far weaker resource allocation model, though it does have the advantage of focusing on public goods. I've sometimes wondered about a kind of "libertarian EA", more market-focused, but systematically correcting for well-known failures of the market.
[11] A friend noted that some EA organizations had gone through the startup accelerator YCombinator. I asked how that had gone. They paused, and then said with a laugh that they weren't sure, but it was notable that the organizations had become "much more interested in graphs that go up and to the right". (On balance, I'd guess this is positive. I'm not sure, but I enjoy the story.)
[12] This seems to be less true of EA than many (though not all) other organizations and movements. It is, however, concerning that EA organizations (mostly) don't have any expiry date; nor is there much of a competitive model ensuring improved organizations will thrive and outgrow less effective ones. Incidentally, I've heard it said that the first generation of any successful religion is started by a prophet, the second generation is run by a very effective bureaucrat. This is perhaps true in other domains as well.
[13] It's something of a tangent, but: I personally often find many new EAs a little self-righteous and overconfident, and sometimes overly evangelical, either for EA or for particular cause areas ("why are you wasting your time doing that, you should be working on AI safety", said by someone who thinks they know about AI, but does not, and has no ideas of any value about AI safety). This varies from amusing to mildly annoying to infuriating. This pattern is, however, common to many ideological movements, and I doubt it's particularly bad with EA. You can find similar issues within environmentalism, crypto, libertarianism, most religions, communism, and many other ideologies.
[14] Except, crucially, participation in the market and subjugation to the government. It's rule-by-technocracy. It's perhaps telling that the former (participation in the market) is also framed in terms of choice. But it introduces some notion of a "natural" set of choices available to you, through notions like the labor market and the market for goods and services. There is nothing natural about them.
[15] I'm not sure "duty" is the word usually used. But it captures the emotional sense I often get quite well. It's not without joy or elan, but those aren't primary, either.
[16] I suspect no society, ever, has been healthy that didn't invest significant time and resources in the arts.
[17] An insightful, humane essay in this vein is Julia Wise's "You have more than one goal, and that's fine" (2019).
[18] Amia Srinivasan, "Stop the Robot Apocalypse", London Review of Books (2015).
[19] James Scott, "Seeing Like a State" (1998).
[20] Cf. David Chapman's closely related concept of nebulosity.