In Why I am not an effective altruist, Erik Hoel criticizes the philosophical core of the EA movement. The problem with effective altruism (which is basically utilitarianism) is that it leads to repugnant conclusions (a term coined by Derek Parfit in Reasons and Persons):

  • Example: strict utilitarianism would claim that a surgeon, trying to save ten patients with organ failure, should find someone in a back alley, murder them, and harvest all their organs.
  • Example: utilitarianism would claim that there is some number of “hiccups eliminated” that would justify feeding a little girl to sharks.

For a lot of utilitarian thought experiments, the initial version has a clear conclusion (e.g. you should switch the trolley so fewer people die), but the variations built on top of it do not, because morality is not mathematics.

The core mistake of utilitarianism is that it assumes the moral value of all experiences forms a totally ordered set: that you can line them all up on a single scale, so that figuring out which action is better is just a matter of addition and multiplication. In a follow-up post, Hoel adds that the mistake of utilitarianism is the view that morality is fungible, that moral value can be measured like mounds of dirt. But in reality, moral actions are not fungible.

(Note: the fact that moral values are not fungible does not preclude us from making comparisons between actions, e.g. a stubbed toe and the Holocaust are entirely different categories of evil, but we can still compare them and say the latter is worse. Hoel’s point, though, is that not all moral actions can be placed in a single ordering relative to each other.)
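
To make the order-theoretic language concrete, here is a minimal sketch (my gloss, not Hoel’s formalism): utilitarianism treats “better than” as a total order that can be represented by a single numeric scale, while Hoel’s picture is closer to a partial order in which some pairs of actions are simply incomparable. The symbols a, b, and u below are illustrative, not anyone’s notation.

```latex
% Utilitarian assumption (as criticized here): a total order on actions,
% representable by a real-valued utility u, so comparison reduces to arithmetic.
\[
\forall a, b:\quad a \preceq b \ \lor\ b \preceq a,
\qquad
a \preceq b \iff u(a) \le u(b), \quad u : \text{actions} \to \mathbb{R}.
\]

% Hoel's picture: a partial order. Some pairs of actions are incomparable,
% so no single scale u can represent the whole relation.
\[
\exists\, a, b:\quad a \not\preceq b \ \land\ b \not\preceq a.
\]
```

The stubbed-toe-versus-Holocaust example fits this picture: particular pairs can still be compared even if the whole space of moral actions cannot be squeezed onto one numeric scale.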

One way to avoid repugnant conclusions is to add more and more parameters to your utilitarian calculus. Hoel claims this is futile, because you’ll never have enough parameters. (Popper had a term for this kind of move: adding ad hoc hypotheses.)

Just as Ptolemy accounted for the movements of the planets in his geocentric model by adding epicycles (smaller circles the planets supposedly traced while also circling Earth), a trick that let him explain their occasional apparent backward motion within a fundamentally flawed model, so too does the utilitarian add moral epicycles to keep from constantly arriving at immoral outcomes.

Hoel thinks the EA movement has done a lot of good; he just disagrees with its philosophical underpinnings. And that philosophy still leads to some repugnant-ish conclusions, e.g. that you should just become a stockbroker to make more money and donate it.

One can see repugnant conclusions pop up in the everyday choices of the movement: would you be happier as a playwright than as a stockbroker? Who cares? Stockbrokers make way more money; go make a bunch of money to give to charity. And so effective altruists have to come up with a grab bag of diluting rules to keep the repugnancy of utilitarianism from spilling over into their actions and alienating people.

Sam Harris’s criticism

Hoel made a recent appearance on the Sam Harris podcast, in which he and Harris kept butting heads on the same basic point. Harris: all moral theories, even non-consequentialist ones, ultimately make their claims to moral value based on consequences. If you ask a deontologist why they advocate for some principle or other, their argument will be framed in terms of the consequences of people following that principle. The same goes for virtue ethics, or any other moral system.

For Harris, all the criticisms that Hoel makes still boil down to consequences. It’s all just more consequences!

This is technically true, but at some point it begins to sound vacuous. You could describe every moral theory in terms of eventual consequences, but it’s not especially useful to do so. It’s like saying “It’s all about goodness! Morality is all about goodness!” Hoel made this point in his piece: once you strip utilitarianism of the strict mathematics around hedonic units, and instead just say “do the most good you can, where good is defined in a loose, complex, personal way”, you’ve arrived at something no one can disagree with.

Choosing a moral theory requires us to answer the question of how best to think about how we should act. And it just seems that if you think about morality in terms of maximization, you are far more likely to engage in morally questionable behavior than if you think about it in terms of, say, adhering to principles, or cultivating virtues. Who are the people we think of as ethical heroes, as role models in the moral life? Gandhi, Mandela, Jesus, the Buddha. Are they rationalistic maximizers? Or are they just really principled and virtuous people?

There is one place where I do agree with Harris, and it concerns whether we should prefer human consciousness over artificial consciousness. Hoel claims that a pitfall of consequentialism is that it doesn’t give us any reason to prefer our own survival over, say, the survival of some alien or artificial intelligence. And to that I would say: if we really come to the conclusion that artificial intelligences can have inner lives as rich and as laden with suffering and happiness as our own, and if they are as concerned with ethics as we are, then why should we prefer our wellbeing over theirs?