The case for narrow utilitarianism
Taken to extremes, utilitarianism is dangerous. The core ideas, though, are sound. A personal reflection.
It hurts to admit this, but it’s true: I was once a Sam Bankman-Fried fanboy.
It’s not like I had “SBF” tattooed on my chest. I’ve never met the guy. I’ve never even traded crypto on FTX, the crypto exchange Bankman-Fried cofounded. Mostly, I just liked what I read and heard about him online.
For example, I remember listening to a podcast episode in which Bankman-Fried spoke to Tyler Cowen. In that conversation, SBF seemingly implied he’d take a gamble that would double the Earth with 51% probability and destroy it with 49% probability. Instead of “The guy is insane,” I somehow came away thinking, “What a thoughtful person. Oh, an effective altruist, too!”
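To see why that answer should have set off alarm bells, here’s a quick back-of-the-envelope simulation (my own illustration; the function and numbers are mine, nothing like this appears in the podcast). Each individual 51/49 “double or destroy” bet has positive expected value, yet a strategy of taking the bet over and over ends in near-certain destruction:

```python
import random

def survival_probability(rounds: int, p_win: float = 0.51, trials: int = 100_000) -> float:
    """Estimate the chance of surviving `rounds` consecutive 51/49 bets."""
    survived = 0
    for _ in range(trials):
        alive = True
        for _ in range(rounds):
            if random.random() >= p_win:  # the 49% branch: the world is destroyed
                alive = False
                break
        if alive:
            survived += 1
    return survived / trials

for rounds in (1, 10, 50):
    # Analytically this is just 0.51 ** rounds; the simulation agrees.
    print(rounds, round(survival_probability(rounds), 4), round(0.51 ** rounds, 4))
```

Survival after 10 bets is roughly 0.1%, and after 50 bets it’s effectively zero. Maximizing expected value while ignoring the risk of ruin is exactly the kind of reasoning I should have flagged.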
Of course, Bankman-Fried has now been found guilty of massive financial fraud.
While I don’t consider myself an effective altruist, I’ve always found the core principles compelling. Using math and data to make the world a better place? Count me in. Effective altruism was a major reason why I admired Bankman-Fried. This wasn’t some random tech bro trying to get rich; SBF seemed genuinely committed to solving the planet’s most pressing problems.
Once I learned about the FTX cofounder’s arrest, I was shaken. How come I didn’t notice any red flags? Where was my BS detector? More worryingly, if the principles of utilitarianism—which underlie effective altruism—seem correct to me, and effective altruism leads to this, don’t I need to review my ethics?
This essay is my attempt to do that review. After much reflection, here’s what I currently believe:
The core ideas of utilitarianism are sound;
Extreme utilitarianism, however, is dangerous. It leads to bad, overconfident decisions;
To guard against these pitfalls, I suggest narrow utilitarianism: Applying utilitarian principles in specific, well-understood domains only.
This post is long. Nevertheless, it’s still just a Substack essay, not an exhaustive philosophical treatment—that would require a book. To keep the length manageable, I simplified things and cut corners. Keep that in mind while reading.
Core utilitarian principles are sound
Let’s start with a definition.
I will define utilitarianism as the ethical principle that choices should maximize the sum of individual utilities. While some people equate utility with pleasure, I think that’s wrong. Instead, I prefer to think of utility as a broad metric of well-being. “More utility” is just more well-being.
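In rough notation (my own shorthand, not a canonical formulation), the principle is: pick the action that maximizes the sum of everyone’s well-being.

```latex
% My shorthand, not a formal definition:
% choose the action a* that maximizes total well-being across all N individuals,
% where u_i(a) is individual i's well-being under action a.
a^* = \arg\max_a \sum_{i=1}^{N} u_i(a)
```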
To me, the core principles of utilitarianism are these:
Focus on improving total well-being;
The use of quantitative methods for decision-making;
Treating all individuals equally and impartially.
The first principle seems like common sense. Who wouldn’t want more well-being? What’s the alternative? Make people suffer? Violating this principle appears irrational.
With its focus on utility, utilitarianism lends itself to a quantitative approach. At its most abstract, the principle is “maximize total utility.” In practice, utilitarians will use tools such as cost-benefit analysis. In general, I’m a big fan of such methods. If applied correctly, they allow for objective, data-driven decisions. However, excessive use of ethical calculus carries the risk of overconfidence, as I discuss later.
The final principle of treating everyone equally aligns with modern values, such as those enshrined in the United States Declaration of Independence:
We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.
Utilitarianism is radical in its conception of equality. King and beggar, next-door neighbor and remote acquaintance, left-wing socialist and right-wing libertarian — they all carry the same weight in the utilitarian formula.
Extreme utilitarianism is dangerous
On paper, utilitarianism looks pretty great. Who wouldn’t want data-driven decisions that improve well-being, right?
We run into problems once we start putting these lofty ideals to work. The key issue, in my mind, is that we’re fallible creatures. We’re never 100% rational. We make mistakes. We suffer from cognitive biases.
It gets worse than that. My claim is that utilitarian ethics, when taken to an extreme, can amplify our cognitive biases:
It can lead to overconfidence in moral decisions;
It can lead to the rationalization of dubious behavior.
Ethical calculus and overconfidence
Richard Feynman famously said that “[the] first principle is that you must not fool yourself, and you are the easiest person to fool.” It’s especially easy to fool yourself when putting numbers on thorny ethical dilemmas.
Consider the “transplant problem” introduced by Judith Jarvis Thomson:
David is a great transplant surgeon. Five of his patients need new parts— one needs a heart, the others need, respectively, liver, stomach, spleen, and spinal cord—but all are of the same, relatively rare, blood-type. By chance, David learns of a healthy specimen with that very blood-type. David can take the healthy specimen's parts, killing him, and install them in his patients, saving them. Or he can refrain from taking the healthy specimen's parts, letting his patients die.
Should David kill the innocent person?
Most people find murder in this scenario deeply wrong. However, utilitarian ethics appear to dictate it: If David kills the healthy person, he will save five lives. What gives?
The transplant problem illustrates the dangers of short-sighted, overconfident utilitarianism. It’s extremely myopic to only consider the short-term impact. If surgeons start slaying people to harvest organs, we’re on a slippery slope to mayhem. It’s also overconfident: What if there’s an error in matching the blood type of the healthy person? In that case, David would have killed an innocent man and saved no one.
Our moral intuition against murdering an innocent person is correct. However, that’s not because the transplant problem disproves utilitarianism. It’s because the right utilitarian calculation isn’t as simple as “5 – 1 = 4 > 0.” The correct equation is far more intricate.
My thinking here is heavily influenced by Friedrich von Hayek and his prescient warnings about the complexity of social systems. For example, in his Nobel Prize lecture, “The Pretence of Knowledge,” Hayek said that
[…] in the social sciences often that is treated as important which happens to be accessible to measurement.
It’s a kind of streetlight effect. In the transplant problem, it’s straightforward to count the lives immediately saved or lost by a decision. Quantifying less direct factors, such as the long-term impact on trust, is much harder. While we could quantify these elements, it’s more convenient to just ignore them. The primary issue here isn’t the omission of these variables; it’s the failure to adjust the level of confidence in our conclusions. That adjustment, however, is key to avoiding overconfidence.
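To make the overconfidence point concrete, here’s a toy version of the transplant arithmetic (every number and variable name below is invented for illustration; it’s a sketch of the reasoning, not an actual moral calculus):

```python
# Invented numbers, purely for illustration -- a sketch, not a real moral calculus.

# Naive, streetlight-lit view: kill one healthy person, save five patients.
naive_net_lives = 5 - 1  # = 4, "obviously" positive

# Slightly less naive view: admit uncertainty and second-order effects.
p_transplant_succeeds = 0.7  # guess: chance each operation actually works
expected_lives_saved = 5 * p_transplant_succeeds

cost_of_killing = 1.0  # the healthy person's life
# Wild guess at the long-term harm if people stop trusting hospitals
# (skipped checkups, later diagnoses, ...). Could easily be far larger or smaller.
expected_trust_erosion = 5.0

expected_net = expected_lives_saved - cost_of_killing - expected_trust_erosion

print(naive_net_lives)  # 4
print(expected_net)     # 3.5 - 1.0 - 5.0 = -2.5
```

The specific numbers are meaningless. The point is that the sign of the answer hinges on parameters nobody can measure, which is exactly why our confidence in the conclusion should go down, not up.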
The perils of rationalization
Imagine this scenario. You are intensely working on a book manuscript that you believe will be your magnum opus. Suddenly, the phone rings: It’s an old friend. Unfortunately, the friend is not doing well. She has received a terminal diagnosis and would like to see you one last time. Do you put your manuscript aside and meet your friend?
That was a real-life choice faced by Derek Parfit, a world-renowned ethicist and a major influence on effective altruists:
In 2007, the American philosopher Susan Hurley was dying of cancer. Twenty years earlier, she had been the first female fellow of All Souls College, Oxford. While there, she had been romantically involved with her philosopher colleague Derek Parfit. The romance faded but had turned into a deep friendship. Now, aware of her terminal illness, Hurley journeyed back to Oxford to say goodbye. Hurley and a mutual friend, Bill Ewald, asked Parfit to go to dinner with them. Parfit—by then an esteemed ethicist—refused to join them, protesting that he was too busy with his book manuscript, which concerned “what matters.” Saddened but undeterred, Hurley and Ewald stopped by Parfit’s rooms after dinner so that Hurley could still make her goodbye. Shockingly, Parfit showed them the door, again insisting that he could not spare the time.
In short, Parfit deemed his work more important than seeing a dying friend.
Now, can we actually measure the impact of an extra hour on the manuscript vs an hour spent with the friend? Not really. Why, then, make the argument that work on the book is really “what matters”? To me, the answer is psychological: This argument helps reduce cognitive dissonance. Yes, I failed to see a dying friend, but that’s OK since I was working on my magnum opus.
I don’t mean to pick on one of the most renowned ethicists of the twentieth century. Humans are extremely creative at rationalizing dubious behavior, and that definitely includes me. My point is just that utilitarianism provides lots of fertile ground for rationalization. Even if your decision has an immediate cost, you can always argue that it’s in the name of some “greater good,” even if that “greater good” is something highly abstract and uncertain, like a book having more impact because you spent an hour longer writing it.
A hardcore utilitarian may counter that if you use utilitarian principles incorrectly, that’s on you, not on utilitarianism. That’s a valid point. Nevertheless, I don’t think we can let utilitarianism off the hook so easily. Utilitarianism, with its focus on total well-being, seems almost designed to provide us with endless justifications for dubious behavior. Say what you will about common-sense ethics, but they leave much less wiggle room for rationalization.
Proposed solution: narrow utilitarianism
To avoid the dangers of rationalization and overconfidence, I suggest narrow utilitarianism: Applying utilitarianism within specific choice domains only.
Here’s a concrete example. Let’s say you want to help reduce poverty in your city. I think you should go about that in the most effective way possible. Is that volunteering at a local soup kitchen? Directly giving money to the homeless on the street? Advocating for policy change? Evidence-based choices in this narrow domain are perfectly sensible.
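In code, the narrow-domain comparison is nothing fancy (the options and numbers below are invented; real estimates would come from program data and evaluations, and volunteered time is treated as its dollar equivalent):

```python
# Invented numbers for illustration only. The ranking they produce is meaningless;
# the point is that within one narrow domain, a single yardstick applies to every option.
options = {
    "volunteer at a soup kitchen": {"cost_usd": 2_000, "people_helped": 120},
    "direct cash transfers":       {"cost_usd": 2_000, "people_helped": 40},
    "advocate for policy change":  {"cost_usd": 2_000, "people_helped": 300},
}

for name, o in options.items():
    cost_per_person = o["cost_usd"] / o["people_helped"]
    print(f"{name}: ~${cost_per_person:.0f} per person helped")
```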
However, what about the choice “donate to reduce poverty in your city” vs “donate to a local university”? These options are so different that it stretches the imagination to think that we know, with high confidence, how to compare them:
Short-term vs long-term: Donating to reduce poverty provides immediate relief to those in need. In contrast, donating to a university is a long-term investment in societal progress through education and research;
Direct vs indirect: Giving money to reduce poverty in your city has a direct impact on people’s lives. On the other hand, donating to a university has only an indirect effect. Such a donation may contribute to societal improvements in the future, including reducing poverty. But it also may not;
Easily measurable vs hard to quantify: The impact of donating to alleviate poverty can be more easily measured (e.g., the number of meals provided). In contrast, the benefits of a university donation are tough to quantify.
Again, within that choice, utilitarian thinking applies: if you decide to donate to a university, it’s wise to make sure your money goes toward improving education and research, not toward building an extravagant sports center. But I don’t trust our ability to quantitatively compare very different domains.
Narrow utilitarianism is one potential solution to rationalization and overconfidence. Deontological ethics is another. In the example of a dying friend, if you follow the common-sense morality of “You need to be there for your friends,” you successfully avoid rationalization. No need to do any complicated utilitarian math, either.
There are two obvious criticisms of the approach I’m suggesting. First, how do you decide what’s “narrow enough”? Second, since we will inevitably need to make comparisons across domains, how do we make such choices? Do we go on vibes?
Yes, we go on vibes. Not because going on vibes is good but because that’s all we have. We should do all we can to gather good data and make reasoned judgments. However, let’s be clear-eyed that the real world is hugely complex, and we don’t understand it all that well.
As for what constitutes a “narrow” domain, I think you need to answer that on a case-by-case basis. The “narrowness” of a domain is some function of how well we understand the problem at hand, the quality of the data we have, and so on. Over time, as we learn more about how the world works, domains will expand, and we will be able to make broader utilitarian judgments. When dealing with controversial, little-understood topics, though, narrower is probably better.
Summing it all up
In light of the Sam Bankman-Fried scandal, should we abandon utilitarianism? To many, the answer is a clear “yes.”
My take is more nuanced. Utilitarianism is maybe the most practically helpful framework for making ethical decisions. Throwing it away seems irrational.
However, we need to be careful in real-world applications. We need to employ utilitarian principles with caution and humility. When the results of a utilitarian calculation clash with our moral intuitions, we need to check our ethical math.
Did I just write 2,000+ words that add up to “moderation is key”? Yes, I guess. If you want to be charitable, the essay identifies two psychological failure modes of utilitarianism: rationalization and overconfidence. These are empirically testable hypotheses. While I haven’t found any studies that test them directly, someone could run those experiments in the future.
As for me, SBF made me less utilitarian. And, you know, a little more skeptical of crypto bros promising to fix the world. In the real world, gambling on 51/49 odds is not that appealing, and neither is simplistic, extreme utilitarianism.