
Mainstream media coverage of the FTX crash frequently suggests that a utilitarian ethic is partially to blame for the irresponsible behavior of top executives. However, consequentialist reasoning - even in its most extreme "ends justify the means" form - does not endorse committing crimes with the goal of making money to donate to charity.

Disclaimers

  • This post is not about FTX. I want to abstract away from that specific circumstance and make a broader point about consequentialism and applied ethics. These arguments are relevant whether or not fraud was committed by FTX leadership.
  • This post is nothing revolutionary; I just think these arguments need to be reiterated succinctly.
  • I do not consider myself a hardcore consequentialist. In general, I find it strange to believe that a single ethical theory could/should possibly guide all aspects of one's life.
  • I am not a trained philosopher; please use the comments if my understanding of consequentialism is flawed.

Main claim

In my opinion, the heart (and most interesting feature) of consequentialism is determining the downstream consequences of an action, especially those consequences which influence others' actions. However, this calculus is rarely mentioned in popular discourses on utilitarianism. When somebody brings up the drowning child problem, they don't ask you to consider how your decision will affect the pond's future availability for public bathing. That issue is hardly relevant to whether or not you choose to save the child. But real-life decisions are not thought experiments, and if we want to be serious about consequentialism, downstream effects are crucial to every moral calculus.

This is not a novel idea within consequentialist thought. Consider the famous transplant thought experiment. The experiment imagines that a healthy patient walks into a hospital, and the doctor must decide whether to kill her and harvest her organs to save five dying patients. The most intuitive consequentialist response is: "I don't care if it saves five lives; if hospitals begin killing healthy patients our entire health system will crumble."

The same intuitive response should also apply to breaking the law in order to make money to later donate to charity. Off the top of my head, here are a few downstream consequences which make that decision a bad idea:

  • You're caught and you never get the chance to donate the money because you are forced to forfeit it.
  • You ruin your reputation and lose opportunities to perform good actions in the future.
  • Your moral calculus was incorrect, and the illegal action does more harm than your donation does good.
  • If you are part of a movement that advocates doing good in the world, the exposure of your actions causes harm to that larger movement. 

These are all consequentialist arguments[1] - they rely on expected value calculations, not rights violations or appeals to virtue ethics. Taken together, they demonstrate why, in the vast majority of imaginable circumstances, the ends simply do not justify the means when it comes to breaking the law with the goal of making money to give away.
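To make the structure of that calculus concrete, here is a minimal sketch in Python. Every number in it is hypothetical, chosen purely for illustration; the point is only that once forfeiture and reputational harm enter the expected-value calculation, its sign can flip.

```python
# A minimal sketch of the expected-value comparison described above.
# All numbers are hypothetical, chosen only for illustration.

p_caught = 0.5          # hypothetical probability the crime is exposed
donation_good = 100     # hypothetical good done if the donation happens
direct_harm = 40        # hypothetical harm done by the crime itself
downstream_harm = 30    # hypothetical reputational/movement harm if exposed

# Naive calculus: count only the crime and the intended donation.
naive_ev = donation_good - direct_harm  # 60: looks worthwhile

# Downstream-aware calculus: if caught, the funds are forfeited (no
# donation happens) and extra harm lands on you and your movement.
ev_if_caught = -direct_harm - downstream_harm    # -70
ev_if_not_caught = donation_good - direct_harm   # 60
full_ev = p_caught * ev_if_caught + (1 - p_caught) * ev_if_not_caught

print(f"naive EV: {naive_ev}, downstream-aware EV: {full_ev}")  # 60 vs. -5.0
```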

Counterarguments

Naive vs. sophisticated consequentialism

I've been talking a lot about "downstream consequences". If you've spent some time in EA circles, you might object that I've only considered "sophisticated consequentialism", and that "naive consequentialism" might support immoral behavior in service of some greater good.

I disagree. The EA Forum post on naive vs. sophisticated consequentialism states that:

Naive consequentialism is the view that, to comply with the requirements of consequentialism, an agent should at all times be motivated to perform the act that consequentialism requires. By contrast, sophisticated consequentialism holds that a consequentialist agent should adopt whichever set of motivations will cause her to in fact act in ways required by consequentialism.

Given this definition, my entire argument has been, counterintuitively, based on naive, not sophisticated, consequentialism. Moreover, my argument stands under the above definition of sophisticated consequentialism too, because it's hard to imagine a set of motivations which includes committing crimes and also leads to the actualization of ideal consequences.

But sometimes naive consequentialism is defined another way. The same EA Forum post states that an even more naive consequentialist does not "consider less direct, less immediate, or otherwise less visible consequences". Interestingly, this definition makes the illegal behavior even more immoral. Because this form of naive consequentialism excludes the downstream consequences of your action, you cannot count the fact that you will later donate the money to help people. The only consequences you can take into account are the immediate effects of the illegal action itself, and in all relevant cases, those will be bad.

Therefore, in both its naive and sophisticated forms, consequentialism does not endorse the illegal behavior.

It's the ideas that matter, not whether they were applied correctly

You might object that, even accepting the conclusion that a good consequentialist wouldn't commit the crime, what matters more is that actors might misconstrue consequentialism and use it as moral backing for their fraudulent behavior.

I agree that this is a real problem, but I don't see it as a valid objection to my claims in this post. Anybody can misconstrue any theory and "use" it to "endorse" any action. In other words, a theory is not inherently wrong just because it can be incorrectly understood and then leveraged to justify harm.

That being said, the EA movement is broadly consequentialist, so we should examine our own theoretical endorsements under a broadly consequentialist framework. If we determine that publicly advocating consequentialism directly causes many people to act immorally "in the name of consequentialism", we should either 1. change our messaging or 2. stop advocating consequentialism even if it's still what we truly believe[2].

Conclusion

I didn't write this post to advocate for consequentialism. I wrote it because I think consequentialism should be taken seriously as a moral theory. And consequentialism taken seriously does not entail that any ends justify any means. Consequentialism is so interesting precisely because it asks us to at least consider the ends when we examine the means. But when the means are potentially catastrophic, they are unlikely to be justified by any ends, no matter how good.

  1. ^

    Some of these arguments might reasonably be used against earning to give more generally, especially for those who make morally questionable career choices, but that's not relevant to this discussion.

  2. ^

    I wrote more about this strange conclusion in Part 2 of this post.

Comments (9)



Just disagree with this:

"I do not consider myself a hardcore consequentialist. In general, I find it strange to believe that a single ethical theory could/should possibly guide all aspects of one's life."

How is it difficult to believe that trying to promote good conscious experiences and minimize bad conscious experiences could be the key guide to one's behavior? A lot of EAs, myself included, consider this to be the ultimate goal for our actions... Of course, we need many other areas of study and theory to guide in specific areas.

I understand that you disagree with hardcore consequentialism, but I don't see why you think it is strange for others to adopt it. This is especially true when you acknowledge the complexity in consequentialist decision-making, as you did in this post.

Thanks for the insight. Fortunately, you don't have to agree with this disclaimer in my post for the rest of the argument to remain sound. 

That being said, I find it perfectly reasonable for one's actions to be primarily (or even almost entirely) guided by consequentialist reasoning. However, I cannot understand never considering reasons stemming from deontology or virtue ethics. For example, it's impossible for me to imagine condemning a gross rights violation purely based on its consequences without even considering that perhaps violating personal rights has some intrinsic disvalue.

I believe that rights have value insofar as they promote positive conscious states and prevent negative conscious states. Their value or disvalue would be a function of whether they make lives better. Assigning weight to them beyond that is simply creating a worse world.

I do, however, find the assignment of intrinsic value imaginable, though mistaken. I do not take umbrage so much at your disagreeing with me as at your finding my view unimaginable.

That's a very fair point - unimaginable is the wrong word. I guess I'll say I find it curious.

To use a stronger example, suppose a dictator spends all day violating the personal rights of her subjects and by doing so increases overall well-being. I find it curious to believe she's acting morally. You don't need to believe in the intrinsic badness of rights violations to hold this point of view. You just have to believe that objective moral truth cannot be fully captured using a single, tidy theory. Moral/ethical life is complex, and I think that even if you are committed to one paradigm, you still ought to occasionally draw from other theories/thinkers to inform your moral/ethical decision making.

This agrees with what you said in your first comment: "We need many other areas of study and theory to guide in specific areas." As long as this multifaceted approach is at least a small part of your overall theory, I can definitely imagine holding it, even if I don't agree.

I think the complexity arises in evaluating the value and disvalue of different subjective states as well as determining what courses of action, considering all aspects involved, have the highest expected value.

You discuss the example of the despot regularly violating the rights of her subjects, yet increasing utility. Such a scenario seems inherently implausible, because if rights are prudently delineated, general respect for them will, in the long run, tend to cultivate a happier, more stable world (i.e., higher expected utility). And perhaps incursions upon these rights would be warranted in some situations. For instance, someone's property rights may be violated if there is a compelling public interest (eminent domain). This is why we have exceptions to rights (e.g., free speech and incitement to imminent violence). If the rights you are advancing tend to lower the welfare of conscious beings, I would think such a formulation of rights is immoral.

You are correct that moral life is complex, but I think the complexity comes down to how we can navigate ourselves and our societies to optimize conscious experience. If you are incorporating factors into your decisions that don't ultimately boil down to improving conscious experience, in my view, you are not acting fully morally.

This post argues against a strawman - it's not credible that utilitarianism endorses frauding to give. It's also not quite a question of whether Sam "misconstrued" utilitarianism, in that I doubt that he did a rational calculus on whether fraud was +EV, and he denies doing so.

The potential problem, rather, is that naive consequentialism/act utilitarianism removes some of the ethical guardrails that would ordinarily make fraud very unlikely. As I've said: In order to avoid taking harmful actions, an act utilitarian has to remember to calculate, and then to calculate correctly. (Whereas rules are often easier to remember and to properly apply.) The way Sam tells it, he became "less grounded" or "cocky", leading to these mistakes. Would this have happened if he followed another theory? We can't know, but we should be clear-eyed about the fact that hardcore utilitarians, despite representing maybe 1/1M of the world's population, are responsible for maybe 1/10 of the greatest frauds, i.e. they're over-represented by a factor of 100k, in a direction that would be pretty expected, based on the (italicised) argument above (which must surely have been made previously by moral philosophers). For effective altruists, we can lop off maybe one order of magnitude, but it doesn't look great either.
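To spell out the arithmetic behind that factor, using the rough figures above:

$$\frac{\text{share of the greatest frauds}}{\text{share of the population}} \approx \frac{1/10}{1/1{,}000{,}000} = 100{,}000$$

Lopping off an order of magnitude for effective altruists (a population perhaps ten times larger) would still leave a factor of roughly 10,000.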

I disagree that I argue against a strawman. The media's coverage of Bankman-Fried frequently implies that he used consequentialism to justify his actions. This, in turn, implies that consequentialism endorses fraud so long as you give away your money. Like I said, the arguments in the post are not revolutionary, but I do think they are important. 

You give no evidence for your claim that hardcore utilitarians commit 1/10 of the "greatest frauds". I struggle to even engage with this claim because it seems so speculative. But I will say that I agree that utilitarianism has been (incorrectly) used to justify harm. As I stated: 

"the EA movement is broadly consequentialist, so we should examine our own theoretical endorsements under a broadly consequentialist framework. If we determine that publicly advocating consequentialism directly causes many people to act immorally "in the name of consequentialism", we should either 1. change our messaging or 2. stop advocating consequentialism even if it's still what we truly believe"

Part of my motivation for making this post was helping consequentialists think about our actions - specifically those around the idea of earning to give. In other words, the post is intended to clarify some "ethical guardrails" within a consequentialist framework.

You give no evidence for your claim that hardcore utilitarians commit 1/10 of the "greatest frauds". I struggle to even engage with this claim because it seems so speculative.

I mean that the dollar value of lost funds would seem to make it one of the top ten biggest frauds of all time (assuming that fraud is what happened). Perusing a list on Wikipedia, I can see only four cases in which larger sums were defrauded: Madoff, Enron, Worldcom, Stanford.

Okay, I see now. I read that as "one-tenth", not "one out of ten".

I'm on board with your lack-of-guardrails argument against utilitarianism. I hope arguments like the ones made in this post help to construct those guardrails so we don't end up with another catastrophe.
