This is a lightly-edited extract from a longer post I have been writing about the problems Effective Altruism has with power. That post will likely be uploaded soon, but I wanted to upload this extract first since I think it's especially relevant to the kind of reflection that is currently happening in this community, and because I think it's more polished than the rest of my work-in-progress. Thank you to Julian Hazell and Keir Bradwell for reading and commenting on an earlier draft.

In the wake of revelations about FTX and Sam Bankman-Fried's behaviour, Effective Altruists have begun reflecting on how they might respond to this situation, and if the movement needs to reform itself before 'next time'. And I have begun to notice a pattern emerging: people saying that this fuck-up is evidence of too little 'deontology' in Effective Altruism. As this diagnosis goes, Bankman-Fried's behaviour was partly (though not entirely) the result of attitudes that are unfortunately general among Effective Altruists, such as a too-easy willingness to violate side-constraints, too little concern with honesty and transparency, and sometimes a lack of integrity. This thread by Dustin Moskovitz and this post by Julian Hazell both exemplify the conclusion that EA needs to be a bit more 'deontological'.

I’m sympathetic here: I’m an ethics guy by background, and I think it’s an important and insightful field. I understand that EA and longtermism emerged out of moral philosophy, that some of the movement’s most prominent leaders are analytic ethicists in their day jobs, and that the language of the movement is (in large part) the language of analytic ethics. So it makes sense that EAs reach for ethical distinctions and ideas when trying to think about a question, such as ‘what went wrong with FTX?’. But I think that it is completely the wrong way to think about cases where people abuse their power, like Bankman-Fried abused his.

The problem with the abuse of power is not simply that having power lets you do things that fuck over other people (in potentially self-defeating ways). You will always have opportunities to fuck people over for influence and leverage, and it is always possible, at least in principle, that you will get too carried away by your own vision and take these opportunities (even if they are self-defeating). This applies whether you are the President of the United States or just asking your friend for £20; it applies even if you are purely altruistically motivated.

However, morally thoughtful people tend to have good ‘intuitions’ about everyday cases: it is these that common-sense morality was designed to handle. We know that it’s wrong to take someone else’s money and not pay it back; we know that it’s typically wrong to lie solely for our own benefit; we understand that it’s good to be trustworthy and honest. Indeed, in everyday contexts certain options are just entirely unthinkable. For example, a surgeon won’t typically even ask themselves ‘should I cut up this patient and redistribute their organs to maximise utility?’—the idea to do such a thing would never even enter their mind—and you would probably be a bit uneasy with a surgeon who had asked themselves this question, even if they had concluded that they shouldn’t cut you up.

This kind of everyday moral reasoning is exactly what is captured by the kinds of deontological ‘side constraints’ most often discussed in the Effective Altruism community. As this post makes wonderfully clear, the reason even consequentialists should be concerned with side-constraints is that you can predict ahead of time that you will face certain kinds of situations, and you know that it would be better if you acted according to these maxims. The same kind of reasoning applies to ideas like ‘integrity’ and ‘reputation’; the entire point is that you can predict ahead of time the kinds of situations you are likely to face, and so can draw on the resources of moral philosophy to come up with strategies for facing them responsibly. (Yes, for those who speak fluent LessWrong, this is essentially just self-modification.) The steady, predictable nature of these kinds of cases is precisely why everyday moral thinking is so well-suited to them.

The problem is more specific to circumstances of power, because everyday moral thinking is not all that well-suited to the high-stakes choices that you have to make when you have power. Very often, these choices force you to make trade-offs and moral compromises that would be unacceptable by everyday standards. Further, these situations often involve pervasive uncertainty, with no ‘given’ probabilities or regular laws that can be exploited to structure your decision-making, only your own judgment and best guesses. With no single obvious reference class and a huge degree of unknown unknowns, it’s not clear ahead of time what situations you will even face—and, as a result, not clear at all what the relevant side-constraints should be. You just don’t know which types of moral reasoning will be helpful or necessary, and which would stop you from making the necessary trade-offs.

The possibility of motivated reasoning makes this problem even more serious. Many people who do care about side-constraints nonetheless find ways to justify obvious-seeming violations, through rhetorical redescription and motivated reasoning, if they are put in contexts that provide them with the opportunity to do so. This means that a proposed deontological rule must be specific enough to avoid this problem, while also general enough to actually be workable. The maxim ‘never commit murder’ is nice, but wouldn’t have prevented a fiasco like Bankman-Fried’s; the maxim ‘do not misuse customer funds’ is better, but far too open to motivated reasoning about the meaning of ‘misuse’; and while a maxim of ‘do not use customer funds to prop up your insolvent hedge fund while just hoping to make the money back later’ is perfect, you can only formulate it after the fact.

To be sure, it seems as though Bankman-Fried didn’t take side-constraints particularly seriously. But this doesn’t necessarily mean that he would have done any better had he taken them seriously. Intending to obey deontological constraints is no more a guarantee that you will obey them than intending to maximise utility is a guarantee that you will maximise utility. What’s needed is a more systematic analysis of how to act in situations of power, potentially including proposals for reform at the institutional level—not simplistic rules drawn from intuition.

Comments (1)



I think this idea of the role of power in the question of deontological vs consequentialist reasoning is interesting. I don't have a lot of background in formal ethics, so I'm not sure how I would classify my own ethical camp, but generally I've always thought that deontological values can be taken into consideration in a consequentialist framework--when we are asking whether it is acceptable to violate a general rule for the "greater good," we should consider the consequences of eroding the sense of integrity, honesty, human rights, etc. in our society. In cases where bending a rule can do a lot of good (e.g. lying to get money for life-saving medical care), this seems perfectly fine to me. But when someone is too quick to justify breaking intuitive moral rules, they are probably undervaluing the harm they are doing by eroding the values that underlie those rules. I'm sure I'm not the first person to have this thought (feel free to let me know if there's a name for this position, as I'd be curious to know).

The question this post raises for me is whether there are circumstances where the deontological rules have more or less weight. And power seems like a relevant criterion. If your actions are extremely influential, then it may be more likely that you'll face decisions where the immediate consequences are simply of far more importance than the deontological rules you may be bending or breaking. I certainly wouldn't want a world leader unwilling to fall short of absolute honesty when dealing with terrorist threats, for instance. Or the choices available might simply be so influential and complex that there is no pure choice--for instance, setting healthcare policies where there aren't enough resources to save everyone, and so any decision will involve a choice to let people die.

But ultimately, I don't think power fundamentally changes the paradigm.  For one thing, the more influential a person is, the more consequential their violations of deontological rules will be, which cautions against being too quick to think that a high-stakes decision shouldn't be constrained by the sorts of moral rules we apply to more mundane decisions.  In this case, it is obvious that the choices made by FTX have been extremely harmful and eroded a public sense of trust and integrity.  

In addition, there's a difference between people who are entrusted to make difficult moral tradeoffs, and people who are not. Where someone is elected to determine policy, they may have to make high-stakes decisions and moral tradeoffs that can't totally align with intuitive deontological rules. In other words, they're given a license to make those calls. But it's a different case where, as here, nobody entrusted FTX to make the decision whether to use customer funds the way it did. Assuming for the sake of this discussion that FTX made that decision because it believed the ends justified the means, FTX wasn't just making that judgment call--it was also deciding that it should be the decision-maker. To me, that is the real problem. Because FTX wasn't just contributing to a world where people are lied to for the greater good, or where people's wealth is gambled without their consent for the greater good. It was also contributing to a world where everyone makes that decision for themselves rather than deferring to the rules society has decided to impose. That world would be utter chaos, with everyone in a position of power, however they got it, deciding to substitute their judgment for the judgment of society.

I'm not saying nobody should ever make a decision they weren't entrusted with. If I had the chance to assassinate a president about to hit the red button and start a nuclear apocalypse, I wouldn't worry too much about the arrogance inherent in making that decision myself. And I hope others in that situation would do the same. But that's in part because I'm really, really, really confident that it's the right call. And that confidence is strengthened by the fact that I don't think the applicable laws (don't kill the president) properly address the situation where it's the only way to save the world from certain doom. And I think that if I could ask society for permission, I would get it, but I just don't have time. But in FTX's situation, it wasn't dealing with some unforeseeable circumstance, and there is no reason to think that society would have approved of its choice if asked. And nobody deliberately entrusted FTX with the authority to make these moral tradeoffs. So the only justification is that FTX simply knew better than everyone else, and when that is the only justification, I would guess in the vast majority of real-world cases it is an example of arrogance and motivated reasoning, not of a super-intelligent entity saving society from its own misguided sense of morality.

In short, I guess what I'm saying is that I agree that the precise intuitions that guide our mundane daily decisions are less applicable to people in positions of power.  But there are still intuitive rules that do apply--Were you entrusted to make this sort of decision?  Would fully informed people be likely to agree that the benefits of your choice outweigh the harm done? Are you breaking a rule that clearly wasn't established with the situation you're facing in mind?  Or are you just deciding that you know better than everyone else and that the consequences are so important that it justifies not only your arrogance, but the real harm done to society by promoting the idea that individuals should override social judgment calls with their own?

Ultimately, I agree that the answer isn't simply more deontology.  Rather, it's a greater respect for the moral values of the society we live in and the harm we cause if we violate them, as well as more humility with regards to our ability to determine when we know better and are therefore justified in breaking the rules.  I won't pretend that there's any side constraint or rule that I would never break, no matter the positive consequences of doing so.  For instance, if the entire world voted in favor of nuclear Armageddon, I'd still try to stop it.  But I can still say with confidence that, unless I'm entrusted with a position of power where I'm not expected or able to strictly adhere to those side constraints, I think it's extremely unlikely that a real-world situation would arise where I would feel justified doing so.  
