Effective altruism is compatible with most moral viewpoints, and nothing fundamental about effective altruism requires that it become a near-exclusive focus of one's life. There is, however, an analysis of effective altruism that seems to implicitly disagree, sneaking in a near-complete focus on effectiveness via unstated assumptions. I think this type of analysis, which many people I have spoken to in the community (incorrectly) take for granted, is not a necessary conclusion, and in fact runs counter to the assumptions behind fiscal effective altruism. Effective careers are great, but making them the only "real" way to be an Effective Altruist should be strongly rejected.

The mistaken analysis goes as follows: if we are balancing priorities and take a consequentialist view, we should prioritize our decisions on the basis of overall impact. However, effective altruism has shown that different interventions differ in impact by orders of magnitude. Therefore, if we give any non-trivial weight to improving the world, the impact term becomes so large that it overwhelms every other consideration.

This can be illustrated with a notional career-choice model. In this model, someone has several different goals. Perhaps they wish to have a family, and think that the impact on their family is almost half of the total reason to pick a career, while their personal happiness is another almost-half. Finally, in line with the financial commitment to give 10% of their income, they "tithe" their career choice, assigning 10% of the weight to their positive impact on the broader world. Now they must choose between an "effective career" and a typical office job.

| Factor | Family (45%) | | Personal Happiness (45%) | | Beneficence (10%) | | Overall (100%) | |
|---|---|---|---|---|---|---|---|---|
| | Rating | Impact | Rating | Impact | Rating | Impact | Rating | Impact |
| Effective Career | 3/10 | 3 | 2/10 | 2 | 9/10 | 1000 | 3.15 | 102.25 |
| Office Job | 9/10 | 9 | 9/10 | 9 | 1/10 | 1 | 8.2 | 8.2 |

As the table illustrates, there are two ways to weigh the options: rating how strongly the person prefers each option, or assessing each option's impact. The office job has impact essentially only via donations, while the effective career addresses a global need. The first method leads to choosing the office job, the second to choosing the effective career. In this way, the non-trivial weight put on impact becomes overwhelming. (I will note that this analysis often fails to account for another issue that Effective Altruism used to focus on more strongly: replaceability. But even assuming a neglected area, where replaceability is negligible, the totalizing conclusion still obtains.)
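For readers who want to see the arithmetic behind the table, here is a minimal sketch in Python. The weights and scores are the illustrative numbers from the table above, and the structure and names are mine, not any established model:

```python
# Minimal sketch of the notional career-choice model from the table above.
# The weights and scores are illustrative numbers, not real data.

weights = {"family": 0.45, "happiness": 0.45, "beneficence": 0.10}

options = {
    "Effective Career": {
        "rating": {"family": 3, "happiness": 2, "beneficence": 9},
        "impact": {"family": 3, "happiness": 2, "beneficence": 1000},
    },
    "Office Job": {
        "rating": {"family": 9, "happiness": 9, "beneficence": 1},
        "impact": {"family": 9, "happiness": 9, "beneficence": 1},
    },
}


def weighted_score(scores: dict[str, float], w: dict[str, float]) -> float:
    """Weighted sum of per-factor scores."""
    return sum(w[factor] * value for factor, value in scores.items())


for name, option in options.items():
    by_rating = weighted_score(option["rating"], weights)
    by_impact = weighted_score(option["impact"], weights)
    print(f"{name}: by rating = {by_rating:.2f}, by impact = {by_impact:.2f}")
```

By preference rating the office job wins (8.2 vs. 3.15); by impact, the 1000-point beneficence term dominates despite carrying only 10% of the weight (102.25 vs. 8.2).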

The equivalent fiscal analysis certainly fails; dedicating 10% of your money to effective causes does not imply that, if the cause is very effective, you are required to give more than 10% of your money. This is not to say that the second analysis is confused - but it does require accepting that under a sufficiently utilitarian viewpoint, where your decisions weigh your benefit against others', even greatly prioritizing your own happiness creates a nearly-totalizing obligation to others. And that is not what Effective Altruism generally suggests.
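To spell out the disanalogy, here is a hypothetical bit of notation (mine, not anything from the original argument): let $I$ be income, $g$ the donation, $m$ the impact score of the most effective career, and $r_f$, $r_h$ the family and happiness ratings.

$$g = 0.10 \cdot I \qquad \text{vs.} \qquad \mathrm{score} = 0.45\, r_f + 0.45\, r_h + 0.10\, m$$

The donation stays capped at 10% of income however effective the recipient is, while the career score grows without bound as $m$ grows, so any fixed non-zero weight on impact is eventually swamped by a large enough $m$. That unboundedness, not the 10% weight itself, is what does the totalizing work.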

And to be clear, my claim is not particularly novel. To quote a recent EA Forum post from 80,000 Hours: “It feels important that working to improve the world doesn’t prevent me from achieving any of the other things that are really significant to me in life — for example, having a good relationship with my husband and having close, long-term friendships.”

It seems important, however, to clarify that in many scenarios it simply is not the case that an effective career requires anything like the degree of sacrifice the example above implies. While charities and altruistic endeavors often pay less than other jobs, the difference is usually a modest fraction of income, not an order of magnitude. And charitable organizations are often as good as or better than commercial enterprises in terms of collegiality, flexibility, and job satisfaction. Differences in income certainly matter for personal satisfaction, but for many people, effective careers should be seen as a reasonable trade-off, not as either the only morally acceptable choice or an obviously inferior one.

I think that many people who are new to EA, and those who are very excited about it, sometimes make a mistake in how they think about prioritizing, and don't pay enough attention to their own needs and priorities in their careers. Having a normal job and giving 10% of your income is a great choice for many Effective Altruists. Having a job at an effective organization is a great choice for many others. People are different, and the fact that some work at EA orgs certainly doesn't prove they are more committed to the cause, or better people. It just means that different people do different things, and in an inclusive community focused on effectiveness and reasoning, we should be happy with the different ways that different people contribute.

Comments (8)



I don't really disagree with you (ex: 2016, 2022) but have you seen EA writing or in-person discussion advocating choosing an impactful job where you'd rate your happiness 2/10 over a less impactful one where you'd rate it 9/10?

I have seen a few people in EA burn out at jobs they dislike because they feel too much pressure and don't prioritize themselves at all, and I've seen several people trying to find work in AI safety because it's the only effective thing to do on the issue that they were told was most important, despite not enjoying it. Neither of those is as extreme as the notional example, but both seem to be due to this class of reasoning.

(never spoke about this with anyone, but) I think about this like the classic balance between utilitarianism and deontology, "Go three-quarters of the way from deontology to utilitarianism and then stop".

I mean: Yeah, having a high impact is important, but don't throw out "enjoying life" and other things that might not be easily quantifiable. We're only human; if we try to forcefully give one consideration 1000x the weight of another, it totally might mess up our judgement in some bad way.

[this doesn't feel so well phrased, hopefully it still makes some sense. If not, I'll elaborate]

I think this is roughly right.

(That said, it's a balance, and three-quarters of the way to 100% EA dedicate-ism will sometimes feel and look quite a lot like crazy sacrifice, IMO.)

Yeah I have no idea if 75% is the correct constant. I mainly read this as "definitely not 100% and also not 99%"

[not a philosopher]

Yes - this post came out of drafting a larger post I'm writing that tries to deconstruct EA ideas more generally, and the way that EA really isn't the same as utilitarianism.

For EAs starting out, there should be some focus on just doing good and not necessarily trying to aggressively optimize for doing good better, especially if you don't have a lot of credibility in that space.

Also, at the end of the day EA is just a principle/value system which you can rely on in pretty much any career you end up making. The part about EA being a support system and a place to develop your values is often left out, and as a result a lot of early-stage excited EAs just want to "get into" or "get stuff out of" EA.

I think that "focus on just doing good and not necessarily trying to aggressively optimize for doing good better" is the wrong approach. Doing something to feel like you did something without actually trying is, in some ways, far worse than just admitting you're not doing good at present, and considering whether you want to change that.

And "A is a just a principle/value system which you can rely on in pretty much any career you end up making" sounds like it's missing the entire point of career choice.
