Like many of you, I have struggled with this. It is a complex decision with a lot of uncertainty. In the interim, I found 80,000 Hours' advice helpful. Their career guide says something to the effect that, because I am young, I should try multiple fields of work before committing to one.

I actually changed my career plans senior year because of the strong argument EA makes for earning to give. Still, I wonder if earning to give is all it's cracked up to be. Macklemore told his son:

Don't try to change the world, find something that you love, And do it every day

Do that for the rest of your life, And eventually, the world will change

The following is taken from an article written from a self-interested perspective, but it still applies to the EA career dilemma:

What’s more, as the years pass, you will almost surely develop deep expertise at whatever it is you’ve been doing. At that point, even if few people in any one location place high value on what you do, you may find that your services become extremely valuable economically. That’s because technology has steadily extended the geographic reach of those who are best at what they do. If even a tiny fraction of a sufficiently large group of buyers cares about your service, you may be worth a fortune. There is, of course, no guarantee that you’ll become the best at what you choose to do, or that even if you do you’ll find practical ways to extend your reach enough to earn a big paycheck.

I usually don't care for high-risk, high-reward scenarios, but I wonder if following your passion through direct work is one of them. I know with certainty that I could increase my income by at least $8M simply by returning to a career in software or corporate management. I even have some good memories of that sort of work, so it's not as though those are terrible jobs. Still, I think one point is missing from this discussion: among those who have had a large effect on the world, were they pursuing their passion or earning to give?

Moreover, the significant psychological benefit to yourself surely has ripple effects that increase your impact outside of your career. Or maybe that's just wishful thinking.

No offense to anyone else, but I am most interested in hearing from people who have experience with earning to give (>30% of income) and/or following their passion for altruism through direct work.

Comments

Don't try to change the world, find something that you love, And do it every day

Do that for the rest of your life, And eventually, the world will change

Taken literally, this is clearly untrue. If I love surfing and go do that, I'm hardly going to change the world in a significant way.

However, it's gesturing at something true: it's really good to be good at your job. You'll have more impact, better career capital, and be happier. And one component of being good at your job is to find something you're intrinsically motivated by (i.e. love doing for its own sake). So, finding something that's intrinsically motivating is important for a high impact career.

We write about this here: https://80000hours.org/career-guide/personal-fit/

This is why we only recommend people pursue earning to give if they have high personal fit with the career.

Also bear in mind that if you're very interested in effective altruism, I think earning to give has become a little overrated, mainly for these reasons: https://80000hours.org/2015/11/why-you-should-focus-more-on-talent-gaps-not-funding-gaps/

I'm doing earning to give as a data scientist working in a marketing agency. I wouldn't say marketing or data science is my passion, but I do get to directly contribute to my favorite charities by making a lot of donations.

Another aspect often forgotten in this conversation is that it isn't an either-or. My job is 30-50 hours of work each week (depending on the week and the volume of client requests we get), but my productive capacity is much closer to 60 hours a week, so each week I have 10-30 hours of free time that I can use to work pro bono for various EA orgs, which definitely is my passion.

We should avoid the temptation to think it's an all-or-nothing choice between doing direct work from now until retirement and earning to give from now until retirement. (Not saying that was exactly your view.)

Here's one example of something in between these extremes. One can work at for-profit jobs as a means of skilling up, so that one's talents can be used for direct-work projects during non-work hours and/or later in one's career. Meanwhile, one can earn to give in the short term while remaining agnostic about the long-term path.

Peter Hurford has an interesting profile making similar points; I love the term 'exploration value': https://80000hours.org/career-guide/member-stories/peter-hurford/ https://80000hours.org/2014/10/update-on-peters-career-story/

You can also do some direct work while also doing ETG. :)

I did earning to give for 18 months in a job that I thought I would really enjoy, but after 12 months I realised I didn't. I'm now doing a PhD.

I think personal fit is pretty important, but at the end of the day it's still just another thing to consider, not the be-all and end-all. It's a pretty valid point that you will perform better in a role that you enjoy and thus advance further and have more impact, but if you're really trying to maximise impact there are limits to that (e.g. Hurford's example about surfing, unless surfing to give can be a thing).

So you should probably pick a job that you enjoy, but it's unlikely that the career where you will have the greatest marginal impact is also the career that you most enjoy. If it is, you're very lucky indeed. Otherwise, I would suggest finding some kind of balance.

I've been earning to give for a few years.

I'm not quite sure what the relevance of the second quote is supposed to be; it seems to argue for developing expertise in an area and is agnostic on whether that area should be 'direct' or 'indirect', since it's self-centred in the first place. A hint at what you're getting at might be in your title: you conflate 'follow your passion' with 'direct work'. I submit that while more people are probably passionate about charity work than about, say, working in finance, there are far more people who are passionate about neither.

Also, even if you are passionate about an area now, whether you will remain passionate for long enough to develop the expertise described is still in question; this seems like an end-of-history illusion*. That makes the message of the first quote dubious to me: what happens when what you love changes? This is one of the reasons 80k recommends against 'follow your passion' as career advice, especially for young people.

With all that said, if you are an excellent fit for an area (you're good at it, you enjoy it) and it happens to be an area which fits neatly into high-impact direct work or high-donation earning-to-give, then I'd generally recommend people do that. While their passions are likely to change, their current favoured areas are probably a better guide to what they will like in 15 years than picking at random. And that's what I'm doing. But those are the easy cases ;) Everyone else has to think a bit harder unfortunately, and that's where 80k comes in.

https://en.wikipedia.org/wiki/End-of-history_illusion

"The end-of-history illusion is a psychological illusion in which individuals of all ages believe that they have experienced significant personal growth and changes in tastes up to the present moment, but will not substantially grow or mature in the future.[1] Despite recognizing that their perceptions have evolved, individuals predict that their perceptions will remain roughly the same in the future."

Regarding the second quote: suppose you're deciding between a job you love and a job that pays double. The quote is saying that if you really love the job, you may wind up being paid comparably anyway, because people who are passionate about their work tend to become the best at it, and the best tend to be paid far more than average.

@cdc482 I share your concerns, suspect many others do as well, and appreciate the honesty of this post.

I think whether it's worth taking a higher-risk, higher-reward path toward doing good depends on a lot of specifics, such as those covered in 80K's framework (https://80000hours.org/articles/framework/).

In particular, the question of earning versus working on the front lines has to do with what sort of needs your cause has and with your would-be 'role impact'. Is the cause more funding-constrained, research-constrained, or talent-constrained in other ways? If the constraints involve certain talents, do you have them, and/or could you cultivate them further? Also, do you have solid backup options if the risky plan doesn't work out?

I'll tell my own EA story a bit in case you can relate. In my case, I'm relatively set, though not dead-set, on making animal advocacy my primary cause for the majority of my life. I'm earning-to-give-and-skill-up as a software developer, at least in the short term, for the following reasons:

  • because I understand the animal protection movement to be more funding-constrained than constrained by the particular talents I currently have;
  • to skill up on tech skills that could be useful to any movement;
  • to keep a solid for-profit career option open in case I try something else and it doesn't work out;
  • because of the high 'exploration value' of potentially doing tech entrepreneurship somewhere down the line;
  • to give myself time to assess whether there are better causes than animal protection (x-risk is an enticing cause, and I still want to think/learn more about issues like tractability, the importance of values-spreading, etc.).

Here is a provocative piece that challenges people to think outside of the box of merely earning-to-give long term: https://80000hours.org/2015/07/80000-hours-thinks-that-only-a-small-proportion-of-people-should-earn-to-give-long-term/
