by aog

I used to expect 80,000 Hours to tell me how to have an impactful career. Recently, I've started thinking it's basically my own personal responsibility to figure it out. I think this shift has made me much happier and much more likely to have an impactful career.

80,000 Hours targets the most professionally successful people in the world. That's probably the right idea for them - giving good career advice takes a lot of time and effort, and they can't help everyone, so they should focus on the people with the most career potential.

But, unfortunately for most EAs (myself included), the nine priority career paths recommended by 80,000 Hours are some of the most difficult and competitive careers in the world. If you’re among the 99% of people who are not Google programmer / top half of Oxford / Top 30 PhD-level talented, you might have a very tough time succeeding in these career paths as outlined by 80,000 Hours. 

So how can the vast majority of people have an impactful career? My best answer: A lot of independent thought and planning. Your own personal brainstorming and reading and asking around and exploring, not just following stock EA advice. 80,000 Hours won't be a gospel that'll give all the answers; the difficult job of finding impactful work falls to the individual.

I know that's pretty vague, much more an emotional mindset than a tactical plan, but I'm personally really happy I've started thinking this way. I feel less status anxiety about living up to 80,000 Hours's recommendations, and I'm thinking much more creatively and concretely about how to do impactful work.

More concretely, here are some ways you can do that:

  • Think of easier versions of the 80,000 Hours priority paths. Maybe you'll never work at OpenPhil or GiveWell, but can you work for a non-EA grantmaker reprioritizing their giving to more effective areas? Maybe you won't end up in the US Presidential Cabinet, but can you bring attention to AI policy as a congressional staffer or civil servant? (Edit: I forgot, 80k recommends congressional staffing!) Maybe you won't run operations at CEA, but can you help run a local EA group?
  • The 80,000 Hours job board actually has plenty of jobs that aren’t on their priority paths, and I think some of them are much more accessible for a wider audience.
  • 80,000 Hours tries to answer the question “Of all the possible careers people can have, which ones are the most impactful?” That’s the right question for them, but the wrong question for an individual. For any given person, I think it’s probably much more useful to ask, “What potentially impactful careers could I plausibly enter, and of those, which are the most impactful?” Start with what you already have - skills, connections, experience, insights - and think outwards from there: how can you transform what you already have into an impactful career?
  • There are tons of impactful charities out there. GiveWell has identified some of the top few dozen. But if you can get a job at the 500th most effective charity in the world, you’re still making a really important impact, and it’s worth figuring out how to do that.
  • Talk to people working on the most important problems who aren't in the top 1% of professional success - seeing how people like you have an impact can be really motivating and informative.
  • Personal donations can be really impactful - not earning to give millions in quant trading, just donating a reasonable portion of your normal-sized salary, wherever it is that you work.
  • Convincing people you know to join EA is also great - you can talk to your friends about EA, or attend/help out at a local EA group. Converting more people to EA just multiplies your own impact.

Don't let the fact that Bill Gates saved a million lives keep you from saving one. If you put some hard work into it, you can make a hell of a difference to a whole lot of people.

...

This is a repost of an old comment of mine. I spent a while writing and rewriting detailed elaborations of the comment, but I've finally accepted that those versions will not be published anytime soon, so I've just decided to repost the comment as-is. 

In the last 18 months, I think the EA career situation has changed substantially. Thanks to active efforts by 80,000 Hours and many others in the EA community to tailor career advice for a broader audience, there seems to be much less frustration with the availability of career options, at least as evidenced by EA Forum posts on the topic. 

Here's a few recent publications I've found very useful on the topic:

  • Arden Koehle's writing for 80,000 Hours, including this post about why there cannot be one "big list" of the world's most impactful careers, and why instead everyone should have their own personal list. 
  • 80,000 Hours' lists of important problem areas and impactful career ideas beyond the scope of what 80,000 Hours usually focuses on (also written by Arden Koehle!). 
  • SHOW: A framework for shaping your talent for direct work, written by Ryan Carey and Tegan McCaslin about how to develop your career capital. 
  • The work on the EA Local Career Advice Network by Vaidehi Agarwalla and many others. Here's a bunch more great resources they compiled. 
  • This post by ShayBenMoshe is a great example of detailed, in-depth career planning--probably the result of dozens of hours of writing, plus much more time spent thinking about and carrying out the plan. 

The point I'd emphasize the most is the title of this article: Plan your career on paper. If you are stressing out about your career, I'd recommend writing down what you want in a career, what problems you think are the most important, what careers could address those problems, and how you might enter those careers, then working both backwards from your goals and forwards from your current career capital to figure out your work-in-progress career plan. For the longest time, I thought I could do this passively in my head just by reading about EA online, but since writing down my thoughts I've understood my own situation much better and stressed about it much less. 80,000 Hours has always recommended this approach, and has recently authored some great new resources to help you get started.

Thank you to Aaron Gertler, Ryan Carey, Brenton Mayer, Michelle Hutchinson, Khorton, and many others for feedback and encouragement here. 

Comments



I like this post, and your conclusion really resonates with me. One more resource that I think is helpful to point people to is Ozy Brennan's Career Advice for the Everyday Effective Altruist. 

This is an interesting post and I have been seeing similar critiques in the past year. I wrote something similar but much less articulate once. I think the community is ready for practical advice, career options, and solutions for the not extremely outstanding masses.

Like advice for EAs with low GPAs and weak CVs, or advice on how to compare any two very specific options.

Your post is a very good starting point.

If you’re among the 99% of people who are not Google programmer / top half of Oxford / Top 30 PhD-level talented, you might have a very tough time succeeding in these career paths as outlined by 80,000 Hours. 

As a counterpoint, I think some of the most impactful roles are extremely neglected, so much so that even 80k might not have an article about them. A few AI safety field building projects come to mind when I backchain on preventing an AI catastrophe. And I think these projects require specialized skillsets more than they require the general competence that gets someone into Google / top half of Oxford.

Yeah, I agree. My perceptions have changed a bit since writing this post; at least in technical AI safety, there seems to be a pretty good on-ramp of education -> junior roles -> hopefully senior roles??? Haven’t gotten there yet ¯\_(ツ)_/¯
