
TL;DR

In a sentence: 
We are shifting our strategic focus so that our proactive effort goes towards helping people work on safely navigating the transition to a world with AGI, while keeping our existing content up.

In more detail:

We think it’s plausible that frontier AI companies will develop AGI by 2030. Given the significant risks involved, and the fairly limited amount of work that’s been done to reduce these risks, 80,000 Hours is adopting a new strategic approach to focus our efforts in this area.  

During 2025, we are prioritising:

  1. Deepening our understanding as an organisation of how to improve the chances that the development of AI goes well
  2. Communicating why and how people can contribute to reducing the risks
  3. Connecting our users with impactful roles in this field
  4. Fostering an internal culture which helps us to achieve these goals

We remain focused on impactful careers, and we plan to keep our existing written and audio content accessible to users. However, we are narrowing our focus as we think that most of the very best ways to have impact with one’s career now involve helping make the transition to a world with AGI go well.  

This post goes into more detail on why we’ve updated our strategic direction, how we hope to achieve it, and what we think the community implications might be, and it answers some potential questions.

Why we’re updating our strategic direction

Since 2016, we've ranked ‘risks from artificial intelligence’ as our top pressing problem. Whilst we’ve provided research and support on how to work on reducing AI risks since that point (and before!), we’ve put in varying amounts of investment over time and between programmes.

We think we should consolidate our effort and focus because:  

  • We think that AGI by 2030 is plausible — and this is much sooner than most of us would have predicted 5 years ago. This is far from guaranteed, but we think the view is compelling based on analysis of the current flow of inputs into AI development and the speed of recent AI progress. We don’t aim to fully defend this claim here (though we plan to publish more on this topic soon in our upcoming AGI career guide), but we think the idea that something like AGI will plausibly be developed in the next several years is supported by multiple lines of evidence.
  • We are in a window of opportunity to influence AGI, before laws and norms are set in place.
  • 80k has an opportunity to help more people take advantage of this window. We want our strategy to be responsive to changing events in the world, and we think that prioritising reducing risks from AI is probably the best way to achieve our high-level, cause-impartial goal of doing the most good for others over the long term by helping people have high-impact careers. We expect the landscape to move faster in the coming years, so we’ll need a faster moving culture to keep up.

While many staff at 80k already regarded reducing risks from AI as our most important priority before this strategic update, our new strategic direction will help us coordinate efforts across the org, prioritise between different opportunities, and put in renewed effort to determine how we can best support our users in helping to make AGI go well.

How we hope to achieve it

At a high level, we are aiming to:

  1. Communicate more about the risks of advanced AI and how to mitigate them
  2. Identify key gaps in the AI space where more impactful work is needed
  3. Connect our users with key opportunities to positively contribute to this important work

To keep ourselves accountable to our high-level aims, we’ve made a more concrete plan. It’s centred around the following four goals:

  1. Develop deeper views about the biggest risks of advanced AI and how to mitigate them
    1. By increasing the capacity we put into learning and thinking about transformative AI, its evolving risks, and how to help make it go well.
  2. Communicate why and how people can help
    1. Develop and promote resources and information to help people understand the potential impacts of AI and how they can help.
    2. Contribute positively to the ongoing discourse around AI via our podcast and video programme to help people understand key debates and dispel misconceptions.
  3. Connect our users to impactful opportunities for mitigating the risks from advanced AI
    1. By growing our headhunting capacity, doing active outreach to people who seem promising for relevant roles, and driving more attention to impactful roles on our job board.
  4. Foster an internal culture which helps us to achieve these goals
    1. In particular, by moving quickly and efficiently, by increasing automation where possible, and by growing capacity. Increasing our content capacity is an especially high priority.

Community implications

We think helping the transition to AGI go well is a really big deal — so much so that we think this strategic focusing is likely the right decision for us, even through our cause-impartial lens of aiming to do the most good for others over the long term.

We know that not everyone shares our views on this. Some may disagree with our strategic shift because:

  • They have different expectations about AI timelines or views on how risky advanced AI might be.
  • They’re more optimistic about 80,000 Hours’ historical strategy of covering many cause areas rather than this narrower strategic shift, irrespective of their views about AI.

We recognise that prioritising AI risk reduction comes with downsides and that we’re “taking a bet” here that might not end up paying off. But trying to do the most good involves making hard choices about what not to work on and making bets, and we think it is the right thing to do ex ante and in expectation — for 80k and perhaps for other orgs/individuals too.  

If you are thinking about whether you should make analogous updates in your individual career or organisation, some things you might want to consider:

  • Whether how you’re acting lines up with your best-guess timelines
  • Whether — irrespective of what cause you’re working in — it makes sense to update your strategy to shorten your impact-payoff horizons or update your theory of change to handle the possibility and implications of TAI
  • Applying to speak to our advisors if you’re weighing up an AI-focused career change
  • What impact-focused career decisions make sense for you, given your personal situation and fit
    • While we think that most of the very best ways to have impact with one’s career now come from helping AGI go well, we still don’t think that everyone trying to maximise the impact of their career should be working on AI.

On the other hand, 80k will now be focusing less on broader EA community building and will do little to no investigation into impactful career options in non-AI-related cause areas. This means that these areas will be more neglected, even though we still plan to keep our existing content up. We think there is room for people to create new projects in this space, e.g. an organisation focused on biosecurity and/or nuclear security careers advice beyond where those areas intersect with AI. (Note that we still plan to advise on how to help biosecurity go well in a world of transformative AI, and on other intersections of AI with other areas.) We are also glad that there are existing organisations in this space, such as Animal Advocacy Careers and Probably Good, as well as orgs like CEA focusing on EA community building.

Potential questions you might have

What does this mean for non-AI cause areas?

Our existing written and audio content isn’t going to disappear. We plan for it to still be accessible to users, though written content on non-AI topics may not be featured or promoted as prominently in the future. We expect that many users will still get value from our backlog of content, depending on their priorities, skills, and career stage. Our job board will continue listing roles which don’t focus on preventing risks from AI, but will raise its bar for these roles.

But we’ll be hugely raising our bar for producing new content on topics that aren’t relevant for making the transition to AGI go well. The topics we think are relevant here are relatively diverse and expansive, including intersections where AI increases risks in other cause areas, such as biosecurity. When deciding what to work on, we’re asking ourselves “How much does this work help make AI go better?”, rather than “How AI-related is it?” 

We’re doing this because we don’t currently have enough content and research capacity to cover AI safety well and want to do that as a first priority. Of course, there are a lot of judgement calls to make in this area: which podcast guests might bring in a sufficiently large audience? What skills and cause-agnostic career advice is sufficiently relevant to making AGI go well? Which updates, like our recent mirror bio updates, are above the bar to make even if they’re not directly related to AI? One decision we’ve already made is going ahead with traditionally publishing our existing career guide, since the content is nearly ready, we have a book deal, and we think that it will increase our reach as well as help people develop an impact mindset about their careers — which is helpful for our new, more narrow goals as well.

We don't have a precise answer to all of these questions. But as a general rule, it’s probably safe to assume 80k won’t be releasing new articles on topics which don’t relate to making AGI go well for the foreseeable future.

How big a shift is this from 80k’s status quo?

At the most zoomed out level of “What does 80k do?”, this isn’t that big a change — we’re still focusing on helping people to use their careers to have an impact, we’re still taking the actions which we think will help us do the most good for sentient beings from a cause-impartial perspective, and we’re still ranking risks from AI as the top pressing problem.

But we’d like this strategic direction to cause real change at 80k — significantly shifting our priorities and organisational culture to focus more of our attention on helping AGI go well.

The extent to which that’ll cause noticeable changes to each programme's strategy and delivery depends on the team’s existing prioritisation and how costly dividing their attention between cause areas is. For example:

  • Advising has already been prioritising speaking to people interested in mitigating risks from AI, whereas the podcast has been covering a variety of topics.
  • Continuing to add non-AGI jobs to our job board doesn’t significantly trade off with finding new AGI job postings, whereas writing non-AGI articles for our site would come at the expense of writing AGI-focused articles.

Are EA values still important?

Yes!

As mentioned, we’re still using EA values (e.g. those listed here and here) to determine what to prioritise, including in making this strategic shift.

And we still think it’s important for people to use EA values and ideas as they’re thinking about and pursuing high-impact careers. Some particular examples which feel salient to us:

  • Scope sensitivity and thinking on the margin seem important for having an impact in any area, including helping AGI go well.
  • We think there are some roles / areas of work where it’s especially important to continually use EA-style ideas and be steadfastly pointed at having a positive impact in order for it to be good to work in the area. For example, in roles where it’s possible to do a large amount of accidental harm, like working at an AI company, or roles where you have a lot of influence in steering an organisation's direction.
  • There are also a variety of areas where EA-style thinking about issues like moral patienthood, neglectedness, leverage, etc. are still incredibly useful – e.g. grand challenges humanity may face due to explosive progress from transformatively powerful AI.

We have also appreciated that EA’s focus on collaborativeness and truthseeking has meant that people encouraged us to interrogate whether our previous plans were in line with our beliefs about AI timelines. We also appreciate that it’ll mean that people will continue to challenge our assumptions and ideas, helping us to improve our thinking on this topic and to increase the chance we’ll learn if we’re wrong.

What would cause us to change our approach?

This is now our default strategic direction, and so we'll have a reasonably high threshold for changing the overall approach.  

We care most about having a lot of positive impact, and while this strategic plan is our current guess of how we'll achieve that, we aim to be prepared to change our minds and plans if the evidence changes.

Concretely, we’re planning to identify the kinds of signs that would indicate this strategic plan is going in the wrong direction, so that we can react quickly if that happens. For example, we might get new information about the likely trajectory of AI or about our ability to have an impact with our new strategy that could cause us to re-evaluate our plans.

The goals, and actions towards them, mentioned above are specific to 2025, though we intend the strategy to be effective for the foreseeable future. After 2025, we’ll revisit our priorities and see which goals and aims make sense going forward.


Comments

I'm not sure exactly what this change will look like, but my current impression from this post leaves me disappointed. I say this as someone who now works on AI full-time and is mostly persuaded of strong longtermism. I think there's enough reason for uncertainty about the top cause and value in a broad community that central EA organizations should not go all-in on a single cause. This seems especially the case for 80,000 Hours, which brings people in by appealing to a general interest in doing good.

Some reasons for thinking cause diversification by the community/central orgs is good:

  • From an altruistic cause prioritization perspective, existential risk seems to require longtermism, including potentially fanatical views (see Christian Tarsney, Rethink Priorities). It seems like we should give some weight to causes that are non-fanatical.
  • Existential risk is not most self-identified EAs' top cause, and about 30% of self-identified EAs say they would not have gotten involved if it did not focus on their top cause (EA survey). So it does seem like you miss an audience here.
  • Organizations like 80,000 Hours set the tone for the community, and I think there's good rule-of-thumb reasons to
... (read more)

Hey Zach,

(Responding as an 80k team member, though I’m quite new)

I appreciate this take; I was until recently working at CEA, and was in a lot of ways very very glad that Zach Robinson was all in on general EA. It remains the case (as I see it) that, from a strategic and moral point of view, there’s a ton of value in EA in general. It says what’s true in a clear and inspiring way, a lot of people are looking for a worldview that makes sense, and there’s still a lot we don’t know about the future. (And, as you say, non-fanaticism and pluralistic elements have a lot to offer, and there are some lessons to be learned about this from the FTX era)

At the same time, when I look around the EA community, I want to see a set of institutions, organizations, funders and people that are live players, responding to the world as they see it, making sure they aren’t missing the biggest thing currently happening (or, if like 80k they are an org where one of its main jobs is communicating important things, making sure they aren’t letting their audiences miss it). Most importantly, I want people to act on their beliefs (with appropriate incorporation of heuristics, rules of thumb, outside views, etc). And to the extent tha... (read more)

Thanks @ChanaMessinger, I appreciate this comment and think that your tone here is healthier than the original announcement. Your well-written sentence below captures many of the important issues well.

"It could definitely be a mistake even within this framework (by causing 80k to not appeal parts of its potential audience) or empirically (on size of AI risk, or sizes of other problems) or long term (because of the damage it does to the EA community or intellectual lifeblood / eating the seed corn)."

FWIW I think a clear mistake is the poor communication here: the most obvious and serious potential community impacts have been missed and the tone is poor. If this had been presented in a way that made it look like the most serious potential downsides were considered, I would both feel better about it and be more confident that 80k has done a deep SWOT analysis here, rather than the really basic framing of the post, which is more like...

"AI risk is really bad and urgent let's go all in"

This makes the decision seem not only insensitive but also poorly thought through, which I’m sure is not the case. I imagine the chief concerns of the commenters were discussed at the highest level.

I'm assuming there are comms people at 80k and it surprises me that this would slip through like this.

Thanks for the feedback here. I mostly want to just echo Niel's reply, which basically says what I would have wanted to. But I also want to add for transparency/accountability's sake that I reviewed this post before we published it with the aim of helping it communicate the shift well – I focused mostly on helping it communicate clearly and succinctly, which I do think is really important, but I think your feedback makes sense, and I wish that I'd also done more to help it demonstrate the thought we've put into the tradeoffs involved and awareness of the costs. For what it's worth, we don't have dedicated comms staff at 80k - helping with comms is currently part of my role, which is to lead our web programme.

From an altruistic cause prioritization perspective, existential risk seems to require longtermism

No it doesn't! Scott Alexander has a great post about how existential risk issues are actually perfectly well motivated without appealing to longtermism at all.

When I'm talking to non-philosophers, I prefer an "existential risk" framework to a "long-termism" framework. The existential risk framework immediately identifies a compelling problem (you and everyone you know might die) without asking your listener to accept controversial philosophical assumptions. It forestalls attacks about how it's non-empathetic or politically incorrect not to prioritize various classes of people who are suffering now. And it focuses objections on the areas that are most important to clear up (is there really a high chance we're all going to die soon?) and not on tangential premises (are we sure that we know how our actions will affect the year 30,000 AD?)

Caring about existential risk does not require longtermism, but existential risk being the top EA priority probably requires longtermism or something like it. Factory farming interventions look much more cost-effective in the near term than x-risk interventions, and GiveWell top charities look probably more cost-effective.

1
Greg_Colbourn ⏸️
I'm not sure if GiveWell top charities do? Preventing extinction is a lot of QALYs, and it might not cost more than a few $B per year of extra time bought in terms of funding Pause efforts (~$1/QALY!?)
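To unpack the parenthetical arithmetic above: the sketch below is a minimal back-of-envelope reconstruction of how a figure in the region of ~$1/QALY could be reached. The population, QALY-weighting, and cost figures are illustrative assumptions, not numbers taken from the comment.

```python
# Rough sketch of the "~$1/QALY" back-of-envelope claim (assumed figures throughout).

world_population = 8e9          # people alive today (assumption)
qalys_per_person_year = 1.0     # count one extra year of survival as ~1 QALY each (simplification)
cost_per_year_bought = 4e9      # "a few $B per year of extra time bought" -> assume $4B/year

qalys_bought = world_population * qalys_per_person_year   # ~8 billion QALYs per year of delay
cost_per_qaly = cost_per_year_bought / qalys_bought

print(f"Cost per QALY: ${cost_per_qaly:.2f}")             # ~$0.50 under these assumptions
# Discounting for the chance that extinction would not have happened anyway, or that the
# effort fails to buy the extra year, scales the cost per QALY up proportionally.
```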

By my read, that post and the excerpt from it are about the rhetorical motivation for existential risk rather than the impartial ethical motivation. I basically agree that longtermism is not the right framing in most conversations, and it's also not necessary for thinking existential risk work would be more valuable than the marginal public dollar.

I included the qualifier "From an altruistic cause prioritization perspective" because I think that from an impartial cause prioritization perspective, the case is different. If you're comparing existential risk to animal welfare and global health, the links in my comment I think make the case pretty persuasively that you need longtermism.

-2
Greg_Colbourn ⏸️
It's not "longtermist" or "fanatical" at all (or even altruistic) to try and prevent yourself and everyone else on the planet (humans and animals) being killed in the near future by uncontrollable ASI[1] (quite possibly in a horrible, painful[2], way[3]). 1. ^ Indeed, there are many non-EAs who care a great deal about this issue now. 2. ^ I mention this as it's a welfarist consideration, even if one doesn't care about death in and of itself. 3. ^ Ripped apart by self-replicating computronium-building nanobots, anyone?

Strongly endorsing Greg Colbourn's reply here. 

When ordinary folks think seriously about AGI risks, they don't need any consequentialism, or utilitarianism, or EA thinking, or the Sequences, or long-termism, or anything fancy like that.

They simply come to understand that AGI could kill all of their kids, and everyone they ever loved, and could ruin everything they and their ancestors ever tried to achieve.

-1
Greg_Colbourn ⏸️
I'm not that surprised that the above comment has been downvoted to -4 without any replies (and this one will probably buried by an even bigger avalanche of downvotes!), but it still makes me sad. EA will be ivory-tower-ing until the bitter end it seems. It's a form of avoidance. These things aren't nice to think about. But it's close now, so it's reasonable for it to feel viscerally real. I guess it won't be EA that saves us (from the mess it helped accelerate), if we do end up saved.

The comment you replied to

  • acknowledges the value of x-risk reduction in general from a non-longtermist perspective
  • clarifies that it is making a point about the marginal altruistic value of x-risk vs AW or GHW work and points to a post making this argument in more detail

Your response merely reiterates that x-risk prevention has substantial altruistic (and non-altruistic) value. This isn't responsive to the claim about whether, under non-longtermist assumptions, that value is greater on the margin than AW or GHW work.

So even though I actually agree with the claims in your comment, I downvoted it (along with this one complaining about the downvotes) for being off-topic and not embodying the type of discourse I think the EA Forum should strive for.

5
Greg_Colbourn ⏸️
Thanks for the explanation.  Whilst zdgroff's comment "acknowledges the value of x-risk reduction in general from a non-longtermist perspective" it downplays it quite heavily imo (and the OP comment does even more, using the pejorative "fanatical"). I don't think the linked post makes the point very persuasively. Looking at the table, at best there is an equivalence. I think a rough estimate of the cost effectiveness of pushing for a Pause is orders of magnitude higher.
5
yanni kyriacos
You don't need EAs Greg - you've got the general public!

Adding a bit more to my other comment:

For what it’s worth, I think it makes sense to see this as something of a continuation of a previous trend – 80k has for a long time prioritised existential risks more than the EA community as a whole. This has influenced EA (in my view, in a good way), and at the same time EA as a whole has continued to support work on other issues. My best guess is that that is good (though I'm not totally sure - EA as a whole mobilising to help things go better with AI also sounds like it could be really positively impactful).

From an altruistic cause prioritization perspective, existential risk seems to require longtermism, including potentially fanatical views (see Christian Tarsney, Rethink Priorities). It seems like we should give some weight to causes that are non-fanatical.

I think that existential risks from various issues with AGI (especially if one includes trajectory changes) are high enough that one needn't accept fanatical views to prioritise them (though it may require caring some about potential future beings). (We have a bit on this here)

Existential risk is not most self-identified EAs' top cause, and about 30% of self-identified EAs say they wo

... (read more)
4
zdgroff
  I think the argument you linked to is reasonable. I disagree, but not strongly. But I think it's plausible enough that AGI concerns (from an impartial cause prioritization perspective) require fanaticism that there should still be significant worry about it. My take would be that this worry means an initially general EA org should not overwhelmingly prioritize AGI.

Hey Zach. I'm about to get on a plane so won't have time to write a full response, sorry! But wanted to say a few quick things before I do.

Agree that it's not certain or obvious that AI risk is the most pressing issue (though it is 80k's best guess & my personal best guess, and I don't personally have the view that it requires fanaticism.) And I also hope the EA community continues to be a place where people work on a variety of issues -- wherever they think they can have the biggest positive impact.

However, our top commitment at 80k is to do our best to help people find careers that will allow them to have as much positive impact as they can. & We think that to do that, more people should strongly consider and/or try out working on reducing the variety of risks that we think transformative AI poses. So we want to do much more to tell them that!

In particular, from a web specific perspective, I feel that the website doesn't feel consistent right now with the possibility of short AI timelines & the possibility that AI might not only pose risks from catastrophic misalignment, but also other risks, plus that it will probably affect many other cause areas. Given the size of ... (read more)

7
zdgroff
  Yeah, FWIW, it's mine too. Time will tell how I feel about the change in the end. That EA Forum post on the 80K-EA community relationship feels very appropriate to me, so I think my disagreement is about the application.

“In my role at CEA, I embrace an approach to EA that I (and others) refer to as “principles-first”. This approach doubles down on the claim that EA is bigger than any one cause area. EA is not AI safety; EA is not longtermism; EA is not effective giving; and so on. Rather than recommending a single, fixed answer to the question of how we can best help others, I think the value of EA lies in asking that question in the first place and the tools and principles EA provides to help people approach that question.”

Zach wrote this last year in his first substantive post as CEO of CEA, announcing that CEA will continue to take a “principles-first” approach to EA. (I’m Zach’s Chief of Staff.) Our approach remains the same today: we’re as motivated as ever about stewarding the EA community and ensuring that together we live up to our full potential.

Collectively living up to our full potential ultimately requires making a direct impact. Even under our principles-first approach, impact is our north star, and we exist to serve the world, not the EA community itself. But Zach and I continue to believe there is no other set of principles that has the same transformative potential t... (read more)

To the extent that this post helps me understand what 80,000 Hours will look like in six months or a year, I feel pretty convinced that the new direction is valuable—and I'm even excited about it. But I'm also deeply saddened that 80,000 Hours as I understood it five years ago—or even just yesterday—will no longer exist. I believe that organization should exist and be well-resourced, too.

Like others have noted, I would have much preferred to see this AGI-focused iteration launched as a spinout or sister organization, while preserving even a lean version of the original, big-tent strategy under the 80K banner, and not just through old content remaining online. A multi-cause career advising platform with thirteen years of refinement, SEO authority, community trust, and brand recognition is not something the EA ecosystem can easily replicate. Its exit from the meta EA space leaves a huge gap that newer and smaller projects simply can't fill in the short term.

I worry that this shift weakens the broader ecosystem, making it harder for promising people to find their path into non-AI cause areas—some of which may be essential to navigating a post-AGI world. Even from within an AGI-focused... (read more)

Hey Rocky —

Thanks for sharing these concerns. These are really hard decisions we face, and I think you’re pointing to some really tricky trade-offs.

We’ve definitely grappled with the question of whether it would make sense to spin up a separate website that focused more on AI. It’s possible that could still be a direction we take at some point. 

But the key decision we’re facing is what to do with our existing resources — our staff time, the website we’ve built up, our other programmes and connections. And we’ve been struggling with the fact that the website doesn’t really fully reflect the urgency we believe is warranted around rapidly advancing AI. Whether we launch another site or not, we want to honestly communicate about how we’re thinking about the top problem in the world and how it will affect people’s careers. To do that, we need to make a lot of updates in the direction this post is discussing.

That said, I’ve always really valued the fact that 80k can be useful to people who don’t agree with all our views. If you’re sceptical about AI having a big impact in the next few decades, our content on pandemics, nuclear weapons, factory farming — or our general career advice ... (read more)

Minor point, but I’ve seen big tent EA as referring to applying effectiveness techniques on any charity. Then maybe broad current EA causes could be called the middle-sized tent. Then just GCR/longtermism could be called the small tent (which 80k already largely pivoted to years ago, at least considering their impact multipliers). Then just AI could be the very small tent.

7
Mo Putera
(Tangent: "big tent EA" originally referred to encouraging a broad set of views among EAs while ensuring EA is presented as a question, but semantic drift I suppose...)
4
Denkenberger🔸
I was referring to this earlier academic article. I've also heard of discussion along a similar vein in the early days of EA.
3
Rockwell
Thanks! I wasn't sure the best terminology to use because I would never have described 80K as "cause agnostic" or "cause impartial" and "big tent" or "multi-cause" felt like the closest gesture to what they've been.

I think this is going to be hard for university organizers (as an organizer at UChicago EA). 

At the end of our fellowship, we always ask the participants to take some time to sign up for 1-1 career advice with 80k, and this past quarter the other organizers and I agreed that we felt somewhat uncomfortable doing this given that we knew that 80k was leaning a lot on AI -- while we had presented it as simply being very good for getting advice on all types of EA careers. This shift will probably make it so that we stop sending intro fellows to 80k for advice, and we will have to start outsourcing professional career advising to somewhere else (not sure where this will be yet). 

Given this, I wanted to know if 80k (or anyone else) has any recommendations on what EA University Organizers in a similar position should do (aside from the linked resources like Probably Good). 

  1. Another place people could be directed for career advice: https://probablygood.org/
  2. Since last semester, we have made career 1-on-1s a mandatory part of our introductory program.

    1. This semester, we will have two 1-on-1s
      1. The first one will be a casual conversation where the mentee-mentor get to learn more about each other
      2. The second one will be more in-depth, where we will share this 1-on-1 sheet (shamelessly poached from the 80K), the mentees will fill it out before the meeting, have a ≤1 hour long conversation with a mentor of their choice, and post-meeting, the mentor will add further resources to the sheet that may be helpful.

    The advice we give during these sessions ends up being broader than just the top EA ones, although we are most helpful in cases where:

    — someone is curious about EA/adjacent causes
    — someone has graduate school related questions
    — general "how to best navigate college, plan for internships, etc" advice

    Do y'all have something similar set up? 

2
MichaelStJules
Also, for those interested in animal welfare specifically: https://www.animaladvocacycareers.org/ Seems fine to direct people to 80,000 Hours for AI/x-risk, Animal Advocacy Careers for animal welfare and Probably Good more generally.

As a (now ex-) UChicago organizer and current Organizer Support Program mentor (though this is all in my personal capacity), I share Noah's concerns here.

I see how reasonable actors in 80k's shoes could come to the conclusions they came to, but I think this is a net loss for university groups, which disappoints me — I think university groups are some of the best grounds we have to motivate talented young people to devote their careers to improving the world, and I think the best way to do this is by staying principles-first and building a community around the core ideas of scope sensitivity, scout mindset, impartiality, and recognition of tradeoffs.

I know 80k isn't disavowing these principles, but the pivot does mean 80k is de-emphasizing them.

All this makes me think that 80k will be much less useful to university groups, because it

a) makes it much tougher for us to recommend 80k to interested intro fellows (personalized advising, even if it's infrequently granted, is a powerful carrot, and the exercises you have to complete to finish the advising are also very useful), and b) means that university groups will have to find a new advising source for their fresh members who haven't picked a cause-area yet.

6
Jess Binksmith
Thanks for raising this Noah. In addition to the ideas raised above, some other thoughts:
  • Giving fellowship members a menu of career-coaching options they could apply to (like the trifecta Conor mentions here, who all offer career advising)
  • Consider encouraging people to sign up to community and networking events, like EAG/x’s
  • Directing folks to 80k resources with more caveats about which places you think we might be helpful for your group, and what things we might be overlooking
    • We think that lots of our resources like our career guide and career planning template should still be useful irrespective of cause prioritisation, and caveating might help allay your worries about misconstruing our focus.
    • We also hope that our explicit focusing on AGI can help our own site / resources be more clear and transparent about our views on what’s most pressing.
7
ChanaMessinger
I hear this; I don't know if this is too convenient or something, but, given that you were already concerned at the prioritization 80K was putting on AI (and I don't at all think you're alone there), I hope there's something more straightforward and clear about the situation as it lies now where people can opt-in or out of this particular prioritization or hearing the case for it. Appreciate your work as a university organizer - thanks for the time and effort you dedicate to this (and also hello from a fellow UChicagoan, though many years ago). Sorry I don't have much in the way of other recommendations; I hope others will post them.
6
NickLaing
Even though we might have been concerned about the prioritisation, it still made sense to refer to 80k because it still at least gave the impression of openness to a range of causes. Now even if the good initial advice remains, all roads lead to AI so it feels like a bit of a bait and switch to send someone there when the advice can only lead one way from 80ks perspective.  Yes it's more "straightforward" and clear, but it's also a big clear gap now on the trusted, well known non-AI career advice front. Uni groups will struggle a bit but hopefully the career advice marketplace continues to improve
8
Neel Nanda
Huh, I think this way is a substantial improvement - if 80K had strong views about where their advice leads, far better to be honest about this and let people make informed decisions, than giving the mere appearance of openness
2
akash 🔸
From the update, it seems that:
  • 80K's career guide will remain unchanged
    • I especially feel good about this, because the guide does a really good job of emphasizing the many approaches of pursuing an impactful career
    • n = 1 anecdotal point: during tabling early this semester, a passerby mentioned that they knew about 80K because a professor had prescribed one of the readings from the career guide in their course. The professor in question and the class they were teaching had no connection with EA, AI Safety, or our local EA group.
      • If non-EAs also find 80K's career guide useful, that is a strong signal that it is well-written, practical, and not biased to any particular cause
    • I expect and hope that this remains unchanged, because we prescribe most of the career readings from that guide in our introductory program
  • Existing write-ups on non-AI problem profiles will also remain unchanged
  • There will be a separate AGI career guide
  • But the job board will be more AI focused

Overall, this tells me that groups should still feel comfortable sharing readings from the career guide and on other problem profiles, but selectively recommend the job board primarily to those interested in "making AI go well" or mid/senior non-AI people. Probably Good has compiled a list of impact-focused job boards here, so this resource could be highlighted more often.
5
NickLaing
That's interesting and would be nice if it was the case. That wasn't the vibe I got from the announcement but we will see.

Thanks for sharing this update. I appreciate the transparency and your engagement with the broader community!

I have a few questions about this strategic pivot:

On organizational structure: Did you consider alternative models that would preserve 80,000 Hours' established reputation as a more "neutral" career advisor while pursuing this AI-focused direction? For example, creating a separate brand or group dedicated to AI careers while maintaining the broader 80K platform for other cause areas? This might help avoid potential confusion where users encounter both your legacy content presenting multiple cause areas and your new AI-centric approach.

On the EA pathway: I'm curious about how this shift might affect the "EA funnel" - where people typically enter effective altruism through more intuitive cause areas like global health or animal welfare before gradually engaging with longtermist ideas like AI safety. By positioning 80,000 Hours primarily as an AI-focused organization, are you concerned this might make it harder for newcomers to find their way into the community if AI risk arguments initially seem abstract or speculative to them?

On reputational considerations: Have you weighed t... (read more)

Hi Håkon, Arden from 80k here.

Great questions.

On org structure:

One question for us is whether we want to create a separate website ("10,000 Hours?"), that we cross-promote from the 80k website, or to change the 80k website a bunch to front the new AI content. That's something we're still thinking about, though I am currently weakly leaning toward the latter (more on why below). But we're not currently thinking about making an entire new organisation.

Why not?

For one thing, it'd be a lot of work and time, and we feel this shift is urgent.

Primarily, though, 80,000 Hours is a cause-impartial organisation, and we think that means prioritising the issues we think are most pressing (& telling our audience about why we think that.)

What would be the reason for keeping one 80k site instead of making a 2nd separate one?

  1. As I wrote to Zach above, I think the site currently doesn't represent the possibility of short timelines or the variety of risks AI poses well, even though it claims to be telling people key information they need to know to have a high impact career. I think that's key information, so want it to be included very prominently.
  2. As a commenter noted below, it'd take time and
... (read more)

Have you weighed the potential reputational risks if AI development follows a more moderate trajectory than anticipated? If we see AI plateau at impressive but clearly non-transformative capabilities, this strategic all-in approach could affect 80,000 Hours' credibility for years to come. 

I feel like this argument has been implicitly holding back a lot of EA focus on AI (for better or worse), so thanks for putting it so clearly. I always wonder about the asymmetry of it: what about the reputational benefits that accrue to 80K/EA for correctly calling the biggest cause ever? (If they're correct)

9
ChanaMessinger
I think others at 80k are best placed to answer this (for time zone reasons I’m most active in this thread right now), but for what it’s worth, I’m worried about the loss at the top of the EA funnel! I think it’s worth it overall, but I think this is definitely a hit. That said, I’m not sure AI risk has to be abstract or speculative! AI is everywhere, I think feels very real to some people, can feel realer than others, and the problems we’re encountering are rapidly less speculative (we have papers showing at least some amount of alignment faking, scheming, obfuscation of chain of thought, reward hacking, all that stuff!) One question I have is how much it will be the case in the future that people looking for a general “doing good” framework will in fact bounce off of the new 80k. For instance, it could be the case that AI is so ubiquitous that it would feel totally out of touch to not be discussing it a lot. More compellingly to me, I think it’s 80k’s job to make the connection; doing good in the current world requires taking AI and its capabilities and risks seriously. We are in an age of AI, and that has implications for all possible routes to doing good. I like your take on reputation considerations; I think lots of us will definitely have to eat non-zero crow if things really plateau, but I think the evidence is strong enough to care deeply about this and prioritize it, and I don’t want to obscure that we believe that for the reputational benefit.
8
Simon Newstead
From a practical point of view, if all the traffic and search/other reputation is to 80k website, and the timelines are perceived to be short, I could imagine it makes sense to the team to directly adjust the focus of the website rather than take the years to build up a separate, additional brand.
8
akash 🔸
Makes sense. Just want to flag that tensions like these emerge because 80K is simultaneously a core part of the movement and also an independent organization with its goals and priorities. 

I'm a little sad and confused about this.

First I think it's a bit insensitive that a huge leading org like this would write such a significant post with almost no recognition that this decision is likely to hurt and alienate some people. It's unfortunate that the post is written in a warm and upbeat tone yet is largely bereft of emotional intelligence and recognition of potential harms of this decision. I'm sure this is unintentional but it still feels tone deaf. Why not acknowledge the potential emotional and community significance of this decision, and be a bit more humble in general? Something like...

"We realise this decision could be seen as sidelining the importance of many people's work and could hurt or confuse some people. We encourage you to keep working on what you believe is most important and we realize even after much painstaking thought we're still quite likely to be wrong here.'

I also struggle to understand how this is the best strategy as an onramp for people to EA - assuming that is still part of the purpose of 80k. Yes there are other orgs which do career advising and direction, but that are still minnows compared with you. Even if you're sole goal is to get as ma... (read more)

I’m really sorry this post made you sad and confused. I think that’s an understandable reaction, and I wish I had done more to mitigate the hurt this update could cause.  As someone who came into EA via global health, I personally very much value the work that you and others are doing on causes such as global development and factory farming.  

A couple comments on other parts of your post in case it’s helpful:

I also struggle to understand how this is the best strategy as an onramp for people to EA - assuming that is still part of the purpose of 80k. Yes there are other orgs which do career advising and direction, but that are still minnows compared with you. Even if you're sole goal is to get as many people into AI work as possible, I think you coud well achieve that better through helping people understand worldview diversification and helping them make up their own mind, while keeping of course a heavy focus on AI safety and clearly having that as your no 1 cause.

Our purpose is not to get people into EA, but to help solve the world’s most pressing problems. I think the EA community and EA values are still a big part of that. (Arden has written more on 80k’s relation... (read more)

5
NickLaing
Thanks for the thoughtful reply really appreciate this. To have the CEO of an org replying to comments is refreshing and I actually think an excellent use of a few hours of time. "I certainly don’t want anyone doing work in global health or animal welfare to feel bad about their work because of our conclusions about where our efforts are best focused — I am incredibly grateful for the work they do". This is fantastic to hear and makes a big difference to hear, thanks for this.  "Our purpose is not to get people into EA, but to help solve the world’s most pressing problems." - This might be your purpose, but the reality is that 80,000 hours plays an enormous role in getting peopl einto EA. Losing some (or a lot) of this impact could have been recognised as a potential large (perhaps the largest) tradeoff with the new direction. What probably hit me most about the announcement was the seeming lack of recognition of the potentially most important tradeoffs - it makes it seem like the tradeoffs haven't been considered when I'm sure they have. You're right that we make bets whatever we do or don't do.  Thanks again for the reply!

Sorry to hear you found this saddening and confusing :/

Just to share another perspective: To me, the post did not come across as insensitive. I found the tone clear and sober, as I'm used to from 80k content, and I appreciated the explicit mention that there might now be space for another org to cover other cause areas like bio or nuclear. 

These trade-offs are always difficult, but as any EA org, 80k should do what they consider highest expected impact overall rather than what's best for the EA community, and I'm glad they're doing that. 

4
NickLaing
What confused/saddened me wasn't so much their reasons for the change, but why they didn't address perhaps the 3-5 biggest potential objections / downsides / trade-offs to the decision. They even had a section "What does this mean for non-AI cause areas?" without stating the most important things that this means for non-AI cause areas, which include:

  1. Members of the current community feeling left out/frustrated because for the first time they are no longer aligned with / no longer served by a top EA organisation
  2. (From zdgroff) "Organizations like 80,000 Hours set the tone for the community, and I think there's good rule-of-thumb reasons to think focusing on one issue is a mistake. As 80K's problem profile on factory farming says, factory farming may be the greatest moral mistake humanity is currently making, and it's good to put some weight on rules of thumb in addition to expectations."
  3. The risk of narrowing the funnel into EA, as fewer people will be attracted to a narrower AI focus (mentioned a few times). This seems like a pretty serious issue to not address, given that 80k (like it or not) is an EA front page.

Just because 80k doesn't necessarily have these issues as their top goal, doesn't mean these issues don't exist. I sense a bit of "ostrich" mindset. I've heard a couple of times that they aren't aiming to be an onramp to EA, but that doesn't stop them from being one of the main onramps, as evidenced by studies that have asked people how they got into EA.

I think the tone of the post is somewhat tone deaf and could easily have been mitigated with some simple soft and caring language, such as "we realise that some people may feel..." and "This could make it harder for...". Maybe that's not the tone 80k normally takes, but I think that's a nicer way to operate, which costs you basically nothing.

Morally, I am impressed that you are doing an in many ways socially awkward and uncomfortable thing because you think it is right. 

BUT

I strongly object to you citing the Metaculus AGI question as significant evidence of AGI by 2030. I do not think that when people forecast that question, they are necessarily forecasting when AGI, as commonly understood or in the sense that's directly relevant to X-risk will arrive. Yes the title of the question mentions AGI. But if you look at the resolution criteria, all an AI model has to in order to resolve the question 'yes' is pass a couple of benchmarks involving coding and general knowledge, put together a complicated model car, and imitate. None of that constitutes being AGI in the sense of "can replace any human knowledge worker in any job". For one thing, it doesn't involve any task that is carried out over a time span of days or weeks, but we know that memory and coherence over long time scales is something current models seem to be relatively bad at, compared to passing exam-style benchmarks. It also doesn't include any component that tests the ability of models to learn new tasks at human-like speed, which again, seems to be an is... (read more)

9
Niel_Bowerman
Thanks David.  I agree that the Metaculus question is a mediocre proxy for AGI, for the reasons you say.  We included it primarily because it shows the magnitude of the AI timelines update that we and others have made over the past few years.   In case it’s helpful context, here are two footnotes that I included in the strategy document that this post is based on, but that we cut for brevity in this EA Forum version: This Deepmind definition of AGI is the one that we primarily use internally.  I think that we may get strategically significant AI capabilities before this though, for example via automated AI R&D.   On the Metaculus definition, I included this footnote:
4
David Mathers🔸
Thanks, that is reassuring. 
2
Manuel Allgaier
Curious if you have better suggestions for forecasts to use, especially for communicating to a wider audience that's new to AI safety. 
2
David Mathers🔸
I don't know of anything better right now. 
2
Niel_Bowerman
I haven't read it, but Zershaaneh Qureshi at Convergence Analysis wrote a recent report on pathways to short timelines.  

I've been very concerned that EA orgs, particularly the bigger ones, would be too slow to orient and react to changes in the urgency of AI risk, so I'm very happy that 80k is making this shift in focus. 

Any change this size means a lot of work in restructuring teams, their priorities and what staff is working on, but I think this move ultimately plays to 80k's strengths. Props.

I want to extend my sympathies to friends and organisations who feel left behind by 80k's pivot in strategy. I've talked to lots of people about this change in order to figure out the best way for the job board to fit into this. In one of these talks, a friend put it in a way that captures my own feelings: I hate that this is the timeline we're in.

I'm very glad 80,000 Hours is making this change. I'm not glad that we've entered the world where this change feels necessary.

To elaborate on the job board changes mentioned in the post:

  • We will continue listing non-AI-related roles, but will be raising our bar. With some cause areas, we still consider them relevant to AGI (for example: pandemic preparedness). With others, we still think the top roles could benefit from talented people with great fit, so we'll continue to post these roles.
  • We'll be highlighting some roles more prominently. Even among the roles we post, we think the best roles can be much more impactful than others. Based on conversations with experts, we have some guess at which roles these are, and want to feature them a little more strongly.
3
NickLaing
I think the post would have been far better if this kind of sentiment was front and center. Obviously it's still only a softener, but it shows understanding and empathy the CEO has missed. "I want to extend my sympathies to friends and organisations who feel left behind by 80k's pivot in strategy. I've talked to lots of people about this change in order to figure out the best way for the job board to fit into this. In one of these talks, a friend put it in a way that captures my own feelings: I hate that this is the timeline we're in."
5
Niel_Bowerman
Hey Nick, just wanted to say thanks for this suggestion.  We were trying to balance keeping the post succinct, but in retrospect I would have liked to have included more of the mood of Conor’s comment here without losing the urgency of the original post.  I too hate that this is the timeline we’re in.
3
NickLaing
Appreciate this - perhaps this can be improved in other communications outside the forum context! Even in appealing to people outside of EA to focus on AI, I think this kind of sentiment might help.

Makes sense, seems like a good application of the principle of cause neutrality: being willing to update on information and focus on the most cost-effective cause areas.

As an AI safety person who believes short timelines are very possible, I'm extremely glad to see this shift.

For those who are disappointed, I think it's worth mentioning that I just took a look at the Probably Good website and it seems much better than the last time I looked. I had previously been a bit reluctant to recommend it, but it now seems like a pretty good resource and I'm sure they'll be able to make it even better with more support.

Given that The 80,000 Hours Podcast is increasing its focus on AI, it's worth highlighting Asterisk Magazine as a good resource for exploring a broader set of EA-adjacent ideas.

I'd love to hear in more detail about what this shift will mean for the 80,000 Hours Podcast, specifically. 

The Podcast is a much-loved and hugely important piece of infrastructure for the entire EA movement. (Kudos to everyone involved over the years in making it so awesome - you deserve huge credit for building such a valuable brand and asset!) 

Having a guest appear on it to talk about a certain issue can make a massive real-world difference, in terms of boosting interest, talent, and donations for that issue. To pick just one example: Meghan Barrett's episode on insects seems to have been super influential. I'm sure that other people in the community will also be able to pick out specific episodes which have made a huge difference to interest in, and real-world action on, a particular issue. 

My guess is that to a large extent this boosted activity and impact for non-AI issues does not “funge” massively against work on AI. The people taking action on these different issues would probably not have alternatively devoted a similar level of resources to AI safety-related stuff. (Presumably there is *some* funging going on, but my gut instinct is that it's probably ... (read more)

Thanks for your comment and appreciation of the podcast.  

I think the short story is that yes, we’re going to be producing much less non-AI podcast content than we previously were — over the next two years, we tentatively expect ~80% of our releases to be AI/AGI focused. So we won’t entirely stop covering topics outside of AI, but those episodes will be rarer. 

We realised that in 2024, only around 12 of the 38 episodes we released on our main podcast feed were focused on AI and its potentially transformative impacts. On reflection, we think that doesn’t match the urgency we feel about the issue or how much we should be focusing on it. 

This decision involved very hard tradeoffs. It comes with major downsides, including limiting our ability to help motivate work on other pressing problems, along with the fact that some people will be less excited to listen to our podcast once it’s more narrowly focused. But we also think there’s a big upside: more effectively contributing to the conversation about what we believe is the most important issue of this decade.

On a personal level, I’ve really loved covering topics like invertebrate welfare, global health, and wild animal suffering, and I’m very sad we won’t be able to do as much of it. They’re still incredibly important and neglected problems. But I endorse the strategic shift we’re making and think it reflects our values. I’m also sorry it will disappoint some of our audience, but I hope they can understand the reasons we’re making this call. 

9
CB🔸
There's something I'd like to understand here. Most of the individuals that an AGI will affect will be animals, including invertebrates and wild animals, simply because they are so numerous, even if one grants them a lower moral value (although artificial sentience could be up there too). AI is already being used to make factory farming more efficient (the AI for Animals newsletter covers this in more detail). Is this an element you considered?

Some people in AI safety seem to consider only humans in the equation, while others assume that an aligned AI will, by default, treat animals correctly. Conversely, some people push for an aligned AI that takes all sentient beings into account (see the recent AI for Animals conference). I'd like to know what 80k's position on this topic will be (if this is public information).

Thanks for asking. Our definition of impact includes non-human sentient beings, and we don't plan to change that. 

3
Forumite
Thanks for the rapid and clear response, Luisa - it's very much appreciated. I'm incredibly relieved and pleased to hear that the Podcast will still be covering some non-AI stuff, even if it's less frequently than before. It feels like those episodes have huge impact, including in worlds where we see a rapid AI-driven transformation of society - e.g. by increasing the chances that whoever/whatever wields power in the future cares about all moral patients, not just humans. Hope you have fun making those, and all, future episodes :)
2
Forumite
This is probably motivated reasoning on my part, but the more I think about this, the more I think it genuinely does make sense for 80k to try to maintain as big and broad an audience for the Podcast as possible, whilst also ramping up its AI content. The alternative would be to turn the Podcast into an effectively AI-only show, which would presumably limit the audience quite a lot (?). I'm genuinely unsure what the best strategy is here, from 80k's point of view, if the objective is something like "maximise listenership for AI-related content". Hopefully, if it's a close call, they might err on the side of broadness, in order to be cooperative with the wider EA community.

I generally support the idea of 80k Hours putting more emphasis on AI risk as a central issue facing our species.

However, I think it's catastrophically naive to frame the issue as 'helping the transition to AGI go well'. This presupposes that there is a plausible path for (1) AGI alignment to be solved, for (2) global AGI safety treaties to be achieved and enforced in time, and for (3) our kids to survive and flourish in a post-AGI world.

I've seen no principled arguments to believe that any of these three things can be achieved. At all. And certainly not in the time frame we seem to have available.

So the key question is: if there is actually NO credible path for 'helping the transition to AGI go well', should 80k Hours be pursuing a strategy that amounts to a whole lot of cope, rearranges deck chairs on the Titanic, and gives a false sense of comfort and security to AI devs, EA people, politicians, and the general public?

I think 80k Hours has done a lot of harm in the past by encouraging smart young EAs to join AI companies to try to improve their safety cultures from within. As far as I've seen, that strategy has been a huge failure for AI safety, and a huge win for... (read more)

Hey Geoffrey,

Niel gave a response to a similar comment below -- I'll just add a few things from my POV:

  • I'd guess that pausing (incl. for a long time) or slowing down AGI development would be good for helping AGI go well if it could be done by everyone / enforced / etc., so figuring out how to do that would be in scope for this narrower focus. So e.g. figuring out how an indefinite pause could work (maybe in a COVID-crisis-like world where the Overton window shifts?) seems helpful.
  • I (and others at 80k) am just a lot less pessimistic about the prospects for AGI going well / not causing an existential catastrophe. So we just disagree with the premise that "there is actually NO credible path for 'helping the transition to AGI go well'". In my case, maybe because I don't believe your (2) is necessary (though various other governance things probably are) and I think your (1) isn't that unlikely to happen (though it's very far from guaranteed!).
  • I'm at the same time more pessimistic about everyone in the world stopping development of this hugely commercially exciting technology, so I feel like trying for that would be a bad strategy.

From the perspective of someone who thinks AI progress is real and might happen quickly over the next decade, I am happy about this update. Aside from Ezra Klein and the Kevin guy from the NYT, most mainstream media publications are not taking AI progress seriously, so hopefully this brings some balance to the information ecosystem.

From the perspective of "what does this mean for the future of the EA movement," I feel somewhat negatively about this update. Non-AIS people within EA are already dissatisfied by the amount of attention, talent, and resources that are dedicated to AIS, and I believe this will only heighten that feeling.

3
NickLaing
So well said, Akash - nice one.

I have a complicated reaction.

1. First, I think @NickLaing is right to point out that there's a missing mood here and to express disappointment that it isn't being sufficiently acknowledged.

2. My assumption is that the direction change is motivated by factors like:

   • A view of AI as a particularly time-sensitive area right now vs. areas like GHD often having a slower path to marginal impact (in part due to the excellence and strength of existing funding-constrained work).
   • An assumption that there are / will be many more net positions to fill in AI safety for the next few years, especially to the extent one thinks that funding will continue to shift in this direction. (Relatedly, one might think there will be relatively few positions to fill in certain other cause areas.)

   I would suggest that these kinds of views and assumptions don't imply that people who are already invested in other cause areas should shift focus. People who are already on a solid path to impact are not, as I understand it, 80K's primary target audience.

3. I'm generally OK with 80K going in this direction if that is what its staff, leadership, and donors want. I've taken a harder-line stance on this sort of thing to the ... (read more)

6
Toby Tremlett🔹
I love point 3 "to the extent that I see something as core infrastructure that is a natural near-monopoly (e.g., the Forum, university groups) [...] I think there's an enhanced obligation to share the commons" - that's a good articulation of something I feel about Forum stewardship. 

Is there a possible world in which the non-AI work of 80k could be "divested" or "spun out" into a separate org? I understand that this in and of itself could be a huge undertaking and may defeat the purpose of the realignment of values -- but could the door remain open to this if another person or org expressed interest?

This seems a reasonable update, and I appreciate the decisiveness, and clear communication. I'm excited to see what comes of it!

Here is a simple argument that this strategic shift is a bad one:

(1) There should be (at least) one EA org that gives career advice across cause areas.

(2) If there should be such an org, it should be (at least also) 80k.

(3) Thus, 80k should be an org that gives career advice across cause areas.

(Put differently, my reasoning is something like this: Should there be an org like the one 80k has been so far? Yes, definitely! But which one should it be? How about 80k!?)

I'm wondering with which premise 80k disagrees (and what you think about them!). They are indi... (read more)

But one could also reason:

(1) There should be (at least) one EA org focused on AI risk career advice; it is important that this org operate at a high level at the present time.

(2) If there should be such an org, it should be -- or maybe can only be -- 80K; it is more capable of meeting criterion (1) quickly than any other org that could try. It already has staff with significant experience in the area and organizational competence to deliver career advising services with moderately high throughput.

(3) Thus, 80K should focus on AI risk career advice.

If one generally accepts both your original three points and these three, I think they are left with a tradeoff to make, focusing on questions like:

  • If both versions of statement (1) cannot be fulfilled in the next 1-3 years (i.e., until another org can sufficiently plug whichever hole 80K didn't fill), which version is more important to fulfill during that time frame?
  • Given the capabilities and limitations of other orgs (both extant and potential future), would it be easier for another org to plug the AI-focused hole or the general hole?
2
Jakob Lohmar
Good reply! I thought of something similar as a possible objection to my premise (2) that 80k should fill the role of the cause-neutral org. Basically, there are opportunity costs to 80k filling this role because it could also fill the role of (e.g.) an AI-focused org. The question is how high these opportunity costs are, and you point out two important factors. What I take to be important, and plausibly decisive, is that 80k is especially well suited to fill the role of the cause-neutral org (more so than the role of the AI-focused org) due to its history and the brand it has built. Combined with a 'global' perspective on EA according to which there should be one such org, it seems plausible to me that it should be 80k.

After all, we don't want to do the most good in cause area X but the most good, period.

Yes, and 80k think that AI safety is the cause area that leads to the most good. 80k never covered all cause areas - they didn't cover the opera or beach cleanup or college scholarships or 99% of all possible cause areas. They have always focused on what they thought were the most important cause areas, and they continue to do so. Cause neutrality doesn't mean 'supporting all possible causes' (which would be absurd); it means 'being willing to support any cause area, if the evidence suggests it is the best'.

Perhaps this is a bit tangential, but I wanted to ask since the 80k team seem to be reading this post. How have 80k historically approached the mental health effects of exposing younger (i.e. likely to be a bit more neurotic) people to existential risks? I’m thinking in the vein of Here’s the exit. Do you/could you recommend alternate paths or career advice sites for people who might not be able to contribute to existential risk reduction due to, for lack of a better word, their temperament? (Perhaps a similar thing for factory farming, too?)

For example, I... (read more)

4
Arden Koehler
I don't think we have anything written or official on this particular issue (though we have covered other mental health topics here). But this is one reason why we don't think everyone should work on AIS/trying to help things go well with AGI: even though we want to encourage more people to consider it, we don't blanket-recommend it to everyone. We wrote a little bit here about an issue that seems related - what to do if you find the case for an issue intellectually compelling but don't feel motivated by it.

Thanks for the update!

Where does this overall leave you in terms of your public association with EA? Many orgs (including ones that are not just focused on AIS) are trying to dissociate themselves from the EA brand for reputational reasons.

80k is arguably the org with the largest audience from the "outside world", while also having close ties with the EA community. Are you guys going to keep the status quo?

I will add my two cents on this in this footnote[1] too, but I would be super curious to hear your thoughts!

  1. ^

    I think in the short term a

... (read more)
4
Jess Binksmith
The most recent write up on our thinking on this is here (in addition to the comments about EA values in the post above). Our current plan is to continue with this approach. 

Thanks for the transparency! This is really helpful for coordination.

For anyone interested in what 80k is deprioritizing, this comment section might be a good space to pitch other EA career support ideas and offer support. 

There might be space for an organization specifically focused on high school graduates, helping them decide whether, where, and what to study. This might be the most important decision in one's life, especially for people like me who grew up in the countryside without really any intellectual role models and are open to moving abroad ... (read more)

I applaud the decision to take a big swing, but I think the reasoning is unsound and probably leads to worse worlds.

I think there are actions that look like “making AI go well” that actually are worse than not doing anything at all, because things like “keep human in control over AI” can very easily lead to something like value lock-in, or at least leaving it in the hands of immoral stewards. It’s plausible that if ASI is developed and still controlled by humans, hundreds of trillions of animals would suffer, because humans still want to eat meat from an a... (read more)

3
Cody_Fenwick
One reason we use phrases like "making AGI go well," rather than some alternatives, is that 80k is concerned about risks like lock-in of really harmful values, in addition to human disempowerment and extinction risk - so I sympathise with your worries here. Figuring out how to avoid these kinds of risks is really important, and recognising that they might arise soon is definitely within the scope of our new strategy. We have written about ways the future can look very bad even if humans have control of AI, for example here, here, and here.

I think it's plausible to worry that not enough is being done about these kinds of concerns - that depends a lot on how plausible they are and how tractable the solutions are, which I don't have very settled views on. You might also think that there's nothing tractable to do about these risks, so it's better to focus on interventions that pay off in the short term. But my view at least is that it is worth putting more effort into figuring out what the solutions here might be.
1
Maxtandy
Thanks Cody. I appreciate the thoughtfulness of the replies given by you and others. I'm not sure if you were expecting the community response to be as it is.

My expressed thoughts were a bit muddled. I have a few reasons why I think 80k's change is not good. I think it's unclear how AI will develop further, and multiple worlds seem plausible. Some of my reasons apply to some worlds and not others. The inconsistent overlap is perhaps leading to a lack of clarity. Here's a more general category of failure mode of what I was trying to point to.

I think in cases where AGI does lead to explosive outcomes soon, it's suddenly very unclear what is best, or even good. It's something like a wicked problem, with lots of unexpected second-order effects and so on. I don't think we have a good track record of thinking about this problem in a way that leads to solutions even on a first-order-effects level, as Geoffrey Miller highlighted earlier in the thread. In most of these worlds, what I expect will happen is something like:

1. Thinkers and leaders in the movement have genuinely interesting ideas and insights about what AGI could imply at an abstract or cosmic level.
2. Other leaders start working out what this actually implies individuals and organisations should do. This doesn't work though, because we don't know what we're doing. Due to unknown unknowns, the most important things are missed, and because of the massive level of detail in reality, the things that are suggested are significantly wrong at load-bearing points. There are also suggestions in the spirit of "we're not sure which of these directly opposing views X and Y are correct, and encourage careful consideration", because it is genuinely hard.
3. People looking for career advice or organisational direction etc. try to think carefully about things, but in the end, most just use it to rationalise a messy choice they make between X and Y that they actually make based on factors like convenience, cost and r
5
Cody_Fenwick
Thanks for the additional context! I think I understand your views better now and I appreciate your feedback. Just speaking for myself here, I think I can identify some key cruxes between us. I'll take them one by one:

I disagree with this. I think it's better if people have a better understanding of the key issues raised by the emergence of AGI. We don't have all the answers, but we've thought about these issues a lot and have ideas about what kinds of problems are most pressing to address and what some potential solutions are. Communicating these ideas more broadly and to people who may be able to help is just better in expectation than failing to do so (all else equal), even though, as with any problem, you can't be sure you're making things better, and there's some chance you make things worse.

I don't think I agree with this. I think the value of doing work in areas like global health or helping animals is largely in the direct impact of these actions, rather than any impact on what it means for the arrival of AGI. I don't think that even if, in an overwhelming success, we cut malaria deaths in half next year, that will meaningfully increase the likelihood that AGI is aligned or that the training data reflects a better morality. It's more likely that directly trying to work to create beneficial AI will have these effects. Of course, the case for saving lives from malaria is still strong, because people's lives matter and are worth saving.

Recall that the XPT is from 2022, so there's a lot that's happened since. Even still, here's what Ezra Karger noted about expectations of the experts' and forecasters' views when we interviewed him on the 80k podcast:

My understanding is that XPT was using the definition of AGI used in the Metaculus question cited in Niel's original post (though see his comment for some caveats about the definition). In March 2022, that forecast was around 2056-2058; it's now at 2030. The Metaculus question also has over 1500 forecasters, wherea

You’re shifting your resources, but should you change your branding?

Focusing on new articles and research about AGI is one thing, but choosing to brand yourselves as an AI-focused career organisation is another.

Personal story (causal thinking): I first discovered EA principles while researching how to do good with my career; aside from 80k, all the well-ranked websites were not impact-focused. If the website had been specifically about AI or existential risk careers, I'm quite sure I would've skipped it and spent years not discovering EA principle... (read more)

8
Arden Koehler
Hi Romain, thanks for raising these points (and also for your translation!). We are currently planning to retain our cause-neutral (and cause-opinionated), impactful-careers branding, though we do want to update the site to communicate much more clearly and urgently our new focus on helping things go well with AGI, which will affect our brand. How to navigate the kinds of tradeoffs you are pointing to is something we will be thinking about more as we propagate this shift in focus through to our most public-facing programmes. We don't have answers just yet on what that will look like, but we do plan to take into account feedback from users on different framings to try to help things resonate as well as we can, e.g. via A/B tests and user interviews.
4
david_reinstein
I would lean the other way, at least in some comms. You wouldn't want people to think that (e.g.) "the career guidance space in high-impact global health and wellbeing is being handled by 80k". Changing branding could more clearly open opportunities for other orgs to enter spaces like that.

Will this affect the 80k job board?

Will you continue to advertise jobs in all top cause areas equally, or will the bar for jobs not related to AI safety be higher now? 

If the latter, is there space for an additional, cause-neutral job board that could feature all 80k-listed jobs and more from other cause areas? 

7
Conor Barnes 🔶
Hey Manuel,

I would not describe the job board as currently advertising all cause areas equally, but yes, the bar for jobs not related to AI safety will be higher now. As I mention in my other comment, the job board is interpreting this changed strategic focus broadly to include biosecurity, nuclear security, and even meta-EA work -- we think all of these have important roles to play in a world with a short timeline to AGI.

In terms of where we'll be raising the bar, this will mostly affect global health, animal welfare, and climate postings — specifically in terms of the effort we put into finding roles in these areas. With global health and animal welfare, we're lucky to have great evaluators like GiveWell and great programs like Charity Entrepreneurship to help us find promising orgs and teams. It's easy for us to share these roles, and I remain excited to do so. However, part of our work involves sourcing new roles and evaluating borderline roles. Much of this time will shift into more AIS-focused work.

Cause-neutral job board: it's possible! I think that our change makes space for other boards to expand. I also think that this creates something of a trifecta, to put it very roughly: the 80k job board with our existential risk focus, Probably Good with a more global health focus, and Animal Advocacy Careers with an animal welfare focus. It's possible that effort put into a cause-neutral board could be better put elsewhere, given that there's already coverage split between these three.

Arden from 80k here -- just flagging that most of 80k is currently asleep (it's midnight in the UK), so we'll be coming back to respond to comments tomorrow! I might start a few replies, but will be getting on a plane soon so will also be circling back.

I'm selfishly in favor of this change. My question is: will 80k rebrand itself, perhaps to "N k hours (where 1 < N < 50)"?

Ok, so in the spirit of

"EA’s focus on collaborativeness and truthseeking has meant that people encouraged us to interrogate whether our previous plans were in line with our beliefs"

[about p(doom|AGI)], and

"we aim to be prepared to change our minds and plans if the evidence"

[is lacking], I ask if you have seriously considered whether

"safely navigating the transition to a world with AGI"

is even possible? (Let alone at all likely from where we stand.)

You (we all) should be devoting a significant fraction of resources toward slowing down/pausi... (read more)

Hey Greg!  I personally appreciate that you and others are thinking hard about the viability of giving us more time to solve the challenges that I expect we’ll encounter as we transition to a world with powerful AI systems.  Due to capacity constraints, I won’t be able to discuss the pros and cons of pausing right now. But as a brief sketch of my current personal view: I agree it'd be really useful to have more time to solve the challenges associated with navigating the transition to a world with AGI, all else equal. However, I’m relatively more excited than you about other strategies to reduce the risks of AGI, because I’m worried about the tractability of a (really effective) pause. I’d also guess my P(doom) is lower than yours.
 

2
Greg_Colbourn ⏸️
Hi Niel, what I'd like to see is an argument for the tractability of successfully "navigating the transition to a world with AGI" without a global catastrophe (or extinction), i.e. an explanation for why your p(doom|AGI) is lower. I think this is much less tractable than getting a (really effective) Pause! (Even if a Pause itself is somewhat unlikely at this point.)

I think most people in EA have relatively low (but still macroscopic) p(doom)s (e.g. 1-20%), and have the view that "by default, everything turns out fine". And I don't think this has ever been sufficiently justified. The common view is that alignment will just somehow be solved enough to keep us alive, and maybe even let us thrive (if we just keep directing more talent and funding to research). But then the extrapolation to the ultimate implications of such imperfect alignment (e.g. gradual disempowerment -> existential catastrophe) never happens.