Hey Nick, just wanted to say thanks for this suggestion. We were trying to keep the post succinct, but in retrospect I would have liked to include more of the mood of Conor’s comment here without losing the urgency of the original post. I too hate that this is the timeline we’re in.
I’m really sorry this post made you sad and confused. I think that’s an understandable reaction, and I wish I had done more to mitigate the hurt this update could cause. As someone who came into EA via global health, I personally very much value the work that you and others are doing on causes such as global development and factory farming.
A couple comments on other parts of your post in case it’s helpful:
> I also struggle to understand how this is the best strategy as an onramp for people to EA - assuming that is still part of the purpose of 80k. Yes there are other orgs which do career advising and direction, but they are still minnows compared with you. Even if your sole goal is to get as many people into AI work as possible, I think you could well achieve that better through helping people understand worldview diversification and helping them make up their own mind, while keeping of course a heavy focus on AI safety and clearly having that as your no. 1 cause.
Our purpose is not to get people into EA, but to help solve the world’s most pressing problems. I think the EA community and EA values are still a big part of that. (Arden has written more on 80k’s relationship to the EA community here.) But I also think the world has changed a lot and will change even more in the near future, and it would be surprising if 80k’s best path to impact didn’t change as well. I think focusing our ongoing efforts more on making the development of AGI go well is our best path to impact, building on what 80k has created over time.
But I might be wrong about this, and I think it’s reasonable that others disagree.
I don’t expect the whole EA community to take the same approach. CEA has said it wants to take a “principles-first approach”, rather than focusing more on AI as we will (though to be clear, our focus is driven by our principles, and we want to still communicate that clearly).
I think open communication about what different orgs are prioritising and why is really vital for coordination and to avoid single-player thinking. My hope is that people in the EA community can do this without making others with different cause prios feel bad about their disagreements or differences in strategy. I certainly don’t want anyone doing work in global health or animal welfare to feel bad about their work because of our conclusions about where our efforts are best focused — I am incredibly grateful for the work they do.
> Boy is that some bet to make.
Unfortunately I think that all the options in this space involve taking bets in an important way. We also think that it’s costly if users come to our site and don’t quickly understand that we think the current AI situation deserves societal urgency.
On the other costs that you mention in your post, I think I see them as less stark than you do. Quoting Cody’s response to Rocky above:
> We still plan to have our career guide up as a key piece of content, which has been a valuable resource to many people; it explains our views on AI, but also guides people through thinking about cause prioritisation for themselves. And as the post notes, we plan to publish and promote a version of the career guide with a professional publisher in the near future. At the same time, for many years 80k has also made it clear that we prioritise risks from AI as the world’s most pressing problem. So I don’t think I see this as clear a break from the past as you might.
I also want to thank you for sharing your concerns, which I realise can be hard to do. But it’s really helpful for us to know how people are honestly reacting to what we do.
Thanks David. I agree that the Metaculus question is a mediocre proxy for AGI, for the reasons you say. We included it primarily because it shows the magnitude of the AI timelines update that we and others have made over the past few years.
In case it’s helpful context, here are two footnotes that I included in the strategy document that this post is based on, but that we cut for brevity in this EA Forum version:
> We define AGI using the Morris, et al./DeepMind (2024) definition (see table 1) of "competent AGI" for the purposes of this document: an AI system that performs as well as at least 50% of skilled adults at a wide range of non-physical tasks, including metacognitive tasks like learning new skills.
This DeepMind definition of AGI is the one that we primarily use internally. I think that we may get strategically significant AI capabilities before this though, for example via automated AI R&D.
On the Metaculus definition, I included this footnote:
> The headline Metaculus forecast on AGI doesn't fully line up with the Morris, et al. (2024) definition of AGI that we use in footnote 2. For example, the Metaculus definition includes robotic capabilities, and doesn't include being able to successfully do long-term planning and execution loops. But nonetheless I think this is the closest proxy for an AGI timeline that I've found on a public prediction market.
Hey Greg! I personally appreciate that you and others are thinking hard about the viability of giving us more time to solve the challenges that I expect we’ll encounter as we transition to a world with powerful AI systems. Due to capacity constraints, I won’t be able to discuss the pros and cons of pausing right now. But as a brief sketch of my current personal view: I agree it'd be really useful to have more time to solve the challenges associated with navigating the transition to a world with AGI, all else equal. However, I’m relatively more excited than you about other strategies to reduce the risks of AGI, because I’m worried about the tractability of a (really effective) pause. I’d also guess my P(doom) is lower than yours.
Hey John, unfortunately a lot of the data we use to assess our impact contains people’s personal details or comes from others’ analyses that we’re not able to share. As such, it is hard for me to give a sense of how many times more cost-effective we think our marginal spending is compared with the community funding bar.
But the original post includes various details about assessments of our impact, including the plan changes we’ve tracked, placements made, the EA survey, and the Open Philanthropy survey. We will be working on our annual review in spring 2024 and may have more details to share about the impact of our programmes then.
If you are interested in reading about our perspective on our historical cost-effectiveness from our 2019 annual review, you can do so here.
Thanks for the question. To be clear, we do think growing the team will significantly increase our impact in expectation.
> a new career service org that caters to the other cause priorities of EA?
I'm guessing you are familiar with Probably Good? They are doing almost exactly the thing that you describe here. They are also accepting donations, and if you want to support them you can do so here.
Thanks for engaging with this post! A few thoughts prompted by your comment in case they are helpful:
Hey George — thanks for the question!
We haven’t done a full annual review of 2023 and the complete data isn’t in yet, so we haven't yet done a thorough assessment of the answer to your question. The answer probably differs quite a bit from programme to programme. But here are a few thoughts that seemed relevant to me:
On web:
On podcast:
On advising:
On job board:
Additional points:
I haven't read it yet, but Zershaaneh Qureshi at Convergence Analysis recently wrote a report on pathways to short timelines.