Head of Video at 80,000 Hours
(Opinions here are my own by default, though I will sometimes speak in a professional capacity.)
Personal website: www.chanamessinger.com
I think others at 80k are best placed to answer this (for time-zone reasons, I'm the one most active in this thread right now), but for what it's worth, I'm worried about the loss at the top of the EA funnel! I think the change is worth it overall, but it's definitely a hit.
That said, I'm not sure AI risk has to be abstract or speculative! AI is everywhere; I think it feels very real to some people (more real than it does to others), and the problems we're encountering are rapidly becoming less speculative (we have papers showing at least some amount of alignment faking, scheming, obfuscation of chain of thought, reward hacking, all that stuff!).
One question I have is how much it will be the case in the future that people looking for a general “doing good” framework will in fact bounce off the new 80k. For instance, it could be that AI becomes so ubiquitous that it would feel totally out of touch not to discuss it a lot. More compellingly to me, I think it's 80k's job to make the connection: doing good in the current world requires taking AI and its capabilities and risks seriously. We are in an age of AI, and that has implications for all possible routes to doing good.
I like your take on reputation considerations; I think lots of us will definitely have to eat non-zero crow if things really plateau, but I think the evidence is strong enough to care deeply about this and prioritize it, and I don't want to obscure the fact that we believe that just for the reputational benefit.
Hey Zach,
(Responding as an 80k team member, though I’m quite new)
I appreciate this take; I was until recently working at CEA, and was in a lot of ways very, very glad that Zach Robinson was all in on general EA. It remains the case (as I see it) that, from a strategic and moral point of view, there's a ton of value in EA in general. It says what's true in a clear and inspiring way, a lot of people are looking for a worldview that makes sense, and there's still a lot we don't know about the future. (And, as you say, non-fanaticism and pluralistic elements have a lot to offer, and there are some lessons to be learned about this from the FTX era.)
At the same time, when I look around the EA community, I want to see a set of institutions, organizations, funders, and people that are live players, responding to the world as they see it and making sure they aren't missing the biggest thing currently happening (or, if, like 80k, one of their main jobs is communicating important things, letting their audiences miss it). Most importantly, I want people to act on their beliefs (with appropriate incorporation of heuristics, rules of thumb, outside views, etc.). And to the extent that 80k staff and leadership's beliefs changed with the new evidence, I'm excited for them to be acting on it.
I wasn't involved in this strategic pivot, but when I was considering whether to join, I was excited to see a certain kind of leaping to action in the organization.
It could definitely be a mistake even within this framework (by causing 80k not to appeal to parts of its potential audience), or empirically (on the size of AI risk, or the sizes of other problems), or in the long term (because of the damage it does to the EA community or its intellectual lifeblood, i.e. eating the seed corn). In the past I've worried that various parts of the community were jumping too fast into what's shiny and new, but 80k has been talking about this for more than a year, which is reassuring.
I think the 80k leadership have thoughts about all of these, but I agree that this blog post alone doesn’t fully make the case.
I think the right answer to these uncertainties is some combination of digging in and arguing about them (as you've started here; maybe there's a longer conversation to be had) and waiting to see how these bets turn out.
Anyway, I appreciate considerations like the ones you’ve laid out because I think they’ll help 80k figure out if it’s making a mistake (now or in the future), even though I’m currently really energized and excited by the strategic pivot.
We’ve started adding support for search operators in the search text box. Right now you can use the “user” operator to filter by author, and the “topic” operator to filter by topic, though these will currently only do exact matches and are case-sensitive. Note that there is already a topic filter on the left side, if that is more convenient for you.
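For example, assuming these operators use the common operator:value syntax (the exact syntax, and the author and topic names here, are illustrative rather than confirmed), a query like user:ExampleAuthor topic:Community would restrict results to posts by that exact author tagged with that exact topic.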
Oooh, I'm especially excited about this for comments, but it looks like it doesn't work for comments. Is that right?
I hear this; I don't know if this is too convenient or something, but given that you were already concerned about the prioritization 80k was putting on AI (and I don't at all think you're alone there), I hope there's something more straightforward and clear about the situation as it stands now, in that people can opt in or out of this particular prioritization, or of hearing the case for it.
Appreciate your work as a university organizer - thanks for the time and effort you dedicate to this (and also, hello from a fellow UChicagoan, though from many years ago).
Sorry I don't have much in the way of other recommendations; I hope others will post them.