Head of Video at 80,000 Hours
(Opinions here are my own by default, though I will sometimes speak in a professional capacity.)
Personal website: www.chanamessinger.com
Thanks for this. I’ve read through the whole thing, though I haven’t thought about the numbers in depth yet. I’m hoping to write a forum post with my retrospective on the AI in Context video at some point!
A few quick thoughts which I imagine won’t be very new to people:
Mostly this is on vibes: the MIRI team trying hard and seeming very successful, getting a lot of buzz, great blurbs, some billboards, etc.
I saw this tweet:
E.g., the book is likely to become a NYT bestseller. The exact position can be improved by more pre-orders. (The figure is currently at around 5k pre-orders, according to the Q&A; +20k more would make it a #1 bestseller.)
ChatGPT says about that:
If preorders = 5k, you’re probably looking at 8k–15k total copies sold in week 1 (preorders + launch week sales).
Recently, nonfiction books debuting around 8k–12k week-1 copies often chart #8–#15 on the NYT list.
Lifetime Sales Ranges
Conservative: 20k–30k copies total (good for a nonfiction debut with moderate buzz).
Optimistic: 40k–60k (if reviews, media, podcasts, or TikTok keep it alive).
Breakout: 100k+ (usually requires either a viral moment, institutional adoption, or the author becoming part of a big public debate).
Is that a lot? I don't actually know; my guess would be that it's not that many, but it's a decent number and might get a lot of buzz, commentary, etc. This is a major crux, so I'd be interested in takes.
I think the arguments here are clear, but let me know if not.
e.g.
Very interested in takes!
I hear this; I don't know if this is too convenient or something, but, given that you were already concerned about the prioritization 80K was putting on AI (and I don't at all think you're alone there), I hope there's something more straightforward and clear about the situation as it stands now, where people can opt in or out of this particular prioritization, or of hearing the case for it.
Appreciate your work as a university organizer - thanks for the time and effort you dedicate to this (and also hello from a fellow UChicagoan, though many years ago).
Sorry I don't have much in the way of other recommendations; I hope others will post them.
I think others at 80k are best placed to answer this (for time zone reasons I’m most active in this thread right now), but for what it’s worth, I’m worried about the loss at the top of the EA funnel! I think it’s worth it overall, but I think this is definitely a hit.
That said, I’m not sure AI risk has to be abstract or speculative! AI is everywhere; I think it feels very real to some people (realer to some than to others), and the problems we’re encountering are rapidly becoming less speculative (we have papers showing at least some amount of alignment faking, scheming, obfuscation of chain of thought, reward hacking, all that stuff!).
One question I have is how much it will be the case in the future that people looking for a general “doing good” framework will in fact bounce off of the new 80k. For instance, it could be the case that AI is so ubiquitous that it would feel totally out of touch to not be discussing it a lot. More compellingly to me, I think it’s 80k’s job to make the connection; doing good in the current world requires taking AI and its capabilities and risks seriously. We are in an age of AI, and that has implications for all possible routes to doing good.
I like your take on reputation considerations; I think lots of us will definitely have to eat non-zero crow if things really plateau, but I think the evidence is strong enough to care deeply about this and prioritize it, and I don’t want to obscure the fact that we believe that for the sake of reputational benefit.
IMO that's a different category. There's a lot of that kind of thing as well, and I'm glad it exists, but I think it's useful to separate it out.