ChanaMessinger

4697 karma · Joined · www.chanamessinger.com
Interests: Forecasting

Bio

Participation: 2

Head of Video at 80,000 Hours

(Opinions here are my own by default, though I will sometimes speak in a professional capacity.)

Personal website: www.chanamessinger.com

Comments: 369

Topic contributions: 21

IMO that's a different category - there's a lot of that kind of thing as well and I'm glad it exists but I think it's useful to separate out.

Thanks for this. I’ve read through the whole thing though haven’t thought about the numbers in depth yet. I’m hoping to write a forum post with my retrospective on the AI in Context video at some point!


A few quick thoughts which I imagine won’t be very new to people:

  • Comments and comment analysis could also be a proxy for engagement and quality of engagement.
  • Someone said that it would be hard to predict future success from AI in Context based only on our one big video, and I strongly agree. We're hoping to release our next one in the next month, and I'm really excited about it, but by default we should expect a lot of regression to the mean. (Note: I wouldn't even think of us as having two videos. The other one is just a channel trailer we threw up to have something to introduce people to the channel.)
  • I like this question on the value of subsequent viewer minutes, and I don't currently have a take. I think some complicating factors are:
     - For one thing, it seems like a 5-minute video wouldn't do very well on YouTube, so making lots of 5-minute videos isn't necessarily a real alternative to one 45-minute video; 45-minute videos have their own niche. You might still want to make more 20-minute videos and fewer 45-minute videos.
     - I'm also not convinced that effort scales with time. Certainly editing time does, but often what you're trying to do (at least what we're trying to do) is tell a story and there's a certain length that allows you to tell the story. And so it's not convertible or fungible in the way that it might naively appear. 
     - To the point above about telling a story, I think part of the value of a video is whether people come away with an overall, memorable sense of what the video is about (the takeaways), and that might require telling a good story. Some stories might take 20 minutes to tell and some might take 45 minutes. Maybe you want to focus on the stories that take less time to tell if that makes the video take a lot less time or money, but as I say, I don't think effort scales that way for us.
     
  • For what it's worth, we're not currently focused very heavily on reaching the right target audience. We're currently doing product validation to see whether we know how to make good videos, but we'll be excited to think about that more in the future.
     


What are we doing about the MIRI book inbound?
 

Claim: The MIRI book might be a very big deal, read by lots of people

Mostly this is on vibes: the MIRI team is trying hard and seems very successful, and the book is getting a lot of buzz, great blurbs, some billboards, etc.

I saw this tweet

E.g., the book is likely to become a NYT bestseller. The exact position can be improved by more pre-orders. (The figure is currently at around 5k pre-orders, according to the q&a; +20k more would make it a #1 bestseller).

 

Chat says about that:

If preorders = 5k, you’re probably looking at 8k–15k total copies sold in week 1 (preorders + launch week sales).

Recently, nonfiction books debuting around 8k–12k week-1 copies often chart #8–#15 on the NYT list.

Lifetime Sales Ranges

Conservative: 20k–30k copies total (good for a nonfiction debut with moderate buzz).

Optimistic: 40k–60k (if reviews, media, podcasts, or TikTok keep it alive).

Breakout: 100k+ (usually requires either a viral moment, institutional adoption, or the author becoming part of a big public debate).

Is that a lot? I don't actually know; my guess would be that it's not that many, but it's a decent number and might get a lot of buzz, commentary, etc. This is a major crux, so I'd be interested in takes.

If true: there might be an influx of people into this space, or people hoping to get into this space, AND the space could lose a lot of impact if it's not ready to make use of this pipeline.

I think the arguments here are clear, but let me know if not.

 

Therefore, people/orgs should be thinking about how to make the best pipelines for the inflow.

e.g.

  • If you have next steps for people (BlueDot, CEA, MATS), be ready to retweet / restack MIRI’s materials and be like “if you care about this, here’s a way to get involved”
  • Similarly, maybe pitch MIRI on putting your org / next steps on their landing page for the book and see if they think that makes sense
  • Landing page / resource hub: “So you just read the MIRI book?” page that curates your content, fellow orgs’ resources, and next steps. Make it optimized for search and linkable.
  • Other?

 

Very interested in takes!

I hear this; I don't know if this is too convenient or something, but given that you were already concerned about the prioritization 80K was putting on AI (and I don't at all think you're alone there), I hope there's something more straightforward and clear about the situation as it stands now, where people can opt in or out of this particular prioritization, or of hearing the case for it.

Appreciate your work as a university organizer - thanks for the time and effort you dedicate to this (and also hello from a fellow UChicagoan, though many years ago).

Sorry I don't have much in the way of other recommendations; I hope others will post them.

I think others at 80k are best placed to answer this (for time zone reasons I’m most active in this thread right now), but for what it’s worth, I’m worried about the loss at the top of the EA funnel! I think it’s worth it overall, but I think this is definitely a hit.

That said, I'm not sure AI risk has to be abstract or speculative! AI is everywhere; I think it feels very real to some people (realer to some than to others), and the problems we're encountering are rapidly becoming less speculative (we have papers showing at least some amount of alignment faking, scheming, obfuscation of chain of thought, reward hacking, all that stuff!)

One question I have is how much it will be the case in the future that people looking for a general “doing good” framework will in fact bounce off of the new 80k. For instance, it could be the case that AI is so ubiquitous that it would feel totally out of touch to not be discussing it a lot. More compellingly to me, I think it’s 80k’s job to make the connection; doing good in the current world requires taking AI and its capabilities and risks seriously. We are in an age of AI, and that has implications for all possible routes to doing good.

I like your take on reputation considerations; I think lots of us will definitely have to eat non-zero crow if things really plateau, but I think the evidence is strong enough to care deeply about this and prioritize it, and I don’t want to obscure that we believe that for the reputational benefit.
