
Austin

Cofounder @ Manifund & Manifold
4147 karma · San Francisco, CA, USA

Bio

Hey there, I'm Austin, currently running https://manifund.org. Always happy to meet people; reach out at akrolsmir@gmail.com!

Comments (236)

For the $1m estimate, I think the figures were intended to include estimated opportunity cost foregone (eg when self-funding), and Marcus ballparked it at $100k/y * 10 years? But this is obviously a tricky calculation.

tbh, I would have assumed that the $300k through LTFF was not your primary source of funding -- it's awesome that you've produced your videos on relatively low budgets! (and maybe we should work on getting you more funding, haha)

Oh definitely! I agree that by default, paid ads reach lower-quality & less-engaged audiences, and the question would be how much to adjust that by.

(though paid ads might work better for a goal of reaching new people, of increasing total # of people who have heard of core AI safety ideas)

Thanks for the thoughtful replies, here and elsewhere!

  • Very much appreciate data corrections! I think medium-to-long term, our goal is to have this info in some kind of database where anyone can suggest data corrections or upload their own numbers, like Wikipedia or Github
  • Tentatively, I think paid advertising is reasonable to include. Maybe more creators should be buying ads! So long as you're getting exposure in a cost-effective way and reaching equivalently-good people, I think "spend money/effort to create content" and "spend money/effort to distribute content" are both very reasonable interventions
  • I don't have strong takes on quality weightings -- Marcus is much more of a video junkie than me, and has spent the last couple weeks with these videos playing constantly, so I'll let him weigh in. But broadly I do expect people to have very different takes on quality -- I'm not expecting people to agree on quality, but rather want people to have the chance to put down their own estimates. (I'm curious to see your takes on all the other channels too!)
  • Sorry if we didn't include your feedback in the first post -- I think the nature of this project is that waiting for feedback from everyone would delay our output too much, and we're aiming to post often and wait for corrections in public, mostly because we're extremely bandwidth-constrained (running this on something like 0.25 FTE between Marcus and me)

Hey Liron! I think growth in viewership is a key reason to start and continue projects like Doom Debates. I think we're still pretty early in the AI safety discourse, and the "market" should grow, along with all of these channels. 

I also think that there are many credible sources of impact other than raw viewership - for example, I think you interviewing Vitalik is great, because it legitimizes the field, puts his views on the record, and gives him space to reflect on what actions to take - even if not that many people end up seeing the video. (compare irl talks from speakers eg at Manifest - far fewer viewers per talk, but the theory of impact is somewhat different)

I have some intuitions in that direction (ie that for a given individual's exposure to a topic, the first minute is more valuable than the 100th), and that would be a case for supporting things like TikToks.

I'd love to get some estimates on what the drop-off in value looks like! It might be tricky to actually apply - we/creators have individual video view counts and lengths, but no data on viewer uniqueness (both within a single video and across different videos on the same channel, which I'd think should count as cumulative exposure).

The drop-off might be less than your example suggests - it's actually very unclear to me which of those 2 I'd prefer. 
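To make that comparison concrete, here's a minimal sketch of how one might model the drop-off. The geometric decay and every number below are my own assumptions for illustration, not part of any actual methodology:

```python
# Illustrative only: assume the value of a viewer's n-th minute on a topic
# decays geometrically, so the first minute is worth more than the 100th.
# The decay rate and the viewer counts below are made-up assumptions.

def minute_value(n: int, decay: float = 0.97) -> float:
    """Value of the n-th minute of exposure for one viewer (first minute = 1.0)."""
    return decay ** (n - 1)

def viewer_value(minutes_watched: int, decay: float = 0.97) -> float:
    """Total value from one viewer watching this many minutes."""
    return sum(minute_value(n, decay) for n in range(1, minutes_watched + 1))

# "Many viewers, brief exposure" vs "few viewers, deep exposure":
print(1000 * viewer_value(3))   # e.g. 1,000 people watching a 3-minute short
print(50 * viewer_value(60))    # e.g. 50 people watching a 60-minute interview
```

The interesting parameter is the decay rate; and without viewer-uniqueness data it's unclear whether repeat views should count as deeper exposure or as fresh first minutes, which is exactly the data gap mentioned above.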

AI safety videos can have impact by:

  1. Introducing people to a topic
  2. Convincing people to take an action (change careers, donate)
  3. Providing info to people working in the field

And shortform does relatively better on 1 and worse on 2 and 3, imo. 


Super excited to have this out; Marcus and I have been thinking about this for the last couple of weeks. We're hoping this is a first step towards getting public cost-effectiveness estimates in AI safety; even if our estimates aren't perfect, it's good to have some made-up numbers.

Other thoughts:

  • Before doing this, we might have guessed that it'd be most cost-effective to make many cheap, low-effort videos. AI in Context belies this; they've spent the most per video (>$100k/video vs the $10-$20k that others spend) but still achieve strong QAVM/$ (a rough sketch of this arithmetic follows this list)
  • I think of Dwarkesh as perhaps being "too successful to be pigeonholed into 'AI safety'", but I still think his case is important because I expect media reach to be very heavy-tailed. Also, even if only a small fraction of his content is safety-oriented, he has the strongest audience quality (with eg heads of state and of megacorps on the record as fans)
  • Marcus chose to set Qa as 1 for "human chosen at random worldwide"; I think this might artificially inflate QAVM, and better baselines might be "random video watcher" (excluding the very poor) or "random US citizen". But I'm still unsure. Overall, we're more confident in the relative rankings, and less confident about what "1 QAVM" is worth.
  • One further goal I have is to establish impact valuations for different orgs/creators based on their "impact revenue". Perhaps before we published this, we should have polled some grantmakers or creators to see what they would have been excited to pay for 1 QAVM...
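For concreteness, here's a rough sketch of the kind of QAVM/$ comparison I have in mind, assuming QAVM is roughly the audience-quality weight (Qa) times views times average minutes watched. The formula and every figure below are illustrative assumptions, not our actual estimates or real channel data:

```python
# Illustrative only: a back-of-the-envelope QAVM/$ comparison, assuming
# QAVM ~= Qa (audience-quality weight) * views * average minutes watched.
# All figures below are invented; they are not our published estimates.

def qavm(views: int, avg_minutes: float, qa: float) -> float:
    """Quality-adjusted viewer minutes for a video or channel."""
    return qa * views * avg_minutes

def qavm_per_dollar(views: int, avg_minutes: float, qa: float, cost_usd: float) -> float:
    return qavm(views, avg_minutes, qa) / cost_usd

# A hypothetical expensive, high-production video vs a cheap one:
print(qavm_per_dollar(views=2_000_000, avg_minutes=8, qa=1.5, cost_usd=100_000))  # ~240 QAVM/$
print(qavm_per_dollar(views=60_000, avg_minutes=6, qa=1.0, cost_usd=15_000))      # ~24 QAVM/$
```

The point is just that a higher per-video budget can still win on QAVM/$ if it buys enough reach and audience quality; and switching the Qa baseline (eg to "random US citizen") rescales both numbers without changing their ratio, which is why we're more confident in relative rankings than in absolute QAVM values.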

Broadly agree that applying EA principles towards other cause areas would be great, especially for areas that are already intuitively popular and have a lot of money behind them (eg climate change, education). One could imagine a consultancy or research org that specializes as "Givewell of X".

Influencing where other billionaires give also seems great. My understanding is that this is Longview's remit, but I don't think they've succeeded at moving giant amounts yet, and there's a lot of room for others to try similar things. It might be harder to advise billionaires who already have a foundation (eg Bill Gates), since their foundations see it as their role to decide where money should go; but doing work to catch the eye of newly minted billionaires might be a viable strategy, similar to how Givewell found a Dustin.

We pre-ordered a batch of 50 books for Mox, and are hosting a Q&A and book signing with Nate Soares on (tentatively) Oct 9th. I encourage other groups (eg fellowships like MATS, hubs like Constellation) to organize events like this if your audiences would be interested!

Answer by Austin

We've been considering an effort like this on Manifund's side, and will likely publish some (very rudimentary) results soon!

Here are some of my guesses why this hasn't happened already:

  • As others mentioned, longtermism/xrisk work has long feedback loops, and the impact of different kinds of work is very sensitive to background assumptions
  • AI safety is newer as a field -- it's more like early-stage venture funding (which is about speculating on unproven teams and ideas) or academic research, rather than public equities (where there's lots of data for analysts to go over)
  • AI safety is also a tight-knit field, so impressions travel by word of mouth rather than through public analyses
  • It takes a special kind of person to be able to do Givewell-type analyses well; grantmaking skill is rare. It then takes some thick skin to publish work that's critical of people in a tight-knit field
  • OpenPhil and Longview don't have much incentive to publish their own analyses (as opposed to just showing them to their own donors); they'll get funded either way, and on the flip side, publishing their work exposes them to downside risk

I'm excited for the rest of this sequence!

Some basic questions I'd be curious to get data on: what are the sizes of the boards you've been surveying? How do board sizes compare to team sizes? How often do boards meet? And outside of meetings, how often are there other comms (email/Slack/etc)?
