I do independent research on EA topics. I write about whatever seems important, tractable, and interesting (to me).
I have a website: https://mdickens.me/. Much of the content on my website gets cross-posted to the EA Forum, but I also write about some non-EA stuff like [investing](https://mdickens.me/category/finance/) and [fitness](https://mdickens.me/category/fitness/).
My favorite things that I've written: https://mdickens.me/favorite-posts/
I used to work as a software developer at Affirm.
OP's ranking had Doom Debates at 3rd-from-bottom; I recalculated the rankings in 3 different ways and Doom Debates came last in all of them. But I think this underrates the expected value of Doom Debates, because most of the value comes from the possibility that the channel blows up in the future.
Nice analysis, this is the sort of thing I like to see. I have some ideas for potential improvements that don't require significant effort:
Before doing this, we might have guessed that it'd be most cost-effective to make many cheap, low-effort videos. AI in Context belies this; they've spent the most per video (>$100k/video vs $10-
I think this is an artifact of the way video views are weighted, for two reasons:
Here are the results on views per dollar, rather than view-minutes per dollar:
(EDIT: apparently Markdown tables don't work and I can't upload screenshots to comments, so here's my best attempt at a table. You can view the spreadsheet if this is too ugly)
For lack of any better way to do a weighting, I also tried ranking channels by the average of "views per dollar relative to average" and "view-minutes per dollar relative to average", i.e.: [(views per dollar) / (average views per dollar) + (view-minutes per dollar) / (average view-minutes per dollar)] / 2
I think this isn't a great weighting system because it ends up basically the same as ranking by views-per-dollar. That's because views-per-dollar are right-skewed with a few big positive outliers; whereas view-minutes-per-dollar are left-skewed with a few negative outliers. Ranking by geometric mean might make more sense.
These are the results ranked by geometric mean:
(here is my spreadsheet)
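For anyone who wants to reproduce the two weightings, here's a minimal sketch. The numbers are made up for illustration (the real figures are in the spreadsheet), and the channel names are placeholders:

```python
# Two ways to combine views-per-dollar and view-minutes-per-dollar,
# as described above. All numbers here are illustrative, not real data.
views_per_dollar = {"A": 50.0, "B": 5.0, "C": 0.5}
view_minutes_per_dollar = {"A": 60.0, "B": 40.0, "C": 2.0}

def averaged_relative(channel):
    """Average of each metric relative to the cross-channel mean:
    [(v / avg_v) + (m / avg_m)] / 2."""
    avg_v = sum(views_per_dollar.values()) / len(views_per_dollar)
    avg_m = sum(view_minutes_per_dollar.values()) / len(view_minutes_per_dollar)
    return (views_per_dollar[channel] / avg_v
            + view_minutes_per_dollar[channel] / avg_m) / 2

def geometric_mean(channel):
    """Geometric mean of the two metrics; less sensitive to a single
    big outlier in either metric than the arithmetic average."""
    return (views_per_dollar[channel] * view_minutes_per_dollar[channel]) ** 0.5

# Rank channels best-to-worst under the geometric-mean weighting.
ranked = sorted(views_per_dollar, key=geometric_mean, reverse=True)
```

The point of the geometric mean is that a channel has to do reasonably well on *both* metrics to rank highly, whereas the arithmetic average lets one right-skewed metric dominate.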
This comment currently has 7 agree-votes and 0 disagree-votes, which makes me think the median EA's intuitions on protest effectiveness aren't as pessimistic as I thought.
(Perhaps people who are critical of a strategy are more likely to comment on it, which creates a skewed perception when reading comments?)
My rough impression is that there are indeed some "AI safety" orgs that operate in the way you describe, where they are focused more on promoting US hegemony and less on preventing AI from killing everyone.* But CAIS is more on the notkilleveryoneism side of things.
*From what I've seen, the biggest offenders are CSET, Horizon Institute, and Fathom.