I do independent research on EA topics. I write about whatever seems important, tractable, and interesting (to me).
I have a website: https://mdickens.me/. Much of the content there gets cross-posted to the EA Forum, but I also write about some non-EA topics like [investing](https://mdickens.me/category/finance/) and [fitness](https://mdickens.me/category/fitness/).
My favorite things that I've written: https://mdickens.me/favorite-posts/
I used to work as a software developer at Affirm.
The claim: Working at a scaling lab is the best way to gain expertise in machine learning [which can then be leveraged into solving the alignment problem].
Coincidentally, earlier today I was listening to an interview with Zvi in which he said:
> If you think there is a group of let’s say 2,000 people in the world, and they are the people who are primarily tasked with actively working to destroy it: they are working at the most destructive job per unit of effort that you could possibly have. I am saying get your career capital not as one of those 2,000 people. That’s a very, very small ask. I am not putting that big a burden on you here, right? It seems like that is the least you could possibly ask for in the sense of not being the baddies.
(Zvi was talking specifically about working on capabilities at a frontier lab, not on alignment.)
Some people are promoting awareness of x-risks on social media, for example that Kurzgesagt video, which was funded by Open Philanthropy[1]. There's also Doom Debates, Robert Miles's YouTube channel, and some others. There are some media projects on Manifund too, for example this one.
If your question is "why aren't people doing more of that sort of thing?", then yeah, that's a good question. If I were the AI Safety Funding Czar, I would be allocating a bigger budget to media projects (both social media and traditional media).
There are two arguments that I actually believe against giving marginal funding to media projects:
There are other arguments that I don't believe, although I expect some people have arguments that have never even occurred to me. The main arguments I can think of that I don't find persuasive are:
I don't know for sure that that specific video was part of the Open Philanthropy grant, but I'm guessing it was, based on its content. ↩︎
Zvi's Big Nonprofits Post lists a bunch of advocacy orgs.
Last year I wrote about every policy/advocacy org that I could find at the time, although the list is now somewhat out of date; for example, The Midas Project (which Ben West mentioned) did not exist yet when I wrote it.
I was in a similar position to you—I wanted to donate to AI policy or advocacy, but as far as I know there aren't any grantmakers focusing on advocacy. You shouldn't necessarily trust that I did a good job of evaluating organizations, but maybe my list can give you ideas.
Alternative idea: AI companies should have a little checkbox saying "Please use 100% of the revenue from my subscription to fund safety research only." This avoids some of the problems with your idea and also introduces some new problems.
I think there is a non-infinitesimal chance that Anthropic would actually implement this.
Hard to know the full story, but for me this is a weak update against CAIS's judgment.
Right now CAIS is one of maybe two orgs (along with FLI) pushing for AI legislation that both (1) openly care about x-risk and (2) are sufficiently Respectable™* to get funding from big donors. This move could be an attempt to maintain CAIS's image as Respectable™. My guess is that it's the wrong move, but I have a lot of uncertainty. I think firing people due to public pressure is generally a bad idea, although I'm not confident that that's what actually happened.
*I hope my capitalization makes this clear, but to be explicit: I don't think Respectable™ is the same thing as "actually respectable". For example, MIRI is actually respectable, but isn't Respectable™.
Edit: I just re-read the CAIS tweet; from the wording, it is clear that CAIS meant one of two things: "We are bowing to public pressure" or "We didn't realize John Sherman would say things like this, and we consider it a fireable offense". Neither one is a good look IMO.
I think I disagree with this.
I think the main reason people are more likely to talk about those people's ideas is that their ideas are genuinely really good.
There will inevitably be some people who have good ideas and persuade lots of others, and from the outside this looks like deference.