
MichaelDickens

5938 karma · Joined
mdickens.me

Bio

I do independent research on EA topics. I write about whatever seems important, tractable, and interesting (to me).

I have a website: https://mdickens.me/. Much of the content there gets cross-posted to the EA Forum, but I also write about some non-EA stuff like [investing](https://mdickens.me/category/finance/) and [fitness](https://mdickens.me/category/fitness/).

My favorite things that I've written: https://mdickens.me/favorite-posts/

I used to work as a software developer at Affirm.

Sequences (1)

Quantitative Models for Cause Selection

Comments (838)

> In that sense, there is still a bit of elitism in the sense that some of the ideas of the sorta co-founders of the movements, like Eliezer Yudkowsky, Nick Bostrom, Will MacAskill and such, are likely to be treated with notably more deference.

I think I disagree with this.

  1. It has been joked that disagreeing with Eliezer is the favorite activity of LessWrongers.
  2. Will MacAskill has written some very well-reasoned and well-received EAF posts that were nonetheless met with strong disagreement, for example "Are we living at the most influential time in history?" I think this is a sign of good epistemics: people recognized that post as good and interesting, but they didn't defer much; they mostly disagreed with its conclusion.

I think the main reason people are more likely to talk about those people's ideas is that their ideas are genuinely really good.

There will inevitably be some people who have good ideas, lots of people will be persuaded by those ideas, and from the outside this looks like deference.

The claim: Working at a scaling lab is the best way to gain expertise in machine learning [which can then be leveraged into solving the alignment problem].

Coincidentally, earlier today I was listening to an interview with Zvi in which he said:

> If you think there is a group of let’s say 2,000 people in the world, and they are the people who are primarily tasked with actively working to destroy it: they are working at the most destructive job per unit of effort that you could possibly have. I am saying get your career capital not as one of those 2,000 people. That’s a very, very small ask. I am not putting that big a burden on you here, right? It seems like that is the least you could possibly ask for in the sense of not being the baddies.

(Zvi was talking specifically about working on capabilities at a frontier lab, not on alignment.)

Funny, I thought the same thing but I voted on the opposite end of the spectrum. I suppose "how familiar are you with the voting guidelines?" is pretty open to interpretation.

Some people are raising awareness of x-risks on social media, for example that Kurzgesagt video, which was funded by Open Philanthropy[1]. There's also Doom Debates, Robert Miles's YouTube channel, and some others. There are some media projects on Manifund too, for example this one.

If your question is why people aren't doing more of that sort of thing, then yeah, that's a good question. If I were the AI Safety Funding Czar, I would allocate a bigger budget to media projects (both social media and traditional media).

There are two arguments against giving marginal funding to media projects that I actually believe:

  1. My guess is that public protests are more cost-effective right now, because (a) they're more neglected, (b) they naturally generate media attention, and perhaps (c) they are more dramatic, which leads people to take the AI x-risk problem more seriously.
  2. I also expect some kinds of policy work to be more cost-effective. There's already a lot of policy research happening, but I think we need more (a) people talking honestly to policymakers about x-risk and (b) people writing legislation targeted at reducing x-risk. Policy has the advantage that you don't need to change as many minds to have a large impact, but it has the disadvantage that those minds are particularly hard to change: a huge chunk of their job is listening to people saying "please pay attention to my issue", so you have a lot of competition.

There are other arguments that I don't believe, although I expect some people have arguments that have never even occurred to me. The main arguments I can think of that I don't find persuasive are:

  1. It's hopeless to try to make AI safer via public opinion / the people developing AI don't care about public opinion.
  2. We should mainly fund technical research instead, e.g. because the technical problems in AI safety are more tractable.
  3. Public-facing messages will inevitably be misunderstood and distorted and we will end up in a worse place than where we started.
  4. If media projects succeed, then we will get regulations that slow down AI development, but we need to go as fast as possible to usher in the glorious transhumanist future or to beat China or whatever.

  1. I don't know for sure that that specific video was part of the Open Philanthropy grant, but based on its content I'm guessing it was. ↩︎

Zvi's Big Nonprofits Post lists a bunch of advocacy orgs.

Last year I wrote about every policy/advocacy org that I could find at the time, although the list is now somewhat out of date, e.g. The Midas Project (which Ben West mentioned) did not exist yet.

I was in a similar position to you: I wanted to donate to AI policy or advocacy, but as far as I know there aren't any grantmakers focusing on advocacy. You shouldn't necessarily trust that I did a good job of evaluating organizations, but maybe my list can give you ideas.

Alternative idea: AI companies should have a little checkbox saying "Please use 100% of the revenue from my subscription to fund safety research only." This avoids some of the problems with your idea and also introduces some new problems.

I think there is a non-infinitesimal chance that Anthropic would actually implement this.

Hard to know the full story, but for me this is a weak update against CAIS's judgment.

Right now CAIS is one of maybe two orgs (along with FLI) pushing for AI legislation that both (1) openly care about x-risk and (2) are sufficiently Respectable™* to get funding from big donors. This move could be an attempt to maintain CAIS's image as Respectable™. My guess is it's the wrong move, but I have a lot of uncertainty. I think firing people due to public pressure is generally a bad idea, although I'm not confident that that's what actually happened.

*I hope my capitalization makes this clear, but to be explicit: I don't think Respectable™ is the same thing as "actually respectable". For example, MIRI is actually respectable, but isn't Respectable™.

Edit: I just re-read the CAIS tweet; from the wording, it is clear that CAIS meant one of two things: "We are bowing to public pressure" or "We didn't realize John Sherman would say things like this, and we consider it a fireable offense". Neither one is a good look IMO.

The RSP specifies that CBRN-4 and AI R&D-5 both require ASL-4 security. Where is ASL-4 itself defined?
