Meta’s frontier AI models are fundamentally unsafe. Since Meta AI has released the model weights publicly, any safety measures can be removed. Before it releases even more advanced models – which will have more dangerous capabilities – we call on Meta to take responsible release seriously and stop irreversible proliferation. Join us for a peaceful protest at Meta’s office in San Francisco at 250 Howard St at 4pm PT.
RSVP on Facebook[1] or through this form.
Let’s send a message to Meta:
- Stop irreversible proliferation of model weights. Meta’s models are not safe if anyone can remove the safety measures.
- Take AI risks seriously.
- Take responsibility for harms caused by your AIs.
All you need to bring is yourself, and a sign if you want to make your own. I will lead a trip to SF from Berkeley, but anyone can join at the location. We will have a sign-making party before the demonstration; stay tuned for details. We'll go out for drinks afterward.
[1] I like the irony.
I'm also heartened by recent polling, and I spend a lot of time these days thinking about how to argue for the importance of existential risks from artificial intelligence.
I'm guessing the main difference in our perspective here is that you see including existing harms in public messaging as "hiding under the banner" of another issue. In my mind, (1) existing harms are closely related to the threat models for existential risks (i.e. how do we get these systems to do the things we want and not do the other things); and (2) I think it's just really important for advocates to try to build coalitions between different interest groups with shared instrumental goals (e.g. building voter support for AI regulation). I've seen a lot of social movements devolve into factionalism, and I see the early stages of that happening in AI safety, which I think is a real shame.
Like, one thing that would really help the safety situation is if frontier models were treated like nuclear power plants and couldn't just be deployed at a single company's whim without meeting a laundry list of safety criteria (both because of the direct effects of the safety criteria, and because meeting such criteria literally just buys us some time). If it is the case that X-risk interest groups can build power and increase the chance of passing legislation by allying with others who want to include (totally legitimate) concerns like respecting intellectual property in that list of criteria, I don't see that as hiding under another's banner. I see it as building strategic partnerships.
Anyway, this all goes a bit further than the point I was making in my initial comment, which is that I think the public isn't very sensitive to subtle differences in messaging, and that's okay because those subtle differences matter much more when you are drafting legislation than when you are generally building public pressure.