Meta’s frontier AI models are fundamentally unsafe. Since Meta AI has released the model weights publicly, any safety measures can be removed. Before Meta releases even more advanced models – which will have more dangerous capabilities – we call on it to take responsible release seriously and stop irreversible proliferation. Join us for a peaceful protest at Meta’s office in San Francisco at 250 Howard St at 4pm PT.
RSVP on Facebook[1] or through this form.
Let’s send a message to Meta:
- Stop irreversible proliferation of model weights. Meta’s models are not safe if anyone can remove the safety measures.
- Take AI risks seriously.
- Take responsibility for harms caused by your AIs.
All you need to bring is yourself and, if you want to make your own, a sign. I will lead a trip to SF from Berkeley, but anyone can join at the location. We will have a sign-making party before the demonstration; stay tuned for details. We'll go out for drinks afterward.
[1] I like the irony.
"They are both unsafe now for the things they can be used for and releasing model weights in the future will be more unsafe because of things the model could do."
I think using "unsafe" in a very broad way like this is misleading overall and generally makes the AI safety community look like miscalibrated alarmists. I do not want to end up in a position where, in 5 or 10 years' time, policy proposals aimed at reducing existential risk come with 5 or 10 years worth of baggage in the form of previous claims about model harms that have turned out to be false. I expect that the direct effects of the Llama models that have been released so far will be net positive by a significant margin (for all the standard reasons that open source stuff is net positive). Maybe you disagree with this, but a) it seems better to focus on the more important claim, for which there's a consensus in the field, and b) even if you're going to make both claims, using the same word ("unsafe") in these two very different senses is effectively a motte and bailey.
The policy you are suggesting is much further from "open source" than this is. It is entirely reasonable for Meta to claim that doing something closer to open source captures some proportion of the benefits of fully open source.