In no particular order, here's a collection of Twitter screenshots of people attacking AI Safety. Many of them are poorly reasoned, and some are simply ad hominem. Still, tweets like these are influential and circulate widely among AI capabilities researchers.
[Screenshots 1–5]
(The fifth screenshot wasn't actually a critique, but it did convey useful information about the state of AI Safety's optics.)
[Screenshots 6–20]
Conclusions
I originally intended to end this post with a call to action, but we mustn't propose solutions immediately. So, in lieu of a specific proposal, I'll just ask: can the optics of AI safety be improved?
"Still, these types of tweets are influential, and are widely circulated among AI capabilities researchers." I'm kind of skeptical of this.
Outside of Giada Pistilli and Talia Ringer I don't think these tweets would appear on the typical ML researcher timeline, they seem closer to niche rationality/EA shitposting.
Whether the typical ML person would think alignment/AI x-risk is really dumb is a different question, and I don't really know the answer to that one!
Although, to be clear, it's still nice to have a bunch of different critical perspectives! This post exposed me to some people I didn't know of.