Thane Ruthenis
I could see claims like "our AI can now design bioweapons" or "our AI is capable of massively speeding up research" being brushed aside as AI hype, given the steady stream of people overhyping AI capabilities.

Which means that if an AGI lab wants to make its employees less able to whistleblow successfully, it should arrange for fake "leaks" about intelligence explosions to be happening all the time, crying wolf and poisoning the epistemic environment.

Is this what that kerfuffle was about? See also this, and the rest of that account's activity.

Those may or may not be just random shitposters, but I now see a clear motivation for OpenAI to actually run those psyops.

What are some candidates for x that would convince interested members of the public?

I think the whistleblower would need to grab some actual proof of their claims: outputs of the model's R&D or bioweapons research, or logs displaying the model's capabilities as it autonomously researches stuff...

A copy of internal correspondence discussing all of this might also work, if there's sufficient volume of it. Or copies of internal papers which confirm/enable the breakthroughs.