
A few observations from the last few weeks:

  • On March 22, FLI published an open letter calling for a six-month pause on training AI systems more powerful than GPT-4.
  • On March 29, Eliezer Yudkowsky published a piece in TIME calling for an indefinite moratorium.
  • To our knowledge, none of the top AI organizations (OpenAI, DeepMind, Anthropic) have released a statement responding to these pieces.

We offer a request to AGI organizations: Determine what you think about these requests for an AI pause (possibly with uncertainties acknowledged), write up your beliefs in some form, and publicly announce your position. 

We believe statements from labs could improve discourse, coordination, and transparency on this important and timely topic. 

Discourse: We believe labs are well-positioned to contribute to the dialogue around whether (or how) to slow AI progress, making it more likely that society reaches true and useful positions.

Coordination: Statements from labs could make coordination more likely. For example, lab A could say “we would support a pause under X conditions with Y implementation details”. Alternatively, lab B could say “we would be willing to pause if lab C agreed to Z conditions.”

Transparency: Transparency helps others build accurate models of labs, their trustworthiness, and their future actions. This is especially important for labs that seek to receive support from specific communities, policymakers, or the general public. You have an opportunity to show the world how you reason about one of the most important safety-relevant topics. 

We would be especially excited about statements that are written or endorsed by lab leadership. We would also be excited to see labs encourage employees to share their (personal) views on the requests for moratoriums. 

Sometimes, silence is the best strategy. There may be attempts at coordination that are less likely to succeed if people transparently share their worldviews. If this is the case, we request that AI organizations make this clear (example: "We have decided to avoid issuing public statements about X for now, as we work on Y. We hope to provide an update within Z weeks.")

At the time of this post, the FLI letter has been signed by seven DeepMind research scientists and engineers, and probably by zero OpenAI research scientists and zero Anthropic employees.


Comments

I definitely agree that the current silence makes a runaway, fast, dirty AI development scenario much more likely and leaves the space much tenser.

Additionally, these labs may have thought of concerns from a business or at-scale research point of view that we haven't (hearing these would really help an already strained, resource-scarce alignment field figure out what to prioritize!).


Ultimately, I think what is stopping these labs is PR concerns and a fear of "tainting their own field."
