Today, we’re announcing that Amazon will invest up to $4 billion in Anthropic. The agreement is part of a broader collaboration to develop reliable and high-performing foundation models.
(Thread continues from there with more details -- seems like a major development!)
I was going to reply to this comment, but after seeing the comments here, I've decided to abstain from sharing information on this specific post. The confidence people here have that this is bad news, rather than uncertain news, indicates very dangerous levels of incompetence, narrow-mindedness, and even unfamiliarity with race dynamics (e.g. how one of the main risks of accelerating AI, even early on, comes from creating executives and AI engineers who neurotically pursue AI acceleration).
NickLaing is just one person, and if one person doesn't have a complete picture, that's not a big deal; that's random error, and it happens to everyone. When a dozen or more people each have an incomplete picture and confidently take aggressive stances against Anthropic, that's a very serious issue. I now have a better sense of why Yudkowsky became apprehensive about writing about AI publicly, or why Dustin Moskovitz throws his weight behind Anthropic and insists that they're the good guys. If the people here would like to attempt to develop a perspective on race dynamics, they can start with the Yudkowsky-Christiano debate, which is balanced, or with Yudkowsky's List of Lethalities and Christiano's response. Johnswentworth just put up a great post relevant to the topic. Or just read Christiano's response or Holden's Cold Takes series; the important thing here isn't balance, it's having any perspective at all on race dynamics before you decide whether to tear into Anthropic's reputation.