Today, we’re announcing that Amazon will invest up to $4 billion in Anthropic. The agreement is part of a broader collaboration to develop reliable and high-performing foundation models.
(The thread continues from there with more details; this seems like a major development!)
This assumes that Anthropic is net positive even in isolation. They may be doing some alignment research, but they are also pushing the capabilities frontier. They are either corrupted by money and power, or they hubristically believe that they can actually save the world by following their strategy, rather than just end it. Either way, they are happy to gamble hundreds of millions of lives (in expectation) without any democratic mandate. Their "responsible scaling" policy is anything but responsible; at this stage, when AGI is on the horizon and alignment is so far from being solved, the phrase is basically an oxymoron.