Today, we’re announcing that Amazon will invest up to $4 billion in Anthropic. The agreement is part of a broader collaboration to develop reliable and high-performing foundation models.
(Thread continues from there with more details; this looks like a major development.)
I don't think it's at all obvious whether this development is good or bad (though I lean towards bad), but neither here nor on LessWrong have you made a coherent case for your position. Your concept of "redundancy" among AI labs is unclear, and its implied connection to safety is tenuous.