Anthropic just published their submission to the Request for Information for a new US AI Action Plan (OSTP RFI).
It's 10 pages total and focuses on strengthening the US AISI (and broadly government capacity to test models), strengthening export controls (with some very concrete proposals), enhancing lab security through standards, scaling up energy infrastructure (asking for building infrastructure for 50 GW of power, or about 4% of the entire US grid capacity), accelerating AI adoption in government, and improving government sensemaking around economic impacts.
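For scale, the "4% of the entire US grid" figure can be sanity-checked with back-of-the-envelope arithmetic. The ~1,250 GW total for US installed generating capacity below is my own rough assumption, not a number from the submission:

```python
# Rough sanity check of the "50 GW ~= 4% of US grid capacity" claim.
requested_gw = 50        # capacity the submission asks to be built
us_capacity_gw = 1_250   # assumed total US installed generating capacity (approximate)

share_pct = requested_gw / us_capacity_gw * 100
print(f"{share_pct:.0f}%")  # → 4%
```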
I recommend reading it. It's quite insightful regarding the priorities of Anthropic's policy team right now.
Nowhere in their submission do they place restrictions on what kinds of energy capacity they want built. They are asking for a 4% increase in U.S. energy capacity, which means a serious amount of additional CO2 emissions if that capacity isn't built renewably. And that's just what they're asking for now: if they're serious about building and scaling AGI, they will be asking for much bigger increases later, without a strong precedent of carbon-neutrality to back it up. That seems really bad?
Also, to pre-empt an objection: the energy capacity has to come before you build an AI powerful enough to 'solve climate change'. So if they fail at that, the downside is that they have made the problem significantly worse. I think the environmental downsides of attempting to build AGI should be a meaningful part of one's calculus.
I don't know. My guess is that they give very slim odds to the Trump admin caring about carbon neutrality, and judge the benefit of mentioning it in their submission to be close to zero (other than demonstrating to others that they stand by their principles).
On the minus side, such a mention risks a reaction that would impose significant costs on their AI safety/security asks. So overall, I can see them concluding that including it does not make sense for their strategy. I'm not endorsing that calculus, just conjecturing.