Anthropic just published their submission to the Request for Information for a new US AI Action Plan (OSTP RFI).
It's 10 pages total and focuses on: strengthening the US AISI (and, more broadly, government capacity to test models); strengthening export controls, with some very concrete proposals; enhancing lab security through standards; scaling up energy infrastructure (calling for 50 GW of new power capacity, roughly 4% of the entire US grid); accelerating AI adoption in government; and improving government sensemaking around economic impacts.
I recommend reading it. It's quite insightful regarding the priorities of Anthropic's policy team right now.
Completely agree that climate analysis should be a huge part of the scaling-AGI equation. I don't buy the "But AGI might solve climate" argument. It might solve everything, but the uncertainty is so huge that I don't think we should factor it into any equation. We should calculate the "knowns" and largely ignore the wildly unpredictable "unknowns" here.