"The new influence web is pushing the argument that AI is less an existential danger than a crucial business opportunity, and arguing that strict safety rules would hand America’s AI edge to China. It has already caused key lawmakers to back off some of their more worried rhetoric about the technology.
... The effort, a loosely coordinated campaign led by tech giants IBM and Meta, includes wealthy new players in the AI lobbying space such as top chipmaker Nvidia, as well as smaller AI startups, the influential venture capital firm Andreessen Horowitz and libertarian billionaire Charles Koch.
... Last year, Rep. Ted Lieu (D-Calif.) declared himself “freaked out” by cutting-edge AI systems, also known as frontier models, and called for regulation to ward off several scary scenarios. Today, Lieu co-chairs the House AI Task Force and says he’s unconvinced by claims that Congress must crack down on advanced AI.
“If you just say, ‘We’re scared of frontier models’ — okay, maybe we should be scared,” Lieu told POLITICO. “But I would need something beyond that to do legislation. I would need to know what is the threat or the harm that we’re trying to stop.”
... After months of conversations with IBM and its allies, Rep. Jay Obernolte (R-Calif.), chair of the House AI Task Force, says more lawmakers are now openly questioning whether advanced AI models are really that dangerous.
In an April interview, Obernolte called it “the wrong path” for Washington to require licenses for frontier AI. And he said skepticism of that approach seems to be spreading.
“I think the people I serve with are much more realistic now about the fact that AI — I mean, it has very consequential negative impacts, potentially, but those do not include an army of evil robots rising up to take over the world,” said Obernolte."
[brainstorming]
It may be useful to consider the percentage of worldwide net private wealth that would be lost if the US government committed to certain extremely strict AI regulation. Call that percentage the "wealth impact factor of potential AI regulation" (WIFPAIR). Other things being equal, in worlds where WIFPAIR is higher, we can expect more resources to be devoted to anti-AI-regulation lobbying efforts (and thus EA-aligned people probably have less influence over what the US government does w.r.t. AI regulation).
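As a sketch, the WIFPAIR defined above is just a ratio expressed as a percentage. The function and both input figures below are hypothetical, introduced only to illustrate the definition; they are not estimates of any real quantity.

```python
def wifpair(wealth_lost_usd: float, total_private_wealth_usd: float) -> float:
    """Wealth impact factor of potential AI regulation, as a percentage:
    the share of worldwide net private wealth that would be lost if the
    US government committed to certain extremely strict AI regulation.
    """
    return 100.0 * wealth_lost_usd / total_private_wealth_usd

# Purely hypothetical numbers for illustration:
# $2 trillion lost out of $450 trillion worldwide net private wealth.
print(f"{wifpair(2e12, 450e12):.2f}%")  # prints "0.44%"
```

The point of the definition is comparative rather than numerical: the claim is only that lobbying intensity should scale with this percentage, whatever its actual value turns out to be.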
WIFPAIR may become much higher in the future, and convincing the US government to establish effective AI regulation may therefore become much harder (if it is not already virtually impossible today).
If at some future point WIFPAIR becomes sufficiently high, anti-AI-regulation efforts may become at least as intense as the anti-communist efforts in the US during the 1950s.