Seth Herd

Copied from my LW comment, since this is probably more of an EAF discussion:

This is really important pushback. This is the discussion we need to be having.

Most people who are trying to track this believe China has not been racing toward AGI up to this point. Whether they embark on that race is probably being determined now - and based in no small part on the US's perceived attitude and intentions.

Any calls for racing toward AGI should be closely accompanied with "and of course we'd use it to benefit the entire world, sharing the rapidly growing pie". If our intentions are hostile, foreign powers have little choice but to race us.

And we should not be so confident we will remain ahead if we do race. There are many routes to progress other than sheer scale of pretraining. The release of DeepSeek r1 today indicates that China is not so far behind. Let's remember that while the US "won" the race for nukes, our primary rival had nukes very soon after - by stealing our advancements. A standoff between AGI-armed US and China could be disastrous - or navigated successfully if we take the right tone and prevent further proliferation (I shudder to think of Putin controlling an AGI, or many potentially unstable actors).

This discussion is important, so it needs to be better. This pushback is itself badly flawed. In calling out the report's lack of references, it provides almost none itself. Citing a 2017 official statement from China seems utterly irrelevant to guessing their current, privately held position. Almost everyone has updated massively since 2017. (Edit: it's good that this piece does note that public statements are basically meaningless in such matters.) If China is "racing toward AGI" as an internal policy, they probably would have adopted that policy recently. (I doubt that they are racing yet, but it seems entirely possible they'll start now in response to the US push to do so, and to their perspective on the US as a dangerous aggressor on the world stage. But what do I know - we need real experts on China and international relations.)

Pointing out the technical errors in the report seems somewhere between irrelevant and harmful. You can understand very little of the details and still understand that AGI would be a big, big deal if it arrives soon - and the many experts predicting short timelines could be right. Nitpicking the technical expertise of people who are probably essentially correct in their assessment just sets a bad tone of fighting and arguing instead of having a sensible discussion.

And we desperately need a sensible discussion on this topic.

I completely agree. 

But others may not, because most humans aren't longtermists or utilitarians. So I'm afraid arguments like this won't sway public opinion much at all. People like progress because it will get them and their loved ones (children and grandchildren, whose futures they can imagine) better lives. They barely care at all whether humanity ends after their grandchildren's lives (to the extent they can even think about it).

This is why I believe that most arguments against AGI x-risk are really based on differing timelines. People like to think that humans are so special we won't surpass them for a long time. And they mostly care about the future for their loved ones.

I think the point is making this explicit and having a solid exposition to point to when saying "progress is no good if we all die sooner!"

I don't think it's worth the effort; I'd personally be just as pleased with one snapshot of the participants in conversation as I would be with a whole video. The point of podcasts for me is that I can do something else while still taking in something useful for my alignment work. But I am definitely a tone-of-voice attender over a facial-expression attender, so others will doubtless get more value out of it.

Yes, but pursuing excellence also costs time that could be spent elsewhere, and time/results tradeoffs are often highly nonlinear. 

The perfect is the enemy of the good. It seems to me that the most common LW/EA personality already pursues excellence more than is optimal.

For more, see my LW comment

Excellent work.

To summarize one central argument in briefest form:
 

Aschenbrenner's conclusion in Situational Awareness is wrong in overstating the claim.

He claims that treating AGI as a national security issue is the obvious and inevitable conclusion for those who understand the enormous potential of AGI development in the next few years. But Aschenbrenner doesn't adequately consider the possibility of treating AGI primarily as a threat to humanity instead of a threat to the nation or to a political ideal (the free world). If we considered it primarily a threat to humanity, we might be able to cooperate with China and other actors to safeguard humanity.

I think this argument is straightforwardly true. Aschenbrenner does not adequately consider alternative strategies, so his claim that his conclusion is the inevitable consensus is false.

But the opposite isn't an inevitable conclusion, either.

I currently think Aschenbrenner is more likely correct about the best course of action. But I am highly uncertain. I have thought hard about this issue for many hours both before and after Aschenbrenner's piece sparked some public discussion. But my analysis, and the public debate thus far, are very far from conclusive on this complex issue.

This question deserves much more thought. It has a strong claim to being the second most pressing issue in the world at this moment, just behind technical AGI alignment.

This post can be summarized as "Aschenbrenner's narrative is highly questionable". Of course it is. From my perspective, having thought deeply about each of the issues he's addressing, his claims are also highly plausible. To "just discard" this argument because it's "questionable" would be very foolish. It would be like driving with your eyes closed once the traffic gets confusing.

This is the harshest response I've ever written. To the author, I apologize. To the EA community: we will not help the world if we fall back on vibes-based thinking and calling things we don't like "questionable" to dismiss them.  We must engage at the object level. While the future is hard to predict, it is quite possible that it will be very unlike the past, but in understandable ways. We will have plenty of problems with the rest of the world doing its standard vibes-based thinking and policy-making. The EA community needs to do better.

There is much to question and debate in Aschenbrenner's post, but it must be engaged with at the object level. I will do that, elsewhere.

 

On the vibes/ad-hominem level, note that Aschenbrenner also recently wrote "Nobody's on the ball on AGI alignment". He appears to believe (there and elsewhere) that AGI is a deadly risk, and that we might very well all die from it. He might be out to make a quick billion, but he's also serious about the risks involved.

The author's object-level claim is that they don't think AGI is imminent. Why? How sure are you? How about we take some action, or at least think about the possibility, just in case you might be wrong and the many people close to its development might be right?

Agreed. That juxtaposition is quite suspicious.

Unfortunately, most of Aschenbrenner's claims seem highly plausible. AGI is a huge deal, it could happen very soon, and the government is very likely to do something about it before it's fully transformative. Whether spending tons of money on his proposed Manhattan Project is the right move is highly debatable, and we should debate it.

I think the scaling hypothesis is false, and that we'll get to AGI quite soon anyway, by other routes. The better scaling works, the faster we'll get there, but that's gravy. We have all of the components of a human-like mind today; putting them together is one route to AGI.
