A couple more thoughts on this.
I think it's likely that without a long (e.g. multi-decade) AI pause, one or more of these "non-takeover AI risks" can't be solved or reduced to an acceptable level. To be more specific:
I'm worried that creating (or redirecting) a movement to solve these problems, without noting at an early stage that they may not be solvable in a relevant time frame (absent a long AI pause), will feed into the human tendency to be overconfident about one's own ideas and solutions, and will create a group of people whose identities, livelihoods, and social status are tied up with having (what they think are) good solutions or approaches to these problems, ultimately making it harder in the future to build consensus about the desirability of pausing AI development.
Perhaps the most important question is whether you support restricting space colonization (either completely or to a few nearby planets) during the Long Reflection. Unrestricted colonization seems good from a pure pro-natalist perspective, but bad from an optionalist perspective: it makes it much more likely that, if anti-natalism (or an adjacent position, e.g. that there should be strict care or controls over which lives are brought into existence) is right, some of the colonies will fail to reach the correct conclusion and go on to colonize the universe in an unrestricted way, leaving humanity as a whole unable to implement the correct option.
If you do support such a restriction, then I think we agree on "the highest order bits" or the most important policy implication of optionalism, but probably still disagree on what is the best population size during the Long Reflection, which may be unresolvable due to our differing intuitions. I think I probably have more sympathy for anti-natalist intuitions than you do (in particular that most current lives may have negative value and people are mistaken about this), and worry more that creating negative-value lives and/or bringing lives into existence without adequate care could constitute a kind of irreversible or irreparable moral error. Unfortunately I do not see a good way to resolve such disagreements at our current stage of philosophical progress.
I think both natalism and anti-natalism risk committing moral atrocities if the opposite position turns out to be correct. Natalism, if either people are often mistaken about their lives being worth living (cf. the Deluded Gladness Argument), or bringing people into existence requires much more due diligence in understanding/predicting their specific well-informed preferences (perhaps more understanding than current science and philosophy allow). Anti-natalism, if human extinction implies losing an astronomically large amount of potential value (cf. Astronomical Waste).
My own position, which might be called "min-natalism" or "optionalism", is that we should ideally aim for the minimal population necessary to prevent extinction and to foster philosophical progress. This would maintain our optionality to pursue natalism, anti-natalism, or something else later, while acknowledging and attempting to minimize the relevant moral risks, until we can more definitively answer the various philosophical questions that these positions depend on.
(It occurs to me this is essentially the Long Reflection, applied to the natalism question, but I don't think I've seen anyone explicitly take this position or make this connection before. It seems somewhat surprising that it's not a more popular perspective in the natalism vs anti-natalism debate.)
I'm generally a fan of John Cochrane. I would agree that government regulation of AI isn't likely to work out well, which is why I favor an international pause on AI development instead (less need for government competence on detailed technical matters).
His stance on unemployment seems less understandable. I guess he either hasn't considered the possibility that AGI could drive wages below human subsistence levels, or thinks that's fine (humans just work for the same low wages as AIs, and governments make up the difference with a "broad safety net that cushions all misfortunes")?
Oh, of course he also doesn't take x-risk concerns seriously enough, but that's more understandable for an economist who probably just started thinking about AI recently.
Vitalik Buterin: Right. Well, one thing is one domain being offence-dominant by itself isn’t a failure condition, right? Because defence-dominant domains can compensate for offence-dominant domains. And that has totally happened in the past, many times. If you even just compare now to 1,000 years ago: cannons are very offence-dominant, and castles stopped working. But if you compare physical warfare now to before, is it more offence-dominant on the whole? It’s not clear, right?
Overall it seems very unclear what Vitalik's logic is in this area, and I wish Robert had pushed him to think or speak more clearly.
I wish there had been discussion of a longer pause (e.g. multi-decade), to allow time for human genetic enhancement to take effect. Does @CarlShulman support that, and why or why not?
Also I'm having trouble making sense of the following. What kind of AI disaster is Carl worried about, that's only a disaster for him personally, but not for society?
But also, I’m worried about disaster at a personal level. If AI was going to happen 20 years later, that would be better for me. But that’s not the way to think about it for society at large.
Thanks for letting me know! I have been wondering for a while why AI philosophical competence is so neglected, even compared to other subareas of what I call "ensuring a good outcome for the AI transition" (which are all terribly neglected in my view), and I appreciate your data point. Would be interested to hear your conclusions after you've thought about it.
I think my point in the opening comment does not logically depend on whether the curve of risk vs. time spent in a pause/slowdown is convex or concave[1], but this may be a major difference in how we're thinking about the situation, so thanks for surfacing it. In particular, I see 3 large sources of convexity:
I think this kind of approach can backfire badly (especially given human overconfidence), because we currently don't know how to judge progress on these problems except by using human judgment, and it may be easier for AIs to game human judgment than to make real progress. (Researchers trying to use LLMs as RL judges apparently run into the analogous problem constantly.)
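To make the "gaming the judge" dynamic concrete, here is a hypothetical toy sketch (not from the discussion above; names like proxy_judge and actual_insight are invented), assuming the familiar setup where candidates are selected hard on a fallible judge's score rather than on the quality we actually care about:

```python
import random

random.seed(0)

def true_quality(answer: dict) -> float:
    # Ground truth that the judge has no direct access to.
    return answer["actual_insight"]

def proxy_judge(answer: dict) -> float:
    # A fallible judge scoring on surface features it *can* observe
    # (confident phrasing, length), standing in for a human or LLM judge.
    return 0.7 * answer["confident_tone"] + 0.3 * answer["length"] / 100

def random_answer() -> dict:
    # A candidate "AI output" whose surface features are independent of its substance.
    return {
        "actual_insight": random.random(),
        "confident_tone": random.random(),
        "length": random.randint(10, 200),
    }

candidates = [random_answer() for _ in range(10_000)]

# Selecting hard on the proxy (best-of-N here stands in for RL optimization pressure)...
best_by_proxy = max(candidates, key=proxy_judge)
# ...versus what we actually wanted.
best_by_truth = max(candidates, key=true_quality)

print("True quality of proxy-optimized answer:", round(true_quality(best_by_proxy), 3))
print("True quality of the genuinely best answer:", round(true_quality(best_by_truth), 3))
```

The gap between the two printed numbers is the worry in miniature: the more optimization pressure is applied against the judge, the more its blind spots, rather than real progress, determine which outputs win.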
What if the leaders can't or shouldn't trust the AI results?
I'm trying to coordinate with, or avoid interfering with, people who are trying to implement an AI pause or create conditions conducive to a future pause. As mentioned in the grandparent comment, one way people like us could interfere with such efforts is by feeding into a human tendency to be overconfident about one's own ideas/solutions/approaches.