This is a special post for quick takes by Ian Turner. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Random thought: does the idea of an explosive takeoff of intelligence assume that the alignment problem is solvable?

If the alignment problem isn’t solvable, then an AGI, in creating an ASI, would face the same dilemma humans do: the ASI wouldn’t necessarily share the AGI’s goals, could disempower it, instrumental convergence, all the usual concerns.

I suppose one counterargument is that the AGI rationally shouldn’t create an ASI for these reasons, but, like humans, might do so anyway due to competitive/racing dynamics: whichever AGI doesn’t create an ASI will be left behind, etc.

Not if the AI increases intelligence via speedups or other methods that don’t change its goals.
