
kokotajlod

3055 karma

Bio

Most of my stuff (even the stuff of interest to EAs) can be found on LessWrong: https://www.lesswrong.com/users/daniel-kokotajlo

Sequences (2)

Tiny Probabilities of Vast Utilities: A Problem for Longtermism?
What to do about short timelines?

Comments (412)

I agree that as time goes on states will take an increasing and eventually dominant role in AI stuff.

My position is that timelines are short enough, and takeoff is fast enough, that e.g. decisions and character traits of the CEO of an AI lab will explain more of the variance in outcomes than decisions and character traits of the US President.

  1. My understanding is that relatively few EAs are actual hardcore classic hedonist utilitarians. I think this is ~sufficient to explain why more haven't become accelerationists.
  2. Have you cornered a classic hedonist utilitarian EA and asked them? Have you cornered three? What did they say?

Thanks for discussing with me!

(I forgot to mention an important part of my argument, oops: you wouldn't have said "at least 100 years off," you would have said "at least 5000 years off," because you are anchoring to recent-past rates of progress rather than looking at how rates of progress increase over time and extrapolating. (This is just an analogy / data point, not the key part of my argument, but look at GWP growth rates as a proxy for tech progress rates: according to this, the GWP doubling time was something like 600 years back then, whereas it's more like 20 years now -- roughly 1.5 OOMs faster.) Saying "at least a hundred years off" in 1600 would be like saying "at least 3 years off" today, which I think is quite reasonable.)
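
To spell out that arithmetic (a rough sketch in Python; the 600-year and 20-year doubling times are the figures cited above, and everything else just follows from them):

```python
import math

# Rough illustration of the analogy above. The doubling times are the
# figures cited in the comment, used as a proxy for tech progress rates.
doubling_time_1600 = 600  # years for GWP to double, circa 1600
doubling_time_now = 20    # years for GWP to double, today

speedup = doubling_time_1600 / doubling_time_now  # ~30x faster
ooms_faster = math.log10(speedup)                 # ~1.5 orders of magnitude

# Rescale "at least 100 years off" (said in 1600) to today's rate of progress.
rescaled_years = 100 / speedup                    # ~3.3 years

print(f"~{speedup:.0f}x faster (~{ooms_faster:.1f} OOMs)")
print(f"100 years of 1600-era progress ~ {rescaled_years:.1f} years today")
```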

I agree with the claims "this problem is extremely fucking hard" and "humans aren't cracking this anytime soon" and I suspect Yudkowsky does too these days.

I disagree that nanotech has to predate taking over the world; that wasn't an assumption I was making or a conclusion I was arguing for, at any rate. I agree it is less likely that ASIs will make nanotech before takeover than that they will make nanotech while still on earth.

I like your suggestion to model a more earthly scenario but I lack the energy and interest to do so right now.

My closing statement is that I think your kind of reasoning would have been consistently wrong had it been used in the past -- e.g. in 1600 you would have declared so many things to be impossible on the grounds that you didn't see a way for the natural philosophers and engineers of your time to build them. Things like automobiles, flying machines, moving pictures, thinking machines, etc. It was indeed super difficult to build those things, it turns out -- 'impossible' relative to the R&D capabilities of 1600 -- but R&D capabilities improved by many OOMs, and the impossible became possible.

Cool. Seems you and I are mostly agreed on terminology then.

Yeah we definitely disagree about that crux. You'll see. Happy to talk about it more sometime if you like.

Re: galaxy vs. earth: The difference is one of degree, not kind. In both cases we have a finite amount of resources and a finite amount of time with which to do experiments. The proper way to handle this, I think, is to smear out our uncertainty over many orders of magnitude. E.g. the first OOM gets 5% of our probability mass, the second OOM gets 5% of the remaining probability mass, and so forth. Then we look at how many OOMs of extra research and testing (compared to what humans have done) a million ASIs would be able to do in a year, and compare it to how many OOMs extra (beyond that level) a galaxy worth of ASI would be able to do in many years. And crunch the numbers.
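
Here's a minimal sketch of what "crunch the numbers" could look like (the 5%-per-OOM decay is the illustrative rate from above; the OOM counts plugged in at the bottom are placeholders, not estimates I'm defending):

```python
# Smear uncertainty over orders of magnitude: each successive OOM of extra
# research/testing gets 5% of the remaining probability mass, so the chance
# that k extra OOMs suffice is 1 - 0.95**k.

def prob_enough(extra_ooms: float, p_per_oom: float = 0.05) -> float:
    """P(required extra research <= extra_ooms OOMs) under the geometric smear."""
    return 1 - (1 - p_per_oom) ** extra_ooms

# Placeholder inputs: extra OOMs of research/testing (beyond what humans have
# done so far) achievable by (a) a million ASIs in a year and (b) a galaxy of
# ASIs over many years.
ooms_million_asis_one_year = 6   # placeholder
ooms_galaxy_many_years = 50      # placeholder

print(f"P(a million ASIs in a year are enough): {prob_enough(ooms_million_asis_one_year):.0%}")
print(f"P(a galaxy of ASIs is enough):          {prob_enough(ooms_galaxy_many_years):.0%}")
```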

What if he just said "Some sort of super-powerful nanofactory-like thing?" 

He's not citing some existing literature that shows how to do it, but rather citing some existing literature which should make it plausible to a reasonable judge that a million superintelligences working for a year could figure out how to do it. (If you dispute the plausibility of this, what's your argument? We have an unfinished exchange on this point elsewhere in this comment section. Seems you agree that a galaxy full of superintelligences could do it; I feel like it's pretty plausible that if a galaxy of superintelligences could do it, a mere million also could do it.)

I think the tech companies -- and in particular the AGI companies -- are already too powerful for such an informal public backlash to slow them down significantly.

I said IMO. In context it was unnecessary for me to justify the claim, because I was asking whether or not you agreed with it.

I take it that not only do you disagree, you agree it's the crux? Or don't you? If you agree it's the crux (i.e. you agree that probably a million cooperating superintelligences with an obedient nation of humans would be able to make some pretty awesome self-replicating nanotech within a few years) then I can turn to the task of justifying the claim that such a scenario is plausible. If you don't agree, and think that even such a superintelligent nation would be unable to make such things (say, with >75% credence), then I want to talk about that instead.

(Re: people tipping off, etc.: I'm happy to say more on this but I'm going to hold off for now since I don't want to lose the main thread of the conversation.)
 

What part of the scenario would you dispute? A million superintelligences will probably exist by 2030, IMO; the hard part is getting to superintelligence at all, not getting to a million of them (since you'll probably have enough compute to make a million copies).

I agree that the question is about the actual scenario, not the galaxy. The galaxy is a helpful thought experiment though; it seems to have succeeded in establishing the right foundations: How many OOMs of various inputs (compute, experiments, genius insights) will be needed? Presumably a galaxy's worth would be enough. What about a solar system? What about a planet? What about a million superintelligences and a few years? Asking these questions helps us form a credence distribution over OOMs. 

And my point is that our credence distribution should be spread out over many OOMs, but since a million superintelligences would be capable of many more OOMs of nanotech research in various relevant dimensions than all humanity has been able to achieve thus far, it's plausible that this would be enough. How plausible? Idk I'm guessing 50% or so. I just pulled that number out of my ass, but as far as I can tell you are doing the same with your numbers.

I didn't say they'd covertly be building it. It would probably be significantly harder if covert; they wouldn't be able to get as many OOMs. But they'd probably still get some.

I don't think using humans would mean going at a human pace. The humans would just be used as actuators. I also think making a specialized automated lab might take less than a year, or else a couple of years, not more than a few years. (For a million superintelligences with an obedient human nation of servants, that is.)
