
Pablo Villalobos

206 karma · Madrid, Spain

Bio

Research assistant at Epoch

Comments (14)

Hi, thank you for your post, and I'm sorry to hear about your (and others') bad experience in EA. However, I think if your experience in EA has mostly been in the Bay Area, you might have an unrepresentative perspective on EA as a whole. Most of the worst incidents of the type you mention that I've heard about in EA have taken place in the Bay Area; I'm not sure why.

I've mostly been involved in the Western European and Spanish-speaking EA communities, and as far as I know there have been far fewer incidents here. Of course, this might just be because these communities are smaller, or I might simply not have heard of incidents that have taken place. Maybe it's my perspective that's unrepresentative.

In any case, if you haven't tried it yet, consider spending more time in other EA communities.

I'm personally still reserving judgment until the dust settles. I think in this situation, given the animosity towards SBF from customers, investors, etc., there are clear incentives to speak out if you believe there was fraud, and to stay quiet if you believe it was an honest (even if terrible) mistake. So we're likely seeing biased evidence.

Still, a mistake of this magnitude seems at the very least grossly negligent. You can't preserve both the integrity and the competence of SBF after this. And I agree that it's hard to know whether you're competent enough to do something until you do it and succeed or fail. But then the lesson to learn is something like "remain constantly vigilant, seek feedback from the people who know most about what you are trying to do", etc.

Also, loyalty is only as good as your group is. You can't use a loyalty argument to defend a member of your group when they come under suspicion of malfeasance. You might appeal to the loyalty of those who knew them best and didn't spot any signs of bad behavior before, but that's only a handful of people.

Well, I also think that the core argument is not really valid. Engagement does not require conceding that the other person is right.

The way I understand it, the core of the argument is that AI fears are based on taking a pseudo-trait like "intelligence" and extrapolating it to a "super" regime. The author claims that this is philosophical nonsense and thus there's nothing to worry about. I reject that AI fears are based on those pseudo-traits.

AI risk is not, in principle, about intelligence or agency. Brute-force search, in sufficient quantity, is enough to be catastrophic. An example of this is the "Outcome Pump". But if you want a less exotic example, consider evolution. Evolution is not sentient, not intelligent, and not an agent (unless your definitions of those are very broad). And yet, evolution from time to time makes human civilization stumble by coming up with deadly, contagious viruses.

Now, viruses evolve to make more copies of themselves, so it is quite unlikely that an evolved virus will kill 100% of the population. But if virus evolution didn't have that life-preserving property, and if it happened 1000 times faster, then we would all die within months.

The analogy with AI is: suppose we spend 10^100000 FLOPs on a brute force search for industrial robot designs. We simulate the effects of different designs on the current world and pick the one whose effects are closest to our target goal. The final designs will be exceedingly good at whatever the target of the search is, including at convincing us that we should actually build the robots. Basically, the moment someone sees those designs, humanity will have lost some control over its future. In the same way that, once SARS-CoV-2 entered a single human body, the future of humanity suddenly became much more dependent on our pandemic response.
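To make the shape of that argument concrete, here is a minimal sketch of the kind of search I have in mind. It is purely illustrative, and every name in it (enumerate_designs, simulate_world_effects, distance_to_target) is a hypothetical placeholder, not a reference to any real system:

```python
# Toy illustration only: a brute-force search that is neither intelligent
# nor agentic, yet whose output is extremely optimized for its target.
# All three callables are hypothetical placeholders.

def brute_force_design_search(enumerate_designs, simulate_world_effects, distance_to_target):
    """Return the candidate design whose simulated effects land closest to the target."""
    best_design = None
    best_score = float("inf")
    for design in enumerate_designs():            # astronomically many candidates
        effects = simulate_world_effects(design)  # predicted effects on the current world
        score = distance_to_target(effects)       # how far those effects are from the goal
        if score < best_score:
            best_design, best_score = design, score
    return best_design
```

Nothing in that loop has beliefs or goals in any psychological sense, yet with enough compute behind the enumeration, the returned design is exactly the kind of artifact whose mere existence shifts control over the future.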

In practice we don't have that much computational power. That's why intelligence becomes a necessary component of this, because intelligence vastly reduces the search space. Note that this is not some "pseudo-trait" built on human psychology. This is intelligence in the sense of compression: how many bits of evidence you need to complete a search. It is a well-defined concept with clear properties.
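To gesture at how that can be made precise (this is my own gloss on the "bits" framing, not something the post I'm replying to relies on): if a search process reliably steers outcomes into a target set T within a space S of possibilities, it exerts

log₂(|S| / |T|) bits of optimization,

and a more intelligent searcher is roughly one that needs fewer bits of evidence (fewer observations or trials) to achieve the same number of bits of optimization, rather than paying for them with brute-force compute.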

Current AIs are not very intelligent by this measure. And maybe one day they will be. Maybe it would take some paradigm different from Deep Learning to achieve this level of intelligence. That is an empirical question that we'll need to resolve. But at no point does SIILTBness play any role in this.

Sufficiently powerful search is dangerous even if there is nothing it is like to be a search process. And 'powerful' here is a measure of how many states you visit and how efficiently you do it. Evolution itself is a testament to the power of search. It is not philosophical nonsense, but the most powerful force on Earth for billions of years.

(Note: the version of AI risk I have explored here is a particularly 'hard' version, associated with the people who are most pessimistic about AI, notably MIRI. There are other versions that do rest on something like agency or intelligence.)

The objection that I thought was valid is that current generative AIs might not be that dangerous. But the author himself acknowledges that training situated and embodied AIs could be dangerous, and it seems clear that the economic incentives to build that kind of AI are strong enough that it will happen eventually (and we are already training AIs in virtual environments such as Minecraft; is that situated and embodied enough?).

Upvoted because I think the linked post raises an actually valid objection, even though it does not seem devastating to me and it is somewhat obscured by a lot of philosophy that also doesn't seem that relevant to me.

There was a linkpost for this on LessWrong a few days ago; I think the discussion in the comments is good.

I quite liked this post, but I have a minor quibble. Engram preservation still does not directly save lives; it gives us an indefinite amount of time, which is hopefully enough to develop the technology to actually save them.

You could say that it's impossible to save a life since there's always a small chance of untimely death, but let's say we consider a life "saved" when the chance of death in unwanted conditions is below some threshold, like 10%.

I would say widespread engram preservation reduces the chance of untimely death from ~100% (assuming no longevity advances in the near future) to the probability of x-risks. Depending on the threshold, you might have to deal with x-risks to consider these lives "saved".
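As a toy version of that arithmetic (the numbers are invented purely for illustration): with widespread preservation, the chance of untimely death is roughly the chance of an existential catastrophe before revival technology arrives. If that chance were 5%, the life would count as "saved" under a 10% threshold; if it were 15%, it would not, and you would indeed need to reduce x-risk first.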

Well, capital accumulation does raise productivity, so traditional pro-growth policies are not useless. But they are not enough, as you argue.

Ultimately, we need either technologies that directly raise productivity (like atomically precise manufacturing, fusion energy, or other cheap energy sources) or technologies that accelerate R&D and commercial adoption. Apart from AI and increasing the global population, I can think of four:

  • boosting average intelligence via genetic engineering
  • reforming science and engineering, as well as education (a la dath ilan)
  • nootropics, BCIs, and other electrochemical methods of tinkering with the brain
  • systematic experimentation with social technology (having easy ways of testing ideas like open borders, UBI, Georgism, prediction markets and adopting those that work)

From the longtermist perspective, degrowth is not that bad as long as we are eventually able to grow again. For example, we could hypothetically halt or reverse some growth and work on creating safe AGI or nanotechnology or human enhancement or space exploration until we are able to bypass Earth's ecological limits.

A small scale version of this happened during the pandemic, when economic activity was greatly reduced until the situation stabilized and we had better tools to fight the virus.

But make no mistake: growth (perhaps measured by something other than GDP) is pretty much the goal here. If we have to forgo growth temporarily, it's because we have failed to find clever ways of bypassing the current limits. It's not a strategy; it's what losing looks like.

It's also probably politically infeasible: just raising inflation and energy prices is enough to make most people completely forget about the environment. It couldn't be a planned thing; rather, it would have to come about as a consequence of economic forces.

It's as if Haber and Bosch hadn't invented their nitrogen process around 1910. We would have run out of fertilizer, and then population growth would have had to slow down or even reverse.

Great question. The paper does mention micronutrients but does not try to evaluate which of these advantages had a greater influence. I used the back-of-the-envelope calculation in footnote 6 as a sanity check that the effect size is plausible but I don't know enough about nutrition to have any intuition on this.

I don't think embryo selection is remotely a central example of 20th-century eugenics, even if it involves 'genetic enhancement'. No one is getting killed, sterilized, or otherwise subjected to nonconsensual treatment.

In the end, it's no different from other non-genetic interventions to 'improve' the general population, like the education system. Education transforms children for life in a way that many consider socially beneficial.

Why are we okay with having such massive interventions on a child's environment (30 hours a week for 12+ years!), but not on a child's genes? After all, phenotype is determined by genes+environment. Why is it ok to change one but not the other?

What is morally wrong about selecting which people come into existence based on their genes, when we already make such decisions based on all other aspects of their lives? There are almost no illiterate people in the Western world, and almost no people with stunted growth. We've selected them out of existence via environmental interventions. Should we stop doing that?

A valid reason to reject this new eugenics would be fearing that the eugenic selection pressure could end up being controlled by political processes, which could be dangerous. But the educational system is already controlled by political processes in most countries, and again this is mostly seen as acceptable.
