Jaime Sevilla

Director @ Epoch
4904 karma · Joined · Working (0-5 years)
Interests:
Forecasting

Bio

Director of Epoch, an organization investigating the future of Artificial Intelligence.

Currently working on:

  • Macroeconomic models of AI takeoff
  • Trends in Artificial Intelligence
  • Forecasting cumulative records
  • Improving forecast aggregation

I am also one of the coordinators of Riesgos Catastróficos Globales, a Spanish-speaking network of experts working on Global Catastrophic Risks.  

I also run Connectome Art, an online art gallery where I host art I made using AI.

Posts
58


Sequences
3

Riesgos Catastróficos Globales
Aggregating Forecasts
Forecasting quantum computing

Comments
251

TL;DR

  1. Sentience appears in many animals, indicating it might serve a fundamental purpose for cognition. Advanced AI, especially if trained on data and environments similar to humans', will then likely be conscious.
  2. Restrictions on advanced AI would likely delay technological progress and potentially require a surveillance state. A moratorium might also shift society towards a culture that is more cautious about expanding life.

I think what is missing for this argument to go through is an argument that the costs in (2) are higher than the cost of mistreating Artificial Sentience.

I am not sure what this footnote means. The "cost of training per 1m tokens" is a very weird unit to talk about, since it depends on the model size and the GPU efficiency. I strongly suspect you meant to write something else and got mixed up.

What do you mean exactly? None of these maps has a discontinuity at 0.5.

I find these analogies more reassuring than worrying TBH

My point is that our best models of economic growth and innovation (such as the semi-endogenous variants of the growth model for which Paul Romer won the Nobel prize) straightforwardly predict hyperbolic growth under the assumptions that AI can substitute for most economically useful tasks and that AI labor is accumulable (in the technical sense that you can translate economic output into more AI workers). This holds even though these models assume strong diminishing returns to innovation, in the vein of "ideas are getting harder to find".
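
To make that concrete, here is a minimal toy simulation (illustrative parameters of my own, not the calibration of any particular paper): technology growth faces strong diminishing returns, yet because output can be reinvested into more AI workers, the growth rate keeps rising.

```python
# Toy semi-endogenous growth sketch (my own illustrative parameters, not any
# paper's calibration). Ideas get harder to find (phi < 0), but AI labor is
# accumulable: output Y = A*L can be reinvested into more AI workers L, so
# the growth rate of technology A still rises over time.

dt = 0.01                              # time step in years
A, L = 1.0, 1.0                        # technology level, stock of AI workers
delta, lam, phi = 0.02, 1.0, -1.0      # phi < 0: strong diminishing returns to R&D
s = 0.05                               # share of output turned into new AI workers

for step in range(int(150 / dt)):
    gA = delta * L**lam * A**(phi - 1)  # current growth rate of technology A
    if step % int(25 / dt) == 0:
        print(f"year {step * dt:5.0f}: growth rate of A = {gA:.1%}/yr")
    if gA > 5:                          # stop once growth has clearly exploded
        print(f"year {step * dt:5.0f}: growth above 500%/yr -- hyperbolic takeoff")
        break
    A += gA * A * dt                    # innovation, despite diminishing returns
    L += s * (A * L) * dt               # output reinvested into AI labor
```

The printed growth rate of A keeps increasing over time, which is the accelerating/hyperbolic pattern at issue, even with phi well below zero.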

Furthermore, even if you weaken the assumptions of these models (for example, by assuming that AI won't participate in scientific innovation, or that not every task can be automated), you can still get pretty intense accelerated growth (up to 10x greater than in today's frontier economies).

Accelerating growth has been the norm for most of human history, and growth rates of 10%/year or greater have been observed historically (e.g. in 2000s China), so I don't think this is an unreasonable prediction to hold.

I've only read the summary, but my quick sense is that Thorstad is conflating two different versions of the singularity thesis (fast takeoff vs slow but still hyperbolic takeoff), and that these arguments fail to engage with the relevant considerations.

In particular, Erdil and Besiroglu (2023) show how hyperbolic growth (and thus a "singularity", though I dislike that terminology) can arise even when there are strong diminishing returns to innovation and output responds sublinearly to innovation.

(Speculating) The key property you are looking for, IMO, is the degree to which people are drawing on different information when making their forecasts. Models that parcel reality into neat little mutually exclusive packages are more amenable to arithmetic averaging, while forecasts that obscurely aggregate information from independent sources will work better with geomeans.
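
For concreteness, here is what the two pooling rules look like on made-up numbers (nothing here comes from the post; it is just to show the mechanics):

```python
# Two ways to pool probability forecasts, on made-up numbers.
from math import prod

forecasts = [0.05, 0.20, 0.40]          # three forecasters' probabilities

# Arithmetic mean of probabilities: a mixture of the individual forecasts.
arith = sum(forecasts) / len(forecasts)

# Geometric mean of odds: equivalent to averaging the forecasts' log-odds.
odds = [p / (1 - p) for p in forecasts]
pooled_odds = prod(odds) ** (1 / len(odds))
geo = pooled_odds / (1 + pooled_odds)

print(f"arithmetic mean of probabilities: {arith:.3f}")  # ~0.217
print(f"geometric mean of odds:           {geo:.3f}")    # ~0.171
```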

In any case, this has little bearing on aggregating welfare IMO. You may want to check out geometric rationality as an account that lends itself more to using geometric aggregation of welfare. 

Interesting case. I can see the intuitive case for the median.

I think the mean is more appropriate - in this case, what this is telling you is that your uncertainty is dominated by the possibility of a fat tail, and the priority is ruling it out.

I'd still report both for completeness' sake, and to illustrate the low resilience of the guess.
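
As a quick illustration of why I would report both (a made-up lognormal guess, not the numbers from the post):

```python
# Fat-tailed belief: the median describes the "typical" case, while the mean
# is dominated by rare, very large outcomes. Made-up lognormal distribution.
import random
import statistics

random.seed(0)
samples = [random.lognormvariate(0.0, 2.0) for _ in range(100_000)]

print(f"median: {statistics.median(samples):.2f}")  # about 1 (the typical case)
print(f"mean:   {statistics.mean(samples):.2f}")    # about e^2 ~ 7.4, driven by the tail
```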

Very much enjoyed the posts btw

Amazing achievements Mel! With your support, the group is doing a fantastic job, and I am excited about its direction.

>This has meant that, currently, our wider community lacks a clear direction, so it’s been harder to share resources among sub-groups and to feel part of a bigger community striving for a common goal.

I feel similarly! At the moment, it feels like our community has fragmented into many organizations and initiatives: Ayuda Efectiva, Riesgos Catastróficos Globales, Carreras con Impacto, EAGx LatAm, EA Barcelona. I would be keen on developing the relationships between these pieces further; for example, I was enthused to have Guillem from RCG present at EA Barcelona. Would be cool to have more chats and find more links!

I have so many axes of disagreement that it is hard to figure out which one is most relevant. I guess let's go one by one.

Me: "What do you mean when you say AIs might be unaligned with human values?"

I would say that pretty much every agent other than me (and probably me at different times and in different moods) is "misaligned" with me, in the sense that I would not like a world where they get to dictate everything that happens without consulting me in any way.

This is a quibble because in fact I think if many people were put in such a position they would try asking others what they want and try to make it happen.

>Consider a random retirement home. Compared to the rest of the world, it has basically no power. If the rest of humanity decided to destroy or loot the retirement home, there would be virtually no serious opposition.

This hypothetical assumes too much, because people outside care about the lovely people in the retirement home, and they represent their interests. The question is, will some future AIs with relevance and power care for humans, as humans become obsolete?

I think this is relevant, because in the current world there is a lot of variety. There are people who care about retirement homes and people who don't. The people who care about retirement homes work hard to make sure retirement homes are well cared for.

But we could imagine a future world where the AI that pulls ahead of the pack is indifferent to humans, while the AI that cares about humans falls behind; perhaps this is because caring about humans puts you at a disadvantage (if you are not willing to squish humans in your territory, your space to build servers gets reduced, or something; I think this is unlikely but possible) and/or because there is a winner-take-all mechanism and the first AI systems that get there coincidentally don't care about humans (unlikely but possible). Then we would be without representation and in possibly quite a sucky situation.

>I'm asking why it matters morally. Why should I care if a human takes my place after I die compared to an AI?

Stop that train, I do not want to be replaced by either human or AI. I want to be in the future and have relevance, or at least be empowered through agents that represent my interests.

I also want my fellow humans to be there, if they want to, and have their own interests be represented.

>Humans seem to get their moral values from cultural learning and emulation, which seems broadly similar to the way that AIs will get their moral values.

I don't think AIs learn in a way similar to humans, and future AI might learn in an even more dissimilar way. The argument I would find more persuasive is pointing out that humans learn in different ways from one another, from very different data and situations, and yet end up with similar values that include caring for one another. That I find suggestive, though it's hard to be confident.
