
Daniel Kokotajlo, Diego Caleiro, Ramana Kumar and I recently discussed the idea of Permanent Societal Improvements: non-Xrisk-related ways of affecting, and hopefully improving, the far future. These are actions we can take now that will have some multiplicative effect on the value of the future of humanity. This post is intended as the beginning of a conversation, not the end of a research project, and we eagerly await feedback and more ideas. Please also bear in mind that not everyone agreed with all the ideas, and any mistakes remain my own.

 

A toy model:


Suppose there is a 5% chance humanity will be destroyed in 2100, and that if we survive that great filter we will go on to colonise the light cone. Assuming this is a ‘good’ colonisation, full of happy, enlightened, virtuous people, it seems that reducing Existential Risk from 5% to 0% would increase the Expected Value of the future by roughly 5.3%. We could compare this to an action that would make the colonised universe 10% better - this would then increase the EV of the future by roughly 10%. So improving the future, in this toy example, could be dramatically better than reducing Xrisk.
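As a sanity check on these figures, here is a minimal sketch of the toy calculation in Python (the 0.95 survival probability and the 10% improvement factor are just the numbers from the example above, with the value of a good colonisation normalised to 1):

```python
# Toy model: compare eliminating a 5% extinction risk against
# making the (surviving) colonised future 10% more valuable.
p_survive = 0.95  # chance humanity survives past 2100
v_future = 1.0    # value of a 'good' colonisation, normalised to 1

ev_baseline = p_survive * v_future          # expected value as things stand
ev_no_xrisk = 1.00 * v_future               # risk reduced from 5% to 0%
ev_improved = p_survive * (v_future * 1.1)  # future made 10% better

gain_from_xrisk = ev_no_xrisk / ev_baseline - 1        # ~5.3%
gain_from_improvement = ev_improved / ev_baseline - 1  # exactly 10%
print(f"{gain_from_xrisk:.1%} vs {gain_from_improvement:.1%}")
# → 5.3% vs 10.0%
```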

 

What types of things could be Permanent Societal Improvements?

 

A major restriction is that they have to be things that would not otherwise be done later. If I invent something that would otherwise have been invented 20 years later, I have only improved the world by (20 years × impact of invention), not (lifespan of humanity × impact of invention).* This is quite a strong restriction on what could count as such permanent improvements.

 

Here are a few broad categories we came up with.

 

Influencing Lock-in

  • It is possible that humanity will end up ‘locked in’ to a certain political state. This could be a Singleton - an agent with complete control, whether an AI or a totalitarian state - or a stable multipolar society, perhaps arising from EM competition.

  • If this is the case, then affecting which political state humanity gets locked into would have a permanent effect on the future.

  • Alternatively, we might affect what the future cares about. Under some types of Singleton, virtually any ethical debate could become a very pressing issue: we want to make sure the right side is preserved and propagated. Maybe it is important to persuade people now that animals are morally valuable, so that an AI will care about them (though perhaps CEV obviates the need for this). Or maybe we need to make sure the future Hegemon cares enough about art that the lightcone isn’t deprived of it.

  • Value Lock-in could happen even without a Singleton, given some new technologies or memes. For example, if we invent memetic or technological ways to reinforce existing values (like brainwashing, but more effective), then existing value systems could become significantly more entrenched.

 

Compounding resource constraints

  • If there is some resource which is going to be a constraint on the growth of moral value, we could affect the future by investing now to reduce the constraint. This is especially true if the resource exhibits compound growth.

  • For example, if total population were to grow at 1% a year indefinitely, by having some extra children now (say 1%) we could permanently increase the future population in a multiplicative way. If you subscribe to an aggregative theory of ethics, this could make the future 1% more valuable.

  • If the future will be bounded by the speed of light, launching ships now could help alleviate the volume constraint.
  • Or maybe we could invest in server capacity in readiness for an EM future.
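The compounding point can be sketched numerically (a hypothetical illustration, reusing the 1% figures from the population example above): under steady exponential growth, a one-off addition to the base persists as the same multiplicative increase at every future horizon.

```python
# A one-off 1% addition to a population growing 1% a year remains
# a 1% multiplicative increase at any future date.
def population(base, rate, years):
    return base * (1 + rate) ** years

for years in (10, 100, 500):
    baseline = population(100.0, 0.01, years)
    boosted = population(101.0, 0.01, years)  # 1% extra children now
    print(years, boosted / baseline - 1)      # ~0.01 at every horizon
```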

 

Moral Progress / Decay

  • If humanity makes moral progress, we might improve the future by accelerating this process.

  • Alternatively, if humanity suffers from value drift, we could improve the future by slowing this decay.

  • The benefit of accelerating moral progress is much less if there is an ideal ethics we are converging towards, since then acceleration only brings forward the date at which we reach near-perfect ethics.

  • Conversely, if our values are drifting in such a way that most of the future will be of no value, perhaps due to value fragility, then delaying the decay could dramatically improve the future. If we suffer 1% drift a year, then standing athwart history for a year would improve the total value of the future by 1%.
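The drift arithmetic above can be sketched the same way (a hypothetical illustration, assuming a constant 1% annual decay in value):

```python
# If value decays 1% a year, pausing the decay for one year multiplies
# the value remaining at every future date by 1/0.99, i.e. about 1%.
DRIFT = 0.01

def value_remaining(t_years, pause_years=0):
    # fraction of original value left after t_years of drift,
    # with the drift paused for pause_years
    return (1 - DRIFT) ** max(t_years - pause_years, 0)

t = 200
improvement = value_remaining(t, pause_years=1) / value_remaining(t) - 1
print(improvement)  # ~0.0101, i.e. roughly 1%
```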

 

Original Sin

  • Some people think that the British Empire was permanently ‘tainted’, in some way, by its early endorsement of slavery, and that this moral taint persisted even after it had abolished slavery in most of the world. If this is true, it could be valuable to ensure the future isn’t founded in some way that permanently taints it.

  • Conversely, maybe having some people from the 20th and 21st centuries could be a long-lasting source of pride and joy for future generations, so some cryonics could be considered a permanent improvement. Similar things could be said about historical artifacts, beautiful natural landmarks, etc.

 

Coordination problems

  • If humanity colonises the stars, we may end up being fractured by distance, with different colonies unable to communicate. Perhaps when the first ships are sent, a lasting convention could be established that all colonies should send updates about their history back to earth. This history could be an object of great value, but establishing this convention might be something that could only be done very early on in the colonisation process.

  • Alternatively, we could establish property right norms to divide up the lightcone and prevent conflict. By establishing a norm now that colonisers could claim whatever they wanted if they travelled directly away from earth, but could not ‘cross-colonise’ into other sectors, we could prevent future wars. This norm would be much harder to establish once the earth was no longer the clear Schelling point for the origin, and once it was clear who the ex-post winners and losers from this policy would be.

  • Establishing norms that will protect biological humans and EMs from Hansonian competition - like a right to retire.

  • If uploads are not conscious, it might be important to agree on this before EMs massively outnumber biological humans; after that point it would become much harder.

 

* ignoring whatever else the future would-be inventor would otherwise do with their resources.

 


Comments (10)



Or maybe we could invest in server capacity in readiness of a EM future.

This one seemed out of place to me. Conditioned on the time we start expanding and the rate at which we expand, we're going to have access to some fixed set of resources at a given point in the future, so I don't see how investing in server capacity now affects our server capacity in the far future. (though I do agree that affecting the start time and rate of expansion could be permanent improvements.)

Establishing norms that will protect biological humans and EMs from Hansonian competition - like a right to retire. If uploads are not conscious, it might be important to agree on this before EMs massively outnumber biological humans; after that point it would become much harder.

These seem to be about simply picking the right policies now and locking them in. It might also be important to lock in the right policies vis-à-vis privacy, the death penalty, property rights, etc., but why should we think that we can lock such policies in now? This reduces to either "minimize value drift" or "create a singleton", both of which I agree with but you already listed them.

Have you seen Nick Beckstead's slides on 'How to compare broad and targeted attempts to shape the far future'?

He gives a lot of ideas for broad interventions, along with ways of thinking about them.

So we get astronomical stakes by multiplying a large amount of time by a large amount of space to get a large light cone of potential future value. Interventions that work along only one of those dimensions -- say, I bury a single computer that generates one utilon per year deep underground, which continues to run for the life of the universe, or I somehow grant a one-off one utilon to every human alive in the year 1 billion -- are dominated by those interventions that affect the product of space and time (e.g. the interventions you listed here). But if there were just one more dimension to multiply, then interventions that addressed the product of all three might dominate all considerations that we currently think about.

Yep. Any ideas what such an other dimension might be? (There are of course the "normal" other dimensions, like average well-being, that are included in the calculation of utilons.)

"Assuming this is a ‘good’ colonisation, full of happy, enlightened, virtuous people, it seems that reducing Existential Risk by 5% to 0% would roughly increase the Expected Value of the future by 5.3%."

How did you get 5.3%?

(100/95) − 1 ≈ 5.3%

An important topic!

Potentially influencing lock-in is certainly among my motivations for wanting to work on AI friendliness, and doing things that could have a positive impact on a potential lock-in has a lot speaking for it, I think (and many of these things, such as improving the morality of the general populace, or creating tools or initiatives for thinking better about such questions, could have significant positive effects even if no lock-in occurs).

As to the example of having more children out of far-future concerns, I think this could go the other way as well (although I don't necessarily think that it would - I really don't know). If we e.g. reach a solution where it is decided that all humans have certain rights, can reproduce, etc., but also decide that all or a fraction of the matter in the universe we have little need for is used to increase utility in more efficient ways (e.g. by creating utilitronium, or by creating non-human sentient beings with positive and meaningful existences), then a larger human population could lead to less of that.

A major restriction is that they have to be things that would not otherwise be done later. If I invent something that would otherwise have been invented 20 years later, I have only improved the world by ( 20 years x impact of invention ) , not ( lifespan of humanity x impact of invention ).* This is quite a strong restriction on what could count as such permanent improvements.

Hasn't Bostrom said something about how delays in technological progress create astronomical losses because they slow down subsequent technology?

Along the same lines, I think economic progress and setting norms in international law could similarly have compounding effects on the far future. Otherwise I'm not sure what can really be done about these things besides raising awareness and spreading ideas.

[anonymous]:

You seem to want to look at oughts while not knowing what is. Oughts are limited to the possible and knowing what is possible can only happen by understanding what is and why it is. Understanding human nature then is the first critical task, and to do that requires a good understanding of evolution since humans evolved. This has been my work for several decades and I am still hoping that folks in your movement will see value in it instead of wandering in a vast intellectual wasteland wearing blindfolds and talking about how great it would be if you could direct humanity to the Great Lakes.

I'm not sure I understand your argument. Could you help me out with some examples of:

  • Effective Altruists "wandering in a vast intellectual wasteland wearing blindfolds and talking about how great it would be if you could direct humanity to the Great Lakes"
  • How an understanding of human evolution would help us to find out what we ought to do.