
Help saving/improving the world

Picture this: It's 1961, and John F. Kennedy sits across from Nikita Khrushchev in Vienna. Both men are intelligent, both are patriotic, both genuinely believe they're working toward a better world. Yet within months, they'll bring humanity to the brink of nuclear war over missiles in Cuba. Why? Not because they had different information about nuclear weapons' destructive power—both understood that perfectly. The difference lay in their fundamental moral frameworks for decision-making. (If they had both been communists, or both been non-communists, Khrushchev wouldn't have felt the need to move nukes to Cuba to threaten the United States in the first place!)

What if we could change that dynamic? What if, instead of operating from fundamentally different moral philosophies, world leaders converged on the same values for making the decisions that shape our planet's future? This isn't utopian dreaming—it's a concrete proposal that experts I spoke to at the UN believe could work, and it needs your assistance in figuring out the technical details of how such a project would be implemented. How does one actually implement this? Who do I email? I'm asking!

How a success would save/improve the world

Every decision any person makes—from choosing what to eat for breakfast to launching a military operation—follows the same formula: they take the information they have, apply their decision-making process (usually grounded in their values, except when human psychology pushes them to act against those values), and choose from their available options.[1] When world leaders share the same fundamental values, they'll want the same outcomes, assuming they're working with similar information and human error doesn't interfere.[2]
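Footnote [1] below spells this formula out more formally. As a toy illustration (every name and number here is invented for this sketch, not part of the proposal), a decision can be modeled as a function that maps information to one of the available options, parameterized by the decision-maker's values. Two leaders with the same values and the same information then pick the same option:

```python
# A toy model (all names and numbers invented for illustration):
# a decision is a function from information to one of the available
# options, parameterized by the decision-maker's values.

def decide(information, options, values):
    """Pick the option that scores highest under the given values."""
    return max(options, key=lambda option: values(information, option))

def shared_values(information, option):
    # Toy value system: prefer options with fewer expected casualties.
    return -information["expected_casualties"][option]

info = {"expected_casualties": {"negotiate": 0, "blockade": 100, "invade": 10_000}}
options = ["negotiate", "blockade", "invade"]

# Two leaders with the SAME values and the SAME information
# arrive at the SAME decision:
leader_a = decide(info, options, shared_values)
leader_b = decide(info, options, shared_values)
assert leader_a == leader_b == "negotiate"
```

The point of the sketch: holding information and options fixed, the only remaining lever on the output is the values term, which is exactly what this proposal targets.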

Consider how Victor Zhdanov single-handedly convinced the World Health Organization to eradicate smallpox in 1958. One scientist with a compelling moral vision—that humanity should be free from this ancient scourge—managed to align global health authorities around a shared goal, and by 1980, smallpox became the first disease in human history to be completely eliminated through deliberate human effort.

Source: Smallpox cases reported worldwide - Our World in Data

But we don't need an unrealistically perfect outcome for the project to provide some positive impact (which is the standard for deciding whether a project should be implemented). Success could take many forms: complete moral convergence leading to a near-utopian world, partial agreement among some of the most powerful leaders that prevents the worst conflicts, or even just establishing that there is a correct moral framework worth discovering together. Each level offers tremendous value over our current status quo of competing moral philosophies driving global tensions.

First table (Table on the left): how many people and world leaders use the correct moral system.

| | most or all world leaders | some world leaders | no or few world leaders |
|---|---|---|---|
| most or all people | | | |
| some people | | | |
| no or few people | | | ⭐ |

Second table (Table on the right): how many people and world leaders use the almost correct moral system.

| | most or all world leaders | some world leaders | no or few world leaders |
|---|---|---|---|
| most or all people | | | |
| some people | | ⭐ | |
| no or few people | | | |

The outcome of this program is fully described by one cell in each table: where it falls in the table on the left, and where it falls in the table on the right. Currently, we sit in the cells marked with a ⭐. (We can do better than that.)
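The two tables above can be encoded as a tiny data structure, a pair of (people level, leader level) cells, one per table. This is just an illustrative sketch of the taxonomy, not anything from the proposal itself:

```python
# Illustrative encoding of the two tables above (not part of the proposal):
# the program's outcome is one cell per table, each cell being a pair of
# (share of people, share of world leaders) using that moral system.

LEVELS = ["no or few", "some", "most or all"]  # ordered worst to best

def cells():
    """All nine cells of one table."""
    return [(people, leaders) for people in LEVELS for leaders in LEVELS]

# The starred cells from the post: where we currently sit.
current_correct = ("no or few", "no or few")  # left table (correct system)
current_almost = ("some", "some")             # right table (almost correct)

def improvement(old, new):
    """True if `new` is a strictly better cell than `old` overall."""
    score = lambda cell: LEVELS.index(cell[0]) + LEVELS.index(cell[1])
    return score(new) > score(old)

assert current_correct in cells() and current_almost in cells()
# Any movement toward "most or all" in either dimension counts as success:
assert improvement(current_correct, ("some", "no or few"))
```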

Why we might get a success

Here's where it gets psychologically interesting. Most people, when pressed, wouldn't take a pill that permanently locked them into their current political beliefs. Why? Because deep down, they value something more fundamental than their current opinions: some overarching main goal (most often "actually doing good"). The goals they currently act on are just proxies, and proxies can change whenever a new proxy seems to match the main goal more closely.

This suggests that world leaders, like most thoughtful people, hold meta-values: they care about making decisions based on sound reasoning and moral principles, not just satisfying whatever beliefs they happen to hold at any given moment.

People have gotten a success in the past

A few centuries ago, the field of science was plagued by the same problems we see in geopolitics today: brilliant minds arguing past each other. People didn't really act on the fact that in any argument, at least one side must be wrong, and that both would be better off if they figured out who, if anyone, is right.

Then, in 1620, Francis Bacon introduced the scientific method in his book "Novum Organum". It became the standard for discovering truth, collaborative progress suddenly became possible, and no one fought a war over whether π = 3.14 or π = 3.15. Today, scientists from competing nations routinely work together on projects that would have been unthinkable in previous eras.

The precedent exists. Individual visionaries have catalyzed massive institutional changes before. Zhdanov with smallpox eradication. Roosevelt with the UN's founding principles. Francis Bacon with literally all of science. Abraham Lincoln who single-handedly* stopped slavery.

I think something that we don’t talk about that often is that these are all regular people. Literally every person who’s done something of this caliber is just a person. They were once a baby, just crawling around. If you were Lincoln, d’ya reckon you could stop slavery? Of course you could!

And this time around, we need not rely on just one person: we have the entire modern internet.

Little opposition

Here's the most intriguing aspect: leaders who might otherwise oppose this program actually have incentives to support it. If you're a world leader with immutable values who could single-handedly stop this initiative, then by definition, the program wouldn't work anyway. So there's no point in trying to stop it—the project doesn't get any harder if you go from merely thinking you won't participate to actually not participating, so you gain nothing by sabotaging the project, except being known back home as "the person who passed up world peace or something much better."

Why we might not get a success

Of course, there are serious obstacles. The first is speed—how long does it take to change someone's deeply held moral convictions? We don't know for certain, but there's reason for optimism. This program could convince many world leaders in parallel, potentially accelerating the process through peer influence and shared deliberation.

The trickiest (and most studied) challenge is political incentives. In our current system, leaders who prioritize staying in power over not being evil often do better than those who prioritize doing good. A program that makes leaders more moral might paradoxically make them less effective at maintaining power—at least initially. This creates a tragic selection pressure against the very people we most want to see embrace moral alignment.

But there are counterbalancing forces. Moral leaders might actually perform better, both in elections and in non-democratic power struggles, in a world full of other moral leaders.

Imagine an American Revolution in which King George III and George Washington agreed that it was moral for America to become independent. If you're a British soldier reading this in the news, you can be pretty confident that you won't be deployed thousands of miles from home to keep America under British rule in a world where the King feels no need for it.

In the real world, rulers often put a lot of resources into keeping power. The ones that don't, well, don't rule for very long.

In a morally aligned world, those resources could be redirected toward solving shared challenges: because everyone agrees on the same values, a modern King George wouldn't have to sacrifice anything to fend off a political challenger like George Washington.

According to US Costs of Election • OpenSecrets, $15,901,068,285 was spent on the 2024 US election. In a morally aligned world, those funds could instead go directly toward saving millions of people's lives, while still leaving enough money to get everyone reading this article a free slice of pizza.
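A quick back-of-the-envelope check of that claim. Every figure here besides the OpenSecrets total is an assumption for illustration: the cost-per-life number is a rough ballpark in the spirit of charity-evaluator estimates, and the readership and pizza price are guesses:

```python
# Back-of-the-envelope check (all figures besides the OpenSecrets total
# are rough assumptions, not from the post's sources).

election_spending = 15_901_068_285  # USD, 2024 US election, per OpenSecrets
cost_per_life = 5_000               # USD, assumed ballpark for top charities
pizza_slice = 3                     # USD, assumed price of one slice
readers = 1_000_000                 # assumed (generous) readership

pizza_budget = pizza_slice * readers  # 3,000,000 USD
lives_saved = (election_spending - pizza_budget) // cost_per_life

print(f"pizza budget: ${pizza_budget:,}")
print(f"lives saved:  {lives_saved:,}")  # about 3.18 million
```

Even after a generous million-reader pizza budget (a rounding error next to the total), the remaining funds would cover millions of lives at that assumed cost, so the order of magnitude of the claim holds up.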

 Additionally, the leader who first helps achieve something much better than world peace by helping this project would gain enormous prestige and political capital. Think of a politician who makes your blood boil. Now, imagine if they achieved something much better than world peace. You might consider voting for that person!

[Image suggestion: Contrast between current diplomatic meetings (tense, guarded) and imagined future meetings (collaborative, focused on problem-solving)]

How we might get a success

The program would need to evolve continuously as leadership changes through elections, appointments, resignations, succession, etc. Perhaps the UN could maintain this as a somewhat ongoing project, if that would be worth the costs.

We should probably start with the most influential world leaders and expand gradually. This allows for learning and refinement, which can make the project move faster. Success breeds success—as more leaders experience the benefits of moral alignment and thus help this project, the project moves faster, hopefully exponentially.

The cultural shift this could create within major institutions would be profound. Imagine foreign ministries, military academies, and international organizations with similar frameworks to Open Philanthropy or the Future of Life Institute. Such a culture would be self-reinforcing and help integrate new members naturally.

Seriously, help save/improve the world

You could live in a world where, within just a few months or years, you genuinely no longer need to worry about seeing a news story like "Israel-Iran war enters sixth day". Many people earned themselves the title "world-saver" during the Cold War, since there were so many moments when the fate of humanity depended on whether one man chose to launch nukes or not.

If you’re in a position to do so, you can earn yourself that title by helping this project function in a way that leads to that kind of outcome.

Now go, and help save/improve the world.

  1. ^

     More generally, everyone makes decisions based on three things:

     1. The inputs: anything that could affect the person's decision,
     2. The options: all the possible outputs,
     3. The decision-making process: the function that takes in the inputs and outputs a decision from the set of possible outputs.

     To be more specific: if you don't change a person's inputs, then to change their decision you have to change the decision-making process itself, that is, the pairing between inputs and outputs. Otherwise, the person's current inputs will lead to the same output (the person will make the same decision).

  2. ^

     Game theory literature supports this framework, particularly in geopolitical contexts where human error is minimized. If everyone places sufficiently similar value on each outcome, then the outcome with the highest moral value is a Nash equilibrium: no player can improve their position by changing their strategy, since doing so would lead to an outcome of lower moral value, which, in this scenario, is worse for all parties.
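Footnote 2's claim can be sanity-checked with a brute-force sketch (the strategies and payoff numbers below are toy values invented for illustration): when every player assigns the same value to each outcome, the jointly best outcome is a Nash equilibrium, because any unilateral deviation lowers the shared value:

```python
# Toy check of footnote 2 (strategies and payoffs are invented values):
# with fully shared payoffs, the jointly best outcome is a Nash equilibrium.

from itertools import product

strategies = ["cooperate", "defect"]
# Shared moral value of each two-player outcome (same for every player):
value = {
    ("cooperate", "cooperate"): 10,
    ("cooperate", "defect"): 0,
    ("defect", "cooperate"): 0,
    ("defect", "defect"): 1,
}

def is_nash(profile):
    """True if no single player can raise the shared value by deviating."""
    for player in range(len(profile)):
        for alt in strategies:
            deviated = list(profile)
            deviated[player] = alt
            if value[tuple(deviated)] > value[profile]:
                return False
    return True

best = max(product(strategies, repeat=2), key=value.get)
assert best == ("cooperate", "cooperate") and is_nash(best)
```

Note that with fully shared values the game stops being adversarial at all: every global maximum of the shared value is automatically an equilibrium, which is the footnote's point.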

Comments
[anonymous]

Another reason world leaders might support this: they may expect the program to produce a good result (because they think their own goals are right, and that the program would land on the right goals, or on goals close to their own), and expect that result to become even better with their participation.

Simple feedback: read this book:

https://www.amazon.com/Dictators-Handbook-Behavior-Almost-Politics/dp/1610391845

Think about politics in Darwinian terms: who survives the process?


Of course! The detailed historical examples. No amount of abstract knowledge can substitute for historical discussion.

In fact, the academic version (The Logic of Political Survival) is, to me, less interesting, because it relies too much on data analysis instead of cases.

[anonymous]

I will also note the possibility that more morally misaligned actors might use the information that world leaders now agree on X moral values to their advantage, in order to do bad things. Perhaps this force is counteracted by more morally aligned people using the same information to do good things!

[anonymous]

It seems mind-boggling at first glance that this would work, but in summary, it works like this: sometimes, in an argument, one or more sides doesn't care about reaching the RIGHT conclusion; they just care about reaching a conclusion they approve of. This is often the difficulty with arguments.

However, when everyone is brought to the table and wants to reach the RIGHT conclusion, you find that the RIGHT conclusion is (seemingly) arrived at much more often and much faster, and as a bonus, the debate is much more respectful!

This project would basically bring world leaders to the table to look for the RIGHT conclusions to major problems, with those same benefits: better conclusions, reached faster, and debated more respectfully.

[anonymous]

There is sort of precedent for this: science used to be much more argumentative, and now, most of science is done in very intelligent ways, aimed at getting to the RIGHT answer, and not “their answer”. This led to many, if not most or all, scientific problems being solved*.

In addition, if you aim to be a powerful scientist, fighting for “your answer” makes it much harder than it is if you were fighting for the RIGHT answer. Similarly, if this project worked well, it would be much harder to gain power if you fought for “your values” than if you fought for the RIGHT values!

[anonymous]

One way to advertise this idea: remind people what the UN and its charter were for, and present this as an improvement upon them.

[anonymous]

I will note that most change of this scale doesn't arise from methods like this. That should give a rough sense of how likely this is to work. Here are some examples of things like this working:

  1. Victor Zhdanov's successful single-handed lobbying of the WHO to eradicate smallpox. (https://youtu.be/ll9myMeFU3g, starting at the 8:00 mark)
  2. Eleanor Roosevelt (FDR's wife) played a massive role in the creation of the UN. https://www.law.ox.ac.uk/centres-institutes/bonavero-institute-human-rights/eleanor-roosevelt-and-universal-declaration-human
  3. and more

And here are some examples of efforts that have required broader support:

  1. Climate change prevention
  2. The abolition of slavery
  3. Most or all movements pertaining to increasing equality
  4. Nearly every election
  5. The protection of the ozone layer
  6. many if not most revolutions throughout history
  7. and more

(Note: this was all off the top of my head.)

[anonymous]

If this worked, it would probably result in a major culture shift throughout most major institutions, which would help keep the program from falling apart, and would help incorporate new members.

[anonymous]

Exact information on this depends on data from psychology and whatnot. If you know about that stuff, please let me know or add it here.

[anonymous]

Also, if world leaders spend a lot of time surrounded by a particular culture (e.g., a month at some event), they might carry some of that culture back home, though they will also re-assimilate into their home culture over time.

[anonymous]

And a good culture (say, in the UN) can also help with this project's success. A bad one can result in this project being harder.

[anonymous]

Message to any world leaders who aren't willing to change their values: if you could successfully stop this from happening, then it wouldn't have worked anyway, so there's no point in trying to stop me. It would be like casting a vote in an election whose outcome is determined by people's opinions rather than their ballots: the equivalent of writing "I vote like so: __" on a random piece of paper.

I say this because if every world leader with completely unwavering moral values tried really hard to stop our program, AND they cooperated with one another, AND such an effort could potentially succeed, then our program would fail anyway.

To expand on that: if your efforts make the difference between our program succeeding and failing, or otherwise affect its success, then we have a huge incentive to ensure the program isn't bad for you. After all, if you thought it would be better for your values to prevent some part of our program, you would logically do so, and we don't want that, so we will make sure you are happy with each part of the program.

Basically, you don’t need to stop our program. The threat that you might try to stop our program has the same effect.

If we can help you in a way that doesn't come at a cost to us (e.g., reschedule meetings so the time of the meetings work better for you), we will!

As an analogy, if you had the option to get rid of a country, then you don’t have to worry about them being bad for you, because they have a massive incentive to be good for you: not getting destroyed.

Here’s another analogy: Someone is making you food. You don't have to spend thousands of dollars to ensure that the person makes good food since you can simply throw the food away if the food does not taste good, and the person making the food already has a massive incentive to make food that tastes good to you: not getting the food thrown out.

All of this goes without saying, but saying it makes it clear.

A common strategy to limit the effects of human error is to better account for it in models and whatnot, often by constructing a value system under which a given set of decisions, including the erroneous ones, makes sense. For example, in economics, one might say that a person ascribes inherent additional value to things that are on sale.

Another way is to try to make human error push people in the same direction as logical decisions. For example, the strong taboo against drug use in many areas supposedly discourages unnecessary drug use.

More generally, a common strategy is to limit how much human error changes someone’s decisions, on average.

A world leader's goals are probably adjustable one way or another. Where a leader is committed to values that depend on something external (e.g., whatever is seen as "patriotic", or whatever their religion says, which only applies to some religions), changing those external things changes their values. That might be very difficult for some value systems, but luckily there are plenty of good logical arguments against committing to the values of something that can easily change (https://youtu.be/wRHBwxC8b8I), which could be a better strategy for changing someone's mind when such a commitment is hard to change directly.

[anonymous]

If you know about psychology or world leaders, please let me know how true this might be. If it isn’t true, we’d have to work out how we might handle a world where only some people have their morals aligned. My first thought on this is that:

  1. a world where more world leaders have their values aligned is probably AT LEAST better than the current status quo.
  2. Over time, these people might be phased out, and maybe one strategy is to try to phase them out faster.
[anonymous]

Supposedly, a more morally aligned global order might try to make itself more morally aligned. We only need this to work enough for it to sort itself out.

[anonymous]

Maybe replacing the keys to power?

[anonymous]

Maybe this would start its rollout with the most major world leaders first? And then, over time, more and more people get added to the program once we're ready for them.

I imagine this would be implemented in a similar fashion to other UN programs when they started, but before that, we should work out the key things that would change how, or whether, the program should happen.

If anyone here knows any info that can help with this (e.g., Does any world leader have a commitment to their current values instead of their overall values?), please let me know in a comment, email, etc.

Quick note (taken while I'm tired, so medium "parse-ability"): this program should be able to absorb new ideas, so that an idea for improving the program can be implemented as soon as possible, perhaps without having to hold an event. This is tricky for some ideas (e.g., how an event could be more fun). It would get ideas implemented sooner, and it would also lower the cost of starting the program sooner, since you wouldn't be "missing" the most important ideas. One idea that MIGHT satisfy this: make part of the UN's normal chat space (Slack, Discord, or whatever they use, if anything) a philosophy section on what philosophy to go by and why, so the discussion can continue 24/7 and ideas for improvement can be implemented the next day (or sooner).
