
We, A Happier World, just uploaded a video on value lock-in, inspired by Will MacAskill's book What We Owe The Future! 

This is part of a whole series we're making on the book, full playlist here.

Thanks to Sarah Emminghaus for her help with the script!

Transcript

Sources are marked with an asterisk. Text might differ slightly in wording from the final video.

Hundred schools of thought

2,600 years ago, China went through a long period of conflict now known as the Warring States era. But it also brought about a time of philosophical and cultural experimentation now known as the Hundred Schools of Thought. That's when Confucianism was born – the philosophy of Kong Fuzi, who believed that self-improvement led to spiritual transformation. Confucianism encouraged respect for your parents and obedience to authority, rulers and the state. Its ethics depended on the relationships between people rather than on the actions themselves: a son beating his father is not okay, but the opposite is.**

There were a few other popular philosophies at the time, for example Legalism. Legalists were strong proponents of heavy punishments for wrongdoing, a powerful military and a strong state; they believed people were selfish and needed strict guidance.

Then there were the Mohists – at the time, the Confucians' main rival. Mohists believed that we should care about other people as much as we care about ourselves, and that we should take whatever actions benefit the most people. They proposed owning no luxuries and consuming less.

The rivalry ended in 221 BC when the Legalism-influenced Qin conquered China and took strong measures against all competing schools of thought – apparently Legalism had won. That all changed when the dynasty ended just 15 years later and Confucianism turned out to be the new popular ideology.* From then on, every Chinese dynasty embraced Confucianism until the Qing dynasty ended in 1912. When Mao and the Communist Party came to power in 1949, Confucianism was suppressed, but it remained popular and is being revived today.

The popularity of Confucianism is a great example of value lock-in: a situation where one set of values wins against others and stays in place for a very long time.

Other examples include Christianity and Islam: the Bible and the Quran are still the best-selling books today!

Risks of locking in current values

In general, when we look at values from the past – be it 10, 50 or 200 years ago – it feels like we have changed for the better. There's no way we would want those "outdated" values to still persist. But what makes so many of us sure that our current values are good? They might be better in some ways, but just as we do now, people in the future will probably look at our current values and find many of them abhorrent. They will probably be glad we moved past those "outdated" values.

Right now a lot of quite different worldviews are competing against each other – similar to the time of the Hundred Schools of Thought in China. No one school of thought has won – yet.

Now here is what's interesting about this with regard to the long-term future: How does a school of thought win against others? How do values get locked in? What values from today do we want to lock in – if any? Which ones might already be locked in even though they are not what is best for humanity?

This video is part of a series based on Oxford philosopher Will MacAskill's new book, What We Owe The Future. The book makes the case for caring about our long-term future and explores what we can do to have a positive impact on it. In a previous episode we told the story of Benjamin Lay, an inspiring abolitionist. In another one we argued why caring about our long-term future is important.

Will MacAskill mentions three ways values could get locked in in the future: through a world government, through space colonisation, and through the development of powerful AI. The last is perhaps the scariest. So far, artificial intelligence is narrow – meaning machines only know how to perform narrow tasks, like beating us at chess. No AI can fully replicate human intelligence across all possible tasks. If AI ever exceeds human capabilities across all domains – and that might happen sooner rather than later – there's a massive risk that values get locked in that are not what is best for humanity, perhaps by the AI somehow imposing them on us.

This is already happening to some degree today through social media algorithms. 

We know this topic is incredibly broad and complex. We're also not sure whether this would really happen. After all, it sounds a bit ridiculous. We will talk more about AI in our next video on existential threats, so subscribe and ring that notification bell to get notified when it comes out!

So how do we keep the wrong values from being locked in? 

In general: we should try to keep our minds as open as possible for as long as possible. To that end, we should promote free speech and create a marketplace of ideas wherever possible. When multiple schools of thought are encouraged, it becomes less likely that any single one wins and gets locked in.

One idea we like a lot is ideological charter cities: autonomous communities with their own laws that try out different ideas as an experiment – to see how well theories translate into the real world without committing to policies that are difficult to reverse. One real-life example happened, again in China, not that long ago. In 1979 a special economic zone was created around the city of Shenzhen, with more liberal economic policies than in the rest of the country. Average yearly income there grew by a factor of two hundred over forty years, and the experiment inspired broader reforms across the country. Since then, hundreds of millions of people in China have escaped poverty.

An ideological charter city could also be tried out by Marxists, environmentalists or anarchists! That way we could see which ideas work best in the real world and learn something from it.

Another thing that can help is more open borders. If people can migrate more easily, they can vote with their feet for the countries whose values they prefer.

Then there are values that are already helping us get to morally better societies. Reason, reflection and empathy are probably among them. Engaging in good-faith arguments, being open to other people's viewpoints, empathising with people outside our inner circle – all these practices can help us reach better points of view and better morals.

There’s a paradox here: Making sure values don’t get locked in means locking in values like these. But this is a paradox we’re happy to live with.

Conclusion

If you thought this was interesting, you should definitely check out Will’s book. In the next episode we’ll be talking about the possible end of humanity. So don’t forget to subscribe!

Comments



Love this - love the set design, love that you've come on camera, great script and interesting topic! Everything is awesome! 
