Around 1 month ago, I wrote a similar Forum post on the Easterlin Paradox. I decided to take it down because: 1) after useful comments, the method looked a little half-baked; 2) I got in touch with two academics – Profs. Caspar Kaiser and Andrew Oswald – and we are now working on a paper together using a related method.
That blog post actually came to the opposite conclusion, but, as mentioned, I don't think the method was fully thought through.
I'm a little more confident about this work. It essentially summarises my Undergraduate dissertation. You can read a full version here. I'm hoping to publish this somewhere, over the Summer. So all feedback is welcome.
TLDR
* Life satisfaction (LS) appears flat over time, despite massive economic growth — the “Easterlin Paradox.”
* Some argue that happiness is rising, but we’re reporting it more conservatively — a phenomenon called rescaling.
* I test this hypothesis using a large (panel) dataset by asking a simple question: has the emotional impact of life events — e.g., unemployment, new relationships — weakened over time? If happiness scales have stretched, life events should “move the needle” less now than in the past.
* That’s exactly what I find: on average, the effect of the average life event on reported happiness has fallen by around 40%.
* This result is surprisingly robust to various model specifications. It suggests rescaling is a real phenomenon, and that (under 2 strong assumptions), underlying happiness may be 60% higher than reported happiness.
* There are some interesting EA-relevant implications for the merits of material abundance, and the limits to subjective wellbeing data.
1. Background: A Happiness Paradox
Here is a claim that I suspect most EAs would agree with: humans today live longer, richer, and healthier lives than any point in history. Yet we seem no happier for it. Self-reported life satisfaction (LS), usually measured on a 0–10 scale, has remained remarkably flat over the last f
It would may be helpful to grow a cross-national/ethnic overarching identity around "wisdom and doing good". EA does this is a bit, but is heavily constrained to the technocratic. While that is it useful subcomponent of that broader identity, it can push away people who share or aspire the underlying ideals of (1) "Doing good as a core goal of existence" and (2) "Being wise about how one chooses to do good"—but who don't share the disposition or culture of most EA's. Even the name itself can be a turnoff—it sound intellectual and elitist.
Having a named identity which is broader than EA, but which contains it, could be incredibly helpful for connecting across neurodiverse divides in daily work, and could be incredibly valuable as a cross-cutting cleavage in national/ethnic/ etc. divides in conflict environments, if this can encompass a broad enough population over time.
I'm not sure what that name might be in English, or if it makes more sense to just expand meaning of EA, but it may be worth thinking about this, and consciously growing a movement around that with aligned movements that perhaps get at other "lenses of wisdom" that focus on best utilizing/growing resources for broad positive impact.
Assuming misaligned AI is a risk, is technical AI alignment enough, or do you need joint AI/Societal alignment?
My work has involved trying to support risk awareness and coordination similar to what has been suggested for AI alignment. For example, for mitigating harms around synthetic media / “deepfakes” (now rebranded to generative AI) and it worked for a few years with all the major orgs and most relevant research groups.
But then new orgs jumped in to fill the capability gap! (e.g. eleuther, stability, etc.)
Due to demand and for potentially good reasons: those capabilities which can harm people can also help people. The ultimate result is the proliferation/access/democratization of AI capabilities in the face of risks.
I’m currently skeptical that this sort of coordination is possible without some addressing deeper societal incentives (AKA reward functions; e.g. around profit/power/attention maximization, self-dealing, etc.) and related multi-principal-agent challenges. This joint/ai societal alignment or holistic alignment would seem to be a prerequisite to the actual implementation of technical alignment.[2]
This is assuming you can even get the major players on board, which isn't true for e.g. misaligned recommender systems that I've also worked on (on the societal side).
This would also be generally good for the world! E.g. to address externalities, political dysfunction, corruption, etc.