I don’t want to lose my job because of my opinions, hence this ALT account.
Wonderfully written.
Although Fukuyama’s end is anything but an end: there will come a point where democracy, free markets, and consumerism collapse and give way to AI-driven technocracy.
Democracy, human rights, free markets, and consumerism “won out” because they increased human productivity and standards of living relative to rival systems. That doesn’t make them a destiny, but rather a step that is temporary, like all things.
For the wealthy, for rulers, and for anyone with power, other humans were and are simultaneously assets and liabilities. But we are gradually entering an age where other humans will cease to be assets yet will remain liabilities. After all, you don’t need to provide health insurance or pay the healthcare costs of a robot. If humans are economically needed, then the best system for them is a free-market democracy.
But what happens to the free market democracy when humans are no longer needed?
We will eventually arrive at an ugly new era, fully automated, where humanity becomes increasingly redundant, worthless, and obsolete. The utility and power (economic, military, and civil) an average person possesses will shrink to nearly nothing. No one will “need” you, the human, and if you aren’t part of the affluent, you’ll be lucky if others altruistically wish to keep you alive…
We can still hope that the global elites will care about human rights, lives, democracy, and consumerism in the coming age, when we are powerless compared to those who own the robots and all the humanless means of production. But perhaps it’s the inner cynic in me that says this is highly unlikely.
Yet as altruistic folks, we strive to ensure that the system which replaces the current one will be benevolent to most, if not all.
At minimum, his life is as much a marvel to praise as it is a bit of a tragedy. Like a true altruist, he quite literally worked himself to death for the good of others. Even if his methodologies weren’t always the most effective, very few will ever match his degree of selfless sacrifice.
As EA grew from humble, small, and highly specific beginnings (like, but not limited to, high-impact philanthropy), it became increasingly big tent.
In becoming big tent, it has become tolerant of ideas and notions that would previously have been heavily censured or criticized in EA meetings.
This is in large part because early EA was more data-driven, with less focus on hypotheticals, speculation, and non-quantifiable metrics. That’s not to say current EA isn’t these things; they’re just relatively less stressed than they were 5-10 years ago.
In practice, this means today’s EA is more willing to consider altruistic impact that can’t be easily or accurately measured or quantified, especially with (some) Longtermist interests. I find this a rather damning weakness, although one could make the case that it is also a strength.
This also extends to outreach.
For example, I wouldn’t be surprised if an EA gave a dollar to, or volunteered for, a seeing-eye-dog organization [or any other ineffective organization] under the justification that this is “community-building” and that, like the Borg, we will someday be able to assimilate them and make them more effective, or recruit new people into EA.
To me and other old-guard EAs, this is wishful thinking, because it dilutes EA’s epistemic roots, especially over time as non-EA ideas enter the fold and influence the group. One example is how DEI initiatives are wholeheartedly welcomed by EA organizations, when in fact there is little evidence that the DEI/progressive way of hiring personnel and staff results in better performance outcomes than ordinary hiring that gives no advantage or edge to a candidate based on their ethnicity, gender, race, or sexual orientation.
This holds even more so for cause prioritization. In the past, it was very difficult to have your championed or preferred cause even considered remotely effective. In fact, the null hypothesis was that your cause wasn’t effective... and that most causes weren’t.
Now it’s more as if any and all causes are assumed effective, or potentially effective, from the get-go, so long as they’re supported by some marginal amount of evidence. A less elitist and stringent approach, but an inevitable one once you become big tent. Some people feel this has made EA a friendlier place. Let’s just say that today you’d be less likely to be kicked out of an EA meeting for being naively optimistic without a care for figures and numbers, and more likely to be kicked out for being overtly critical (or even mean), even though that criticalness was the strict attitude of early EA meetings, an attitude that turned a lot of people off from EA (including myself, when I first heard about it; I later came around to appreciate that sort of attitude and its relative rarity in the world. Strict and robust epistemics are underappreciated).
For example: if you think studying jellyfish is the most effective way to spend your life and career, draft an argument or forum post explaining the potentially boonful future consequences of your research and, bam, you are now an EA. In the past the response would have been: why study jellyfish when you could use your talents to accomplish X or Y, something greater, and follow a proven career path that is less risky and more profitable (intellectually and fiscally) than jellyfish study?
Unlike Disney, with its iron-claw grip on its brands and properties, it’s much easier nowadays to call oneself an EA, or to identify as part of the EA sphere… because, well, anything and everything can be EA. The EA brand, once tightly controlled and small, has grown, and it can be difficult to tell the fake Gucci bags from the real deal when both are sold at the same market.
My greatest fear is that EA will, over time, become just A, without the E, and lose its initial ruthlessly data- and results-driven form of moral concern.