TLDR
When we look across all jobs globally, many of us in the EA community occupy positions that would rank in the 99.9th percentile or higher, by our own preferences, among the jobs we could plausibly get.[1] Whether you work at an EA-aligned organization, hold a high-impact role elsewhere, or have a well-compensated position that allows you to make significant, highly effective donations, your job situation is likely extraordinarily fortunate and high-impact by global standards. This Career Conversations Week, it's worth reflecting on that and considering how we can make the most of these opportunities.
Intro
I think job choice is one of the great advantages of development. Before the industrial revolution, nearly everyone had to be a hunter-gatherer or a farmer, and few got any choice between the two.
Now there is typically some choice in low-income countries and a lot of choice in high-income countries. This alone suggests that having a job in your preferred field puts you in a high percentile of job choice. But for many in the EA community, the situation is even more fortunate.
The Mathematics of Job Preference
If you work at an EA-aligned organization and that is your top preference, you occupy an extraordinarily rare position. There are perhaps a few thousand such positions globally, out of the world's several billion jobs. Simple division suggests this puts you in roughly the 99.9999th percentile of job preference.
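To make the arithmetic explicit (a rough sketch; the figures of ~3,000 such positions and ~3 billion jobs worldwide are illustrative stand-ins for "a few thousand" and "several billion"):

$$1 - \frac{3{,}000}{3 \times 10^{9}} = 1 - 10^{-6} = 0.999999,$$

i.e. roughly the 99.9999th percentile.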
Even if you don't work directly for an EA organization but have secured:
* A job allowing significant donations
* A position with direct positive impact aligned with your values
* Work that combines your skills, interests, and preferred location
You likely still occupy a position in the 99.9th percentile or higher of global job-preference matching. Even setting impact aside, working in both your preferred field and your preferred country may by itself put you around the 99.9th percentile; a back-of-the-envelope sketch follows.
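To illustrate why stacking even two preferences gets you so high so quickly (the filter fractions below are assumptions chosen for illustration, not estimates): suppose your preferred country holds about 1% of the world's jobs, and your preferred field about 5% of the jobs within it. Then

$$0.01 \times 0.05 = 0.0005 = 0.05\%,$$

so only about 1 in 2,000 jobs worldwide matches both preferences, putting such a job around the 99.95th percentile before counting role-level fit or impact at all.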
Thought: In what ways do EA orgs / funds go about things differently from the rest of the non-profit (or even for-profit) world? And where they do things differently: why? How much has that been analyzed? How much have they looked into the literature and existing alternative approaches, or talked to domain experts?
Naively, if the thing they do differently is not related to the core differences between EA (or that org) and the rest of the world, then I'd expect this to be a bit like trying to reinvent the wheel: not a good use of resources unless you have a good reason to think you can do better.
Here's a perspective I mentioned recently to someone:
Many people in EA seem to think that very few people outside the "self-identifies as an EA" crowd really care about EA concerns. Similarly, many seem to think that very few researchers outside a handful of EA-affiliated AI safety researchers really care about existential risks from AI.
My perspective, by contrast, is that the basic claims of EA are actually pretty uncontroversial. I've mentioned the basic ideas to people many times, and I can remember getting pushback only once - and that was from a self-professed Kantian who already knew about EA and rejected it because they associated it with utilitarianism. Similarly, I've presented the basic ideas behind AI risk to engineers many times and have only very rarely gotten pushback. Mostly people agree that it's an important set of issues to work on, but say that there are other issues we also need to focus on (maybe even to a greater degree), that they can't work on it themselves because they have a regular job, and so on. Moreover, I'm pretty sure that many such people, if you compensated them sufficiently and removed the barriers preventing them from, e.g., working on AGI safety, would be highly motivated to do so. After all: if I can get paid my regular salary or even more, and maybe also help save the world, that's fantastic!
I'm not saying it's always worth removing all those barriers. In many cases it may be better to hire someone so motivated to do the job that they'd be willing to sacrifice for it. But in other cases it's worth considering whether someone who isn't "part of EA" might nonetheless agree that EA is great, such that removing the barriers for that person (financial, career, reputational, etc.) would let them make some really great contributions to the causes EA cares about.
Questions:
[Note: This is a bit long for a shortform. I'm still thinking about this - I may move to a regular post once I've thought about it a bit more and maybe gotten some feedback from others.]