
TL;DR: There are currently more than 100 open EA-aligned tech jobs, in roles such as software engineering, ML engineering, data science, product management, and UI/UX design.

Thanks to Vaidehi Agarwalla, Yonatan Cale, Patrick Gruban and David Mears for feedback on drafts of this post.

EA’s tech needs in 2018

Flashback to EAG London 2018: Having just come out of a web development bootcamp, I got the impression that the only use for software engineers within the EA community was earning-to-give. There were very few software engineering jobs at EA organizations, and they didn’t seem to be particularly exciting. As one conference participant said to me: “There are only so many EA orgs for which websites must be built”. Sure, there was AI safety research, but your “standard” web developer does not have the required skill set.

EA’s software needs in 2022

Fast forward 3.5 years, and this has changed. Probably due in part to EA's significant growth in funding, more and more tech jobs are advertised every day. The diversity of jobs has also increased, with roles ranging from app and web development to data science, ML engineering, and product management, as well as varying in seniority.

Some examples:

  • Anthropic is looking for a senior software engineer to build large scale ML systems to do AI alignment work.
  • Momentum is hiring a frontend engineer to help build their donation app.
  • Our World in Data is searching for a Head of Product and Design.
  • IDInsight has open positions for junior data scientists and engineers.

At the most recent EAG London, I was talking about this with someone working at a big tech company, and they asked me: “But how many EA-aligned tech jobs are there? Surely not 100?”, to which I replied, without thinking much about it: “No, 100 sounds about right!”

Are there really more than 100 open EA-aligned tech jobs?

Curious whether I was right, I compiled this list, and, lo and behold, my estimate seems rather spot on: I found 115 openings for EA-aligned tech jobs, at least if we interpret the relevant terms as follows.

What I counted as a tech job

I counted the following roles as tech jobs:

  • Software engineer/developer
  • Data scientist
  • Data engineer
  • IT admin
  • Product manager
  • UI/UX designer
  • Research engineer
  • Machine Learning engineer

What I didn't count as a tech job

I did not include AI research jobs, like this one at DeepMind. Under some interpretations of the term “tech” you might want to include such jobs, but the target audience for this post is people working in “ordinary” industry jobs that don’t require a PhD in Computer Science. I have met many EAs who have a data/software background but aren’t drawn to AI research work because they aren’t qualified for it. I did, however, include tech jobs from the above list at organizations working on AI alignment, such as this one at OpenAI.

I excluded internships, as calling these “jobs” seems to be somewhat misleading.

What I counted as EA-aligned

As EA-aligned, I considered jobs that satisfy any of the following conditions:

[EDIT] To avoid confusion, it is worth pointing out that some of the jobs on the 80k job board are listed there not because they are directly impactful, but because they help people build up career capital to be impactful later. As Niel Bowerman from 80k writes in a comment to this post: "I'm keen for people to take these "career capital" roles so that in the future they can contribute more directly to making the development of powerful AI systems go well." So "EA-aligned" shouldn't be read as "directly impactful towards EA goals".

 [EDIT 2] Some have argued in the comments that working on AI capabilities research, e.g., for OpenAI or DeepMind, would be actively harmful. 

There is probably much more to come

I did not reach out to the organizations to check whether the positions listed are all still open. Even if some of them have already been closed and the ads have just not been taken offline, it is reasonable to expect that lots more jobs will be posted soon given that the FTX Foundation is expected to give out at least $100m in grants this year and has in recent weeks made its first funding decisions.

Does EA have an impactful tech job that would fit you?

There are lots of EAs with tech skills. As a proxy, of the roughly 1250 public profiles on the EA Hub that have been filled in with more information than just the user’s name, roughly 200 list “software engineering” as a skill. Extrapolating this to an estimated community size of 7400 active EAs in 2021 (based on the EA survey), and subtracting the 31% of students in said survey, we should expect there to have been more than 800 active EAs who are professionals with software engineering skills at that time. (Granted, these skills are quite different from UX or product management skills, so I am not saying that the talent pool for those skills is large within EA.)
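For transparency, here is the back-of-envelope arithmetic behind that figure, written out as a small Python sketch. All inputs are the rough estimates cited above, not precise counts:

```python
# Back-of-envelope estimate of active EA professionals with software engineering skills.
# All inputs are the rough figures cited in the paragraph above, not precise counts.
ea_hub_profiles = 1250      # public EA Hub profiles with more than just a name
with_swe_skill = 200        # of those, profiles listing "software engineering"
active_eas_2021 = 7400      # estimated active EAs in 2021 (EA survey)
student_share = 0.31        # share of students among survey respondents

swe_share = with_swe_skill / ea_hub_profiles             # 0.16
non_students = active_eas_2021 * (1 - student_share)     # ~5,106
swe_professionals = non_students * swe_share             # ~817

print(round(swe_professionals))  # -> 817, i.e. "more than 800"
```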

Despite this, it has been noted that EA organizations don’t find it easy to hire tech talent. So, if you work in industry as a software engineer, UX designer, data scientist, or product manager, consider applying for an EA-aligned tech job. My sense from having run many events in the EA tech community is that the average member would like to use their skills to do direct work but underestimates how many jobs there are.

Where to find EA-aligned tech jobs

  • For current openings (as of 26 April 2022), see this list of jobs I created. I do not intend to update it regularly, but feel free to make suggestions for any jobs I might have missed by adding a comment.
  • The 80k job board, filter by “Engineering”
  • The Software, Data, and Tech Effective Altruism Facebook group
  • Attend EAG(x)s. Some organizations are looking for people and present at career fairs but haven’t yet posted those jobs.
Comments



(Not necessarily a criticism of this post, but) I want to note that some (maybe 20%?) of these roles seem probably net-negative to me, and I think there are big differences in effectiveness between the rest.

Maybe I'm wrong, but make sure to think carefully about finding a job that has a big positive impact, not just getting a job at a (more or less) EA-aligned organization!

I agree with this. Please don't work in AI capabilities research, and in particular don't work in labs directly trying to build AGI (e.g. OpenAI or Deepmind). There are few jobs that cause as much harm, and historically the EA community has already caused great harm here. (There are some arguments that people can make the processes at those organizations safer, but I've only heard negative things about people working in jobs that are non-safety related who tried to do this, and I don't currently think you will have much success changing organizations like that from a ground-level engineering role)

Do you think this is the case for Deepmind / OpenAI's safety teams as well, or does this only apply to non-safety roles within these organisations?

I don't think this is true for the safety teams at Deepmind, but think it was true for some of the safety team at OpenAI, though I don't think all of it (I don't know what the current safety team at OpenAI is like, since most of it left to Anthropic).

Thanks for sharing. It seems like the most informed people in AI Safety have strongly changed their views on the impact of OpenAI and Deepmind compared to only a few years ago. Most notably, I was surprised to see ~all of the OpenAI safety team leave for Anthropic. This shift and the reasoning behind it have been fairly opaque to me, although I try to keep up to date. Clearly there are risks in publicly criticizing these important organizations, but I'd be really interested to hear more about this update from anybody who understands it.

Thanks for the comment and further clarifying OP's point. This is an important perspective. I have edited the post to refer to your comment. 
Would you maybe like to share a link to some discussion regarding this for those who would like to read more about it?

Which roles specifically seem net-negative to you?


Not OP, but I'm guessing it's at least unclear for the non-safety positions at OpenAI listed, though it depends a lot on what a person would do in those positions. (I think they are not necessarily good "by default", so people working in these positions would have to be more careful/more proactive to make it positive. I still think it could be great.) The same goes for many similar positions on the sheet, but I'm pointing out OpenAI since a lot of roles there are listed. For some of the roles, I don't know enough about the org to judge.

Thanks. I don't have a personal opinion on this, but I've adapted the list to show which of the OpenAI positions were listed on the 80k job board and which were not. I would point out that 80k lists OpenAI as an org they recommend.

Manifold Markets (my startup) is also EA-aligned and hiring! http://bit.ly/manifold-jobs, or reach out to me over EA Forum or email (austin@manifold.markets)~

Thanks for writing up this post.  I'm excited to see more software engineers and other folks with tech backgrounds moving into impactful roles.  

Part of my role involves leading the 80,000 Hours Job Board.  In case it's helpful I wanted to mention that I don't think of all of the roles on the job board as being directly impactful.  Several tech roles are listed there primarily for career capital reasons, such as roles working on AI capabilities and cybersecurity.  I'm keen for people to take these "career capital" roles so that in the future they can contribute more directly to making the development of powerful AI systems go well.  

Could 80,000 Hours make it clear on their job board which roles they think are valuable only for career capital and aren't directly impactful? It could just involve adding a quick boilerplate statement to the job details, such as:

Relevant problem area: AI safety & policy

Wondering why we’ve listed this role?

We think this role could be a great way to develop relevant career capital, although other opportunities would be better for directly making an impact.

Perhaps this suggestion is unworkable for various reasons. But I think it's easy for people to think that, since a job is listed on the 80,000 Hours job board and seems to have some connection to social impact, it must be a great way to make an impact. It's already tempting enough for people to work on AGI capabilities as long as it's "safe". And when the job description says "OpenAI […] is often perceived as one of the leading organisations working on the development of beneficial AGI," the takeaway for readers is likely that any role there is a great way to positively shape the development of AI.

What are your thoughts on Habryka's comment here?

Please don't work in AI capabilities research, and in particular don't work in labs directly trying to build AGI (e.g. OpenAI or Deepmind). There are few jobs that cause as much harm, and historically the EA community has already caused great harm here. (There are some arguments that people can make the processes at those organizations safer, but I've only heard negative things about people working in jobs that are non-safety related who tried to do this, and I don't currently think you will have much success changing organizations like that from a ground-level engineering role)

China-related AI safety and governance paths - Career review (80000hours.org) recommends working in regular AI labs and trying to build up the field of AI safety there. But how would one actually try to pivot a given company in a more safety-oriented direction?

Thanks, Niel, I probably should have been more explicit about this. I've added a paragraph to make this clearer.

As a snapshot of the landscape 1 year on, post-FTX:

80,000 Hours lists 62 roles under the skill sets 'software engineering' (50) and 'information security' (18) when I use the filter to exclude 'career development' roles.

This sounds like a wealth of roles, but note that the great majority (45) are in AI (global health and development is a distant second, at 6), and the great majority are in the Bay Area (35; London is second with 5).

Of course, this isn't a perfectly fair test, as I just did the quick thing of using filters on the 80K job board rather than checking all the organisations as Sebastian did last year.

How much do these roles pay? If they do not pay salaries at the same level that someone with those skills could make elsewhere, that means that top talent might not go to these roles unless they happen to be effective altruists, which seems unlikely. I've read a lot about how EA is no longer money constrained, so if these roles do not pay very well, is that actually true?

It varies, but many of them pay very well, comparable to or better than elsewhere. On the other hand, people reading this post are likely EAs, so the point that they might underpay somewhat seems less relevant. And in some cases, goal alignment actually matters tremendously, and paying less than market rate is strategically a fantastic way to filter for that. 

And in roles where both high pay and alignment are needed, I'd probably prefer to see something like "We will pay 75% of market rate, and will additionally donate a marginal 50% of your salary to an effective charity of your choice," rather than paying market rate.
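To make the arithmetic of that proposal concrete, here is a minimal illustrative sketch. The $150,000 market rate is a made-up figure, and “50% of your salary” is read here as 50% of the reduced salary actually paid, which is only one possible interpretation:

```python
# Illustrative numbers for the proposed "75% of market rate + 50% donation" package.
# The $150,000 market rate is a made-up example, and "50% of your salary" is read
# here as 50% of the reduced salary actually paid -- one possible interpretation.
market_rate = 150_000

salary = 0.75 * market_rate       # paid to the employee: $112,500
donation = 0.50 * salary          # donated to a charity of their choice: $56,250
total_cost = salary + donation    # the org's total outlay: $168,750

print(f"salary={salary:,.0f}, donation={donation:,.0f}, total cost={total_cost:,.0f}")
```

Under this reading, the org's total outlay actually exceeds market rate, while the employee's take-home pay is what filters for alignment.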

Great post! The "list of EA-aligned organizations" seems very useful. Do you know if it is linked to a more official EA source like the CEA website or the 80k website? And do you know how up to date it is?
At first glance it looks like core EA information that is just sitting on a random Notion page, but I am probably wrong.

Thanks!
As far as I'm aware, the list of orgs is not linked to any official EA source; it's a wiki that can be updated by anyone. If you're looking for something more official, the list of organizations on 80k's job board might be better, but it's missing some orgs that are clearly EA-aligned, e.g., the national regranting orgs.

Yep, the recommended orgs list on the 80,000 Hours Job Board (and the job board itself) is certainly not aiming to be comprehensive.

Thanks for the info!
