
Alternate title: EA Job Boards Board

Relevant XKCD

Epistemic status: this is a quickly drafted and incomplete list. This is not wisdom.

 

What am I missing from this list of organizations and options relevant to job-seekers?

 

Career/Jobs Organizations

  • 80,000 Hours
  • Animal Advocacy Careers
  • Freelancing for Good (https://freelancingforgood.com/) - another job board
  • High Impact Professionals
  • High Impact Recruitment
  • EA Opportunities
  • EA Pathfinder
  • Open Philanthropy's career development and transition funding (they may have other services I have failed to include)
  • Pineapple Operations
  • Probably Good
  • Successif
  • Training for Good
  • Nonlinear - advising for AI safety careers (technical, governance, or meta)

(don’t forget to actually talk to the orgs that offer advising or other services)
 

Facebook groups (such as effective altruism job postings) - there are a number of industry- or location-oriented groups as well. See this effective altruism groups directory, or this list of online groups, some of which are profession-oriented.

The opportunities channel in the EA Anywhere Slack.

 

EA Global events function largely as networking events and, I suspect, partly as job fairs.
 

Don’t forget about high-impact jobs outside of explicitly-EA orgs.

80,000 Hours may have a big list of unexplored, potentially high-impact paths, but I couldn't find it in a quick search.
 

I’ve probably got a huge blind spot around options for students and academics, so those are largely missing from this list. Scholarships for students?
 

Make your own job:

  • apply for a grant (big list here, or Manifund, the FTX Future Fund or a regranter, community building grants from the Centre for Effective Altruism, and likely more)
  • start your own charity/organization/enterprise (e.g. with Charity Entrepreneurship, or on your own)
  • start a startup and earn to give
  • start an impact-focused for-profit company

(Considerations apply when starting your own project, such as downside risk, but also the risk of not being ambitious enough)

 

Make someone else’s job:

  • earn to give some other way
  • or save for your future self to put time towards impact

 

Win your job retrospectively:

  • prizes
  • impact markets / impact certificates

Answers

The Job listing (open) page could also be relevant, as well as Take action.

https://bluedotimpact.org/ - career-path building in specific areas (AGI safety, biorisk)

Bonus, from the EA Newsletter: If you’re interested in policy or global development, you may also want to check Tom Wein’s list of social purpose job boards.

Local or online groups may have career workshops or 1-1s available with people who could offer advice.
