
Summary: I have noticed that some people, especially those new to the community, see prominent messaging around AI or biorisk and conclude that, because they don't have a technical background, they don't fit in the community or cannot have an impactful career.

  1. To make such careers more mentally available, I have compiled a list of examples. 
    1. Mostly aimed at people who feel discouraged because they feel like they "don't fit", and who haven't been in the community long enough to see the diversity of roles for themselves.
  2. Thoughts on how to further increase the visibility of such roles are at the bottom.

Some thoughts to begin with

Your university major is not the biggest determinant of whether you will have an impactful career. Also see:

And remember that disadvantages in one area can be offset by strengths in others, and that for endeavours as huge as those EA tackles, we need a community with a portfolio of very different skills and mindsets. And while some people may be on paths that are less clearly laid out for them in advance, that does not mean there is no space where they can slot in and thrive while doing good.

Direct work

Example 1: Operations management

Operations management is one of the highest-impact careers overall

Profile: https://80000hours.org/articles/operations-management/

Podcast with Tanya Singh: https://80000hours.org/podcast/episodes/tanya-singh-operations-bottleneck/

In general, there are very many roles in organisations that play to different strengths; you really wouldn't want to work somewhere that was solely run by researchers.

 

Example 2: Art, design and illustration

Just like other companies, EA(-aligned) organisations need people who are skilled in design and communicating ideas visually.

An example is the new studio https://www.shouldwestudio.com/, which produces longtermist video content (they are currently looking for animators!)
 

Example 3: Communications

Job profile: https://80000hours.org/articles/communication/ 

Journalism: https://80000hours.org/podcast/episodes/ezra-klein-journalism-most-important-topics/

This of course includes Kelsey Piper: https://www.vox.com/authors/kelsey-piper

Example 4: Policy and politics

I think this is fairly well-known, so here are just a few links:

https://80000hours.org/problem-profiles/improving-institutional-decision-making/

https://80000hours.org/career-reviews/policy-oriented-civil-service-uk/

Research paths

Check out https://effectivethesis.org/theses/

Includes research ideas in media & communications, sociology, political science, law, business, and history.

This is just a glimpse of what is possible, though! I bet you could, if you so desired, find exciting and impactful topics in any field.

Example 1: History research

Rose Hadshar's talk: https://forum.effectivealtruism.org/posts/52Lkk9XbznGFS439W/rose-hadshar-from-the-neolithic-revolution-to-the-far-future

A glimpse of her work: https://forum.effectivealtruism.org/posts/bRbJJw25dJ8a8pmn5/how-moral-progress-happens-the-decline-of-footbinding-as-a-3

Research ideas in the history of social movements: https://forum.effectivealtruism.org/posts/nDotYmmnQyWFjRCZW/some-research-ideas-on-the-history-of-social-movements

Example 2: Behavioural science

Daniel Greene has a PhD in education and, when we last spoke, was researching "security mindset" by interviewing people.

https://www.danielgreene.net/ 

Examples of people on their own paths

  • Kikiope Oluwarore, who used her expertise as a veterinarian to co-found Healthier Hens.
  • Liv Boeree, who used to be a famous poker player, then convinced other poker players to donate money, and who has also used her influence in EA-adjacent YouTube videos. (Also note that there are people like Suzy Shepherd who shoot, cut, and design those videos. Again: you don't have to be the face in the limelight.)
  • Thomas Moynihan, who (I think) studied history and then used that expertise to research and write a book about the history of the idea of existential risk.
  • Lizka Vaintrob, who manages the EA Forum and writes content for CEA.
  • Varsha Venugopal, who studied urban and regional planning and international development, and co-founded the childhood immunization charity Suvita.
  • Of course, Julia Galef, who has been driving forward so much in the rationality sphere (co-founding CFAR and writing The Scout Mindset).

… and these are just the examples that came to mind, which means they are skewed towards people who are more prominent/visible in the community. Still, I hope they'll provide some inspiration and motivation to find your own unique niche and have an impact that suits your unique skills and expertise. And, hopefully, to find an occupation in which you can flourish!

Suggestions for increasing the visibility of "alternative" EA careers

  • A few role models are already well-known, but they are all exceptional people, which might not help encourage those who are unsure of whether and where they fit (e.g. Kelsey Piper is the go-to example for EA journalism, and she carved out that niche because she is an incredible journalist; that might not be a helpful role model if I am in the early stages of my career, thinking "I don't know… I guess I'm pretty good at writing?").
    • So I suggest increasing the visibility of people doing more "ordinary" work, to stress that this is part of EA, too. (A bit like when Tanya Singh came on the 80k podcast in 2018, but less like "this is the new big thing, everyone now talks about ops careers", and more like a monthly feature of "humans of EA", showing a wide range of people)
Comments (7)



Wow, your post is timely. I just finished writing a blog post about my thoughts on/impressions of EA after 3 weeks of consuming what I can about it. Part of that post mentions that I really don't know which direction to take because I'm not an academic, I don't have "career experience" that I can leverage, and it feels a bit daunting to try to figure out where I want to/could have impact. 

So thank you for this, I really hope I get some value out of it.

(AI technical safety research benefits from technical expertise, but you can do AI forecasting, strategy, governance, and macrostrategy research without technical/science-y expertise, like Katja Grace, Daniel Kokotajlo, and many people associated with GovAI, working independently or at organizations including Rethink Priorities, AI Impacts, GovAI, and AI labs.)

>a monthly feature of "humans of EA", showing a wide range of people

really like this idea

Great post!

My shortlist of non-technical roles is:

  1. community building (including "virtual" EA community building, for example a profession-focused group, or just engaging with the EA Forum, EA Slack channels, the EA subreddit, or the EA Twitter community)

  2. operations roles generally (including assistant roles), and

  3. activism / being active in party politics.

Fantastic post - look forward to sharing it with others in the future!

One note: is it possible to update the designers' summary? “Making things look pretty” may not communicate the value of their work, which is often highly nuanced and strategic.

Yes, fair point. I updated it to reflect this somewhat better.

It's probably covered by Example 4, but: international aid and development under the umbrella of diplomacy. I feel like there should be an EA NGO that has a high-level diplomatic interface with governments and the United Nations; unless I'm missing something, there isn't.
