
UPDATE: I now consider my 2022 Interested in EA/longtermist research careers? Here are my top recommended resources a better starting point than this older post, but this post might be useful after you've read that 2022 one.

Cross-posted to LessWrong.

I've had calls with >30 people who are interested in things like testing their fit for EA-aligned research careers, writing on the Forum, "getting up to speed" on areas of EA, etc. (This is usually during EA conferences.) 

I gradually collected a set of links and notes that I felt that many such people would benefit from seeing, then turned that into a Google Doc. Many people told me they found that doc useful, so I'm now (a) sharing it as a public post, and (b) still entertaining the hypothesis that those people were all just vicious liars and sycophants, of course. 

Disclaimers

  • Not all of these links/notes will be relevant to any given person
  • These links/notes are most relevant to people interested in (1) research roles, (2) roles at explicitly EA organisations, and/or (3) longtermism
    • But this is just because that’s what I know best
      • There are of course many important roles that aren’t about research or aren’t at EA orgs!
      • And I'm happy with many EAs prioritising cause areas other than longtermism
    • But, in any case, some of the links/notes will also be relevant to other people and pathways
  • This doc mentions some orgs I work for or have worked for previously, but the opinions expressed here are my own, and I wrote the post (and the doc it evolved from) in a personal capacity

Regarding writing, the Forum, etc.

Research ideas

Programs, approaches, or tips for testing fit for (longtermism-related) research

Programs

Not all of these things are necessarily "open" right now. 

Here are things I would describe as research training programs (in alphabetical order to avoid picking favourites):

Note: I know less about what the opportunities at the Center for Reducing Suffering and the Nonlinear Fund would be like than I know about what the other opportunities would be like, so I'm not necessarily able to personally endorse those two opportunities. 

Other things

Getting “up to speed” on EA, longtermism, x-risks, etc.

Other

I'd welcome comments suggesting other relevant links, or just sharing people's own thoughts on any of the topics addressed above!

Comments (10)



I definitely agree that one of the best things applicants interested in roles at organizations like ours can do to improve their odds of being a successful researcher is to read and write independent research for this forum and get feedback from the community.

I think another underrated way to acquire a credible and relevant credential is to become a top forecaster on Metaculus, Good Judgement Open, or Facebook’s Forecast app.

Thanks for sharing, Michael!

I think the Center for Reducing Suffering's Open Research Questions may be a helpful addition to Research ideas. (Do let me know if you think otherwise!)

Relatedly, CRS has an internship opportunity.

Also, perhaps this is intentional but "Readings and notes on how to do high-impact research" is repeated twice in the list.

Relatedly, CRS has an internship opportunity.

Thanks for mentioning this - I've now added it to the "Programs [...]" section :)

Also, perhaps this is intentional but "Readings and notes on how to do high-impact research" is repeated twice in the list.

This was intentional, but I think I no longer endorse that decision, so I've now removed the second mention.

I think the Center for Reducing Suffering's Open Research Questions may be a helpful addition to Research ideas. (Do let me know if you think otherwise!)

I definitely think that that list is within-scope for this document, but (or "and relatedly") I've already got it in the Central directory for open research questions that's linked to from here.

There are many relevant collections of research questions, and I've already included all the ones I'm aware of in that other post. So I think it doesn't make sense to add any here unless I think the collection is especially worth highlighting to people interested in testing their fit for (longtermism-related) research. 

I think the 80k collection fits that bill due to being curated, organised by discipline, and aimed at giving a representative sense of many different areas. I think my "Crucial questions" post fits that bill due to being aimed at overviewing the whole landscape of longtermism in a fairly comprehensive and structured way (though of course, there's probably some bias in my assessment here!). 

I think my history topics collection fits that bill, but I'm less sure. So I've now added below it the disclaimer "This is somewhat less noteworthy than the other links".

I think my RSP doc doesn't fit that bill, really, so in the process of writing this comment I've decided to move that out of this post and into my Central directory post. 

(The fact that this post evolved out of notes I shared with people also helps explain why stuff I wrote has perhaps undue prominence here.)

Here's one other section that was in the doc. I'm guessing this section will be less useful to the average person than the other sections, so I've "demoted" it to a comment.

Some quick thoughts regarding the value of posting on the Forum and/or conducting independent research, in my experience

  • Note that:
    • This section is lightly edited from what I wrote ~August 2020; I didn't bother fully updating it with newer evidence and thoughts
    • This may of course not generalise to other people.
    • Some of this work was independent, some was associated with Convergence Analysis (who I worked for), and some was in between
  • Doing this definitely improved my thinking, my network, and how well-known I am among EAs
    • Not sure how much the third thing actually matters
  • Doing this seems to have accelerated my career trajectory via the above and via providing evidence of my abilities
  • I have some evidence of impact from my work
  • The network-building/signalling from this may have also helped me have impact in other ways

Some people might also find it useful to check out EA-related Facebook groups, which there's a directory for here: https://www.facebook.com/EffectiveGroups/

Thanks, Michael!

The list of summer research training programs seems helpful. There might be some newer ones that are worth adding too.

Yeah, thanks for pointing this out! SERI seems cool to me, and I've now added a link to that form :)

(I actually added the link right before you made your comment, I think, due to someone else highlighting it to me in a different context. But it was indeed absent from the initial version of the post.)
