
Key points 

  • As of 2022, Rethink Priorities (RP) is doing more consultancy-like research, much of which is not (or not yet) published, especially in the areas of global health and development (including climate change) and AI governance.
  • To date, RP has conducted over 50 person-years of work. You can find all our published research here.
  • We have spent 38% of our time this year so far working on animal welfare research, 27% on longtermism, 12.5% on surveys, and 16% on global health and development.
  • Our plan is to launch a new Worldview Investigations team.
  • We are setting up a Special Projects department to help launch promising initiatives. We are currently fiscally sponsoring and operationally supporting projects like Epoch, an insect welfare project, Unjournal, EA Market Testing, Condor Camp, and EA Pathfinder.
  • By the end of August, we will have onboarded 25 new staff in research and operations and will have worked with an additional seven research fellows. We also work with 28 contractors for various projects. Our staff count is planned to reach almost 50 full-time equivalents by the end of the year. Subscribe to our newsletter if you want to hear about job openings, events, and research.
  • RP receives funding from Open Philanthropy, EA Funds, FTX Future Fund, Survival and Flourishing Fund, Animal Charity Evaluators, as well as from individual donors, both small and large. We are looking for more donations, especially unrestricted ones, to meet our impact aspirations. You can donate here.

Our mission

Rethink Priorities’ mission is to generate the most impact we can for others in the present and the long-term future. Using evidence and reason, we identify where resources would be most effective and help direct them there. We do this by conducting critical research to inform policymakers and philanthropists, and by guiding the development of new organizations to address key problems. Our work covers important and neglected cause areas, including animal welfare, artificial intelligence, climate change, global health and development, and other work to safeguard a flourishing long-term future. We also aim to understand and support effective altruism, the community of people focused on these issues.

We described how we see our path to impact in our strategy post at the end of last year. The next impact assessment is due in late 2022. 

In the coming months, we will strive to maintain high quality standards for our work and aim to increase the number of research reports that can be published in academic journals. We believe that our strong operations and our ability to hire remotely and internationally help our efforts to scale by finding and integrating the best researchers. At the same time, we hope that providing opportunities to early-career researchers will grow effective altruism by getting more talented people skilled at working on important problems. Having capable managers and leaders will continue to be integral to the organization’s success as we grow.

Staff growth

In our last hiring round, we received 2,172 applications from 1,636 people for 17 different roles; we estimate approximately one third of these applicants are committed EAs. Since late 2021, we have hired 16 permanent staff members and seven fellows, and will have nine more permanent hires joining by the end of August. Another four people have accepted offers to start permanent positions by the end of October. We are currently working with 28 contractors, including 12 for the Moral Weight Project and eight for various Special Projects. Six of RP’s new managers are internal promotions; another five are external hires.

Kieran Greig joined RP as Chief Strategy Analyst this month. We also started hiring executive research assistants for our Co-CEOs as well as research assistants for every team. This added capacity frees up more of our executive staff’s time for high-level strategy work.

You can see our team page here.

As of publication, we plan for our staff count to reach 48.8 full-time equivalents (FTE) by the end of 2022, comprising 38 FTE focused on research and 10.8 FTE on operations.

In the first half of 2022, we spent 38% of our time working on research relevant to animal welfare (farmed animals, wild animals, and invertebrates), 27% on longtermism, 12.5% on surveys (including EA movement research), 16% on global health and development, and 6.5% on other research projects. Although the proportions have changed relative to last year, every team has been able to expand its research capacity.

We have spent USD 2,618,633 so far this year.

Strategic updates

RP is starting a new Special Projects Team (housed within the Operations Department) to support new initiatives, which will include: 

  • Launching our own projects: The Special Projects Team will assist in launching megaprojects (ambitious, cost-effective projects that scalably use EA’s capital overhang). The current focus is on longtermist and “meta EA” megaprojects (see below for examples), although the team may be involved in other cause areas in the future. Each megaproject would start internally and eventually become independent.
  • Incubating others’ projects: Given our strong operations, RP envisions acting as a full-service fiscal sponsor for promising EA groups on an invitation-only basis. This structure could enable strong teams to focus on their core work rather than the day-to-day operations of their organization.

At the moment, we are fiscally sponsoring and operationally supporting the following projects:

  • Epoch (forecasting the development of transformative AI)
  • Convening AI policy experts in Washington
  • An insect welfare project
  • Unjournal (open platform for research relevant to global priorities)
  • EA Market Testing (how to best promote effective giving and action)
  • Condor Camp (EA and longtermism training for talented Latin American students)
  • EA Pathfinder (advice for mid-career professionals to switch into EA work)

Research

Some of our departments or teams have already published on the EA Forum or elsewhere, while others have been doing substantial work behind the scenes. Our researchers are preparing more reports for publication in the next few months. The following list contains a few highlights, but is not a comprehensive overview of all of RP’s research projects.

Longtermism Department

General Longtermism

AI Governance and Strategy

  • Abi Olvera is working on an AI policy database, intended to list and provide information on all existing AI policy ideas that it might be feasible and good to implement in the near or medium term.
  • Ashwin Acharya and Alex Lintz are organizing an AI strategy retreat in Washington, D.C. later this year with around 35 relevant researchers and practitioners. They are also eliciting these people’s theories of change for AI governance, writing them up, drawing out their implications, and identifying cruxes between different theories.
  • Ben Cottier is investigating the character of AI diffusion: how fast and by what mechanisms AI technologies spread, what strategic implications that has (e.g., for AI race dynamics), and what interventions could be pursued to influence diffusion.
  • Max Räuker is working on a survey of a set of people knowledgeable about AI existential risk to elicit their views on various possible intermediate goals for AI governance.
  • Shaun Ee is working on a report on “Defense in Depth against Catastrophic AI Incidents,” explaining and making the case for a strategy of using multiple overlapping defense layers for AI risk to ensure that if any one layer fails, the rest will still prevent serious incidents from happening.

Surveys Department

  • David Moss and Jamie Elsey wrote up their investigations about how many people have heard of EA.
  • The team is also working on the next iteration of the EA Survey and conducting numerous polls, surveys and analyses on behalf of different EA organizations.
  • They are currently working with 1DaySooner to research attitudes towards human challenge trials.
  • Willem Sleegers, together with other RP staff, is developing a Wild Animal Welfare Scale for measuring the degree to which individuals care about the welfare of animals living in the wild, which will be submitted to an academic journal.
  • A large majority of the Survey Department’s projects are private requests (surveys, experiments, polling, and focus groups) from core EA organizations, with the rate of requests having increased substantially in recent months.
  • We presently have to turn down some large commissions due to lack of staff capacity, and lack of funds in place to expand our team (or to maintain the team at its current size). That said, we would still encourage organizations to approach us to see whether we have capacity for any particular project.

Animal Welfare Department

Moral Weight

  • We held an academic conference on interspecies comparisons of welfare (recordings available) together with the ASENT project at the London School of Economics.
  • After the departure of Jason Schukraft to work at Open Philanthropy, our Moral Weight Project is now led by Bob Fischer.
  • With a team of 12 academic contractors, the project team reviewed 95 welfare-relevant traits across 11 animal species.
  • Bob Fischer wrote a paper for academic audiences that explains how our moral weight work can inform interspecies welfare comparisons, which is useful even for researchers who aren’t utilitarians and don’t share an interest in cause prioritization.
  • Adam Shriver produced another paper on the limits of neuron counts as proxies for animals’ relative moral weights.
  • Bob Fischer and Emily Sandall wrote a report on whether insects raised for food and feed show large differences in either the probability of sentience or capacity for welfare across life stages.

Global Health and Development Department

  • Ruby Dickson and Jenny Kudymowa estimated the cost-effectiveness of some global health organizations.
  • Ruby Dickson and Greer Gosnell evaluated various livelihood interventions.
  • Jenny Kudymowa and Bruce Tsai reviewed the effectiveness of prizes in spurring innovation.
  • Bruce Tsai and Jenny Kudymowa examined interventions to increase scientific capacity in Sub-Saharan Africa.

Climate Change

  • Greer Gosnell and Bruce Tsai conducted a literature review on damage functions of integrated assessment models in climate change.
  • Greer Gosnell and Bruce Tsai mapped the climate philanthropy landscape.
  • Greer Gosnell and Ruby Dickson studied anti-deforestation initiatives.
  • Greer Gosnell and Ruby Dickson looked into funding gaps and bottlenecks to the deployment of carbon capture, utilization, and storage technologies.
  • Ruby Dickson and Greer Gosnell attempted to quantify the potential economic growth benefits of clean energy R&D.

(Note: Public versions of this department’s reports will be available later this year.)

Worldview Investigations

  • RP is planning to launch a class of projects on worldview investigations. We will examine crucial questions that may have a huge bearing on how philanthropic dollars are allocated among humans and different animal species, and between present and future generations. One such project, already underway, is our work on interspecies comparisons of moral weight.
  • Funding would be most welcome to help us launch this new area of our work.

Funding opportunities

Rethink Priorities’ most urgent funding need is for unrestricted donations, which would help ensure that we have the ability to direct funds to where they would be most effective and that we can react quickly to new opportunities that arise. We have often had the greatest impact when we had the flexibility to explore new potential avenues of research, and we’ve only been able to do this through unrestricted funding.

However, given that we’re often asked about our current funding needs in each of the cause areas in which we work, we have included the table below.

We indicate the approximate amounts of funding we’d like to raise by year-end 2022 to sustain and continue our work, as well as our fundraising goals for growing each program. These figures will be updated on our donation page every month. We are making plans for further growth, which we will discuss on the EA Forum in our next impact and strategy post in November.

| Research Area | Low (USD) | High (USD) |
| --- | --- | --- |
| Animal Welfare | $0.88M | $2.88M |
| Longtermism | $1.3M | $1.5M |
| Surveys | $1.1M | $1.8M |
| Global Health and Development | $0.78M | $0.8M |
| Worldview Investigations | $0 | $0.93M |
| Total | $4.06M | $7.91M |
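
The Total row is simply the sum of the program-level figures. For readers who want to check or adapt the numbers, here is a minimal illustrative sketch in Python (figures copied from the table above, in millions of USD; this is just a sanity check of the arithmetic, not RP tooling):

```python
# Fundraising goals per research area, in millions of USD: (low, high).
# Figures copied from the table above.
goals = {
    "Animal Welfare": (0.88, 2.88),
    "Longtermism": (1.30, 1.50),
    "Surveys": (1.10, 1.80),
    "Global Health and Development": (0.78, 0.80),
    "Worldview Investigations": (0.00, 0.93),
}

# Sum the low and high columns separately.
low_total = sum(low for low, _ in goals.values())
high_total = sum(high for _, high in goals.values())

print(f"Total: ${low_total:.2f}M to ${high_total:.2f}M")
# Prints: Total: $4.06M to $7.91M
```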


You can make your contribution here. We accept major credit cards, PayPal, Venmo, bank transfers, cryptocurrency donations, and stock transfers.

If you have questions about donation opportunities, please email or book a meeting with our Director of Development. 

Acknowledgements

This post was written by Rachel Norman and Janique Behman from Rethink Priorities. Thanks to Marcus A. Davis, Abraham Rowe, Tom Hird, Bob Fischer, Daniela Waldhorn, Michael Aird, and David Moss for helpful feedback. If you like our work, please consider subscribing to our newsletter. You can see more of our research here.

Comments

I'm a bit confused by this bit:

"We presently have to turn down some large commissions due to lack of staff capacity, and lack of funds in place to expand our team (or to maintain the team at its current size)."

Do you charge for your commissions? I'm struggling to get my head around why the ability to take commissions could be constrained by both lack of funding and staff capacity.

Thoughts I have about what might explain it / what you might mean:

  • you don't actually charge and so more commissions just means more work for free. (Or you accept low paid commissions.)
  • commissions don't always come at convenient times so sometimes there are bursts of too much work to do / too many requests, compared to some quieter periods where researchers have to focus more on their own independently generated projects.
  • you have both the research talent and the funding, it's just that there's a time delay for hiring, onboarding etc before you can convert both components into increased capacity.

Clarification on which of these, if any, seems closest to RP's situation would be welcome. Thanks!

The short answer is simply that the vast majority of projects requested of us are highly time-sensitive (i.e., orgs want them completed within very fast timelines), so we need to have the staff already in place if we’re to take them on; it’s not possible to hire staff in time to complete them, even if the org is offering more than enough funding (e.g., 6 or 7 figures) to make it happen.

This is particularly unfortunate, since we want to grow our team to take on more of these projects, and have repeatedly turned down many highly skilled applicants who could do valuable work, exclusively due to lack of funding.

Still, I would definitely encourage people to reach out to us to see whether we have capacity for projects.

Thanks for writing this up! Really appreciate the clear and transparent writeup across hiring, output, and financial numbers, and think that more orgs (including Manifold!) should strive for this level of clarity. One thing I would have been curious to see is how much money came in from each funding source, haha.

I set up a prediction market to see how RP will do against its funding goals:

Very cool, thanks, Austin!
We will publish another post on the Forum with updated funding goals by mid-November at the latest. I'm curious to see how our plans and ambitions might have changed by then. 
[Note: I'm working as Rethink Priorities' Director of Development.]

One such project, already underway, is our work on interspecies comparisons of moral weight.

FYI this link gives me an "Access Denied" error.

It should link to the section above; we'll fix it. Thanks!
 
