This is a special post for quick takes by Ben Stevenson. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Has anybody changed their behaviour after the animal welfare vs global health debate week? A month or so on, I'm curious if anybody is planning to donate differently, considering a career pivot, etc. If anybody doesn't want to share publicly but would share privately, please feel free to message me.

Linking @Angelina Li's post asking how people would change their behaviour, and tagging @Toby Tremlett🔹 who might have thought about tracking this.

I redirected my giving from GiveWell to EA Animal Welfare Fund. I had been meaning to for a while (since the donation election), so wouldn't necessarily call it marginal, but it was the trigger.

I haven't exactly changed my behaviour, but the fact that I didn't read any arguments for donating to global health that I found particularly persuasive means that I'm slightly less likely to change any of my recurring donations (currently 100% animal welfare).

I'd love to know the answer to this question, but I haven't tracked it (I'm hoping we will get some information from a) the donation election - where people can comment on their vote and b) next year's EA survey). 

I've just written a blog post to summarise EA-relevant UK political news from the last ~six weeks.

The post is here: AI summit, semiconductor trade policy, and a green light for alternative proteins (substack.com)

Early November is the date for the UK’s summit on AI safety, according to leaks yesterday. Offers have been sent out for new AI Civil Service roles. British politics seems increasingly important to the AI safety world.

This is my attempt to justify the ways of Westminster to EA, and EA to Westminster. I’m spotlighting recent headlines on the AI summit, semiconductor trade policy, and alternative proteins.

I'm planning to circulate this around some EAs, but also some people working in the Civil Service, political consulting and journalism. Many might already be familiar with the stories. But I think this might be useful if I can (a) provide insightful UK political context for EAs, or (b) provide an EA perspective to curious adjacents. I'll probably continue this if I think either (a) or (b) is paying off.

(I work at Rethink Priorities, but this is entirely in my personal capacity).

Thanks for sharing Ben! As a UK national and resident I'm grateful for an easy way to be at least a little aware of relevant UK politics, which I otherwise struggle to manage.

Thanks Ben! Glad it was helpful!

EDIT 2024-06-10: We are no longer accepting applications. Thank you to all who got in touch.

The Animal Welfare Department at Rethink Priorities is recruiting volunteer researchers to support on a high-impact project!

We’re conducting a review on interventions to reduce meat consumption, and we’re seeking help checking whether academic studies meet our eligibility criteria. This will involve reviewing the full text of studies, especially methodology sections.

We’re interested in volunteers who have some experience reading empirical academic literature, especially postgraduates. The role is an unpaid volunteer opportunity. We expect this to be a ten-week project requiring approximately five hours per week, though your time commitment can be flexible depending on your availability.

This is an exciting opportunity for graduate students and early career researchers to gain research experience, learn about an interesting topic, and directly participate in an impactful project. The Animal Welfare Department will provide support and, if desired, letters of experience for volunteers.

If you are interested in volunteering with us, contact Ben Stevenson at bstevenson@rethinkpriorities.org. Please share either your CV, or a short statement (~4 sentences) about your experience engaging with empirical academic literature. Candidates will be invited to complete a skills assessment. We are accepting applications on a rolling basis, and will update this listing when we are no longer accepting applications.

Please reach out to Ben if you have any questions. If you know anybody who might be interested, please forward this opportunity to them!

Hey Ben! A few quick Qs:

  1. Did the team consider a paid/minimum wage position instead of an unpaid one? How did it decide on the unpaid positions?
  2. Is the theory of change for impact here mainly an "upskill students/early career researchers" thing, or for the benefits to RP's research outputs?
  3. What is RP's current policy on volunteers?
  4. Does RP expect to continue recruiting volunteers for research projects in the future?
     

Hi Bruce, thank you for your questions. I’m leading this project and made the decision to recruit volunteers, so thought I’d be best positioned to respond. (And Ben’s busy protesting for shrimp welfare today anyway!)

  1. Did the team consider a paid/minimum wage position instead of an unpaid one? How did it decide on the unpaid positions?

Yes, we would prefer to offer additional paid positions. However, given the budget for this project, we were not able to offer such positions. We regularly receive unsolicited inquiries from people interested in volunteering for our research. There is not always a good fit, but since this project is highly modular, allowing people to meaningfully contribute with just a few hours of their time, we decided to provide a formal volunteer opportunity.

  2. Is the theory of change for impact here mainly an "upskill students/early career researchers" thing, or for the benefits to RP's research outputs?

The primary theory of change is to improve the evidence base for interventions to reduce animal product usage, thus allowing more and better interventions to be implemented and reducing the number of animals harmed by factory farming. RP’s research outputs are a mediator in this theory of change. The volunteer opportunity itself also represents a chance to upskill, but ultimately the goal for all involved is to benefit non-human animals.

  3. What is RP's current policy on volunteers?

RP occasionally considers and engages with volunteers for some projects, especially where relatively small time-limited contributions are possible.

  4. Does RP expect to continue recruiting volunteers for research projects in the future?

In practice, this will depend on the project and whether there are other opportunities that would be an appropriate fit.
