This is a special post for quick takes by weeatquince. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

I think people working on animal welfare have more incentives to post during debate week than people working on global health.

The animal space feels (when you are in it) very funding constrained, especially compared to the global health and development space (and I expect it gets a higher % of its funding from EA / EA-adjacent sources). So along comes debate week and all the animal folk are very motivated to post and make their case and hopefully shift a few $. This could somewhat bias the balance of the debate. (Of course, the fact that one side of the debate feels it needs funding so much more is in itself relevant to the debate.)

I also expect prioritizing animals over global health in EA to correlate with being more engaged in online EA discussion, in part because I would guess:

  1. The animal advocacy community intersects so much with EA, whereas global health has a relatively larger non-intersecting space with EA, including the effective parts of each. Effective animal advocacy supporters engage relatively more with other EAs and people close to EA than do EA global health supporters.
    1. Basically all cost-effectiveness/cost-benefit discussion in animal advocacy is close to the EA community, whereas substantial cost-effectiveness/cost-benefit discussion in global health happens independently of EA. Effective animal advocacy also accounts for a larger share of the relevant research being done in its space. So relevant animal advocacy researchers are closer to EA than are relevant global health researchers to EA, on average.
    2. Global health seems to have less direct use/need for EA and EA-adjacent labour than animal advocacy does, normalizing by the relevant EA subcommunity sizes. The pool of people EA orgs can hire or engage for global health has a larger share of people further from EA than does the pool EA orgs can hire or engage for animals.
  2. EA volunteer opportunities drive further EA engagement, and there are more of those for animal advocacy per effective animal advocacy supporter than there are for global health per effective global health supporter.
  3. Within global health, it's hard to predictably beat GiveWell charities, and cruxes in global health are mostly pretty technical and kind of boring/tedious for most people, like checking studies and calculations in sheets. Many cruxes in animal advocacy are more accessible or interesting, like consciousness, moral weights, indirect effects, backfire risks. That drives more engagement among animal advocates.
  4. Animal advocates are more activisty.
  5. The Hive Slack channel pulls in non-EA animal advocates and directs them to EA in general and the EA Forum in particular (as well as EAA orgs, EA events, and so on). And I'd guess the share of EAA resources spent on community building is larger than the share of EA global health resources spent on community building. So participants are disproportionately likely to come from the animal advocacy community.

"Basically all cost-effectiveness/cost-benefit discussion in animal advocacy is close to the EA community"

Wow that's super interesting and surprises me a bit. How then does an animal advocacy org try to decide whether it tries to target a pig or a chicken corporate farm? I would have thought cost-effectiveness comparisons would have been easier in the animal welfare space given clearer (often) final outcomes and faster feedback loops.

This however I strongly disagree with 

"Within global health, it's hard to predictably beat GiveWell charities, and cruxes in global health are mostly pretty technical and kind of boring/tedious for most people, like checking studies and calculations in sheets. Many cruxes in animal advocacy are more accessible or interesting, like consciousness, moral weights, indirect effects, backfire risks. That drives more engagement among animal advocates."

The complexities and range of global health interventions are fascinating and exciting, with almost infinite new weird ideas (cause idea lists grow and grow every year; look at orgs like CEARCH). There's enormous discussion about how to value and compare tricky things like income, suffering, and now new measures like subjective wellbeing.

Also, how are cruxes in animal advocacy more accessible? Consciousness and moral weights are less accessible and harder to comprehend than human variables, where we can rely on direct reports of our own and others' experience and even triangulate, e.g. DALYs vs WELLBYs. Think how much (fantastic) effort had to go into the moral weights project to generate meaningful numbers like those. It took me a long time of digging to even partially comprehend what those numbers mean, whereas the makeup of human DALYs and QALYs is far easier to understand.

I don't even think it's so hard to beat top GiveWell charities really; it's just hard to beat them within the narrow-ish error bars GiveWell requires (which I love that they require). Things like lead exposure charities aren't even on their top list!

And on "Animal advocates are more activisty" - I agree with this statement, and maybe also that it means more engagement on the forum here. Are they more "doey" though? My very uncertain bet would be that global health EA people are out running orgs and doing direct work more than animal welfare EA people, and so have less time to pontificate on the forum, but I could easily be wrong about that. I'm a bit of a weird exception, I think.

Anyway, I think it's all very interesting :D

(I'm going to commit to not replying further, except maybe to quickly clarify, because I've already spent way too long on this comment.)

EAA = effective animal advocacy, basically the intersection of EA and animal advocacy.

"Wow that's super interesting and surprises me a bit. How then does an animal advocacy org try to decide whether it tries to target a pig or a chicken corporate farm? I would have thought cost-effectiveness comparisons would have been easier in the animal welfare space given clearer (often) final outcomes and faster feedback loops."

Several animal orgs do make decisions based on cost-effectiveness considerations. I'd just say the ones that do tend to be pretty close to EA. They're likely to be funded by EA grantmakers (Open Phil, EA Funds, ACE). Org leadership and other employees come to EAGs. Other conferences they and their employees attend also have EAs and are supported by EA grantmakers. Their funding and growth have depended a lot on EA funding. EA funders have probably substantially influenced program focus, e.g. towards corporate chicken welfare campaigns.

A lot of this seems true for GiveWell-recommended charities, too, although I'd guess to a lesser extent in relative terms.


About engagement with ideas/research between global health and animal welfare, I'd say:

  1. I think engaging with the ideas and research is probably less subjectively useful/decision-relevant to the average EA global health supporter than to the average EAA supporter:
    1. EA global health supporters just trust and defer to GiveWell a lot, and for some good reasons. GiveWell is very careful/rigorous, publishes marginal cost-effectiveness estimates and relies on strong evidence for their recommendations. The situation is worse in EAA.
    2. I'd guess there's more variance in beliefs and more disagreement about EAA priorities than about EA global health priorities among the respective supporters; for EAA, this is driven by consciousness, moral weights, asks, tactics, and region prioritization.
    3. GiveWell recommendations don't change very much over time, and there are few of them. There are more ACE recommendations and Open Phil and EA Animal Welfare Fund grantees, and there's more change in them. (GiveWell All Grants Fund grantees are large in number and probably change a decent amount, though.)
  2. EA global health research can be interesting, but I'd guess doesn't pull in the average EA global health supporter much or even very disproportionately relative to other EAs, because they defer so much to GiveWell and are already less likely to even be on the EA Forum. A large share of the commenters on your soaking beans post seem to be people who prioritize things other than global health, including animal welfare.
  3. Consciousness and moral weights are cruxy, interesting and can drive engagement, even if the details of the research are hard to understand. People will still talk about them.


"I don't even think its so hard to beat top GiveWell charities really, its just hard to beat them within the narrow-ish error bars GiveWell requires (which I love that they require). Things like Lead exposure charities aren't even on their top list!"

Fair, but these charities don't beat GiveWell recommendations if you're sufficiently difference-making ambiguity/risk averse or skeptical, which the average EA global health supporter might be. And EAs not like this have fewer barriers to prioritizing animal welfare, where things are less rigorous and the evidence is weaker.

And GiveWell has made grants to reduce lead exposure through their All Grants Fund, which they believe is more cost-effective in expectation than their recommended charities. But EA global health supporters might just donate directly to that fund without engaging much, instead simply trusting GiveWell. Though I'm really not sure.

That's true. 

However, even given these incentives, I would have expected more votes/interactions from people favouring global health - given that it is a very established field that feels instinctively good and is well-known to most. Animal welfare was supposed to be the underdog here.

Moreover, there were fewer arguments favouring global health, and they felt much less convincing (personal opinion, but this felt reflected in votes). 

So, although there is probably a bias to factor in, I still think that most people on the forum genuinely think animal welfare is the better choice for an additional $100m. 

Another data point might be how well organized the animal-advocacy folks seemed to be on the Manifund EA Community Choice project (although I believe Manifund did some outreach as well). I assumed some of the same reasons were in play there.

More generally, discussions on this topic have a flavor of GH being the known quantity and AW being the option with much more uncertainty. Stated differently, the crux for most participants was going to be predominantly how good the marginal dollar for AW is, more so than how good that dollar would be in GH. That's an easier topic for AW people to write on. Also, people are typically more motivated to write in favor of their own cause area than in an attempt to deflate the effectiveness of a different cause area.

Quickly throwing in a related dynamic. I suspect animal welfare folks have more free time to post online.

Career advancement in animal welfare is much more generalist than in global health & development. This means there aren't as many career goals to 'grind' towards, leaving more free time for public engagement. Alternative proteins feel like a space where one can specialize, but that's all I can think of. I'd love to know of other examples.

In contrast, global health & development has many distinct specialities that you have to focus on if you want to grow your career. It's not uncommon for someone's career to be built on an incredibly narrow topic like, say, the implications of decentralization for regulating groundwater pollution. There are even 'playbooks' for breaking into the space, and they rarely align with writing EA Forum posts, or really any public writing.
