
I recently experienced a jarring update to my beliefs about Transformative AI. Basically, I thought we had more time (decades) than I now believe we do (years) before TAI causes an existential catastrophe. This has had an interesting effect on my sensibilities about cause prioritization. While I applaud wealthy donors directing funds to AI-related Existential Risk mitigation, I don't assign a high probability of success to any of their funded projects. Moreover, it appears to me that there is essentially no room for additional funding in the denominations that non-wealthy donors (e.g. me) can provide.

I used to value traditional public health goals quite highly (e.g. I would direct donations to AMF). However, given that most of the returns on bed net distribution lie in a future beyond my current beliefs about TAI, this now seems to me like a bad moral investment. Instead, I'm much more interested in projects which can rapidly improve hedonic well-being (i.e. cause the greatest possible welfare boost in the near term). In other words, the probability of an existential AI catastrophe has caused me to develop neartermist sympathies. I can't find much written by other EAs considering this, and I have only begun thinking about it, but as a first pass GiveDirectly appears to serve this neartermist hedonic goal somewhat more directly.
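To make the discounting logic explicit, here is a minimal sketch with entirely made-up numbers (the benefit horizons, annual welfare units, and timelines are illustrative assumptions, not estimates from AMF, GiveDirectly, or anyone else):

```python
# Illustrative sketch only: every number below is a made-up assumption.

def truncated_benefit(annual_benefit, benefit_horizon_years, years_until_catastrophe):
    """Total benefit realized if the future is cut off after years_until_catastrophe."""
    realized_years = min(benefit_horizon_years, years_until_catastrophe)
    return annual_benefit * realized_years

# Bed nets: the welfare returns (life-years saved) accrue over decades.
amf_full  = truncated_benefit(1.0, 40, 40)   # no catastrophe within the benefit horizon
amf_short = truncated_benefit(1.0, 40, 5)    # catastrophe in ~5 years

# Cash transfers: most of the welfare boost arrives within a few years.
gd_full  = truncated_benefit(4.0, 5, 40)
gd_short = truncated_benefit(4.0, 5, 5)

print(f"AMF retains {amf_short / amf_full:.1%} of its value under short timelines")         # 12.5%
print(f"GiveDirectly retains {gd_short / gd_full:.1%} of its value under short timelines")  # 100.0%
```

The point is only structural: interventions whose returns accrue over decades lose most of their expected value under short timelines, while interventions whose returns arrive almost immediately lose little.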

If there's at least a 1% chance that we don't experience catastrophe soon, and we can have reasonable expected influence over no-catastrophe-soon futures, and there's a reasonable chance that such futures have astronomical importance, then patient philanthropy is quite good in expectation. Given my empirical beliefs, it's much better than GiveDirectly. And that's just a lower bound; e.g., investing in movement-building might well be even better.
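A minimal sketch of this expected-value argument, where every probability and payoff is a loudly hypothetical placeholder (the only figure taken from the comment is the 1% chance of no near-term catastrophe):

```python
# Illustrative expected-value comparison; all numbers are placeholder assumptions.

p_no_catastrophe  = 0.01   # "at least a 1% chance" of no near-term catastrophe
p_influence       = 0.1    # assumed chance patient funds meaningfully shape that future
value_if_it_works = 1e9    # stand-in for "astronomical importance" (welfare units)

ev_patient = p_no_catastrophe * p_influence * value_if_it_works

# GiveDirectly-style giving: near-certain but modest near-term welfare gain.
ev_givedirectly = 1.0 * 1e3   # placeholder near-term welfare units

print(f"Patient philanthropy EV: {ev_patient:,.0f}")      # 1,000,000
print(f"GiveDirectly EV:         {ev_givedirectly:,.0f}")  # 1,000
```

Under these made-up inputs the patient option dominates because the astronomical payoff swamps the small probabilities, which is the comment's point; the conclusion is only as strong as those inputs.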

Consider s-risk:

From your comment, I understand that you believe the funding situation for TAI is strong and not a limiting factor, and also that the likely outcomes of current interventions are not promising.

(Not necessarily personally agreeing with the above.) Given your view, I think one area that could still interest you is "s-risk". This is also relevant to your interest in alleviating massive suffering.

I think talking with CLR, or people such as Chi there, might be valuable (they might be happy to speak with you if you are a personal donor).

 

Leadership development seems valuable in longtermism and TAI

(Admittedly this is an overloaded, imprecise statement, but) the common wisdom that AI and longtermism are talent-constrained seems true. The ability to develop new leaders or new lines of work is valuable and can give returns, even if your beliefs turn out to be correct.

 

Prosaic animal welfare

Finally, you and other onlookers should be aware that animal welfare, especially the relatively tractable, "prosaic" suffering of farm animals, is one of the areas that has not received a large increase in EA funding.

The information below should be interesting to cause-neutral EAs. Note that it is based on private information:

  1. The current accomplishments in farm animal welfare are real and the current work is good. But there is a very large opportunity to help (many times more animals are suffering than have been directly helped so far).
  2. The amount of extreme suffering being experienced by farm animals is probably much worse than is commonly believed (this is directly addressed through EA animal welfare work and also motivates welfarist approaches). This level of suffering is not widely communicated because doing so does not help; for example, it would degrade the mental health of proponents to an unacceptable level. However, it is illogical to disregard these suffering levels when considering neartermist cause prioritization.

This animal welfare work would benefit from money and expertise. 

Notably, this is an area where EA has been able to claim significant tangible success (for the fraction of animals it has been able to help).
