This is a special post for quick takes by Matt_Lerner. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Has there been any formal probabilistic risk assessment on AI X-risk? e.g. fault tree analysis or event tree analysis — anything of that sort?

Here’s a fault tree analysis: https://arxiv.org/abs/2306.06924

Review of risk assessment techniques that could be used: https://arxiv.org/abs/2307.08823

Applying ideas from systems safety to AI: https://arxiv.org/abs/2206.05862

Applying ideas from systems safety to AI (part 2): https://arxiv.org/abs/2302.02972

Applying AI to ideas from systems safety (lol): https://arxiv.org/abs/2304.01246

I recently learned of this effort to model AI x-risk, which may be similar to the sort of thing you're looking for, though I don't think they actually put numbers on the parameters in their model, and they don't use any well-known formal method. Otherwise I suppose the closest thing is the Carlsmith report, which is a probabilistic risk assessment, but again not using any formal method.

Under what circumstances is it potentially cost-effective to move money within low-impact causes?

This is preliminary and most likely somehow wrong.  I'd love for someone to have a look at my math and tell me if (how?) I'm on the absolute wrong track here.

Start from the assumption that there is some amount of charitable funding that is resolutely non-cause-neutral. It is dedicated to some cause area Y and cannot be budged. I'll assume for these purposes that DALYs saved per dollar is distributed log-normally within Cause Y, with log-scale parameters μ and σ.

I want to know how impactful it might be, in general terms, to shift money from the median funding opportunity in Cause Y to the 90th percentile opportunity. So I want the difference between the value of spending a dollar at those two points on the impact distribution.

The log-normal distribution has the following quantile function: Q(p) = exp(μ + σΦ⁻¹(p)), where Φ⁻¹ is the standard normal quantile function.

So the value to be gained by moving from p = 0.5 to p = 0.9 is given by Q(0.9) − Q(0.5) = exp(μ + σΦ⁻¹(0.9)) − exp(μ + σΦ⁻¹(0.5)).

This simplifies down to exp(μ + 1.2816σ) − exp(μ), using Φ⁻¹(0.9) ≈ 1.2816 and Φ⁻¹(0.5) = 0.

Or exp(μ)(exp(1.2816σ) − 1).

Not a pretty formula, but it's easy enough to see two things which were pretty intuitive before this exercise. First, you can squeeze out more DALYs by moving money in causes where the mean DALYs per dollar across all funding opportunities is higher; and, for a given mean, moving money is higher-value where there's more variation across funding opportunities (roughly, since the variance grows with σ but isn't precisely given by it). Pretty obvious so far.

Okay, what about making this money-moving exercise cost-competitive with a direct investment in an effective cause, with a benchmark of $100/DALY? For that, and for a given investment amount $x, and a value c such that an expenditure of $c causes the money in cause Y to shift from the median opportunity to the 90th-percentile one, we'd need to satisfy the following condition: x(exp(μ + 1.2816σ) − exp(μ)) ≥ c/100.

Moving things around a bit, this becomes c/x ≤ 100 exp(μ)(exp(1.2816σ) − 1)...

Which, given reasonable assumptions about the values of c and x, holds true trivially for larger means and variances across cause Y.  The catch, of course, is that means and variances of DALYs per dollar in a cause area are practically never large, let alone in a low-impact cause area. Still, the implication is that (a) if you can exert inexpensive enough leverage over the funding flows within some cause Y and/or (b) if funding opportunities within cause Y are sufficiently variable, cost-effectiveness is at least theoretically possible.

So just taking an example: Our benchmark is $100 per DALY, or 0.01 DALYs per dollar, so let's just suppose we have a low-impact Cause Y that is between three and six orders of magnitude less effective than that, with a 95% CI of [0.00000001,0.00001], or one for which you can preserve a DALY for between $100,000 and $100 million, depending on the opportunity. That gives mu = -14.97 and sigma = 1.76. Plugging those numbers into the above, we get approximately c/x ≤ 100 × exp(−14.97) × (exp(1.2816 × 1.76) − 1) ≈ 0.00027...

...suggesting, I think, that if you can get roughly 4000:1 leverage when it comes to spending money to move money, it can be cost-effective to influence funding patterns within this low-impact cause area.
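As a sanity check on the arithmetic, here's a minimal Python sketch of the worked example (the 1.2816 constant is the standard normal quantile at p = 0.9; everything else follows from the 95% CI above):

```python
import math

# 95% CI on DALYs per dollar for the hypothetical low-impact Cause Y
lo, hi = 1e-8, 1e-5

# Log-normal parameters implied by treating the CI as mu +/- 1.96 sigma
# on the log scale
mu = (math.log(lo) + math.log(hi)) / 2              # ~ -14.97
sigma = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # ~ 1.76

z90 = 1.2816  # standard normal quantile at p = 0.9

# DALYs per dollar gained by moving a dollar from the median
# opportunity to the 90th-percentile opportunity
gain = math.exp(mu) * (math.exp(z90 * sigma) - 1)

# Largest cost-effective leverage ratio c/x at a $100/DALY benchmark
max_c_over_x = 100 * gain
print(1 / max_c_over_x)  # roughly 3700, i.e. leverage of ~4000:1
```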

There are obviously a lot of caveats here (does a true 90th percentile opportunity exist for any Cause Y?), but this is where my thinking is at right now, which is why this is in my shortform and not anywhere else.

Interesting. You might get more comments as a top-level post.

I guess a more useful way to think about this for prospective funders is to move things about again. Given that you can exert c/x leverage over funds within Cause Y, then you're justified in spending c to do so provided you can find some Cause Y such that the distribution of DALYs per dollar meets the condition Q(0.9) − Q(0.5) ≥ c/(100x)...

...which makes for a potentially nice rule of thumb. When assessing some Cause Y, you need only ("only") identify a plausibly best or close-to-best opportunity, as well as the median one, and work from there.

Obviously this condition holds for any distribution and any pair of quantiles, but the worked example above only indicates to me that it's a plausible condition for the log-normal.
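That rule of thumb is easy to operationalize. Here's a hedged sketch with purely illustrative numbers (no real cause area is being estimated here):

```python
def leverage_is_cost_effective(cost_per_daly_median, cost_per_daly_p90,
                               c, x, benchmark=100.0):
    """Check the condition Q(0.9) - Q(0.5) >= c / (benchmark * x),
    with the Q's expressed as DALYs per dollar.

    cost_per_daly_median / cost_per_daly_p90: dollars per DALY at the
    median and (roughly) 90th-percentile opportunities within Cause Y.
    c: dollars spent to move the money; x: dollars moved.
    """
    gain = 1 / cost_per_daly_p90 - 1 / cost_per_daly_median
    return gain >= c / (benchmark * x)

# Illustrative: moving $1M from a $1M/DALY opportunity to a $100k/DALY one
print(leverage_is_cost_effective(1e6, 1e5, c=500, x=1_000_000))    # True
print(leverage_is_cost_effective(1e6, 1e5, c=2_000, x=1_000_000))  # False
```

With these made-up figures, spending $500 to move $1M passes the $100/DALY benchmark, while spending $2,000 does not.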

[Two density plots: days from first confirmed case to first school closures and to first workplace closures, by electoral system]


The usual caveats apply here: cross-country comparisons are often BS, correlation is not causation, I'm presenting smoothed densities instead of (jagged) histograms, etc, etc...

I've combined data on electoral system design and covid response to start thinking about the possible relationships between electoral system and crisis response. Here's some initial stuff: the gap, in days, between first confirmed cases and first school and workplace closures. Note that n ≈ 80 for these two datasets, pending some cleaning and hopefully a fuller merge between the different datasets.

To me, the potentially interesting thing here is the apparently lower variability of PR government responses. But I think there's a 75% chance that this is an illusion... there are many more PR governments than others in the dataset, and this may just be an instance of variability decreasing with sample size.

If there's an appetite here for more like this, I'll try and flesh out the analysis with some more instructive stuff, with the predictable criticisms either dismissed or validated.

What does PR stand for?

Proportional representation?

Proportional representation

It seems like there's a significant need right now to identify what the plausible relationship is between mask-wearing and covid19 symptoms. The virus is now widespread enough that a very quick Mechanical Turk survey could provide useful information.

Collect the following:

• Age group (5 categories)

• Wear a mask in public 1 month ago? (y/n)

• If yes to above, type of mask? (bandana/N95+/surgical/cloth/other)

• Sick with covid19 symptoms in past month? (y/n)

• Know anyone in everyday life who tested positive for covid19 in past month? (y/n)

• Postal code (for pop. density info)

Based on figures from this Gallup piece, a back-of-the-envelope calculation says we could get usable results from surveying 20,000 Americans -- but we could work with a much smaller sample if we survey in a country where the virus is more prevalent.

Or of course, restrict our sample to a smaller geographic region in the US with more prevalence.
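For what it's worth, here's a rough two-proportion sample-size sketch for a survey like this. The symptom rates and mask-wearing share below are placeholder assumptions, not figures from the Gallup piece:

```python
import math
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.8):
    """Normal-approximation sample size per group to detect a
    difference between proportions p1 and p2 (two-sided test)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

# Placeholder guesses: 5% of non-mask-wearers vs 3% of mask-wearers
# report covid19 symptoms in the past month
n = n_per_group(0.05, 0.03)

# If only ~35% of respondents are non-mask-wearers (assumption, not a
# Gallup figure), the smaller group determines the total sample needed
mask_share = 0.65
total = math.ceil(n / min(mask_share, 1 - mask_share))
```

Under these placeholder rates the total comes out in the low thousands; higher prevalence (as in a harder-hit country or region) widens the detectable gap and shrinks the required sample further, consistent with the point above.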

The EA movement is disproportionately composed of highly logical, analytically minded individuals, often with explicitly quantitative backgrounds. The intuitive-seeming folk explanation for this phenomenon is that EA, with its focus on rigor and quantification, appeals to people with a certain mindset, and that the relative lack of diversity of thinking styles in the movement is a function of personality type.

I want to reframe this in a way that I think makes a little more sense: the case for an EA perspective is really only made in an analytic, quantitative way. In this sense, having a quantitative mindset is actually a soft prerequisite for "getting" EA, and therefore for getting involved.

I don't mean to say that only quantitative people can understand the movement, or that there's something intellectually very special about EAs.

Rather: very few people would disagree that charity should be effective. Even non-utilitarians readily agree that in most contexts we should help as many people as we can. But the essential concepts for understanding the EA perspective are highly unfamiliar to most people.

  • Expected value
  • Cost-benefit analysis
  • Probability
  • An awareness of the abilities and limitations of social science

You don't need to be an expert in any of these areas to "get" EA. You just need to be vaguely comfortable with them in the way that people who have studied microeconomics or analytic philosophy or mathematics are, and most other people aren't.

This may be a distinction without a difference, but I want to raise the perspective that the composition of the EA movement is less about personality types and more about intellectual preparation.

Epistemic status: Pure opinion, but based on a lot of real-world experience

Given the number of non-analytic people involved in EA, I don't think having a quantitative mindset is a prerequisite. 

I've known or known of many people for whom the essential concept of "expected value" read as "if you want to buy something good, buy it for a low price when you can", which doesn't require any major intuitive leaps from everyday life. Same for "probability", which reads to many people as "do what has the best chance of working out" (a lot of people seem to understand this when it applies to EA issues like supporting GiveWell-type charities vs. charities with murkier missions).

I think having intellectual preparation of the kind you mentioned can be helpful, but I also think that there are more important reasons EA seems to have such a quantitative concentration:

  1. The types of EA orgs that exist in the public eye tend to have roles that lean very analytical. It's not surprising that the average GiveWell researcher is very comfortable with quantitative thinking, but this doesn't tell us much about the average Giving What We Can member (most of whom quietly donate a lot of money to excellent charities without rising to "public attention" among EAs).
  2. People interested in EA tend to promote it more to friends than strangers, which creates a natural bubble effect; people who get involved with EA now are more likely than chance to resemble people who got involved early on (e.g. philosophers, economists, and tech folks). If you look at people who got into EA because they read about it in a mainstream news outlet, or happened to pick up Will MacAskill's book when it was on sale somewhere, I think you'll find a weaker quantitative skew than with people who were introduced by friends. (This is a prediction I don't yet have any way to validate.)

Thanks for your thoughts. I wasn't thinking about the submerged part of the EA iceberg (e.g. GWWC membership), and I do feel somewhat less confident in my initial thoughts.

Still, I wonder if you'd countenance a broader version of my initial point: that there is a way of thinking that is not itself explicitly quantitative, but that is nonetheless very common among quantitative types. I'm tempted to call this 'rationality,' but it's not obvious to me that this thinking style is as all-encompassing as what LW-ers, for example, mean when they talk about rationality.

The examples you give of commonsensical versions of expected value and probability are what I'm thinking about here: perhaps the intuitive, informal versions of these concepts are soft prerequisites. This thinking style is not restricted to the formally trained, but it is more common among them (because it's trained into them). So in my (revised) telling, the thinking style is a prerequisite, and explicitly quantitative types are overrepresented in EA simply because they're more likely to have been exposed to these concepts in either a formal or informal setting.

The reason I think this might be important is that I occasionally have conversations in which these concepts—in the informal sense—seem unfamiliar. "Do what has the best chance of working out" is, in my experience, a surprisingly rare way of conducting everyday business in the world, and some people seem to find it strange and new to think in that fashion. The possible takeaway is that some basic informal groundwork might need to be done to maximize the efficacy of different EA messages.

I basically agree that having intuitions similar to those I outlined seems very important and perhaps necessary for getting involved with EA. (I think you can be "interested" without those things, because EA seems shiny and impressive if you read certain things about it, but not having a sense for how you should act based on EA ideas will limit how involved you actually get.) Your explanation about exposure to related concepts almost definitely explains some of the variance you've spotted.

I spend a lot of my EA-centric conversations trying to frame things to people in a non-quantitative way (at least if they aren't especially quantitative themselves).

I'm a huge fan of people doing "basic groundwork" to maximize the efficacy of EA messages. I'd be likely to fund such work if it existed and I thought the quality was reasonably high. However, I'm not aware of many active projects in this domain; ClearerThinking.org and normal marketing by GiveWell et al. are all that come to mind, plus things like big charitable matches that raise awareness of EA charities as a side effect. 

Oh, and then there's this contest, which I'm very excited about and would gladly sponsor more test subjects for if possible. Thanks for reminding me that I should write to Eric Schwitzgebel about this.

In general, I'm skeptical about software solutionism, but I wonder if there's a need/appetite for group decision-making tools. While it's unclear exactly what works for helping groups make decisions, it does seem like a structured format could provide value to lots of organizations. Moreover, tools like this could provide valuable information about what works (and doesn't).
