This is a special post for quick takes by Matt_Lerner. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Has there been any formal probabilistic risk assessment on AI X-risk? e.g. fault tree analysis or event tree analysis — anything of that sort?

Here’s a fault tree analysis: https://arxiv.org/abs/2306.06924

Review of risk assessment techniques that could be used: https://arxiv.org/abs/2307.08823

Applying ideas from systems safety to AI: https://arxiv.org/abs/2206.05862

Applying ideas from systems safety to AI (part 2): https://arxiv.org/abs/2302.02972

Applying AI to ideas from systems safety (lol): https://arxiv.org/abs/2304.01246

I recently learned of this effort to model AI x-risk, which may be similar to the sort of thing you're looking for, though I don't think they actually put numbers on the parameters in their model, and they don't use any well-known formal method. Otherwise I suppose the closest thing is the Carlsmith report, which is a probabilistic risk assessment, but again not using any formal method.

Under what circumstances is it potentially cost-effective to move money within low-impact causes?

This is preliminary and most likely somehow wrong.  I'd love for someone to have a look at my math and tell me if (how?) I'm on the absolute wrong track here.

Start from the assumption that there is some amount of charitable funding that is resolutely non-cause-neutral. It is dedicated to some cause area Y and cannot be budged. I'll assume for these purposes that DALYs saved per dollar is distributed log-normally within Cause Y:

DALYs per dollar ~ Lognormal(μ, σ²)

I want to know how impactful it might be, in general terms, to shift money from the median funding opportunity in Cause Y to the 90th-percentile opportunity. So I want the difference between the value of spending a dollar at those two points on the impact distribution.

The log-normal distribution has the following quantile function:

Q(p) = exp(μ + √2 σ erf⁻¹(2p − 1))

So the value to be gained by moving from p = 0.5 to p = 0.9 is given by

Q(0.9) − Q(0.5) = exp(μ + √2 σ erf⁻¹(0.8)) − exp(μ)

This simplifies down to

exp(μ)(exp(√2 σ erf⁻¹(0.8)) − 1)

Or

exp(μ)(exp(1.2816σ) − 1)

Not a pretty formula, but it's easy enough to see two things which were pretty intuitive before this exercise. First, you can squeeze out more DALYs by moving money in causes where the mean DALYs per dollar across all funding opportunities is higher; and, for a given average, moving money is higher-value where there's more variation across funding opportunities (roughly, since the variance increases with, but is not simply proportional to, σ). Pretty obvious so far.

Okay, what about making this money-moving exercise cost-competitive with a direct investment in an effective cause, with a benchmark of $100/DALY? For that, and for a given investment amount $x, and a value c such that an expenditure of $c causes the money in cause Y to shift from the median opportunity to the 90th-percentile one, we'd need to satisfy the following condition:

x · (Q(0.9) − Q(0.5)) / c ≥ 0.01

Moving things around a bit...

exp(μ)(exp(1.2816σ) − 1) ≥ 0.01 · (c/x)

Which, given reasonable assumptions about the values of c and x, holds true trivially for larger means and variances across cause Y.  The catch, of course, is that means and variances of DALYs per dollar in a cause area are practically never large, let alone in a low-impact cause area. Still, the implication is that (a) if you can exert inexpensive enough leverage over the funding flows within some cause Y and/or (b) if funding opportunities within cause Y are sufficiently variable, cost-effectiveness is at least theoretically possible.

So just taking an example: Our benchmark is $100 per DALY, or 0.01 DALYs per dollar, so let's suppose we have a low-impact Cause Y that is between three and six orders of magnitude less effective than that, with a 95% CI of [0.00000001, 0.00001] DALYs per dollar, or one for which you can preserve a DALY for between $100,000 and $100 million, depending on the opportunity. That gives μ = -14.97 and σ = 1.76. Plugging those numbers into the above, we get approximately...

exp(-14.97)·(exp(1.2816 × 1.76) − 1) ≈ 2.7 × 10⁻⁶ DALYs per dollar, which satisfies the condition whenever x/c is at least roughly 3,700

...suggesting, I think, that if you can get roughly 4000:1 leverage when it comes to spending money to move money, it can be cost-effective to influence funding patterns within this low-impact cause area.
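For anyone who wants to check my arithmetic, here's a short Python sketch of the worked example (the 1.2816 constant is just the standard normal 90th-percentile quantile):

```python
import math
from statistics import NormalDist

# 95% CI for DALYs per dollar in the hypothetical low-impact Cause Y
lo, hi = 1e-8, 1e-5

# Log-normal parameters implied by that interval
mu = (math.log(lo) + math.log(hi)) / 2              # midpoint on the log scale
sigma = (math.log(hi) - math.log(lo)) / (2 * 1.96)

z90 = NormalDist().inv_cdf(0.9)                     # ~1.2816

# DALYs per dollar gained by moving from the median to the 90th percentile
gain = math.exp(mu) * (math.exp(z90 * sigma) - 1)

# $100/DALY benchmark = 0.01 DALYs per dollar implies a minimum leverage x/c
min_leverage = 0.01 / gain

print(f"mu = {mu:.2f}, sigma = {sigma:.2f}")        # mu = -14.97, sigma = 1.76
print(f"minimum leverage = {min_leverage:.0f}:1")
```

This prints a minimum leverage of about 3,700:1, consistent with the roughly 4000:1 figure above.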

There are obviously a lot of caveats here (does a true 90th percentile opportunity exist for any Cause Y?), but this is where my thinking is at right now, which is why this is in my shortform and not anywhere else.

Interesting. You might get more comments as a top-level post.

I guess a more useful way to think about this for prospective funders is to move things about again. Given that you can exert c/x leverage over funds within Cause Y, then you're justified in spending c to do so provided you can find some Cause Y such that the distribution of DALYs per dollar meets the condition...

Q(0.9) − Q(0.5) ≥ 0.01 · (c/x)

...which makes for a potentially nice rule of thumb. When assessing some Cause Y, you need only ("only") identify a plausibly best or close-to-best opportunity, as well as the median one, and work from there.

Obviously this condition holds for any distribution and any pair of quantiles, but the worked example above only indicates to me that it's a plausible condition for the log-normal.
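The rule of thumb is easy to mechanize. Here's a sketch (the function name and example figures are mine, not anything standard; the inputs are rough estimates of DALYs per dollar at the median and best identifiable opportunities):

```python
def required_leverage(q50: float, q90: float, benchmark: float = 0.01) -> float:
    """Minimum x/c (dollars moved per dollar spent) for shifting money from
    the median to the 90th-percentile opportunity to beat the benchmark
    (default 0.01 DALYs per dollar, i.e. $100/DALY)."""
    if q90 <= q50:
        raise ValueError("the 90th-percentile opportunity must beat the median")
    return benchmark / (q90 - q50)

# Rough Cause Y figures from the example above:
# median ~3.2e-7 DALYs/$, 90th percentile ~3.0e-6 DALYs/$
print(round(required_leverage(3.2e-7, 3.0e-6)))
```

Spending $c to move $x is then only justified when x/c clears this threshold.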

[Density plots: gap in days between first confirmed case and first school closures / first workplace closures, by electoral system]

The usual caveats apply here: cross-country comparisons are often BS, correlation is not causation, I'm presenting smoothed densities instead of (jagged) histograms, etc, etc...

I've combined data on electoral system design and covid response to start thinking about the possible relationships between electoral system and crisis response. Here's some initial output: the gap, in days, between first confirmed cases and first school and workplace closures. Note that n ≈ 80 for these two datasets, pending some cleaning and hopefully a fuller merge between the different datasets.

To me, the potentially interesting thing here is the apparently lower variability of PR government responses. But I think there's a 75% chance that this is an illusion... there are many more PR governments than others in the dataset, and this may just be an instance of variability decreasing with sample size.

If there's an appetite here for more like this, I'll try and flesh out the analysis with some more instructive stuff, with the predictable criticisms either dismissed or validated.

What does PR stand for?

Proportional representation?

Proportional representation

It seems like there's a significant need right now to identify what the plausible relationship is between mask-wearing and covid19 symptoms. The virus is now widespread enough that a very quick Mechanical Turk survey could provide useful information.

Collect the following:

• Age group (5 categories)

• Wear a mask in public 1 month ago? (y/n)

• If yes to above, type of mask? (bandana/N95+/surgical/cloth/other)

• Sick with covid19 symptoms in past month? (y/n)

• Know anyone in everyday life who tested positive for covid19 in past month? (y/n)

• Postal code (for pop. density info)

Based on figures from this Gallup piece, a back-of-the-envelope calculation says we could get usable results from surveying 20,000 Americans -- but we could work with a much smaller sample if we survey in a country where the virus is more prevalent.

Or of course, restrict our sample to a smaller geographic region in the US with more prevalence.
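To give a sense of the sample sizes involved, here's a standard two-proportion power calculation. The symptom rates below are purely illustrative assumptions of mine, not figures from the Gallup piece:

```python
import math
from statistics import NormalDist

def n_per_group(p1: float, p2: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate sample size per group for a two-sided two-proportion z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)            # 0.84 for 80% power
    numerator = (z_alpha + z_beta) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))
    return math.ceil(numerator / (p1 - p2) ** 2)

# Illustrative assumption (NOT from the Gallup piece): 2% of non-mask-wearers
# vs 1.5% of mask-wearers report covid19 symptoms in the past month
print(n_per_group(0.02, 0.015))
```

Under these illustrative rates you'd need roughly 11,000 respondents per group, about 22,000 total -- the same order of magnitude as the figure above. Since the detectable absolute difference grows with prevalence, surveying where the virus is more prevalent shrinks the required sample roughly in proportion.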

The EA movement is disproportionately composed of highly logical, analytically minded individuals, often with explicitly quantitative backgrounds. The intuitive-seeming folk explanation for this phenomenon is that EA, with its focus on rigor and quantification, appeals to people with a certain mindset, and that the relative lack of diversity of thinking styles in the movement is a function of personality type.

I want to reframe this in a way that I think makes a little more sense: the case for an EA perspective is really only made in an analytic, quantitative way. In this sense, having a quantitative mindset is actually a soft prerequisite for "getting" EA, and therefore for getting involved.

I don't mean to say that only quantitative people can understand the movement, or that there's something intellectually very special about EAs.

Rather: very few people would disagree that charity should be effective. Even non-utilitarians readily agree that in most contexts we should help as many people as we can. But the essential concepts for understanding the EA perspective are highly unfamiliar to most people.

  • Expected value
  • Cost-benefit analysis
  • Probability
  • An awareness of the abilities and limitations of social science

You don't need to be an expert in any of these areas to "get" EA. You just need to be vaguely comfortable with them in the way that people who have studied microeconomics or analytic philosophy or mathematics are, and most other people aren't.

This may be a distinction without a difference, but I want to raise the perspective that the composition of the EA movement is less about personality types and more about intellectual preparation.

Epistemic status: Pure opinion, but based on a lot of real-world experience

Given the number of non-analytic people involved in EA, I don't think having a quantitative mindset is a prerequisite. 

I've known or known of many people for whom the essential concept of "expected value" read as "if you want to buy something good, buy it for a low price when you can", which doesn't require any major intuitive leaps from everyday life. Same for "probability", which reads to many people as "do what has the best chance of working out" (a lot of people seem to understand this when it applies to EA issues like supporting GiveWell-type charities vs. charities with murkier missions).

I think having intellectual preparation of the kind you mentioned can be helpful, but I also think that there are more important reasons EA seems to have such a quantitative concentration:

  1. The types of EA orgs that exist in the public eye tend to have roles that lean very analytical. It's not surprising that the average GiveWell researcher is very comfortable with quantitative thinking, but this doesn't tell us much about the average Giving What We Can member (most of whom quietly donate a lot of money to excellent charities without rising to "public attention" among EAs).
  2. People interested in EA tend to promote it more to friends than strangers, which creates a natural bubble effect; people who get involved with EA now are more likely than chance to resemble people who got involved early on (e.g. philosophers, economists, and tech folks). If you look at people who got into EA because they read about it in a mainstream news outlet, or happened to pick up Will MacAskill's book when it was on sale somewhere, I think you'll find a weaker quantitative skew than with people who were introduced by friends. (This is a prediction I don't yet have any way to validate.)

Thanks for your thoughts. I wasn't thinking about the submerged part of the EA iceberg (e.g. GWWC membership), and I do feel somewhat less confident in my initial thoughts.

Still, I wonder if you'd countenance a broader version of my initial point: that there is a way of thinking that is not itself explicitly quantitative, but that is nonetheless very common among quantitative types. I'm tempted to call this 'rationality,' but it's not obvious to me that this thinking style is as all-encompassing as what LW-ers, for example, mean when they talk about rationality.

The examples you give of commonsensical versions of expected value and probability are what I'm thinking about here- perhaps the intuitive, informal versions of these concepts are soft prerequisites. This thinking style is not restricted to the formally trained, but it is more common among them (because it's trained into them). So in my (revised) telling, the thinking style is a prerequisite and explicitly quantitative types are overrepresented in EA simply because they're more likely to have been exposed to these concepts in either a formal or informal setting.

The reason I think this might be important is that I occasionally have conversations in which these concepts—in the informal sense—seem unfamiliar. "Do what has the best chance of working out" is, in my experience, a surprisingly rare way of conducting everyday business in the world, and some people seem to find it strange and new to think in that fashion. The possible takeaway is that some basic informal groundwork might need to be done to maximize the efficacy of different EA messages.

I basically agree that having intuitions similar to those I outlined seems very important and perhaps necessary for getting involved with EA. (I think you can be "interested" without those things, because EA seems shiny and impressive if you read certain things about it, but not having a sense for how you should act based on EA ideas will limit how involved you actually get.) Your explanation about exposure to related concepts almost definitely explains some of the variance you've spotted.

I spend a lot of my EA-centric conversations trying to frame things to people in a non-quantitative way (at least if they aren't especially quantitative themselves).

I'm a huge fan of people doing "basic groundwork" to maximize the efficacy of EA messages. I'd be likely to fund such work if it existed and I thought the quality was reasonably high. However, I'm not aware of many active projects in this domain; ClearerThinking.org and normal marketing by GiveWell et al. are all that come to mind, plus things like big charitable matches that raise awareness of EA charities as a side effect. 

Oh, and then there's this contest, which I'm very excited about and would gladly sponsor more test subjects for if possible. Thanks for reminding me that I should write to Eric Schwitzgebel about this.

In general, I'm skeptical about software solutionism, but I wonder if there's a need/appetite for group decision-making tools. While it's unclear exactly what works for helping groups make decisions, it does seem like a structured format could provide value to lots of organizations. Moreover, tools like this could provide valuable information about what works (and doesn't).
