Say we have a choice between two types of bednets. Each offers the same malaria protection, but their production and distribution processes differ.

  1. One producer, NetCo[1], makes bednets using a highly variable process. Each bednet costs a different amount, but NetCo guarantees an average cost of $1 per bednet for the foreseeable future, which we can easily verify from their past data. 
  2. Another producer, SleepWell[2], has a solid factory with highly regular production, so all of their bednets will cost the same. However, they have yet to finalize the process and cannot yet commit to that cost. They also, annoyingly, estimate that their bednet cost will be centered at $1.

Say that we need to choose only one of the two producers, and we can't wait for any more information. Should we go with NetCo or with SleepWell? Maybe it doesn't matter?

You may reason:

Well, let's first imagine that I'm buying only one bednet. In this case, buying from NetCo will cost me some random amount of money, which is $1 on average.

Hmm, but isn't it the same for SleepWell? There we don't know the cost, but the average estimate is also $1. I mean, while Epistemic and Statistical Uncertainty differ, can't I just also average out over my own world-views? 

This thought process is basically correct[3]. It's also correct if we want to buy a thousand bednets. However, we generally ask "where should I spend my money for the highest return?" rather than "what is the least I would need to pay for some fixed number of bednets?". So, say AMF expects to fundraise $1 million: which producer should they choose?

You, again, reason:

I want AMF to buy as many bednets as possible with that money. If I choose NetCo, I know that sometimes I'll buy cheap nets and sometimes expensive ones. The more money I put in, the more nets I get, and the closer the cost distribution of the nets I actually bought gets to NetCo's true cost distribution. In this case, we can average everything out (say, by pairing a cheap net with an expensive net for an average price of $2 per pair), and we expect to buy about a million bednets. 

Shouldn't it again be basically the same for SleepWell? No? Okay, I'll do this as accurately as I can.

Going with SleepWell, we would spend $1 million on nets that are all cheap, all expensive, or anything in between. That's complicated to calculate, so maybe I should first try a silly edge case. Say the cost is either $0.01 or $1.99 with equal probability. The average cost is indeed $1. We have two cases:

  1. If the cost turns out to be $1.99, then with $1 million we'd get roughly 500 thousand bednets. 
  2. If the cost is $0.01, we can buy 100 million bednets.

Oh shit, that's a lot! 

Wait, even if the expensive case were $199 or $199,999 instead of $1.99, the cheap case alone would still mean that in expectation we'd get more than 50 million bednets (half the probability mass buys 100 million nets), vastly more than NetCo's one million. What happened here? Is this just because I picked a silly edge case?
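To see why the two kinds of uncertainty play out so differently, here is a minimal Monte Carlo sketch (mine, not part of the original analysis; it assumes, purely for concreteness, that NetCo's per-net cost follows the same $0.01-or-$1.99 distribution):

```python
import random

random.seed(0)
BUDGET = 1_000_000
COSTS = [0.01, 1.99]  # the "silly edge case": each cost with probability 1/2, averaging $1


def nets_from_netco():
    # Statistical uncertainty: the cost is redrawn for every single net,
    # so the randomness averages out within one big purchase.
    money, nets = BUDGET, 0
    while True:
        cost = random.choice(COSTS)
        if cost > money:
            return nets
        money -= cost
        nets += 1


def nets_from_sleepwell():
    # Epistemic uncertainty: one unknown cost applies to every net,
    # so the randomness does not average out.
    cost = random.choice(COSTS)
    return BUDGET / cost


# Each NetCo run simulates roughly a million individual purchases, so a few runs suffice.
netco_runs = [nets_from_netco() for _ in range(3)]
sleepwell_runs = [nets_from_sleepwell() for _ in range(10_000)]

print("NetCo runs:       ", netco_runs)                                  # each lands near 1,000,000
print("SleepWell average:", sum(sleepwell_runs) / len(sleepwell_runs))   # roughly 50,000,000
```

NetCo's per-net randomness washes out within a single $1 million purchase, so every run lands near one million nets; SleepWell's single unknown cost does not wash out, so the expectation is dominated by the cheap branch.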

The point is that when we ask "how much good can I do with my money?", we intuitively guess that the answer is

number of nets ≈ budget / (average cost per net),

because

number of nets = budget / (cost per net).

That's true if all these quantities are constants. But they may still be random variables! With NetCo, the uncertainty is statistical: the cost is redrawn for every net, so over a $1 million purchase the randomness averages out and the intuitive formula is roughly right, E[nets] ≈ budget / E[cost]. With SleepWell, the uncertainty is epistemic: one unknown cost applies to every net, so the right quantity is the expectation of the ratio, E[nets] = E[budget / cost] = budget × E[1 / cost]. By Jensen's inequality, E[1 / cost] ≥ 1 / E[cost], with strict inequality whenever the cost is genuinely uncertain, which is why SleepWell comes out so far ahead in the edge case above. 
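As a quick numerical check of the gap between the two formulas (my sketch; the $0.50/$1.50 distribution is a hypothetical, milder alternative to the edge case above):

```python
# Two ways to estimate the number of nets a fixed budget buys:
#   naive:  budget / E[cost]       (exact for constants; roughly right for NetCo-style per-net randomness)
#   proper: budget * E[1 / cost]   (exact for SleepWell-style single unknown cost)
budget = 1_000_000

for costs in ([0.01, 1.99], [0.50, 1.50]):  # two-point cost distributions, each averaging $1
    p = 1 / len(costs)
    mean_cost = sum(p * c for c in costs)
    mean_inverse_cost = sum(p / c for c in costs)
    naive = budget / mean_cost
    proper = budget * mean_inverse_cost
    print(f"costs {costs}: naive {naive:,.0f} nets, proper {proper:,.0f} nets")

# Rough output:
#   costs [0.01, 1.99]: naive 1,000,000 nets, proper 50,251,256 nets
#   costs [0.5, 1.5]:   naive 1,000,000 nets, proper 1,333,333 nets
```

The milder the uncertainty about the cost, the smaller the gap, but it never flips sign: uncertainty about a cost sitting in the denominator always helps in expectation.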

 

  1. ^

    Mnemonic: net cost

  2. ^

    Mnemonic: buyers don't have to worry about prices changing, so they can sleep well

  3. ^

    Except, maybe, that it doesn't account for different types of possible risk aversion. It may be the case that EVM is the best way to deal with statistical uncertainty but that it's rational to be averse to epistemic uncertainty. [Actually, I have a hunch that epistemic uncertainty should generally be rewarded rather than penalized due to option-value / value-of-information, but that should ideally be already modeled in].

Comments



I haven't thought about this carefully yet, but I believe this kind of thinking comes out differently depending on whether you say "the average cost per net is $1" or "the average number of nets I can make for $1 is 1". I think often when we say things like this, we imagine a neat symmetrical normal distribution around the average, but you can't simultaneously have a neat normal distribution around both of these numbers! Perhaps you'd need to look more into where the numbers are coming from to get a better intuition for which shape of distribution is more plausible.
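For a minimal illustration of that asymmetry (my sketch; the uniform $0.50-to-$1.50 cost distribution is purely hypothetical):

```python
import random

random.seed(1)
draws = 100_000

# Cost per net: symmetric (uniform) around $1, between $0.50 and $1.50.
costs = [random.uniform(0.5, 1.5) for _ in range(draws)]
nets_per_dollar = [1 / c for c in costs]

print(sum(costs) / draws)            # ~1.00: the cost distribution is symmetric around $1
print(sum(nets_per_dollar) / draws)  # ~1.10 (= ln 3): nets-per-dollar is right-skewed, mean above 1
```

If the cost per net is symmetric around $1, the nets-per-dollar figure cannot also be symmetric around 1, and its mean ends up above 1.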

Exactly!

A very minor request: could you edit the title of your post to change "CEA" to "cost effectiveness analysis," simply to reduce ambiguity and confusion with "Center for Effective Altruism?"
