I am looking for work. I welcome suggestions for posts. You can give me feedback here (anonymous or not). Feel free to share your thoughts on the value (or lack thereof) of my posts.
I can help with career advice, prioritisation, and quantitative analyses.
Thanks for the update, Sarah!
- As a reminder, you can view our team’s half-quarterly OKRs via this public doc that I keep updated. I recently added our Q2.2 plans (May 20 - July 1).
I like this transparency!
Thanks, Michael. For readers' reference, CLR stands for Center on Long-Term Risk.
I would say a 10^-100 chance of 10^100 QALY is as good as 1 QALY. However, even if I thought the risk of human extinction over the next 10 years was 10 % (I guess it is 10^-7), I would not conclude decreasing it would be astronomically cost-effective. One should be scope-sensitive not only to large potential benefits, but also to their small probabilities.

Longtermists typically come up with a huge amount of benefits (e.g. 10^50 QALY), and then independently guess a probability which is only moderately small (e.g. 10^-10), which results in huge expected benefits (e.g. 10^40 QALY). Yet, the amount of benefits is not independent of its probability. For reasonable distributions describing the benefits, I think the expected benefits coming from very large benefits will be negligible. For example, if the benefits B are described by a power law distribution with tail index alpha > 0, their probability density will be proportional to B^-(1 + alpha), so the contribution to the expected benefits from benefits of size B will be proportional to B*B^-(1 + alpha) = B^-alpha. This decreases with B, so the expected benefits coming from astronomical benefits will be negligible.
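To make this concrete, below is a minimal sketch in Python. It assumes the benefits follow a Pareto distribution with minimum b_min and tail index alpha > 1 (the mean is only finite for alpha > 1), and the function name and all numbers are purely illustrative. It computes the fraction of the expected benefits coming from benefits above a given threshold, which vanishes for astronomical thresholds.

```python
# Minimal sketch, assuming benefits B ~ Pareto(b_min, alpha) with alpha > 1
# (the mean is only finite for alpha > 1). The density is proportional to
# B^-(1 + alpha), so the contribution to the mean from benefits near B is
# proportional to B*B^-(1 + alpha) = B^-alpha, which decreases with B.
def tail_share_of_mean(threshold: float, b_min: float = 1.0, alpha: float = 1.5) -> float:
    """Fraction of E[B] coming from benefits above `threshold` (>= b_min).

    For Pareto(b_min, alpha), E[B | B > t]*P(B > t)/E[B] = (t/b_min)^(1 - alpha).
    """
    return (threshold / b_min) ** (1.0 - alpha)

# Astronomical benefits contribute a vanishing share of the expected benefits.
print(tail_share_of_mean(1e10))  # 1e-05
print(tail_share_of_mean(1e50))  # 1e-25
```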
Thanks, Michael.
I personally only care about the expected (posterior) impact. One can get a smaller expected impact by positing a more certain prior impact, but I do not know what would justify being a priori very confident that the impact is 0.
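As a minimal sketch of this point, assuming a normal prior on the impact centred at 0 and a normal likelihood (all numbers hypothetical), the posterior expected impact shrinks towards 0 as the prior becomes more certain:

```python
# Minimal sketch, assuming a normal prior on the impact, N(0, prior_sd^2),
# and a normal likelihood, N(estimate, estimate_sd^2); numbers are hypothetical.
def posterior_mean(estimate: float, estimate_sd: float, prior_sd: float) -> float:
    """Posterior expected impact: a precision-weighted average of the
    prior mean (0) and the estimate."""
    prior_precision = 1.0 / prior_sd**2
    data_precision = 1.0 / estimate_sd**2
    return data_precision * estimate / (prior_precision + data_precision)

print(posterior_mean(10.0, 5.0, 100.0))  # ~9.98: a weak prior barely shrinks the estimate
print(posterior_mean(10.0, 5.0, 1.0))    # ~0.38: a very certain prior at 0 dominates
```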
I agree the interventions I considered are not robustly beneficial in expectation. However, I would not single out interventions changing the consumption of animal-based food (among the ones I analysed, all except the broiler welfare and cage-free corporate campaigns, and HSI). I estimate broiler welfare and cage-free corporate campaigns benefit soil animals 444 and 28.2 times as much as they benefit chickens.
Jim Buhler clarified what would be needed to neglect the uncertain nearterm effects of interventions targeting animals. I think the effects after 100 years or so are negligible, but that people who neglect nearterm effects because of their uncertainty should, by the same logic, neglect the even more uncertain longterm effects to a greater degree.
Hi Joel,
I wonder whether your reasons for keeping the calculations private should also make you want to keep the results private (although you only shared them qualitatively), as they follow from the same speculative inputs. I suggested sharing the calculations because I assume doing so would take little time, and could slightly update the views of a few people, and your own too if people comment on them.
Thanks, Toby! Credits go to Michael.
I think "probability of sentience"*"expected welfare conditional on sentience" >> (1 - "probability of sentience")*"expected welfare conditional on non-sentience", such that the expected welfare can be estimated from the first expression alone. However, I would say the expected welfare conditional on non-sentience is not exactly 0. For this to be the case, one would have to be certain that a welfare of exactly 0 follows from failing to satisfy the sentience criteria, which is not possible. Yet, in practice, it could still be the case that there is a decent probability mass on a welfare close to 0.
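As an illustration, here is a minimal sketch of that decomposition via the law of total expectation, E[W] = p*E[W | sentient] + (1 - p)*E[W | non-sentient]; apart from RP's 6.8 % for adult nematodes, the numbers are purely hypothetical.

```python
# Minimal sketch of the decomposition above; all welfare numbers are hypothetical.
p_sentience = 0.068              # e.g. RP's estimate for adult nematodes
welfare_if_sentient = 1.0        # hypothetical expected welfare conditional on sentience
welfare_if_non_sentient = 1e-6   # close to, but not exactly, 0

# Law of total expectation: E[W] = p*E[W | S] + (1 - p)*E[W | not S].
expected_welfare = (
    p_sentience * welfare_if_sentient
    + (1 - p_sentience) * welfare_if_non_sentient
)
print(expected_welfare)  # ~0.068, dominated by the first term
```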
Hi Henry,
I personally think one should only care about expected welfare, so I would be happy to act based on a very low probability of their welfare being sufficiently high to matter. What are your criteria for caring about animals of a given species? Do you have a minimum probability of sentience? If so, why that specific value? RP estimated a probability of 6.8 % of adult nematodes being sentient. People routinely care about events which are much less likely, although the welfare of nematodes conditional on sentience would still have to be sufficiently high for them to matter.
I confirm the post is not a parody. I found that remark funny in a good way.
I would also be curious to hear from people enthusiastic about invertebrate welfare, but not nematode welfare. RP estimated a probability of 8.2 % of silkworms being sentient, which is just 1.21 (= 0.082/0.068) times their estimated probability of adult nematodes being sentient.
Some people, like me, have been referring to your mainline welfare ranges as median welfare ranges, but this is not technically correct. The median welfare range is 0 for a probability of sentience of 50 % or lower. Your mainline estimates refer to the product of the probability of sentience, the rate of subjective experience as a fraction of that of humans, and the median welfare range conditional on sentience. Going forward, I will refer to your mainline welfare ranges simply as this product.
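To illustrate the distinction, here is a minimal sketch with purely hypothetical inputs (none of the numbers are RP's):

```python
# Hypothetical illustration of the product above; no numbers are RP's.
p_sentience = 0.4                        # probability of sentience
experience_rate_ratio = 1.0              # rate of subjective experience as a fraction of humans'
median_welfare_range_if_sentient = 0.1   # median welfare range conditional on sentience

# Mainline welfare range as the product of the 3 factors.
mainline_welfare_range = (
    p_sentience * experience_rate_ratio * median_welfare_range_if_sentient
)
print(mainline_welfare_range)  # 0.04

# In contrast, the unconditional median welfare range is 0 whenever
# p_sentience <= 50 %, as at least half the probability mass is then on 0.
# (For p_sentience > 50 %, it depends on the conditional distribution.)
median_welfare_range = 0.0 if p_sentience <= 0.5 else float("nan")
print(median_welfare_range)  # 0.0
```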
Thanks for all your efforts contributing to a better world, Matthew!