Philosophy, global priorities and animal welfare research. My current specific interests include: philosophy of mind, moral weights, person-affecting views, preference-based views and subjectivism, moral uncertainty, decision theory, deep uncertainty/cluelessness and backfire risks, s-risks, and indirect effects on wild animals.
I've also done economic modelling for some animal welfare issues.
Want to leave anonymous feedback for me, positive, constructive or negative? https://www.admonymous.co/michael-st-jules
In case anyone is interested, I also have:
I'd argue that if higher animal welfare and alternative proteins will be cheaper in X years, then interventions will be more cost-effective in X years, which might imply that we should "save and invest" (either literally, in capital, or conceptually, in movement capacity). Do you have any thoughts on that?
I agree they could be cheaper (in relative terms), but they may also be far more likely to happen anyway, without us saving and investing more on the margin. It's probably worth ensuring a decent sum of money is saved and invested for this possibility, though.
Your 4 priorities seem reasonable to me. I might aim 2, 3 and 4 primarily at interventions with potentially extremely high payoffs, e.g. reducing s-risks. They should beat 1 in expectation, and we should have plausible models for how they could.
It seems likely to me that donation opportunities will become less cost-effective over time, as problems become increasingly solved by economic growth and other agents. For example, the poorest people in the future will be wealthier and better off than the poorest people today. And animal welfare in the future will be better than it is today (although things could get worse before they get better, especially for farmed insects).
Thanks for writing this!
What works today may be obsolete tomorrow
I'd like to reinforce and expand on this point. I think it pushes us towards interventions that benefit animals earlier, or that have potentially large lasting counterfactual impacts through an AI transition. If the world, or animal welfare donors specifically, will be far wealthier in X years, then higher animal welfare and satisfying alternative proteins will be extremely cheap in relative terms in X years and we'll get them basically for free. So we should probably severely discount any potential counterfactual impacts past X years.
I would personally focus on large payoffs within the next ~10 years and maybe work to shape space colonization to reduce s-risks, each when we're justified in believing the upsides outweigh the backfire risks, in a way that isn't very sensitive to our direct intuitions.
I'm not sure it needs a whole other large project, especially one started from scratch. You could just have a few people push further on these points, which seem like the most likely cruxes:
And then have them come up with their own models and estimates. They could mostly rely on the studies and data RP collected on animals, although they could check the ones that seem most cruxy, too.
Against option 3, you write:
There are many different ways of carving up the set of “effects” according to the reasoning above, which favor different strategies. For example: I might say that I’m confident that an AMF donation saves lives, and I’m clueless about its long-term effects overall. Yet I could just as well say I’m confident that there’s some nontrivially likely possible world containing an astronomical number of happy lives, which the donation makes less likely via potentially increasing x-risk, and I’m clueless about all the other effects overall. So, at least without an argument that some decomposition of the effects is normatively privileged over others, Option 3 won’t give us much action guidance.
Wouldn't you also say that the donation makes these happy lives more likely on some elements of your representor, e.g. via potentially decreasing x-risk? So then they're neither made determinately better off nor determinately worse off in expectation, and we can (maybe) ignore them.
Maybe you need some account of transworld identity (or counterparts) to match these lives across possible worlds, though.
I haven't read much of this post, so just call me out if this is totally off base, but I suspect you're treating events as more "independent" than you should.
Relevant: A nuclear war forecast is not a coin flip by David Johnston.
I also illustrated this in a comment there:
At the other extreme, we could imagine repeatedly flipping either a coin with only heads on it or a coin with only tails on it; we don't know which, but we think it's probably the heads-only one. Of course, this goes too far, since a single flip is enough to find out which coin we're flipping. Instead, we could imagine two coins, one with only heads (or extremely biased towards heads) and the other fair, and we lose if we get tails. The more heads we get, the more confident we should be that we have the heads-only coin.
To translate this into risks: we don't know what kind of world we live in or how vulnerable it is to a given risk, and the probability that the world is vulnerable to that risk at all is an upper bound on the probability of catastrophe. As you suggest, the more time that goes on without catastrophe, the more confident we should be that we aren't so vulnerable.
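To make the updating concrete, here's a minimal sketch of the two-coin version (toy numbers of my own, nothing from the post):

```python
# Toy Bayesian update for the two-coin analogy: one coin is heads-only
# (the "invulnerable world"), the other is fair (the "vulnerable world"),
# and getting tails means catastrophe.

def posterior_heads_only(prior: float, n_heads: int) -> float:
    """P(heads-only coin | n heads in a row and no tails)."""
    # Likelihoods of observing n heads with no tails under each hypothesis.
    likelihood_heads_only = 1.0        # the heads-only coin always shows heads
    likelihood_fair = 0.5 ** n_heads   # the fair coin shows n heads with prob 2^-n
    numerator = prior * likelihood_heads_only
    return numerator / (numerator + (1 - prior) * likelihood_fair)

# The probability of ever seeing tails (catastrophe) is bounded above by the
# probability that we're flipping the fair ("vulnerable") coin at all, and that
# probability shrinks with each observed head (each year without catastrophe).
for years in [0, 5, 10, 30]:
    print(years, round(posterior_heads_only(prior=0.5, n_heads=years), 4))
```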
Hi Nicolas, thanks for commenting!
Whether or not you think you can add separate components seems pretty important for the hedging approach.
Indeed, if a portfolio dominates the default on each individual component, then some interventions in the portfolio must dominate the default overall.[1] So if you can compare interventions based on their total effects, the existence of such portfolios implies that some interventions dominate the default.
Ah, good point. (You're assuming the separate components can be added directly (or with fixed weights, say).)
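To spell out why the quoted claim works when the components can be added (or combined with fixed weights), here's a toy sketch with made-up numbers, not anything from your comment:

```python
# Toy check of the claim under additivity: two components (say, effects on
# humans and effects on animals), with values measured relative to the default
# (so the default scores 0 on each). Numbers are made up purely for illustration.
intervention_A = {"humans": 5.0, "animals": -6.0}  # total -1: doesn't beat the default overall
intervention_B = {"humans": -3.0, "animals": 8.0}  # total 5: does beat it

weights = {"A": 0.5, "B": 0.5}  # an equal-split portfolio
portfolio = {c: weights["A"] * intervention_A[c] + weights["B"] * intervention_B[c]
             for c in ("humans", "animals")}

# The portfolio beats the default on each component separately...
assert all(v > 0 for v in portfolio.values())

# ...and if total value is just the sum of the components, the portfolio's total
# is a weighted average of the interventions' totals, so at least one intervention
# must beat the default overall (here, B does).
totals = {"A": sum(intervention_A.values()), "B": sum(intervention_B.values())}
assert max(totals.values()) >= sum(portfolio.values()) > 0
```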
I guess the cases where you can't add directly (or with fixed weights) involve genuine normative uncertainty or incommensurability. Or, maybe some cases of two envelopes problems where it's too difficult or unjustifiable to set a unique common scale and use the Bayesian solution.
In practice, I may have normative uncertainty about moral weights between species.
Intuitively then, you would prefer investing in one of those interventions over hedging?
If you're risk neutral, probably. Maybe not if you're difference-making risk averse. Perhaps helping insects is robustly positive in expectation, but highly likely to have no impact at all. Then you might like a better chance of positive impact, while maintaining 0 (or low) probability of negative impact.
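As a toy illustration of that kind of preference (numbers entirely made up):

```python
# Two hypothetical interventions with the same expected impact (counterfactual
# difference made), but very different chances of making any difference at all.
lottery_insects = {100.0: 0.01, 0.0: 0.99}  # big payoff, but rarely any impact
lottery_hedge   = {2.0: 0.50, 0.0: 0.50}    # modest payoff, much better chance of impact

def expected_value(lottery):
    return sum(x * p for x, p in lottery.items())

def prob_positive_impact(lottery):
    return sum(p for x, p in lottery.items() if x > 0)

# Both have expected impact 1 and no chance of negative impact, so a risk-neutral
# expected value maximizer is indifferent, but someone who is difference-making
# risk averse may prefer the one with the higher chance of making a difference.
print(expected_value(lottery_insects), prob_positive_impact(lottery_insects))  # 1.0, 0.01
print(expected_value(lottery_hedge), prob_positive_impact(lottery_hedge))      # 1.0, 0.5
```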
Given the above, a worry I have is that the hedging approach doesn't save us from cluelessness, because we don't have access to an overall-better-than-the-default intervention to begin with.
For my illustration, that's right.
However, my illustration treats the components as independent, so that you can get the worst case on each of them together. But this need not be the case in practice. You could in principle have interventions A and B, both with ranges of (expected) cost-effectiveness [-1, 2], but whose sum is exactly 1: let the cost-effectiveness of B be 1 minus the cost-effectiveness of A. Having things cancel out so exactly and ending up with a range that's a single value is unrealistic, but I wonder if we could at least get a positive range this way.
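A minimal numeric sketch of that construction, with a toy representor of my own:

```python
import numpy as np

# Toy representor: each element assigns A an expected cost-effectiveness somewhere
# in [-1, 2], and B is constructed as 1 - A, so the two are perfectly anticorrelated.
# Illustration only; real credences won't line up this neatly.
a_values = np.linspace(-1.0, 2.0, 7)  # A's expected cost-effectiveness across the representor
b_values = 1.0 - a_values             # B's, which also ranges over [-1, 2]

portfolio = a_values + b_values       # fund both: exactly 1 on every element

print(a_values.min(), a_values.max())    # -1.0 2.0: A alone isn't robustly positive
print(b_values.min(), b_values.max())    # -1.0 2.0: neither is B
print(portfolio.min(), portfolio.max())  #  1.0 1.0: but the pair together is
```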
(Although a complication I haven't thought about is that you should compare interventions with one another too, unless you think the default has a privileged status.)
Ya, the default doesn't seem privileged if you're a consequentialist. See this post.
Also, the primary beneficiaries of GiveWell-recommended charities are mostly infants and children, who eat less.