I'm a Senior Research Manager at Rethink Priorities, an Associate Professor of Philosophy at Texas State University, and the Director of the Society for the Study of Ethics & Animals.
Hi Ramiro. No, we haven't collected the CURVE posts as an epub. At present, they're available on the Forum and in RP's Research Database. However, I'll mention your interest in this to the powers that be!
I agree with Ariel that OP should probably be spending more on animals (and I really appreciate all the work he's done to push this conversation forward). I don't know whether OP should allocate most neartermist funding to AW as I haven't looked into lots of the relevant issues. Most obviously, while the return curves for at least some human-focused neartermist options are probably pretty flat (just think of GiveDirectly), the curves for various sorts of animal spending may drop precipitously. Ariel may well be right that, even if so, the returns probably don't fall off so much that animal work loses to global health work, but I haven't investigated this myself. The upshot: I have no idea whether there are good ways of spending an additional $100M on animals right now. (That being said, I'd love to see more extensive investigation into field building for animals! If EA field building in general is cost-competitive with other causes, then I'd expect animal field building to look pretty good.)
I should also say that OP's commitment to worldview diversification complicates any conclusions about what OP should do from its own perspective. Even if it's true that a straightforward utilitarian analysis would favor spending a lot more on animals, it's pretty clear that some key stakeholders have deep reservations about straightforward utilitarian analyses. And because worldview diversification doesn't include a clear procedure for generating a specific allocation, it's hard to know what people who are committed to worldview diversification should do by their own lights.
Thanks for all this, Hamish. For what it's worth, I don't think we did a great job communicating the results of the Moral Weight Project.
Thanks for your question, Moritz. We distinguish between negative results and unknowns: the former are cases where there's evidence of the absence of a trait; the latter are cases where there's no evidence either way. We penalized species where there was evidence of the absence of a trait; we assigned zero when there was no evidence. So, yes, not having many negative results does produce higher welfare range estimates (or, if you prefer, it reduces the gaps between the welfare range estimates).
Thanks so much for the vote of confidence, JWS. While we'd certainly be interested in working more on these assumptions, we haven't yet committed to taking this particular project further. But if funding were to become available for that extension, we would be glad to keep going!
Hi Teo. Those are important uncertainties, but our sequence doesn't engage with them. There's only so much we could cover! We'd be glad to do some work in this vein in the future, contingent on funding. Thanks for raising these significant issues.
Hi David. There are two ways of talking about personal identity over time. There's the ordinary way, where we're talking about something like sameness of personality traits, beliefs, preferences, etc. over time. Then, there's the "numerical identity" way, where we're talking about just being the same thing over time (i.e., one and the same object). It sounds to me like either (a) you're running these two things together or (b) you have a view where the relevant kinds of changes in personality traits, beliefs, preferences, etc. result in a different thing existing (one of many possible future Davids). If the former, then I'll just say that I meant only to be talking about the "numerical identity" sense of sameness over time, so we don't get the problem you're describing in the intra-individual case. If the latter, then that's a pretty big philosophical dispute that we're unlikely to resolve in a comment thread!
Thanks for this. You're right that we don't give an overall theory of how to handle either decision-theoretic or moral uncertainty. The team is only a few months old and the problems you're raising are hard. So, for now, our aims are just to explore the implications of non-EVM decision theories for cause prioritization and to improve the available tools for thinking about the EV of x-risk mitigation efforts. Down the line---and with additional funding!---we'll be glad to tackle many additional questions. And, for what it's worth, we do think that the groundwork we're laying now will make it easier to develop overall giving portfolios based on people's best judgments about how to balance the various kinds and degrees of uncertainty.
Thanks for the idea, Pablo. I've added summaries to the sequence page.