OGTutzauer🔸

Engineering Physics Student @ Lund University
57 karma · Joined · Pursuing a graduate degree (e.g. Master's) · Lund, Sweden

Bio

Participation (4)

I lead Effective Altruism Lund in southern Sweden, while wrapping up my M.Sc. in Engineering Physics specializing in machine learning. I'm a social team player who likes high ceilings and big picture work. Scared of AI, intrigued by biorisk, hopeful about animal welfare. 

My interests outside of EA, in hieroglyphs: 🎸🧙🏼‍♂️🌐💪🏼🎉📚👾🎮✍🏼🛹

Comments (8)

Thanks for such an in-depth reply! I have two takes on your points, but before that I want to give the disclaimer that I'm a mathematician, not a philosopher, by training.

First, we're not saying that the lightcone solution implies we should always save Jones. Indeed, there could still be a large enough number of viewers. What we are saying is this: previously, you could say that for any suffering S Jones is experiencing, there is some number of viewers X whose mild annoyance A would in aggregate be greater than S. What's new here is the upper bound on X, so A*X > S could still be true (and we let Jones suffer), but it can't necessarily be made true for any S by picking a sufficiently large X.
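To make the quantifier shift explicit, here's a minimal formalization in my own notation, where X_max stands for the bound on the number of viewers implied by the finite lightcone (a sketch, not anything from the original thought experiment):

```latex
\[
% Before: any suffering S can be outweighed by some audience size X.
\forall S \;\exists X : A \cdot X > S
\]
\[
% With a finite lightcone the audience is bounded by X_{\max}, so
% outweighing is only possible for sufferings below A \cdot X_{\max}.
\exists X \le X_{\max} : A \cdot X > S \;\iff\; S < A \cdot X_{\max}
\]
```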

As to your point about there being different numbers of viewers X in different worlds: yep, I buy that! I even think it's morally intuitive that if more suffering A*X is caused by saving Jones, then we have less reason to do so. For me this isn't a case of moral rules not holding across worlds; the situations are different, but we're still making the same comparison (A*X vs S). I'll caveat this by saying that I've never thought too hard about moral consistency across worlds.

I'm not sure I follow. Are you saying that accepting that there is a finite amount of potential suffering in our future would make x-risk reduction problematic?

I buy that. One way of putting it would be to say that if you use a parliamentary method of resolving moral uncertainty, the "non-totalist population ethics rep" and the "non-longtermist rep" should both rank farmed animal welfare as greater in scale than biorisk. Does that seem more useful?

To throw some numbers in here, point no. 2 would require a lot of countries to all decide it's not worth it to fill the funding gap even a little. Let's say there are 50 countries that could (I'd estimate half of them to be in Europe), and each decides to fund with probability p, i.e. not to fund with probability 1 - p.

The probability that they all decide not to fund is then (1 - p)^50. If p is something like half a percent, there's a 78% risk of no country filling the funding gap. If all three steps have 78% probability then yeah, we do approach 50% of them all happening.
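A quick sanity check of that arithmetic, as a sketch (n = 50 and p = 0.5% are the illustrative figures from above; treating the countries' decisions as independent is an extra simplification):

```python
# Probability that no country fills the funding gap, assuming each of
# n countries independently decides to fund with probability p.
n = 50
p = 0.005  # half a percent

p_none_fund = (1 - p) ** n
print(f"P(no country funds) ≈ {p_none_fund:.2f}")  # ≈ 0.78

# If all three steps each have ~78% probability of going wrong:
print(f"P(all three steps) ≈ {p_none_fund ** 3:.2f}")  # ≈ 0.47, close to 50%
```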

For the question of whether to "save to give," MacAskill's paper on the topic was very useful for me. One crucial consideration is whether my donations would grow more in someone else's hands. 

E.g. giving $100k to AMF means fewer people die from malaria, which means more economic growth. Does that generate more value than the ~7%/year my stocks might? I find that people often neglect this counterfactual.
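As a toy illustration of that comparison (the 7%/year figure is from above; the 5% "social return" on benefits already delivered is a made-up placeholder, not a number from MacAskill's paper):

```python
# Toy "give now" vs "invest and give later" comparison -- a sketch only.
donation = 100_000
years = 20
stock_return = 0.07    # assumed annual return if the money stays invested
social_return = 0.05   # hypothetical compounding of benefits from giving now

give_later = donation * (1 + stock_return) ** years   # dollars donated in 20 years
give_now_equivalent = donation * (1 + social_return) ** years

print(f"Invest then give: ${give_later:,.0f}")
print(f"Give now, benefits compounding: ${give_now_equivalent:,.0f}")
```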

Introduction

Below, I share some thoughts from this weekend on the scale of farmed animal suffering compared to the expected lives lost from engineered pandemics. I make the case that animal welfare as a cause is 100x larger in scale than biorisk. I'd happily turn this into a full post if you have more you'd like to add, either for or against.

 

Scale Comparisons

Farmed Animal Suffering. I was thinking about the scale of farmed animal suffering, which is on the order of 10^11 lives per year, counting only land animals. These animals endure what might be among the worst conditions on the planet. My estimate for the moral weight of the average farmed land animal is approximately 0.1% to 1% that of a human. At first glance, this suggests that farmed animal suffering is equivalent to the annual slaughter of between 100 million and 1 billion humans, without even considering the quality of their lives before death. I want to make the case that the scale of this could be 100x or 1000x that of engineered pandemics.
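Spelling out the equivalence, as a rough sketch using the order-of-magnitude figures above:

```python
# Rough human-equivalent scale of farmed land animal slaughter.
farmed_land_animals_per_year = 1e11   # order-of-magnitude figure from above
moral_weights = (0.001, 0.01)         # 0.1% to 1% of a human

low, high = (farmed_land_animals_per_year * w for w in moral_weights)
print(f"{low:,.0f} to {high:,.0f} human-equivalent deaths per year")
# -> roughly 100 million to 1 billion
```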

Engineered Pandemics. In The Precipice, Toby Ord lists engineered pandemics as carrying a 1 in 30 extinction risk this century. Since The Precipice was published in 2020, this equates to a 1 in 30 chance over 80 years, or approximately a 1 in 2,360 risk of extinction from engineered pandemics in any given year. If that happens, roughly 10^10 human lives would be lost, resulting in an expected loss of approximately four million human lives per year.
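A quick sketch of the annualization and the expected-loss figure (the 10^10 population figure is the round number used above):

```python
# Annualize a 1-in-30 risk over 80 years and compute expected lives lost.
century_risk = 1 / 30
years = 80
annual_risk = 1 - (1 - century_risk) ** (1 / years)
print(f"Annual risk ≈ 1 in {1 / annual_risk:,.0f}")  # ≈ 1 in 2,360

population = 1e10  # round figure for human lives lost in an extinction event
print(f"Expected lives lost per year ≈ {annual_risk * population:,.0f}")  # ≈ 4 million
```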

 

Reasons I might be wrong

Tractability & Neglectedness. If engineered pandemic preparedness is two orders of magnitude higher in neglectedness and/or tractability, that would offset the difference in scale and make the two causes comparable overall. I'd be happy to hear someone more knowledgeable give some comparisons here.

Extinction is Terrible. Human extinction might not equate to just 10^10 lives lost, because all future lives would be lost as well. Further, The Precipice only discusses extinction-level pandemics, but as Rodriguez suggests here, a catastrophe that leaves only one in 100,000 people alive might still give humanity only a ~50% chance of long-run survival. I personally don't place much disvalue on future people not existing, as I lean towards a person-affecting view. Under moral uncertainty, that view is still relevant even if you mostly disagree with it.

Collapse Facilitating Lock-in. A sufficiently large population loss could mean that, in rebuilding civilization, a system is constructed that centralizes enough power to enable a value lock-in. This could take the shape of an authoritarian system oppressing everyone, or a lock-in of a system similar to the current one, with an equal or greater number of animals slaughtered annually.

When EA Lund does tabling at student association fairs, one thing that's gotten a laugh out of some people is having two plates of cookies people can take from. One gets a sticky note saying "this cookie saves one (1) life", and the other gets a sticky note saying "this cookie saves 100 lives!"

This calls to mind the Technological Completion Conjecture, which suggests we should focus on the order in which technologies are invented rather than on whether we want them invented at all.

We could posit some "Moral Completeness Conjecture" in the same way. Then we need only consider the order in which we want world-improving interventions (like ASRS risk mitigation and ending animal factory farming) to happen. It's trivially true that some paths to utopia are much worse than others.