Working in healthcare technology.
MSc in applied mathematics/theoretical ML.
Interested in increasing diversity, transparency and democracy in the EA movement. Would like to know how algorithm developers can help "neartermist" causes.
No, that's not what I think. I think it's rather dangerous and probably morally bad to seek out "negative lives" in order to stop them. And I think we should not be interfering with nature in ways we do not really understand. The whole idea of wild animal welfare seems to me not only morally unsupported but also absurd, and probably a bad thing in practice.
In principle - though I can't say I've been consistent about it. I supported ending our family dog's misery when she was diagnosed with pretty bad cancer, and I still stand behind that decision. On the other hand, I don't think I would ever apply this to an animal I've had no interaction with.
On a meta level, and I'm adding this because it's relevant to your other comment: I think it's fine to live with such contradictions. Given our brain architecture, I don't expect human morality to be translatable to a short and clear set of rules.
I don't think this is a point against valuing animal lives (to some extent) as much as it's a point against utilitarianism, and I agree with that point. I didn't downvote, because I don't think a detailed calculation is harmful in itself, but reaching these kinds of conclusions is probably the point at which to acknowledge that pure utilitarianism might be a doomed idea.
Yes, but if at some point you find out, for example, that your model of morality leads to the conclusion that one should kill all humans, you'd probably conclude that your model is wrong rather than actually go through with it.
It's an extreme example, but at bottom every such model is an approximation of our internal moral intuitions: that life is better than death, that happiness is better than pain, that satisfying desires is better than frustrating them, that following God's commands is better than ignoring them, and so on.