Bio


Working in healthcare technology.

MSc in applied mathematics/theoretical ML.

Interested in increasing diversity, transparency and democracy in the EA movement. Would like to know how algorithm developers can help "neartermist" causes.

Comments

Yes, but if at some point you find out, for example, that your model of morality leads to a conclusion that one should kill all humans, you'd probably conclude that your model is wrong rather than actually go through with it.

It's an extreme example, but at its base every model is an approximation stemming from our internal moral intuitions. Be it that life is better than death, happiness better than pain, satisfying desires better than frustration, or that following God's commands is better than ignoring them, etc.

Isn't every moral theory based on assumptions that some X is better than some Y, with a model built around them?

No, that's not what I think. I think it's rather dangerous and probably morally bad to seek out "negative lives" in order to stop them. And I think we should not be interfering with nature in ways we do not really understand. The whole idea of wild animal welfare seems to me not only unsupported morally but also absurd and probably a bad thing in practice.

If I somehow ran into such a dog and decided the effort to take them to an ultrasound etc. was worth it, then probably yes - but I wouldn't start e.g. actively searching for stray dogs with cancer in order to do that.

In principle - though I can't say I've been consistent about it. I've supported ending our family dog's misery when she was diagnosed with pretty bad cancer, and I still stand behind that decision. On the other hand I don't think I would ever apply this to an animal one has had no interaction with.

On a meta level, and I'm adding this because it's relevant to your other comment: I think it's fine to live with such contradictions. Given our brain architecture, I don't expect human morality to be translatable to a short and clear set of rules.

I assume you're looking for a rational explanation, but it's rather based on personal experience. It's because I think my life with constant chronic pain has more negative experiences than positive ones but I have decided I should keep on living.

While I didn't karma-vote on the main post, I downvoted this comment because I think the idea of net-negative lives for naturally occurring creatures is not only false but even harmful.

I don't think this is a point against valuing animal lives (to some extent) so much as it's a point against utilitarianism, which I agree with. I didn't downvote, because I don't think a detailed calculation is harmful in itself; but reaching these kinds of conclusions is probably the point at which to acknowledge that pure utilitarianism might be a doomed idea.

Guy Raveh

44% disagree: Vote power should scale with karma

It's Ok to give users with really small karma less power, but otherwise EA has the wrong idea that if someone has read much/thought a lot about something it means they understand it better.
