
VeryJerry

45 karma · Joined

Posts: 1


Comments: 22

I left religion and had to explore ethics/morality beyond "whatever the Bible says is right". I went in a pretty utilitarian direction, and then, while arguing with my dad about how you can have morality without God, he said, "But wouldn't that include animals too?" I initially said yes, any reasonable moral framework should be able to tell you that e.g. kicking dogs is wrong, but thinking about it more got me to veganism.

 

It's definitely not reason alone. I really don't like suffering/pain, so I'm probably more emotionally against it in general (i.e. even when it's not me experiencing it) than a lot of people?

 

I will say, too, that I don't dislike all animals; I like hanging out with some cats and dogs.

I'm the same way. I don't like animals all that much; I find them kind of gross and annoying tbh (especially farmed animals), and my favorite food ever used to be BBQ, by a wide margin. But even someone like me is still able to recognize that they don't deserve to suffer for me, and to stop, so how is it so hard? I feel like I'm not even all that compassionate either.

Double standard regarding acts and omissions

Most people tend to feel more accountable for harmful actions than for equally or even more harmful omissions. Some may even believe they bear no moral responsibility for failing to help at all. These attitudes reflect omission bias.

Omission bias can influence how we view negligence, particularly in cases involving harm from natural processes, since such harms continue without any direct action from us.

I think many people also tend to have "do no harm"-focused ethics, so when a member of our group (i.e. humanity) harms others (i.e. animals), we feel much more of a responsibility to stop the harm than we do to get people to do good.

Not sure how to share a file over the EA Forum. If you direct message me your email address or message me on Signal, I can send it to you (anyone else reading, feel free to do either of those too).

In terms of self-alignment, it seems pretty geared towards human psychology, so there's no guarantee it would work for AI, but one strategy discussed in it is recognizing that what you choose now will affect what decisions you'll make in the future, which can make it easier to re-prioritize the long term. For example, if you're trying to lose weight and are tempted to eat a cookie, it may be very difficult to resist just by thinking about the end goal, but if you think "if I eat it now, it'll be more likely that I'll give in to temptation next time too," that can make it easier to resist. Another strategy is to make limiting decisions at a point when both the bad option and the good option are still in the future: it's easier not to buy cookies at the store than to resist them when they're in the pantry, and easier not to get them from the pantry than to resist them when they're sitting next to you.

Obviously the cookie is a morally neutral choice, but for a morally relevant one like the dam example, the reasoning might be: "If we're the kind of society that will build the dam despite the long-term costs (which should outweigh the short-term gain), then we'll be more likely to make bad-in-the-long-run choices in the future, which will lead to much worse outcomes overall." That future of compounded bad decisions might be bad enough to tip the scale towards making the good-in-the-long-run choice more often.

Have you read Breakdown of Will by George Ainslie? I've only read the précis. I can't find a free download link, but I'm sure you can find it on Libgen, or feel free to ask me for my copy. It goes into the specifics of how we discount the future, as well as exploring the different ways we're able to influence our future selves into doing anything at all.
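To make the discounting point concrete, here's a minimal sketch of my own (not taken from Ainslie's book), assuming a simple hyperbolic discount curve V = A / (1 + kD). All the numbers are made-up placeholders: when the choice is taken in isolation, the immediate small reward wins, but bundling today's choice with the next 30 similar choices flips the preference towards the larger-later reward.

```python
def hyp(value, delay, k=1.0):
    """Hyperbolic present value of `value` received after `delay` time units."""
    return value / (1 + k * delay)

# A single decision faced right now: the immediate cookie (value 1, delay 0)
# beats the delayed health payoff (value 3, delay 5 days).
print(hyp(1.0, 0), hyp(3.0, 5))   # 1.0 vs 0.5 -> cookie wins

# Bundled decision: treat today's choice as standing in for the next 30
# daily choices ("if I eat it now, I'll give in next time too").
cookie_bundle = sum(hyp(1.0, d) for d in range(30))
health_bundle = sum(hyp(3.0, d + 5) for d in range(30))
print(cookie_bundle, health_bundle)  # ~4.0 vs ~5.6 -> larger-later now wins
```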

Morality is Objective

I think morality is objective in the sense that there is some stable state of the universe with the maximum pleasure over time, which is the morally ideal state, but I don't think we'll ever know exactly how close we are to that ideal state. It is still an objective fact about the territory; we just don't have an accurate map of it.

Yeah, the alt protein work does seem to be the biggest push to that end I've seen in EA. Has anyone done the math on lives saved (and hours of torture averted) per person who goes vegan, or something like that, to see if a movement in that direction would be worth prioritizing more?

Definitely agree on your second paragraph; it'll be interesting to see how things shake out. And I especially agree with your last sentence :)

My ungenerous, broad-strokes takeaway (which I'm not holding strongly, but which does motivate me to look a lot more into SMA, and which is based on my extremely limited time/experience in EA, having never heard of SMA until this post, and mainly on the impact section) is this: if there were babies constantly drowning in ponds, EA would maximize babies pulled out of the water per dollar, whereas SMA might be more inclined to look into finding the source of all these babies and stopping that. How close is that to your model of the two? What am I missing?

Cause frankly I've been surprised by how many EA orgs are like "we're still torturing/murdering/etc. the animals to eat them, but now it's just a little less torturous at some stages of life" instead of taking what seems to me like the obvious long-term view of "we need to find ways to shut down animal agriculture for good".

This reminds me of https://www.lesswrong.com/posts/MFNJ7kQttCuCXHp8P/the-goddess-of-everything-else

Yeah, that part I'm less sure about, especially since it's in large part a subset of aligning AI to any goals in the first place. I plan to write a post soon on what makes different values "better" or "worse" than others; maybe we can set up a brainstorming session on that post? I think that one will be much more directly applicable to AI moral alignment.
