VeryJerry

38 karma

Posts: 1 · Comments: 20

Double standard regarding acts and omissions

Most people tend to feel more accountable for harmful actions than for equally or even more harmful omissions. Some may even believe they bear no moral responsibility for failing to help at all. These attitudes reflect omission bias.

Omission bias can influence how we view negligence, particularly in cases involving harm from natural processes, since such harms continue without any direct action from us.

I think many people also tend to have "do no harm"-focused ethics, so when someone from our group (i.e. humanity) harms others (i.e. animals), we feel much more responsibility to stop them from causing harm than we do to get them to do good.

I'm not sure how to share a file over the EA Forum; if you direct message me your email address or message me on Signal, I can send it to you (anyone else reading this, feel free to do either of those too).

In terms of self-alignment, it seems pretty geared toward human psychology, so there's no guarantee it would work for AI, but one strategy discussed in it is recognizing that what you choose now will affect what decisions you'll make in the future, which can make it easier to re-prioritize the long term. For example, if you're trying to lose weight and are tempted to eat a cookie, it may be very difficult to resist just by thinking about the end goal, but if you think "if I eat it now, it'll be more likely that I'll give in to temptation next time too," that can make it easier to resist. Another strategy is to make limiting decisions while the bad option and the good option are both still in the future: it's easier not to buy cookies at the store than to resist them when they're in the pantry, and easier not to get them from the pantry than to resist them sitting next to you.

Obviously the cookie is an amoral choice, but a morally relevant one like the dam example might be: "If we're the type of society that will build the dam despite the long-term costs (which should outweigh the short-term gain), then we'll be more likely to make bad-in-the-long-run choices in the future, which will lead to much worse outcomes overall." That future of compounded bad decisions might be bad enough to tip the scale toward making the good-in-the-long-run choices more often.

Have you read Breakdown of Will by George Ainslie? I've only read the précis; I can't find a free download link, but I'm sure you can find it on libgen, or feel free to ask me for my copy. It goes into the specifics of how we discount the future, as well as exploring the different ways we're able to influence our future self into doing anything at all.
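To make that concrete, here's a minimal sketch of the preference-reversal dynamic Ainslie describes. The hyperbolic discount formula is the one he's known for, but the specific rewards, delays, and k value are toy assumptions of mine, not numbers from the book:

```python
# Toy illustration (my own made-up numbers) of preference reversal under
# hyperbolic discounting, the dynamic behind the cookie example above.

def hyperbolic_value(amount, delay, k=1.0):
    """Present value of a reward under hyperbolic discounting: V = A / (1 + k*d)."""
    return amount / (1 + k * delay)

small_sooner = (10, 1)   # (reward size, days until available) -- the cookie
large_later = (25, 6)    # the longer-term goal

for days_before in (4, 0):  # deciding well in advance vs. at the moment of choice
    v_small = hyperbolic_value(small_sooner[0], small_sooner[1] + days_before)
    v_large = hyperbolic_value(large_later[0], large_later[1] + days_before)
    print(f"{days_before} days out: cookie={v_small:.2f}, goal={v_large:.2f}")

# 4 days out the goal is worth more (2.27 vs 1.67), but at the moment of choice
# the cookie wins (5.00 vs 3.57): a preference reversal, which is why committing
# early (at the store) is easier than resisting late (at the pantry).
```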

Morality is Objective

I think morality is objective in the sense that there is some stable state of the universe with the maximum pleasure over time, which is the morally ideal state, but I don't think we'll ever know exactly how close we are to that ideal state. It's still an objective fact about the territory; we just don't have an accurate map of it.
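One way to write that claim down, in my own notation (just to make the map/territory point precise, nothing from the post itself):

```latex
% Let $s$ range over possible states (or trajectories) of the universe, and let
% $P(s,t)$ be net pleasure across all beings at time $t$ under $s$. The morally
% ideal state is then
\[
  s^{*} \;=\; \arg\max_{s} \int_{0}^{\infty} P(s,t)\,\mathrm{d}t .
\]
% The claim is that $s^{*}$ is a fact about the territory, even though any map
% we can build -- an estimate $\hat{P}$ of $P$ -- may never let us locate it.
```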

Yeah, alt protein does seem to be the biggest push to that end I've seen in EA. Has anyone done the math on lives saved (and hours of torture averted) per person who goes vegan, or something like that, to see if a movement in that direction would be worth prioritizing more?
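If anyone wants to sketch that math, here's the rough structure I'd start from; every number below is a placeholder assumption made up to show the shape of the calculation, not an actual estimate:

```python
# Back-of-the-envelope structure for "animals spared per person who goes vegan".
# ALL inputs are made-up placeholders; swap in real estimates before concluding anything.

assumed_animals_eaten_per_year = 30.0     # placeholder: land animals eaten per omnivore per year
assumed_suffering_days_per_animal = 40.0  # placeholder: average days of significant suffering per animal
assumed_years_vegan = 20.0                # placeholder: how long the person stays vegan
assumed_supply_elasticity = 0.7           # placeholder: fraction of forgone demand that
                                          # actually results in fewer animals being raised

animals_spared = assumed_animals_eaten_per_year * assumed_years_vegan * assumed_supply_elasticity
suffering_days_averted = animals_spared * assumed_suffering_days_per_animal

print(f"Animals spared (placeholder inputs): {animals_spared:.0f}")
print(f"Suffering-days averted (placeholder inputs): {suffering_days_averted:.0f}")

# Dividing the cost of persuading one person to go vegan by these outputs would give
# a rough cost-effectiveness number to compare against other interventions.
```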

Definitely agree with your second paragraph; it'll be interesting to see how things shake out. And I especially agree with your last sentence :)

My ungenerous, broad-strokes takeaway (which I'm not holding strongly, but which does motivate me to look a lot more into SMA) is based on my extremely limited time/experience in EA, on having never heard of SMA until this post, and mainly on the impact section. It's that if there were babies constantly drowning in ponds, EA would maximize babies pulled out of the water per dollar, whereas SMA might be more inclined to look into finding the source of all these babies and stopping that. How close is that to your model of the two? What am I missing there?

Because frankly I've been surprised how many EA orgs are like "we're still torturing/murdering/etc. the animals to eat them, but now it's just a little less torturous at some stages of life," instead of taking what seems to me like the obvious long-term view: "we need to find ways to shut down animal agriculture for good."

This reminds me of https://www.lesswrong.com/posts/MFNJ7kQttCuCXHp8P/the-goddess-of-everything-else

Yeah, that part I'm less sure about, especially since it's in large part a subset of aligning AI to any goals in the first place. I plan to write a post soon on what makes different values "better" or "worse" than others; maybe we can set up a brainstorming session on that post? I think that one will be much more directly applicable to AI moral alignment.

I recently found out about the EA Gather Town and I really like it; can that be linked here? It doesn't easily show up in the online spaces link: https://app.gather.town/app/Yhi4XYj0zFNWuUNv/EA%20coworking%20and%20lounge

Re: assumption 1, "The underlying effect of life events is exactly the same": what if that's actually not the case? A couple of brainstorming ideas on ways it might not be:

  • maybe some new environmental factor, like microplastics or hormone disruptors or something, is changing the way we experience good and bad events, making them less salient?
  • maybe more hyper-salient stuff, like junk food or emotional experiences from media like movies, is affecting how we experience those things?
    • for example with movies, maybe vicariously experiencing an intense event, accompanied by a music score and everything else, leaves the real-life event feeling dull in comparison? I've heard Sam Harris touch on a similar point: it used to be that you only really got an up-close, face-to-face experience with someone by actually being close to them, and you're "implicated" in it (your actions affect them and how they see you), whereas with a movie you get part of the feeling of intimacy without being implicated; you can be slobbing it up on the couch and the result is the same
    • perhaps other forms of ragebait in the news and on social media are more salient than life events, leaving actually frustrating things with less of an impact?
  • afaik depression rates are increasing; maybe depressed people experience things less saliently, and we see effects of that across the spectrum, even for "subclinical" depression?
  • maybe if you know you'll be mostly ok even if a bad thing happens, whether from social safety nets or good planning or whatever, then it happening is less salient? Or for a good thing, being ok before it happens makes it less exciting: you're going from not-ok to ok, rather than from ok to better

I'm sure there are others, but those are the main things I could think of. Not sure whether they're actually true, though.
