quila

111 karma · www.lesswrong.com/users/quila

Bio

autistic, ai alignment focused, suffering-focused altruist

my lesswrong account

Posts (1)


Comments (54)

[strong upvoted for being well-formed criticism]

Almost any form of maximization as a lifestyle is likely to be neutral at best, unhealthy at worst, with maximization of any rational endeavor skewing towards predictably unhealthy and harmful. Maximization is fundamentally incompatible with good mental health. You can't "just have a little bit of maximization"; it's all or nothing.

how would you respond to the idea that good mental health is instrumental to maximization? in my impression, that's a standard position.

(Commenting as I read)

In light of the conflicting research cited above, it would be overly simplistic to assume that those with high levels of malevolence are consistently aware of and endorse their traits, with an internal monologue[9] that goes something like this: "I'm so evil and just want to maximize my own power and gratify my own desires, no matter how much suffering this causes for everyone else, hahaha."[10] Although some people may think like that, it would be wrong to assume that everyone with high levels of malevolence thinks in this way.

I think the reason that inner monologue feels implausible is that the statement is explicit. If someone really held that attitude/goal, I'd expect it to be implicit: where their inner monologue wouldn't directly say, "I just want to gratify my own desires at the expense of others", but it would contain object-level reasoning about how to do that, and judgements of others that strongly correlate with whether they advance or are barriers to the goal, where the goal is an implicit background factor.

And as you note, most people do have some non-negligible level of this:

Everyday experience suggests, for example, that most people care a lot more about their self-interest than is remotely justified by impartial benevolence

a few pieces of this advice seem to be about how to manipulate others in subtle ways.

You can talk about specific things while being pleasant, I dare say, agreeable [...] pragmatically, people will be much likely more susceptible to help you if they associate you with someone who is fun/agreeable [...] try to be as agreeable as possible

i interpret 'try to be agreeable' to mean 'try to appear to agree more than you would if you were being fully honest' - because, given this is advice, i.e. meant to be actionable, it's not just saying that people who (by coincidence) genuinely agree have a natural advantage. it's saying to intentionally try to seem agreeable, to cause them to associate you with a positive feeling, to make them 'more susceptible to help you'.

Adapt/mirror people's behaviour. If someone has a very focused way of talking about things, speaking fast, being curt and concise, mirror this. If someone likes to expand on personal anecdotes, shows a slower pacing, comments on the food, do that too. They will feel more comfortable. [...] If the vibe is good, it means that you'll be able to reach out later for more content.

others may not mind this, but personally, i would not want people to do it with me. if someone is trying to influence my mind in ways i am not aware of, i want to know they are doing this so i can try to account for the effect (or, realistically, ask them not to, or not befriend them if they seemed to practice a wide range of such techniques - i've unfortunately met people who do).[1]

i'd guess that mirroring behavior causes the one being mirrored to subtly intuit that the two of them are more similar than they really are, leading them to feel more comfortable around that person.


i think {the net effects we'd observe on how friendships/allyships form in worlds where all EAs try to subtly manipulate each other} are not net good. i imagine it would lead to friendships/allyships being determined more (relative to before) by who's good at applying these (and other) tactics, and so less by the substantive factors which should matter.

also, i think there is room for nuance about where the line is between {being kind and trying to create a positive environment} and manipulation. some forms of trying to influence how someone feels seem okay to me, like if someone is sad and you want to make them feel less sad (and they know you'll be doing this, and that's why they're talking to you). i guess the relevant line to me is whether it's intended to help the person, like in that case, or whether it's intended to influence how they perceive you, to gain some sort of advantage from them. the two pieces of advice i quoted seem to be the latter kind.

 

(to be clear, this criticism doesn't apply to most of the points, which are probably good advice; i write this because i know criticism can feel bad, and i don't want to cause that.)

  1. ^

    if someone told me they were doing it, i would actually ask them not to.

    if it seemed like they were someone for whom this was just one thing in a wide arsenal of other such subtle tactics, i'd also probably want to not become friends with them.

i agree with some other comments, just sharing some thoughts that haven't been posted here yet.

i think that, on purely consequentialist grounds, you can say that you personally do more good by continuing to purchase products derived from animal suffering (or by continuing to do any other deontologically bad thing to others), because doing so makes you happier, or is more convenient, and this lets you be more effective - and that might really be true. to that extent, this isn't even an objection.

that said, when i consider situations involving the use of animal products, i tend to imagine what i would prefer, and how i would feel -- if i were still me, with my current values and mind -- but with the roles swapped: if it were me in a factory farm, and some alien altruist in the position equivalent to the one i am in, in an alien civilization similar to humanity's. i ask myself, would i be okay with them doing <whatever> with <thing derived from my suffering>?

and sometimes the answer is yes. if they're cold at night and they're in a situation where the only blanket is made of material derived from my suffering (analogy to wool), and they're feeling conflicted, then okay, they can use it. they're on my side.

if the request was, "can i eat your flesh because i think i derive personal enjoyment from it and i think that lets me be more effective, given i don't feel particularly disturbed by this situation?" then i would (metaphorically) conclude that i am in hell. that this is the altruist angel who is supposedly going to save us. that this is their moral character.[1]

again, this is not an objection per se - it's separate from whether the consequentialist argument is true, and if it is, i guess i'd prefer you to follow it - it's just some related thoughts about the moral status of a world in which it is true. i am not saying you are wrong, but that if you are not wrong, it is wrong for the world to be this way.

  1. ^

    to be clear, i'm not saying you are evil and i don't want you to feel bad from reading this.

5. the value of something like 'how EA looks to outsiders'? that seems to be the thing behind multiple points (2, 4, 7, and 8) in this, which was upvoted, and i saw it at other times during this debate week (for example here) as a reason against the animal welfare option.

(i personally think that compromising epistemics for optics is one way movements ... if not die, then at least become incrementally more of a simulacrum, no longer the thing they were meant to be. and i'm not sure whether such claims are always honest, or whether they can secretly function to enforce the relevance of public attitudes one shares, without needing to argue for them.)

Though, what if I consider the fact that many people have independently reached a certain belief to itself be evidence that that belief might be true?

that is a form of evidence. if each person's belief were correct with some truly independent, better-than-chance probability, then in a large society, any belief held by >50% of people would receive extreme evidence. but it's not actually true that people's beliefs are independent.
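(as a toy illustration of how strong that evidence would be under the independence assumption - the reliability and population numbers below are made up, just to show the shape of the calculation:)

```python
# toy model, illustrative numbers only: n people each independently have
# probability p of holding the correct belief on a binary question.
# if k of them hold belief B, how strong is that as evidence for B?
from math import comb

def likelihood_ratio(n: int, k: int, p: float) -> float:
    """P(k of n hold B | B is true) / P(k of n hold B | B is false)."""
    p_if_true = comb(n, k) * p**k * (1 - p)**(n - k)
    p_if_false = comb(n, k) * (1 - p)**k * p**(n - k)
    return p_if_true / p_if_false  # simplifies to (p / (1 - p)) ** (2*k - n)

# 1000 people, 550 of whom (a 55% majority) hold B, each independently 60% reliable:
print(likelihood_ratio(1000, 550, 0.6))  # ~4e17, i.e. 'extreme evidence'
```

the exponential blow-up comes entirely from treating each person's belief as an independent sample, and that's exactly the premise that fails: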

human minds are similar, and human cultural environments are similar. often people's conclusions aren't actually independent, and often they're not actually conclusions but unquestioned beliefs internalized from their environment (parents, peers, etc.). often people make the same logical mistakes, because they are similar entities (humans).

you still have to reason about that premise, "people's conclusions about <subject> are independent", as you would about any other belief.

and there are known ways large groups of humans can internalize the same beliefs, with detectable signs like 'becoming angry when the idea is questioned'.

(maybe humans will usually be right, because most beliefs are about low-level mundane things like 'it will be day tomorrow'. but the cases where we'd like to have such a prior are exactly those non-mundane special cases where human consensus can easily be wrong.)

And I apologize for the confusion. I am, as you suggested, still trying to figure out my real position, and coming up with arguments on the spot that mix my internal sentiments with external pressures in ways that may seem incoherent.

Thank you for acknowledging that.

Considering or trying on different arguments is good, but I'd suggest doing it explicitly. For example, instead of "I meant X, not Y" (unless that's true), it's totally valid to say "How about new-argument X?", even if having (or appearing to have) pinned-down beliefs might be higher status or something.

 

Some object-level responses:

I should clarify, I am not saying we should pretend to have beliefs closer to theirs. I am saying that having such divergent views will make it harder to recruit them as EAs. It would therefore be better for EA as a movement if our views didn't diverge as much.

This sounds like it's saying: "to make it easier to recruit others, our beliefs should genuinely be closer to theirs." I agree that would not entail lying about one's beliefs to the public, but I think it would require EAs to lie to themselves[1] to make their beliefs genuinely closer to what's popular.

For one's beliefs about what is true to be influenced by anything other than evidence about whether it is true is, by definition, an influence that will tend to pull those beliefs away from what is true.

I don't think EAs should (somehow subtly) lie to themselves. If I imagine the version of EA that does this, it's actually really scary, in ways I find hard to articulate.

And I think there can be epistemic compromise. You give the benefit of the doubt to other views by admitting your uncertainty and allowing the possibility that you're wrong, or they're wrong, and we could all be wrong

Sure, there can be epistemic compromise in that other sense, where you know there's some probability of your reasoning being incorrect, or where you have no reason to expect yourself to be correct over someone who is as good at reasoning and also trying to form correct beliefs.

But it's not something done because 'we need popular support to get things done'. 

  1. ^

this reminded me of If we can’t lie to others, we will lie to ourselves by Paul Christiano:

    Many apparent cognitive biases can be explained by a strong desire to look good and a limited ability to lie; in general, our conscious beliefs don’t seem to be exclusively or even mostly optimized to track reality. If we take this view seriously, I think it has significant implications for how we ought to reason and behave.

we can't know what their "morality" is

Agreed; I just mean that, for this specific subject of factory farming, it's tractable to know their preferences.

My point was that from the perspective of convincing humans to become EAs, choosing to emphasize animal welfare is going to make the job more difficult, because currently many non-EA humans are less sympathetic to animal suffering than human suffering.

That's not the position I was responding to. Here is what you wrote:

It's fair to point out that the majority has been wrong historically many times. I'm not saying this should be our final decision procedure and to lock in those values. But we need some kind of decision procedure for things, and I find when I'm uncertain, that "asking the audience" or democracy seem like a good way to use the "wisdom of crowds" effect to get a relatively good prior.

That seems like you're proposing actually giving epistemic weight to the beliefs of the public, not just { pretending to have the views of normal humans, possibly only during outreach }. My response was to that.

From your current comment:

Giving more epistemic weight to popular morality is in the light that we need popular support to get things done, and is a compromise with reality, rather than an ideal

'Epistemic' (and related terms you used, like 'priors') is about how you form beliefs about what is true, not about how you should act. So there cannot be an 'epistemic compromise with the human public' in the sense you wrote - that would instead be called 'pretending to have beliefs closer to theirs, to persuade them to join our cause'. To say you meant the latter thing by 'epistemic weight' seems like a definitional retreat to me: changing the definition of a term to make it seem like one meant something different all along.

(Some humans perform definitional retreats without knowing it, typically when their real position is not actually pinned down internally and they're coming up with arguments on the spot that are a compromise between some internal sentiment and what others appear to want them to believe. But in the intentional case, this would be dishonest.)

I agree that ideally, if we could, we should also get those other preferences taken into consideration. I'm just using the idea of human democracy as a starting point for establishing basic priors in a way that is tractable.

There's not actually any impractical 'ideal-ness' to it. We can already factor in animal preferences, because we already know them: animals reactively express their preference not to be in factory farms.

(Restating your position in this way also seems dishonest to me; you've displayed awareness of animals' preferences from the start, so you can't believe that it's intractable to consider them.)

the average person on the street is likely to view the idea that you could ever elevate the suffering of any number of chickens above that of even one human child to be abhorrent.

the average animal in a factory farm is likely to view the idea that you could ever elevate the suffering of one human over that of an unbounded number of animal children to be abhorrent, too.

[note: i only swapped the roles of humans and animals. my mind predicts that, at least without this note, this statement - but not the quoted one - would elicit negative reactions or be perceived as uncivil, despite the symmetry, because this kind of rhetoric is only normal/socially acceptable in the original case.]

if giving epistemic weight to popular morality (as you wrote you favor)[1], you'd still need to justify excluding from that the moralities of members of non-dominant species. otherwise, you end up unjustly giving all that epistemic weight to whatever might-makes-right coalition takes over the planet / excludes others from 'the public' (such as by locking the outgroup in factory slaughter facilities, or extermination camps, or enslaving them), because only their dominant morality is being perceived.

absent that exclusion, said weight would be distributed in a way which is inclusive of animals (or of nazi-targeted groups, or of enslaved people, in the case of those aforementioned moral catastrophes).

You can counter with a lot of math that checks out and arguments that make logical sense

this seems to characterize the split as: supporting humans comes from empathy, while supporting animal minds comes from 'cold logic and math'. but (1) the EA case for either would involve math/logic, and (2) many people feel empathy for animals too.

  1. ^

(to be clear, i don't agree; this is just a separate point)
