suffering-focused-altruist/longtermist, ASI-alignment focused, autistic/traumatized
a moral intuition i have: to avoid culturally/conformity-motivated cognition, it's useful to ask:
if we were starting over, new to the world but with all the technology we have now, would we recreate this practice?
example: we start out and there's us, and these innocent fluffy creatures that can't talk to us, but they can be our friends. we're just learning about them for the first time. would we, at some point, spontaneously choose to kill them and eat their bodies, despite having plant-based foods, supplements, vegan-assuming nutrition guides, etc? to me, the answer seems obviously not. the idea would not even cross our minds.
(i encourage picking other topics and seeing how this applies)
"What is malevolence? On the nature, measurement, and distribution of dark traits" was posted two weeks ago (and i recommend it). there was a questionnaire discussed in that post which tries to measure the levels of 'dark traits' in the respondent.
i'm curious about the results[1] of EAs[2] on that questionnaire, if anyone wants to volunteer theirs. there are short and long versions (16 and 70 questions).
(or responses to the questions themselves)
i also posted the same quick take to LessWrong, asking about rationalists
i'm not bothered by your comments.
your first reply seemed to be about how i worded the point (you wrote "obnoxiously posed", and reworded it) rather than about pedantry/irrelevance. i mentally replaced "this is obnoxious" with "this makes me feel annoyed", which i think is okay to say. i also considered letting you know i'm autistic, which makes me word things differently or more literally[1] or in ways that can seem to have unintended emotional content. (i wonder if that's what made it feel like "marking it up in red pen")
onto object-level: what i wrote actually seemed substantive to me, i.e. it really did seem to me that the quote in point 2 was strongly misrepresenting the position the post intended to argue against, so i wouldn't consider it pedantic. (it could separately be false, of course)
If quila really cares about where the scout mindset metaphor falls apart they could have probed that instead of just dinging me as they are the referee
it did not occur to me that you might endorse the scout/soldier metaphor, and just be using the existence of scout/soldier in 'scout/soldier mindset' to bring it up; so yes, if that's actually the case, it would have been better to notice that and then either not comment on it or probe it as you say. using a metaphor is not invalid.
here's how i perceived it at the time: 'scout mindset' and 'soldier mindset' have particular meanings, so whether traditional soldiers are necessary for traditional scouts is a different topic. writing about them instead seemed 'opportunistic' in some sense, as if the text was using the terminological overlap to sneak through an argument about one as about the other.
i wonder if this thread could have been avoided if i had been clearer about that in my initial comment. if anyone has advice, it is welcome.
maybe 'more structured like the thought is structured internally'
I think we have all the info we need to contradict the fear of not being a scout in her metaphor. Scouts are important for success in battle because accurate information is important to draw up a good battle plan. But those battle plans are worthless without soldiers to fight the battle! “Everyone Should be a Mapmaker and Fear that Using the Map to Actually Do Something Could Make Them a Worse Mapmaker” would be a much less rousing title, but this is how many EAs and rationalists have chosen to interpret the book.
seems locally invalid.[1]
'locally invalid' means 'this is not a valid argument', separate from the truth of the premises or conclusion
in a thread there i mentioned that even for a described 'ultimate neartermist', the best action is actually to cause acausal trade with an ASI at an earlier point in time (i.e. by causing aligned ASI). for a hypothetical value which only cares about near-term beings, this would also be true, because most near-term beings are not on earth.
also, if i consider a hypothetical value which just cares about near-term beings on earth, it may prefer to destroy earth instead of slowly reducing animal suffering. 'would want to destroy earth' is a classic response to the idea of pure negative utilitarianism, but it would apply to standard utilitarianism too if the things valued (in this hypothetical case, just near-term beings on earth) experienced more bad than good which could not be sufficiently mitigated in the near term.
(disclaimer: the 'neartermism' of actual humans is probably importantly different from these, probably more reliant on moral intuition than these literal interpretations. i'm a longtermist myself.)
maybe wacky anthropics stuff?
this comes to mind
[strong upvoted for being well-formed criticism]
Almost any form of maximization as a lifestyle is likely to be neutral at best, unhealthy at worst, with maximization of any rational endeavor skewing towards predictably unhealthy and harmful. Maximization is fundamentally incompatible with good mental health. You can't "just have a little bit of maximization"; it's all or nothing.
how would you respond to the idea that good mental health is instrumental to maximization? that's a standard position in my impression.
(Commenting as I read)
In light of the conflicting research cited above, it would be overly simplistic to assume that those with high levels of malevolence are consistently aware of and endorse their traits, with an internal monologue[9] that goes something like this: "I'm so evil and just want to maximize my own power and gratify my own desires, no matter how much suffering this causes for everyone else, hahaha."[10] Although some people may think like that, it would be wrong to assume that everyone with high levels of malevolence thinks in this way.
I think the reason that inner monologue feels implausible is that the statement is explicit. If someone really held that attitude/goal, I'd expect it to be implicit: where their inner monologue wouldn't directly say, "I just want to gratify my own desires at the expense of others", but it would contain object-level reasoning about how to do that, and judgements of others that strongly correlate with whether they advance or are barriers to the goal, where the goal is an implicit background factor.
And as you note, most people do have some non-negligible level of this:
Everyday experience suggests, for example, that most people care a lot more about their self-interest than is remotely justified by impartial benevolence
a few pieces of this advice seem to be about how to manipulate others in subtle ways.
You can talk about specific things while being pleasant, I dare say, agreeable [...] pragmatically, people will be much likely more susceptible to help you if they associate you with someone who is fun/agreeable [...] try to be as agreeable as possible
i interpret 'try to be agreeable' to mean 'try to appear to agree more than you would if you were being fully honest' - because, given this is advice i.e. meant to be actionable, it's not just saying that people who (by coincidence) genuinely agree have a natural advantage. it's saying, actually intentionally try to seem agreeable, to cause them to associate you with a positive feeling, to make them 'more susceptible to help you'.
Adapt/mirror people's behaviour. If someone has a very focused way of talking about things, speaking fast, being curt and concise, mirror this. If someone likes to expand on personal anecdotes, shows a slower pacing, comments on the food, do that too. They will feel more comfortable. [...] If the vibe is good, it means that you'll be able to reach out later for more content.
i don't know whether others mind this, but at least personally, i would not want people to do this with me. if someone is trying to influence my mind in ways i am not aware of, i want to know they are doing it so i can try to account for the effect (or, realistically, ask them not to, or not befriend them if they seem to practice a wide range of such techniques - i've unfortunately met people who do).
i'd guess that mirroring behavior causes the one being mirrored to subtly intuit that they are more similar than they really are, leading to feeling more comfortable around that person.
i think {the net effects we'd observe on how friendships/allyships form in worlds where all EAs try to subtly manipulate each other} are not net good. i imagine it would lead to friendships/allyships being determined more (relative to before) by who's good at applying these (and other) tactics, and so less by the substantive factors which should matter.
also, i think there is room for nuance about where the line is between {being kind and trying to create a positive environment} and manipulation. some forms of trying to influence how someone feels seem okay to me, like if someone is sad and you want to make them feel less sad (and they know you'll be doing this and that's why they're talking to you). i guess the relevant line for me is whether it's intended to help the person, like in that case, or whether it's intended to influence how they perceive you, to gain some sort of advantage from them. the two pieces of advice i quoted seem to be the latter kind.
(to be clear, this criticism doesn't apply to most of the points, which are probably good advice; i write this because i know criticism can feel bad, and i don't want to cause that.)
if someone told me they were doing it, i would actually ask them not to.
if it seemed like they were someone for whom this was just one thing in a wide arsenal of other such subtle tactics, i'd also probably want to not become friends with them.
As one counterexample, EA is really rare in humans, but does seem more fueled by principles than situations.
(Otoh, if situations make one more susceptible to adopting some principles, is either really the "true cause"? Like, plausibly me being abused as a child made me want to reduce suffering more, like this post describes. But it doesn't seem coherent to say that means the principles are overstated as an explanation for my behavior.
I dunno why loneliness would be different; my first thought is that loneliness means one has less of a community to appeal to, so there are fewer conformity biases preventing such a person from developing divergent or (relatively) extreme views. The fact that they can find some community around said views, and then face conformity pressures towards them, is also a factor of course; and that actually would be an 'unprincipled' reason to adopt a view, so i guess for that case it does make sense to say "it's more situation(-activated biases) than genuine (less-biasedly arrived at) principles".
An implication in my view is that this isn't particularly about extreme behavior; less-biased behavior is just rare across the spectrum. (Also, if we narrow in on people who are trying to be less biased, their behavior might be extreme; e.g., Rationalists trying to prevent existential risk from AI seems deeply weird from the outside))