Lukas_Gloor

Somewhat relatedly, what about using AI not to improve your own (or your project's) epistemics, but to improve public discourse? Something like "improve news" or "improve where people get their info on controversial topics."

Edit: To give more context, I was picturing something like training LLMs to pass ideological Turing tests and then creating summaries of the strongest arguments for and against, as well as takedowns of common arguments by each side that are clearly bad. And maybe combining that with commenting on current events as they unfold (to gain traction), handling the tough balance of having to compete in the attention landscape while still adhering to high epistemic standards. The goal would then be something like becoming a "trusted source of balanced reporting," which you can later direct to the issues that matter most (after gaining traction earlier by discussing all sorts of things).

Off-the-cuff answers that may change as I reflect more:

  • Maybe around 25% of people in leadership positions in the EA ecosystem qualify? Somewhat lower for positions at orgs that are unusually "ambitious;" somewhat higher for positions that are more like "iterate on a proven system" or "have a slow-paced research org that doesn't involve itself too much in politics."
  • For the ambitious leaders, I unfortunately have no examples where I feel particularly confident, but can think of a few examples where I'm like "from a distance, it looks like they might be good leaders." I would count Holden in that category, even though I'd say the last couple of years seem suboptimal in terms of track record (and also want to flag that this is just a "from a distance" impression, so don't put much weight on it).
  • Why we're bad at identifying them: This probably isn't the only reason, but the task is just hard. People who have ambitious visions and are willing to try hard to make them happen tend to be above average on dark-triad traits. You probably want someone who is very much not high on psychopathic traits, yet still low enough on neuroticism that they won't be anxious all the time. Similarly, you want someone who isn't too high on narcissism, but who still has that ambitious vision and belief in being exceptional. You want someone who is humble and has inner warmth so they will uplift others along the way – i.e., high on the honesty-humility factor – but that factor correlates with agreeableness and neuroticism, which is a potential problem because you probably can't be too agreeable in the startup world (or when running an ambitious org generally), and you can't be particularly neurotic.
    • (Edit) Another reason is, I think people often aren't "put into leadership positions" by others/some committee; instead, they put themselves there. Like, usually there isn't some committee with a great startup idea looking for a leader; instead, the leader comes with the vision and accumulates followers based on their conviction. And most people who aren't in leadership positions simply aren't vigilant or invested enough to care a lot about who becomes a leader. 

I think incentives matter, but I feel like if they're all that matters, then we're doomed anyway, because "Who will step up as a leader to set good incentives?" In other words, the position "incentives are all that matters" seems self-defeating: to change things, you can't just sit on the sidelines and criticize "the incentives" or "the system." It also seems too cynical: just because, e.g., lots of money is at stake, that doesn't mean people who were previously morally motivated, cautious about their motivations, and trying to do the right thing will suddenly go off the rails.

To be clear, I think there's probably a limit for everyone and no person is forever safe from corruption, but my point is that it matters where on the spectrum someone falls. Among the people who are low on corruptibility – even though most of them don't like power or would flail around helplessly and hopelessly if they had it – there are probably some who have the right mix of traits to create, maintain, and grow pockets of sanity (well-run, well-functioning organizations, ecosystems, etc.).

If you're saying "it takes more than good intentions to not get corrupted," I agree.

But then the question is, "Is Dario someone who's unusually unlikely to get corrupted?"

If you're saying "it doesn't matter who you put in power; bad incentives will corrupt everything," then I don't agree.

I think people differ a lot with respect to how easily they get corrupted by power (or other bad incentive structures). Those low on that spectrum tend to proactively shape the incentives around them, creating a culture that rewards the good qualities they don't want to lose.

No need to reply to my musings below, but this post prompted me to think about what different distinctions I see under “making things go well with powerful AI systems in a messy world.”

That said, my current favourite explanation of what cooperative AI is is that while AI alignment deals with the question of how to make one powerful AI system behave in a way that is aligned with (good) human values, cooperative AI is about making things go well with powerful AI systems in a messy world where there might be many different AI systems, lots of different humans and human groups and different sets of (sometimes contradictory) values.

First of all, I like this framing! Since quite a lot of factors feed into making things go well in such a messy world, I also like highlighting “cooperative intelligence” as a subset of factors you maybe want to zoom in on with the specific research direction of Cooperative AI.

Another recurring framing is that cooperative AI is about improving the cooperative intelligence of advanced AI, which leads to the question of what cooperative intelligence is. Here also there are many different versions in circulation, but the following one is the one I find most useful so far:

Cooperative intelligence is an agent's ability to achieve their goals in ways that also promote social welfare, in a wide range of environments and with a wide range of other agents.

As you point out, a lot of what goes under “cooperative intelligence” sounds dual-use. For differential development to have a positive impact, we of course want to select aspects of it that robustly reduce risks of conflict (and escalation thereof). CLR’s research agenda lists rational crisis bargaining and surrogate goals/safe Pareto improvements. Those seem like promising candidates to me! I wonder at what level it's best to intervene with the goal of instilling these skills and highlighting these strategies. Would it make sense to put together a “peaceful bargaining curriculum” for deliberate practice/training? (If so, should we add assumptions like the availability of safe commitment devices to any of the training episodes?) Is it enough to just describe the strategies in a “bargaining manual?” Do they also intersect with an AI's “values” and therefore have to be considered early on in training (e.g., when it comes to surrogate goals/safe Pareto improvements)? (I feel very uncertain about these questions.)

I can think of more traits that can fit into, “What specific traits would I want to see in AIs, assuming they don’t all share the same values/goals?,” but many of the things I’m thinking of are “AI psychologies”/“AI character traits.” They arguably lie closer to “values” than (pure) “capabilities/intelligence,” so I’m not sure to what degree they aren’t already covered by alignment research. (But maybe Cooperative AI could be a call for alignment research to pay special attention to desiderata that matter in messy multi-agent scenarios.)

To elaborate on the connection to values, I think of “agent psychologies” as something that is in between (or “has components of both”) capabilities and values. On one side, there are “pure capabilities,” such as the ability to guess what other agents want, what they’re thinking, what their constraints are. Then, there are “pure values,” such as caring terminally about human well-being and/or the well-being (or goal achievement) of other AI agents. Somewhere in between, there are agent psychologies/character traits that arose because they were adaptive (in people it was during evolution, in AIs it would be during training) for a specific niche. These are “capabilities” in the sense that they allow the agent to excel at some skills beneficial in its niche. For instance, consider the cluster of skills around “being good at building trust” (in an environment composed of specific other agents). It’s a capability of sorts, but it’s also something that’s embodied, and it comes with tradeoffs. For comparison, in role-playing games, you often have only a limited number of character points to allocate to different character dimensions. Likewise, the AI that’s best-optimized for building trust probably cannot also be the one best at lying. (We can also speculate about training with interpretability tools and whether it has an effect on an agent's honesty or propensity to self-deceive, etc.) 

To give some example character traits that would contribute towards peaceful outcomes in messy multi-agent settings:

(I’m mostly thinking about human examples, but for many of these, I don’t see why “AI versions” of these traits wouldn’t also be helpful in AIs.)

Traits that predispose agents to steer away from unnecessary conflicts/escalation: 

  • Having an aversion to violence, suffering, and other “typical costs of conflict.”
  • ‘Liking’ to see others succeed alongside you (without necessarily caring directly about their goal achievement).
  • A general inclination to be friendly/welcoming/cosmopolitan; lack of spiteful or (needlessly) belligerent instincts.

Agents with these traits will have a comparatively stronger interest in re-framing real-world situations with prisoner’s-dilemma (PD) characteristics into different, more positive-sum terms.

Traits around “being a good coalition partner” or “being good at building peaceful coalitions” (these have considerable overlap with the bullet points above):

  • Integrity, solid communication, honesty, charitability, non-naivety (i.e., being aware of deceptive or reckless agent phenotypes and being willing to dish out altruistic punishment if necessary), self-awareness/a low propensity to self-deceive, the ability to accurately see others’ perspectives, etc.

“Good social intuitions” about other agents in one’s environment: 

  • In humans, there are also intuition-based skills like “being good at noticing when someone is lying” or “being good at noticing when someone is trustworthy.” Maybe there could be AI equivalents of these skills. That said, presumably AIs would learn these skills if they’re being trained in multi-agent environments that also contain deceptive and reckless AIs, which opens up the question: Is it a good idea to introduce such potentially dangerous agents solely for training purposes? (The answer might well be yes, but it obviously depends on the ways this can backfire.)

Lastly, there might be trust/cooperation-relevant procedures or technological interventions that become possible with future AIs, but cannot be done with humans:

  • Inspecting each other’s source code.
  • Putting AIs into sandbox settings to see/test what they would do in specific scenarios.
  • Interpretability, provided it makes sufficient advances. (In theory, neuroscience could make similar advances, but my guess is that mind-reading technology will arrive earlier in ML, if it arrives at all.) 

To sum up, here are a couple of questions I'd focus on if I were working in this area: 

  • To what degree (if any) does Cooperative AI want to focus on things that we can think of as “AI character traits?” If this should be a focus, how much conceptual overlap is there with alignment work in theory, and how much actual overlap is there with alignment work in practice as others are doing it at the moment? 
  • For things that go under the heading of “learnable skills related to cooperative intelligence,” how much of it can we be confident is more likely good than bad? And what’s the best way to teach these skills to AI systems (or make them salient)?
  • How good or bad would it be if AI training regimes stay the way they are with current LLMs (solo competition, with the AI scored by human evaluators), versus training becoming multi-agent or “league-based” (AIs competing with close copies, making training more analogous to human evolution)? If AI developers do go into multi-agent training despite its risks (such as the possibility of spiteful instincts evolving), what are the important things to get right?
  • Does it make sense to deliberately think about features of bargaining among AIs that will be different from bargaining among humans, and zoom in on studying those (or practicing with those)?

A lot of the things I pointed out are probably outside the scope of "Cooperative AI" the way you think about it, but I wasn't sure where to draw the boundary, and I thought it could be helpful to collect my thoughts about this entire cluster of things in one place/comment.

As potentially relevant here, the differential includes particularly bipolar spectrum disorders, but also major depression, schizophrenia, attention-deficit/hyperactivity disorder, and posttraumatic stress disorder.

If one of the ways a person is acting unusually is holding grudges against people they once thought highly of (or against movements they were formerly a part of), I'd also consider NPD and pathological narcissism for the differential diagnosis (the latter has a vulnerable subtype that has some overlap with BPD but is a separate construct). I'm adding this to underscore your point that a specific diagnosis is difficult without a lot of context.

I also agree with not wanting to add to the stigma against people with personality disorders. A stigma means some commonly held association that is either wrong or unfairly negative. I think the risk with talking about diagnoses instead of specific symptoms is that this can unfairly harm the reputation of other people with the same diagnosis. BPD in particular has 9 symptom criteria, of which people only have to meet 5 in order to be diagnosed. So, you can have two people with BPD who share just 1 symptom out of 9 (since 5 + 5 - 9 = 1).

Another way in which talk about personality disorders can be stigmatizing is if the implication or connotation is something like "this person is irredeemable." To avoid this connotation (if we were to armchair-diagnose people at all), I would add caveats like "untreated" or "and they seem to lack insight." Treatment success for BPD without comorbid narcissism is actually high, and for NPD it's more difficult but I wouldn't completely give up hope.

Edit: Overall, I should say that I still agree with the comments that it can sometimes make sense to highlight that a person's destructive behavior forms a pattern and is more unusual than what you see in conflicts between people without personality disorders. However, I don't know if it is ever necessary for forum users to make confident claims about what specific type of cluster B personality disorder (or other, related condition) someone may have. More generally, for the reasons I mentioned in the discussion around stigma, I would prefer if this subject were handled with more care than SuperDuperForecasting was giving it. I didn't downvote their initial comment, because I think something in the vicinity of what they said is an important hypothesis to put out there, but SuperDuperForecasting is IMO hurting their own cause/camp with the way they talked about it.

virologists believing rumors that humans are getting infected

What are you referring to here?

We already have confirmation of hundreds of cases in which people got infected with H5N1 from contact with animals (only 2 cases in the US so far, but one of them very recently). We can guess that there might be some percentage of unreported extra cases, but I'd expect that to be small because of the virus's high mortality rate in its current form (and how much vigilance there is now).

So, I'm confused whether you're referring to confirmed information with the word "rumors," or whether there are rumors of some new development that's meaningfully more concerning than what we already have confirmations of. (If so, I haven't come across it – though "virus particles in milk" and things like that do seem concerning.) 

I agree with what you say in the last paragraph, including the highlighting of autonomy/placing value on it (whether in a realist or anti-realist way).

I'm not convinced by what you said about the effects of belief in realism vs anti-realism.

If you hold fixed people's first-order views, not just about axiology but also about practical norms, then their metaethics makes no further difference.

Sure, but that feels like it's begging the question.

Let's grant that the people we're comparing already have liberal intuitions. After all, this discussion started in a context that I'd summarize as "What are ideological risks in EA-related settings, like the FTX/SBF setting?," so, not a setting where authoritarian intuitions are common. Also, the context wasn't "How would we reform people who start out with illiberal intuitions" – that would be a different topic.

With that out of the way, then, the relevant question strikes me as something like this:

Under which metaethical view (if any) – axiological realism vs axiological anti-realism – is there more of a temptation for axiologically certain individuals with liberal intuitions to re-think/discount these liberal intuitions so as to make the world better according to their axiology?

Here's how I picture the axiological anti-realist's internal monologue: 

"The point of liberal intuitions is to prevent one person from imposing their beliefs on others. I care about my axiological views, but, since I have these liberal intuitions, I do not feel compelled to impose my views on others. There's no tension here."

By contrast, here's how I picture the axiological realist:

"I have these liberal intuitions that make me uncomfortable with the thought of imposing my views on others. At the same time, I know what the objectively correct axiology is, so, if I, consequentialist-style, do things that benefit others according to the objectively correct axiology, then there's a sense in which that will be better for them than if I didn't do it. Perhaps this justifies going against the common-sense principles of liberalism, if I'm truly certain enough and am not self-deceiving here? So, I'm kind of torn..."

I'm not just speaking about hypotheticals. I think this is a dynamic that totally happens with some moral realists in the EA context. For instance, back when I was a moral realist negative utilitarian, I didn't like that my moral beliefs put my goals in tension with most of the rest of the world, but I noticed that the tension was there. The tension disappeared once I realized that I have to agree to disagree with others about matters of axiology (as opposed to thinking, "I have to figure out whether I'm indeed correct about my high confidence, or whether I'm the one who's wrong").

Sure, maybe the axiological realist will come up with a for-them compelling argument why they shouldn't impose the correct axiology on others. Or maybe their notion of "correct axiology" was always inherently about preference fulfillment, which you could say entails respecting autonomy by definition. (That said, if someone were also counting "making future flourishing people" as "creating more preference fulfillment," then this sort of axiology is at least in some possible tension with respecting the autonomy of present/existing people.) ((Also, just a terminological note: I usually think of preference utilitarianism as a stance that isn't typically "axiologically realist," so I'd say any "axiological realism" faces the same issue of there being at least a bit of tension with believing in, and valuing, autonomy in practice.))

When I talked about whether there's a "clear link" between two beliefs, I didn't mean that the link would be binding or inevitable. All I meant is that there's some tension that one has to address somehow.

That was the gist of my point, and I feel like the things you said in reply were perhaps often correct but went past the point I was trying to convey. (Maybe part of what goes into this disagreement is that you might be strawmanning what I think of as "anti-realism" by equating it with "relativism.")

I feel like it's more relevant what a person actually believes than whether they think of themselves as uncertain. Moral certainty seems directly problematic (in terms of risks of recklessness and unilateral action) only when it comes together with moral realism: If you think you know the single correct moral theory, you'll consider yourself justified to override other people's moral beliefs and thwart the goals they've been working towards.

By contrast, there seems to me to be no clear link from "anti-realist moral certainty in some subjectivist axiology" to "considers themselves justified to override other people's life goals." On the contrary, unless someone has an anti-social personality to begin with, it seems only intuitive/natural to me to go from "anti-realism about morality is true" to "we should probably treat moral disagreements between morally certain individuals more like we'd ideally treat political disagreements." How would we want to ideally treat political disagreements? I'd say we want to keep political polarization at a low, accept that there'll be view differences, and we'll agree to play fair and find positive-sum compromises. If some political faction goes around thinking it's okay to sabotage others or use their power unfairly (e.g., restricting free expression of everyone who opposes their talking points), the problem is not that they're "too politically certain in what they believe." The problem is that they're too politically certain that what they believe is what everyone ought to believe. This seems like an important difference! 

There's also something else that I find weird about highlighting uncertainty as a solution to recklessness/fanaticism. Uncertainty can transition to increased certainty later on, as people do more thinking. So, it doesn't feel like a stable solution. (Not to mention that, as EAs tell themselves it's virtuous to remain uncertain, this impedes philosophical progress at the level of individuals.) 

So, while I'm on board with cautioning against overconfidence and would probably concede that there's often a link between overconfidence and unjustified moral or metaethical confidence, I feel like it's misguided in more than one way to highlight "moral certainty" as the thing that's directly bad here.

(You're of course free to disagree.) 

Sorry, I hate it when people comment on something that has already been addressed.

FWIW, though, I had read the paper the day it was posted on the GPI fb page. At that time, I didn't feel like my point about "there is no objective axiology" fit into your discussion.

I feel like even though you discuss views that are "purely deontic" instead of "axiological," there are still some assumptions from the axiology-based framework that underlie your conclusion about how to reason about such views. Specifically, when explaining why a view says that it would be wrong to create only Amy but not Bobby, you didn't say anything that suggests an understanding of "there is no objective axiology about creating new people/beings."

That said, re-reading the sections you point to, I think it's correct that I'd need to give some kind of answer to your dilemmas, and what I'm advocating for seems most relevant to this paragraph:

5.2.3. Intermediate wide views

Given the defects of permissive and restrictive views, we might seek an intermediate wide view: a wide view that is sometimes permissive and sometimes restrictive. Perhaps (for example) wide views should say that there’s something wrong with creating Amy and then later declining to create Bobby in Two-Shot Non-Identity if and only if you foresee at the time of creating Amy that you will later have the opportunity to create Bobby. Or perhaps our wide view should say that there’s something wrong with creating Amy and then later declining to create Bobby if and only if you intend at the time of creating Amy to later decline to create Bobby.

At the very least, I owe you an explanation of what I would say here.

I would indeed advocate for what you call the "intermediate wide view," but I'd motivate this view a bit differently.

All else equal, IMO, the problem with creating Amy and then not creating Bobby is that these specific choices, in combination – and assuming it would have been low-effort to choose the other way around – indicate that you didn't consider the interests of possible people/beings even to a minimum degree. Considering them to a minimum degree would mean being willing to at least take low-effort actions to ensure your choices aren't objectionable from their perspective (the perspective of possible people/beings). Adding someone at +1 when you could've easily added someone else at +100 just seems careless. If Amy and Bobby sat behind a veil of ignorance, not knowing which of them will be created at +1 or +100 (if anyone gets created at all), the one view they would never advocate for is "only create the +1 person." If they favor anti-natalist views, they advocate for creating no one. If they favor totalist views, they'd advocate for creating both. If one favors anti-natalism and the other favors totalism, they might compromise on creating only the +100 person. So, most options here really are defensible, but you don't want to do the one thing that shows you weren't trying at all.

So, it would be bad to only create the +1 person, but it's not "99 units bad" in some objective sense. This means it's not always the dominant concern, and it seems less problematic if we dial up the degree of effort that's needed to choose differently, or when there are externalities like "by creating Amy at +1 instead of Bobby at +100, you create a lot of value for existing people." I don't remember if it was Parfit or Singer who first gave the example of delaying pregnancy for a short number of days (or maybe it was three months?) to avoid your future child suffering from a serious illness. There, it seems mainly objectionable not to wait because of how easy it would be to wait. (Quite a few people, when trying to have children, try for years, so a few months is not that significant.)

So, if you're 20 and contemplate having a child at happiness level 1, knowing that 15 years later they'll invent an embryo-selection therapy that makes new babies happier and guarantees happiness level 100, then having the child at 20 is a little selfish – but "wait 15 years," when you really want a child, is not a low-effort accommodation. (Also, I personally think having children is under pretty much all circumstances "a little selfish," at least in the sense of "you could spend your resources on EA instead." But that's okay. Lots of things people choose are a bit selfish.) I think it would be commendable to wait, but not mandatory. (And as Michael St. Jules points out, not waiting is the issue here; after that's happened, it's done, and when you contemplate having a second child 15 years later, it's a new decision and it no longer matters what you did earlier.)

And although intentions are often relevant to questions of blameworthiness, I’m doubtful whether they are ever relevant to questions of permissibility. Certainly, it would be a surprising downside of wide views if they were committed to that controversial claim.

The intentions are relevant here in the sense of: You should always act with the intention of at least taking low-effort ways to consider the interests of possible people/beings. It's morally frivolous if someone has children on a whim, especially if that leads to them making worse choices for these children than they could otherwise have easily made. But it's okay if the well-being of their future children was at least an important factor in their decision, even if it wasn't the decisive factor. Basically, "if you bring a child into existence and it's not the happiest child you could have, you better have a good reason for why you did things that way, but it's conceivable for there to be good reasons, and then it's okay."

I feel like you're trying to equate "wrong or heartless" (or "heartless-and-prejudiced," as I called it elsewhere) with "socially provocative" or "causes outrage to a subset of readers."

That feels like misdirection.

I see two different issues here:

(1) Are some ideas that cause social backlash still valuable?

(2) Are some ideas shitty and worth condemning?

My answer is yes to both.

When someone expresses a view that belongs in (2), pointing to the existence of (1) isn't a good defense.

You may be saying that we should be humble and can't tell the difference, but I think we can. Moral relativism sucks.

FWIW, if I thought we couldn't tell the difference, then it wouldn't be obvious to me that we should go for "condemn pretty much nothing" as opposed to "condemn everything that causes controversy." Both of these seem equally extremely bad.

I see that you're not quite advocating for "condemn nothing" because you write this bit:

perhaps with some caveats (e.g. that they are the sort of view that a person might honestly come by, as opposed to something invented simply maliciously.)

It depends on what you mean exactly, but I think this may not be going far enough. Some people don't invent new beliefs cult-founder-style with some ulterior motive (like making money), but the beliefs they "honestly" come to may still be hateful and prejudiced. Also, some people might be aware that there's a lot of misanthropy and wanting-to-feel-superior in their thinking, but they might manipulatively pretend to only be interested in "truth-seeking," especially when talking to impressionable members of the rationality community, where you get lots of social credit for signalling truth-seeking virtues.

To get to the heart of things, do you think Hanania's views are no worse than the examples you give? If so, I would expect people to say that he's not actually racist.

However, if they are worse, then I'd say let's drop the cultural relativism and condemn them.

It seems to me like there's no disagreement among people familiar with Hanania that his views were worse in the past. That's a red flag. Some people say he's changed his views. I'm not per se against giving people second chances, but it seems suspicious to me that someone who admits they've had really shitty racist views in the past now continues to focus on issues where – even according to other discussion participants here who defend him – he still seems racist. Like, why isn't he trying to educate people on how not to fall victim to a hateful ideology, since he has personal experience with that? It's hard to come away with "ah, now the motivation is compassion and wanting the best for everyone, when previously it was something dark." (I'm not saying such changes of heart are impossible, but I don't view it as likely, given what other commenters are saying.)

Anyway, to comment on your examples:

Singer faced most of the heat for his views on preimplantation diagnostics and disability before EA became a movement. Still, I'd bet that, if EAs had been around back then, many EAs, and especially the ones I most admire and agree with, would've come to his defense.

I just skimmed that eugenics article you link to and it seems fine to me, or even good. Also, most of the pushback there from EA forum participants is about the strategy of still using the word "eugenics" instead of using a different word, so many people don't seem to disagree much with the substance of the article.

In Bostrom's case, I don't think anyone thinks that Bostrom's comments from long ago were a good thing, but there's a difference between them being awkward and tone-deaf, vs them being hateful or hate-inspired. (And it's more forgivable for people to be awkward and tone-deaf when they're young.)

Lastly, on Scott Alexander's example: whether intelligence differences are at least partly genetic is an empirical question, not a moral one. Someone's take on it might well be influenced by their having hateful moral views, so it matters where a person's interest in that sort of issue is coming from. Does it come from a place of hate or wanting to seem superior, or does it come from a desire for truth-seeking and a belief that knowing what's the case makes it easier to help? (And: Does the person make any actual efforts to help disadvantaged groups?) As Scott Alexander points out himself:

Somebody who believes that Mexicans are more criminal than white people might just be collecting crime stats, but we’re suspicious that they might use this to justify an irrational hatred toward Mexicans and desire to discriminate against them. So it’s potentially racist, regardless of whether you attribute it to genetics or culture.

So, all these examples (I think Zach Davis's writing is more "rationality community" than EA, and I'm not really familiar with it, so I won't comment on it) seem fine to me. 

When I said,

None of the people who were important to EA historically have had hateful or heartless-and-prejudiced views (or, if someone had them secretly, at least they didn't openly express it).

This wasn't about, "Can we find some random people (who we otherwise wouldn't listen to when it comes to other topics) who will be outraged?"

Instead, I meant that we can look at people's views at the object level and decide whether they're coming from a place of compassion for everyone and equal consideration of interests, or whether they're coming from a darker place.

And someone can have wrong views that aren't hateful:

Many of my extended family members consider the idea that abortion is permissible to be hateful and wrong. I consider their views, in addition to many of their other religious views, to be hateful and wrong.

I'm not sure if you're using "hateful" here as a weird synonym to "wrong," or whether your extended relatives have similarities to the Westboro Baptist Church.

Normally, I think of people who are for abortion bans as merely misguided (since they're often literally misguided about empirical questions, or they seem unable to move away from rigid-category thinking and don't understand the need for different reasoning about non-typical examples/edge cases).

When I speak of "hateful," it's something more. I then mean that the ideology has an affinity for appealing to people's darker motivations. I think ideologies like that are properly dangerous, as we've seen historically. (And it applies to, e.g., Communism just as well as to racism.)

I agree with you that conferences do very little "vetting" (and I find this okay), but I think the little vetting that they do and should do includes "don't bring in people who are mouthpieces for ideologies that appeal to people's dark instincts." (And also things like, "don't bring in people who are known to cause harm to others," whether that's through sexually predatory behavior or the tendency to form mini-cults around themselves.)
