nathan98000
The issue is that the objector can tell (people are good at sensing this kind of thing) that the EA talking to them simply believes they are wrong. From the objector's perspective, the EA is not respecting their valid, debatable opinion. Instead of engaging with them as an equal, the EA is condescendingly explaining to them why they're wrong.

There is an alternative to explaining something in a condescending way. You can explain something in an excited way!

I think this video does a good job of modeling this mode of explanation. It treats the other person as someone who's competent and truth-seeking, just lacking a bit of information. Start by assuming they would love to learn about the opportunity to help people more effectively, and get excited about sharing how we know different charities can be effective.

Examples:

  • "Have you heard of the randomized controlled trials that tested the effects of insecticide-treated bednets? When some villages were randomly given bednets, they had a lower incidence of malaria. And we can quantify a) how much it costs to provide the bednets and b) how many lives this kind of intervention saves. It turns out saving a life is surprisingly affordable, even for ordinary Americans!"
  • "Did you know the Make-A-Wish Foundation says each wish costs about $7,000? There are studies showing that other charities can do a LOT more with that money. Rather than flying kids out to Hawaii for a week, those charities can save multiple kids' lives by providing vitamin A supplements!"
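The comparison above can be made concrete with a back-of-the-envelope calculation. The $7,000-per-wish figure comes from the comment; the cost per life saved via vitamin A supplementation is an assumed, purely illustrative number, not an official charity estimate:

```python
# Back-of-the-envelope comparison: one Make-A-Wish wish vs. lives saved
# by a highly cost-effective charity. The cost-per-life figure is an
# assumed, illustrative number, not an official estimate.

cost_per_wish = 7_000          # Make-A-Wish's stated average cost per wish
assumed_cost_per_life = 3_500  # hypothetical cost to save a life via vitamin A supplements

lives_per_wish_budget = cost_per_wish / assumed_cost_per_life
print(f"One wish's budget could save about {lives_per_wish_budget:.1f} lives")
```

The exact ratio depends entirely on the cost-per-life estimate you plug in, but the structure of the argument is just this division.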

Interesting study! Could you say more about what the intervention consisted of? Who were the people administering the intervention? What were their instructions/training? What was the structure of the program? Etc.

I don't think you accurately summarize the article. For example, you say he describes a trauma junkie as: "Anyone sharing their trauma or cautioning about causing trauma."

This is not how he describes the concept. Instead, a trauma junkie is someone who:

  • "Bends every gathering and interaction into a hunt for the problematic elements or people within it."
  • "Loudly centers themselves as leading the charge to making a place or group or scene “safe” for everyone."
  • "[Recasts] normal interactions as traumatic ordeals so there will be a victimization to rally against."

I do think it would be genuinely concerning if prominent rationalists were generally dismissive of sexual assault and harassment. But that's not what the article is about. Instead, the author is dismissing people who describe "brief awkward conversations" as traumatic.

This post seems to have a liberal bias. It references a previous post that argues that "donating to moderate candidates in contested states could be highly effective." It then exclusively appeals to liberals:

[Y]our liberal family members may be willing to donate to political candidates.

And it assumes conservative family members wouldn't be interested in donating to moderate candidates or to effective causes.

Not all capabilities matter. For example, the capability to burp really loudly is not a morally important one. If we were trying to improve the world, giving people the capability to burp loudly would not be in the top 1,000 things I'd suggest prioritizing.
And plausibly, the reason this capability doesn't matter is that it doesn't promote wellbeing. More generally, this might be true of any capability. The reason capabilities like getting an education or accessing healthcare are important is precisely that they reliably lead to people living better lives.

From a strictly utilitarian perspective, there are good reasons to spend time with loved ones, cultivate emotional bonds, and pursue personal hobbies:

  1. Your own well-being counts too. A utilitarian doesn’t just care about the welfare of strangers. They care about everyone’s welfare, including their own. If living altruistically leaves you miserable, then by utilitarian standards that’s a loss of value, not a gain.
  2. Not all goods are fungible. Money can be redirected to different causes, but emotional attachments and sources of intrinsic motivation aren't as transferable. So it doesn't make sense for a utilitarian to demand that people rewire these aspects of themselves. Human beings just don't work that way.
  3. Burnout is real. A person who devotes themselves single-mindedly to a cause might be admirable, but they risk exhaustion and loss of motivation in the long run. And conversely, a person who feels joy about life will be more motivated to continue working towards the ends they care about.
  4. Role models can inspire others. If utilitarianism looks unpleasant, fewer people will want to adopt it for themselves. But if people see utilitarians living rich, balanced lives, they’re more likely to be inspired by example and join in.

That said, I don't think EA is committed to utilitarianism. Instead, I think EA is more centered around beneficentrism, the idea that it's really important to help others. The difference is that beneficentrism doesn't entail maximizing the world's total welfare. Rather, it's consistent with this view to be partial to one's family and loved ones and to have carveouts for one's own personal projects.

Scott Alexander discusses this in his post here. I'm skeptical that humans will be able to align AI with morality anytime soon. Humans have been disagreeing about what morality consists of for a few thousand years; it's unlikely we'll solve the issue in the next 10.

A couple of comments:

1. Evolution does not imply that every organism has an "intrinsic desire for survivability and reproduction." Rather, it implies that organisms will tend to act in ways that would lead to survival and reproduction in their ancestral environment, but these actions need not be motivated by a drive to survive and reproduce. In slogan form: We are adaptation executors, not fitness maximizers.

For example, the reason people nowadays eat Twinkies is not because we want to survive or reproduce, but because we like the taste! This preference for sugary foods would have been fitness-enhancing in our ancestral environment, but it is maladaptive in our modern one. Yet people continue eating Twinkies anyway.

2. The "core" of your argument doesn't seem sound to me. You say that hyper-sentient (V1) aliens wouldn't eat humans because even-more-sentient (V2) aliens might eat them. And V3 aliens might eat the V2 aliens, and so on. But... the mere possibility of other aliens is not a strong reason to do anything. After all, it's hypothetically possible that the V2 aliens would be positively elated that the V1 aliens are eating humans and would reward the V1 aliens even more. What matters, though, is not what's possible but what the expected effects of one's actions are.

If V2 aliens don't actually exist and the V1 aliens know this, what other prudential reason would the V1 aliens have for refraining from eating humans? I don't see any.
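The point about expected effects (rather than mere possibilities) can be sketched as a toy expected-value calculation. Every probability and payoff below is made up purely for illustration:

```python
# Toy expected-value calculation illustrating why the mere possibility of
# V2 aliens shouldn't drive V1's decision. All numbers are hypothetical.

def expected_value(outcomes):
    """Sum of probability * payoff over mutually exclusive outcomes."""
    return sum(p * payoff for p, payoff in outcomes)

# Scenario A: V1 eats humans. V2 might punish them, might reward them,
# or (most likely) might not exist at all.
eat = expected_value([
    (0.001, -1000),  # tiny chance V2 exists and punishes V1
    (0.001, +1000),  # equally tiny chance V2 exists and rewards V1
    (0.998, +10),    # most likely: no V2, V1 just enjoys the meal
])

# Scenario B: V1 refrains from eating humans; nothing happens either way.
refrain = expected_value([(1.0, 0)])

# The symmetric punish/reward possibilities cancel, so the remote chance
# of V2 barely moves the comparison.
print(f"eat: {eat}, refrain: {refrain}")
```

Since the hypothetical punishment and reward cancel out, the decision is driven by the likely case, which is the point: bare possibilities with no probability weight behind them don't supply a prudential reason.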
