I don't think you accurately summarize the article. For example, you say he describes a trauma junkie as: "Anyone sharing their trauma or cautioning about causing trauma."
This is not how he describes the concept. Instead, a trauma junkie is someone who:
I do think it would be genuinely concerning if prominent rationalists were generally dismissive of sexual assault and harassment. But that's not what the article is about. Instead, the author is dismissing people who describe "brief awkward conversations" as traumatic.
This post seems to have a liberal bias. It references a previous post that argues that "donating to moderate candidates in contested states could be highly effective." It then exclusively appeals to liberals:
[Y]our liberal family members may be willing to donate to political candidates.
And assumes conservative family members wouldn't be interested in donating to moderate candidates or effective causes.
Not all capabilities matter. For example, the capability to burp really loudly is not a morally important one. If we were trying to improve the world, giving people the capability to burp loudly would not be in the top 1000 things I'd suggest prioritizing.
And plausibly, the reason this capability doesn't matter is that it doesn't promote wellbeing. More generally, this might be true of any capability. The reason why capabilities like getting an education or accessing healthcare are important is precisely because they reliably lead to people living better lives.
From a strictly utilitarian perspective, there are good reasons to spend time with loved ones, cultivate emotional bonds, and pursue personal hobbies:
That said, I don't think EA is committed to utilitarianism. Instead, I think EA is more centered around beneficentrism, the idea that it's really important to help others. The difference is that beneficentrism doesn't entail maximizing the world's total welfare. Rather, it's consistent with this view to be partial to one's family and loved ones and to have carveouts for one's own personal projects.
Scott Alexander discusses this in his post here. I'm skeptical that humans will be able to align AI with morality anytime soon. Humans have been disagreeing about what morality consists of for a few thousand years. It's unlikely we'll solve the issue in the next 10.
A couple comments:
1. Evolution does not imply that every organism has an "intrinsic desire for survivability and reproduction." Rather, it implies that organisms will tend to act in ways that would lead to survival and reproduction in their ancestral environment, but these actions need not be motivated by a drive to survive and reproduce. In slogan form: We are adaptation executors, not fitness maximizers.
For example, the reason people nowadays eat Twinkies is not because we want to survive or reproduce, but because we like the taste! This preference for sugary foods would have been fitness-enhancing in our ancestral environment, but it is maladaptive in our modern one. Yet people continue eating Twinkies anyway.
2. The "core" of your argument doesn't seem sound to me. You say that hyper-sentient (V1) aliens wouldn't eat humans because other super duper sentient (V2) aliens might eat them. And V3 aliens might eat the V2 aliens, and so on. But... the mere possibility of other aliens is not a strong reason to do anything. After all, it's hypothetically possible that the V2 aliens would be positively elated that the V1 aliens are eating humans and would reward the V1 aliens even more. What matters, though, is not what's possible but rather what the expected effects of one's actions are.
If V2 aliens don't actually exist and the V1 aliens know this, what other prudential reason would the V1 aliens have for refraining from eating humans? I don't see any.
There is an alternative to explaining something in a condescending way. You can explain something in an excited way!
I think this video does a good job of modeling this mode of explanation. It treats the other person as someone who's competent and truth-seeking, just lacking a bit of information. Start by assuming they would love to learn of the opportunity to help people more effectively. And get excited about sharing information about how we know different charities can be effective.
Examples: