I don't think you accurately summarize the article. For example, you say he describes a trauma junkie as: "Anyone sharing their trauma or cautioning about causing trauma."
This is not how he describes the concept. Instead, a trauma junkie is someone who:
I do think it would be genuinely concerning if prominent rationalists were generally dismissive of sexual assault and harassment. But that's not what the article is about. Instead, the author is dismissing people who describe "brief awkward conversations" as traumatic.
This post seems to have a liberal bias. It references a previous post that argues that "donating to moderate candidates in contested states could be highly effective." It then exclusively appeals to liberals:
[Y]our liberal family members may be willing to donate to political candidates.
And it assumes conservative family members wouldn't be interested in donating to moderate candidates or effective causes.
Not all capabilities matter. For example, the capability to burp really loudly is not a morally important one. If we were trying to improve the world, giving people the capability to burp loudly would not be in the top 1,000 things I'd suggest prioritizing.
And plausibly, the reason why this capability doesn't matter is that it doesn't promote wellbeing. More generally, this might be true of any capability. The reason why capabilities like getting an education or accessing healthcare are important is precisely that they reliably lead to people living better lives.
From a strictly utilitarian perspective, there are good reasons to spend time with loved ones, cultivate emotional bonds, and pursue personal hobbies:
That said, I don't think EA is committed to utilitarianism. Instead, I think EA is more centered around beneficentrism, the idea that it's really important to help others. The difference is that beneficentrism doesn't entail maximizing the world's total welfare. Rather, it's consistent with this view to be partial to one's family and loved ones and to have carveouts for one's own personal projects.
Scott Alexander discusses this in his post here. I'm skeptical that humans will be able to align AI with morality anytime soon. Humans have been disagreeing about what morality consists of for a few thousand years. It's unlikely we'll solve the issue in the next 10.
A couple comments:
1. Evolution does not imply that every organism has an "intrinsic desire for survivability and reproduction." Rather, it implies that organisms will tend to act in ways that would lead to survival and reproduction in their ancestral environment, but these actions need not be motivated by a drive to survive and reproduce. In slogan form: We are adaptation executors, not fitness maximizers.
For example, the reason people nowadays eat Twinkies is not that we want to survive or reproduce, but that we like the taste! This preference for sugary foods would have been fitness-enhancing in our ancestral environment, but it is maladaptive in our modern one. Yet people continue eating Twinkies anyway.
2. The "core" of your argument doesn't seem sound to me. You say that hyper-sentient (V1) aliens wouldn't eat humans because other super duper sentient (V2) aliens might eat them. And V3 aliens might eat the V2 aliens, and so on. But the mere possibility of other aliens is not a strong reason to do anything. After all, it's hypothetically possible that the V2 aliens would be positively elated that the V1 aliens are eating humans and would reward the V1 aliens even more. What matters, though, is not what's possible but what the expected effects of one's actions are.
If V2 aliens don't actually exist and the V1 aliens know this, what other prudential reason would the V1 aliens have for refraining from eating humans? I don't see any.
A couple thoughts on this:
1. Perhaps it's true that elections are mostly sold to the highest bidder in poor, developing countries. (I'm not familiar with the research on this, and I'd be reluctant to simply trust your Wikipedia link.) Should EAs help the "better" candidate buy their way to power? It seems like this risks undermining the legitimacy of those countries' elections.
2. It's not clear to me that it's easy to figure out who the better candidate is. In one's own country that can often be difficult. Understanding the politics of a foreign country would be even harder. And I'm skeptical that we can just defer to whatever the majority of a country wants because a) it won't always be clear what the majority wants and b) there are reasons to think the majority will be mistaken due to bias or ignorance.
And I don't see how the footnote you cite on this point supports your position. It summarizes research about the effects of information dissemination on voters' choices. It finds that in some cases voters change their decisions after receiving information about social policies or political candidates. In other words, it shows that the citizens were ignorant: they did not know what was happening in politics. As the researchers note:
Voters may lack information about the qualifications and policy positions of candidates, making it difficult to make an informed vote choice.
Moreover, the upshot of the research is that sometimes people change their minds when given information about policies or candidates. This doesn't show that they ended up choosing the better policy or candidate.
I think the standard of evidence needs to be much higher before EAs get involved in foreign countries' political affairs.
Interesting study! Could you say more about what the intervention consisted of? Who were the people administering the intervention? What were their instructions/training? What was the structure of the program? Etc.