The Forum is getting a bit swamped with discussions about Bostrom's email and apology, so we're creating this thread as a place to discuss the topic.
All other posts on this topic will be marked as "Personal Blog." People who have opted into seeing "Personal Blog" posts will see them on the Frontpage; everyone else will see them only in Recent Discussion or in All Posts. (If you want to change your "Personal Blog" setting, you can do that by following the instructions here.)
(Please also feel free to give us feedback on this thread and approach. This is the first time we’ve tried this in response to events that dominate Forum discussion. You can give feedback by commenting on the thread, or by reaching out to forum@effectivealtruism.org.)
Please also note that over the past week we have seen an influx of new accounts created to vote and comment, and we are aware that people who feel strongly about human biodiversity sometimes vote-brigade sites where the topic is being discussed. Voting and discussion on some topics may therefore not be representative of the normal EA Forum user base.
If you choose to participate in this discussion, please remember Forum norms. Chiefly,
- Be kind.
- Stay civil, at the minimum. Don’t sneer or be snarky. In general, assume good faith. We may delete unnecessary rudeness and issue warnings or bans for it.
- Substantive disagreements are fine and expected. Disagreements help us find the truth and are part of healthy communication.
Please try to remember that most people on the Forum are here for collaborative discussions about doing good.
Thanks a lot for raising this, Geoffrey. A while back I mentioned some personal feelings and possible risks related to the current Western political climate, from one non-Westerner's perspective. You've articulated my intuitions very nicely here and in that article.
From a strategic perspective, it seems to me that the longer AGI takes to develop, the more likely it is that decision-making power will end up being shared globally. EAs should consider that they might end up in that world, and that it might not be a good idea to create and enforce easily violated, non-negotiable demands on issues that we're not prioritizing (e.g. it would be quite bad if a Western EA ended up repeatedly reprimanding a potential Chinese collaborator simply because the latter speaks in a way that sounds blunt from the former's perspective). To be clear, China has some of this as well (mostly relating to its geopolitical history), and I think feeling less strongly about those issues could be beneficial.