I'm a computational physicist, and I generally donate to global health. I am skeptical of AI x-risk and of big-R Rationalism, and I intend to explain why in great detail.
I believe Rice's theorem applies to a programmable calculator. Do you think it is impossible to prove that a programmable handheld calculator is "safe"? Do you think it is impossible to make a programmable calculator safe?
My point is that just because you can't formally, mathematically prove something doesn't mean it's not true.
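To make the Rice's theorem point concrete, here is a minimal sketch of the standard reduction argument. Everything in it is hypothetical illustration, not a real API: `is_safe` is an imagined universal decider for the semantic property "this program is safe", and `unsafe_act` stands in for whatever behaviour counts as unsafe.

```python
# Minimal sketch of the Rice's theorem argument. All names are hypothetical
# illustrations: `is_safe` is an imagined decider for the non-trivial
# semantic property "this program is safe".

def unsafe_act():
    raise RuntimeError("stand-in for any behaviour we count as 'unsafe'")

def is_safe(program):
    # Rice's theorem: no total, always-correct version of this function
    # can exist for any non-trivial semantic property.
    raise NotImplementedError

def halts(program, inp):
    """If a working is_safe existed, it would decide the halting problem."""
    def gadget():
        program(inp)   # runs forever iff `program` never halts on `inp`
        unsafe_act()   # reached (making gadget "unsafe") only if it halts
    # gadget is unsafe exactly when program halts on inp, so is_safe(gadget)
    # would answer an undecidable question -- contradiction.
    return not is_safe(gadget)
```

The contradiction only rules out a universal decider over arbitrary programs; it does nothing to stop you from establishing, by ordinary engineering means, that one specific calculator is safe.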
What is the probability that the U.S. AI industry (including OpenAI, Anthropic, Microsoft, Google, and others) is in a financial bubble — as determined by multiple reliable sources such as The Wall Street Journal, the Financial Times, or The Economist — that will pop before January 1, 2031?
My leading view is that there will be some sort of bubble pop, but with people still using genAI tools to some degree afterwards (like how people kept using the internet after the dot-com bubble burst).
There's still major uncertainty on my part, because I don't know much about financial markets and am still highly uncertain about the level at which AI progress fully stalls.
I think if someone is running a blog, it should be socially acceptable to ban people from commenting for almost any reason, including just finding someone annoying. According to the definition used in this article, this counts as "suppression of speech". Maybe it is in the literal sense, but I don't think smuggling in the bad feelings associated with government censorship is fair.
Or say you run a fish and chips shop, and it turns out the person you hired at the front is an open racist who drives customers away by telling them how much he despises Albanian people. Are you meant to sacrifice your own money and livelihood for the sake of "protecting the man's speech"?
People have a right to curate their spaces for their actual needs. The questions become thornier in a case like college campuses, because academic debate and discussion are part of the needs of such an institution. Organisations have to weigh the pros and cons of what they allow people to say on their platforms.
I don't see anything in the OP about asking for disproportionate representation of minorities. They seem to be advocating for proportionate representation, and noticing that EA fails to live up to this expectation.
I also don't think that EA truly is only a "sacrifice". For one thing, plenty of EA jobs pay quite well. EA is also an opportunity to do good. EA also has a lot of influence, and directs substantial amounts of money. It's totally reasonable to be concerned that the people making these decisions are not representative of the people that they affect, and may lack useful insight as a result.
I definitely agree that EA should aim to be cooler and more accessible to average people, but you need to be careful. Aiming for maximum virality can come at a cost to truth-seeking and epistemological rigour.
For example, if EA follows your advice and grows to a million members off the back of a "Sam Altman is a villain" campaign, that campaign will become the source of 99% of EA's members, all of whom will have been preferentially selected for having an anti-OpenAI stance. If it turns out that OpenAI is actually good for humanity (somehow), it would be very hard for the cool EA to come to that conclusion.
It doesn't matter if they state that a "major breakthrough" is required if they don't provide sufficient evidence that this breakthrough is in any way likely to happen in the immediate future. Yarrow has provided plenty of argumentation as to why it won't happen: if you disagree, you should feel free to cite actual counter-evidence rather than throwing around false accusations of bad faith.
Longtermism doesn't get you out of the crazy train at all. In a lot of crazy train frameworks, the existence of people is net negative, so a large future for humanity is the worst thing that could happen.
This is one of the reasons I'm concerned about people taking these sorts of speculative expected-value calculations too seriously: I don't want someone trying to end humanity because they futzed up on a math problem.
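As a toy illustration (every number here is invented), notice how fragile these calculations are: a small shift in one subjective input flips the sign of the expected value, and with it the supposedly optimal action.

```python
# Toy expected-value calculation with invented numbers, showing how a
# small change in one subjective input flips the conclusion entirely.

def ev_of_large_future(p_good, value_if_good=1e15, value_if_bad=-1e15):
    """Expected value of a large future, given P(future is net positive)."""
    return p_good * value_if_good + (1 - p_good) * value_if_bad

print(ev_of_large_future(0.51))  # +2e13: framework says pursue a large future
print(ev_of_large_future(0.49))  # -2e13: same framework now says prevent one
```

A two-percentage-point disagreement about an unknowable probability flips the verdict from "a large future is the best thing possible" to "it's the worst thing possible", which is not a property you want in a framework people might act on.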
I believe you are correct, and will probably write up a post explaining why in detail at some point.