I'm a computational physicist, and I generally donate to global health. I am skeptical of AI x-risk and of big-R Rationalism, and I intend to explain why in great detail.
I don't see anything in the OP about asking for disproportionate representation of minorities. They seem to be advocating for proportionate representation, and noticing that EA fails to live up to this expectation.
I also don't think that EA truly is only a "sacrifice". For one thing, plenty of EA jobs pay quite well. EA is also an opportunity to do good. EA also has a lot of influence, and directs substantial amounts of money. It's totally reasonable to be concerned that the people making these decisions are not representative of the people that they affect, and may lack useful insight as a result.
I definitely agree that EA should aim to be cooler and more accessible to average people, but you need to be careful. Aiming for maximum virality can come at a cost to truth-seeking and epistemological rigour.
For example, if EA follows your advice and grows to a million members off the back of a "Sam Altman is a villain" campaign, that campaign will become the source of 99% of EA's members, all of whom will have been preferentially selected for having an anti-OpenAI stance. If it turns out that OpenAI is actually good for humanity (somehow), it would be very hard for the cool EA to come to that conclusion.
It doesn't matter that they state a "major breakthrough" is required if they don't provide sufficient evidence that this breakthrough is at all likely to happen in the immediate future. Yarrow has provided plenty of argumentation as to why it won't happen: if you disagree, feel free to cite actual counter-evidence rather than throwing around false accusations of bad faith.
Longtermism doesn't get you off the crazy train at all. In a lot of crazy-train frameworks, the existence of people is net negative, so a large future for humanity is the worst thing that could happen.
This is one of the reasons I'm concerned about people taking this sort of speculative expected-value calculation too seriously: I don't want someone trying to end humanity because they futzed up a math problem.
> In the world of conventional academic knowledge, there is nothing but highly intelligent people trying to build successful careers.
This is a pretty weird thing to say. You understand that "academic knowledge" encompasses basically all of science, right? I know plenty of academics, and I can't think of anyone I know IRL who is not committed to truth-seeking, often with significantly more rigour than is found in effective altruism.
Let me restate the "5% means smaller" case, because I don't think you are responding to the strongest version of the argument here.
The concern is that these are cases of anchoring bias, and that the bias is inherent in the methodology because you are asking in terms of percentages. The vast majority of the time we encounter percentages, they fall in the 1-99% range. I'm guessing that in the actual questionnaire, respondents had been answering other percentage questions in that same range. Giving an answer like 0.0001% to a question where they are just guessing and haven't done any precise calculations does not come naturally to people.
So when someone has the viewpoint that AI x-risk is "extremely unlikely but not impossible", and they are asked to give something in percentage terms, their answer is anchored to the 1-99% range, and they pick something that seems "extremely low" when you are thinking in percentage terms.
But as the other paper showed, when you switch to talking about 1 in n odds, suddenly people are not anchored to 1-99% anymore. When placed next to "1 in 300 thousand odds of an asteroid strike", "1 in 20 odds" sounds incredibly high, not extremely low. This explains why people dropped their estimate by six orders of magnitude in this framing, compared to the percentage one. In an odds framework, "1 in a million" feels more like "extremely unlikely but not impossible".
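To make the size of that framing effect concrete, here's a minimal sketch in Python (the numbers are illustrative choices of my own, not the survey's actual figures) showing how the same vague feeling translates between the two framings:

```python
import math

# Illustrative numbers only: the same vague "extremely unlikely but not
# impossible" feeling, expressed under the two framings discussed above.

def pct_to_one_in_n(p_percent: float) -> float:
    """Convert a percentage answer into '1 in n' odds."""
    return 100.0 / p_percent

def one_in_n_to_pct(n: float) -> float:
    """Convert '1 in n' odds into a percentage."""
    return 100.0 / n

percent_framing_answer = 5.0        # anchored to the familiar 1-99% range
odds_framing_answer_n = 20_000_000  # anchored to comparisons like asteroid odds

print(f"{percent_framing_answer}% is the same as 1 in {pct_to_one_in_n(percent_framing_answer):.0f}")
print(f"1 in {odds_framing_answer_n:,} is the same as {one_in_n_to_pct(odds_framing_answer_n):.7f}%")

gap = math.log10(percent_framing_answer / one_in_n_to_pct(odds_framing_answer_n))
print(f"Gap between the two answers: roughly {gap:.0f} orders of magnitude")
```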
I'm a little concerned that you dismissed this as a fluke when it seems like it has a completely normal explanation.
I think these people's actual opinion is that AI doom is "extremely unlikely but not impossible". The numbers they give are ill-thought-out quantifications by people who are not used to quantifying that kind of thing. Worse, people who have given out ill-thought-out quantifications in percentage form are now anchored to them, and will have difficulty changing their minds later on.
I personally would not recommend donating to Lightcone. However, I do not think you have made a very persuasive case here. In particular, a paragraph like the following concerns me:
> After a while in a conversation that involved me repeatedly referring to Lawfulness of the kind exhibited by Keltham from Yudkowsky's planecrash, he said that he didn't actually read planecrash. (A Keltham, met with a request like that relating to a third party with very opposite goals, would sigh, saying the request should've been made in advance, and then not screw someone over, if they're not trying to screw you over and it's not an incredibly important thing.)
Like, you seem to be saying that it is a point against Ollie that he disagrees with a character from an obscure web fiction serial. Why should anyone care about this?
This is preceded by a whistleblowing discussion that seems to be the bulk of your complaint, but there is not enough detail to tell what's going on. I feel it very much depends on what the information is and who the third party is.
I believe Yarrow is referencing this series of articles from David Thorstad, which quotes primary sources extensively.
What happens if this is true and AI improvements will primarily be inference driven?
It seems like this would be very bad news for AI companies, because customers would have to pay for accurate AI results directly, on a per-run basis. Furthermore, they would have to pay exponential costs for a linear increase in accuracy.
As a crude example, would you expect a translation agency to pay four times as much for translations with half as many errors? In either case, you'd still need a human to come along and correct the errors.
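To make that crude example concrete, here's a toy model in Python (the base numbers and the 4x-cost-per-halving scaling are my own assumptions for illustration, not figures from the post or any real system):

```python
# Toy model, assumed numbers only: suppose halving an AI translator's error
# rate requires roughly 4x the inference spend (i.e. error ~ 1/sqrt(cost)).
# This just illustrates the "paying for accuracy per run" dynamic.

base_cost_usd = 1.00    # hypothetical inference cost per document today
base_error_rate = 0.08  # hypothetical: 8% of sentences need human correction

cost, error = base_cost_usd, base_error_rate
for _ in range(5):
    print(f"error rate {error:.3%} -> inference cost ${cost:,.2f} per document")
    error /= 2  # each halving of errors...
    cost *= 4   # ...costs four times as much under this assumed scaling

# The buyer's question: is each halving of residual errors worth a 4x price
# increase, given a human still has to review the output either way?
```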
I think if someone is running a blog, it should be socially acceptable to ban people from commenting for almost any reason, including just finding someone annoying. According to the definition used in this article, this counts as "suppression of speech". Maybe it is in the literal sense, but I don't think smuggling in the bad feelings associated with government censorship is fair.
Or say you run a fish and chips shop, and it turns out the person you hired to work the front is an open racist who drives customers away by telling them how much he despises Albanian people. Are you meant to sacrifice your own money and livelihood for the sake of "protecting the man's speech"?
People have a right to curate their spaces for their actual needs. The questions become thornier in cases like college campuses, because academic debate and discussion are part of the needs of such institutions. Organisations have to weigh the pros and cons of what they allow people to say on their platforms.