Another way to think about this (imo) is "do you screen falsehoods immediately, such that none ever enter, or do you prune them later at leisure?"
Sometimes, assembling false things (such as rough approximations or heuristics!) can give you insight as to the general shape of a new Actually True thing, but discovering the new Actually True thing using only absolutely pure definite grounded vetted airtight parts would be way harder and wouldn't happen in expectation.
And if you're trying to (e.g.) go "okay, men are stronger than women, and adults are smarter than kids" and somebody interrupts to go "aCtUaLlY this is false" because they have a genuinely correct point about, e.g., the variance present in bell curves, and there being some specific women who are stronger than many men and some specific children who are smarter than many adults ... this whole thing just derails the central train of thought that was trying to go somewhere.
(And if the "aCtUaLlY" happens so reliably that you can viscerally feel it coming, as you start to type out your rough premises, you get demoralized before you even begin, close your draft, and go do something else instead.)
Selfish piggyback plug for the concept of sazen.
The essay itself is the argument for why EAs shouldn't steelman things like the TIME piece.
(I understand you're disagreeing with the essay and that's :thumbsup: but, like.)
If you set out to steelman things that were generated by a process antithetical to truth, what you end up with is something like [justifications for Christianity]; privileging-the-hypothesis is an unwise move.
If one has independent reasons to think that many of the major claims in the article are true, then I think the course most likely to not-mislead one is to follow those independent reasons, and not spend a lot of time anchored on words coming from a source that's pretty clearly not putting truth first on the priority list.
This language is inflammatory ("overwhelming", "incestuous"), but we can boil this down to a more sterile-sounding claim.
A major part of the premise of the OP is something like "the inflammatory nature is a feature, not a bug; sure, you can boil it down to a more sterile-sounding claim, but most of the audience will not; they will instead follow the connotation, and thus people will essentially 'get away' with the stronger claim that they merely implied."
The accuser doesn’t offer concrete behaviors, but rather leaves the badness as general associations. They don’t make explicit accusations, but rather implicit ones. The true darkness is hinted at, not named. They speculate about my bad traits without taking the risk of making a claim. They frame things in a way that increases my perceived culpability.
I think it is a mistake to steelman things like the TIME piece, for precisely this reason, and it's also a mistake to think that most people are steelmanning as they consume it.
So pointing out that it could imply something reasonable is sort of beside the point—it doesn't, in practice.
I am at best 1/1000th as "famous" as the OP, but the first ten paragraphs ring ABSOLUTELY TRUE from my own personal experience, and generic credulousness on the part of people who are willing to entertain ludicrous falsehoods without any sort of skepticism has done me a lot of damage.
I mean, I don't have this hypothetical document made in my head (or I would've posted it myself).
But an easy example is something of the shape:

[EDIT: The below was off-the-cuff and, on reflection, I endorse the specific suggestion much less. The structural thing it was trying to gesture at, though, of something clear and concrete and observable, is still the thing I would be looking for, that is a prerequisite for enduring endorsement.]
"We commit to spending at least 2% of our operational budgets on outreach to [racial group/gender group/otherwise unrepresented group] for the next 5 years."
Maybe the number is 1%, or 10%, or something else; maybe it's 1 year or 10 years or instead of years it's "until X members of our group/board/whatever are from [nondominant demographic]."
The thing that I like about the above example in contrast with the OP is that it's clear, concrete, specific, and evaluable, and not just an applause light.
I would like for all involved to consider this, basically, a bet on "making and publishing this pledge" being an effective intervention on ... something.
I'm not sure whether the something is "actual racism and sexism and other bigotry within EA," or "the median EA's discomfort at their uncertainty about whether racism and sexism are a part of EA," or what.
But (in the spirit of the E in EA) I'd like that bet to be more clear, so since you were willing to leave a comment above: would you be willing to state with a little more detail which problem this was intended to solve, and how confident you (the group involved) are that it will be a good intervention?
I am opposed to this.
I am also not an EA leader in any sense of the word, so perhaps my being opposed to this is moot. But I figured I would lay out the basics of my position in case there are others who were not speaking up out of fear [EDIT: I now know of at least one bona fide EA leader who is not voicing their own objection, out of something that could reasonably be described as "fear"].
Here are some things that are true:
Intelligent, moral, and well-meaning people will frequently disagree about the extent to which a given situation is explained by various bigotries as opposed to other factors. Intelligent, moral, and well-meaning people will frequently disagree about which actions are wise and appropriate to take in response to the presence of various bigotries.
By taking anti-racism and anti-sexism and other anti-bigotry positions which are already overwhelmingly popular and overwhelmingly agreed-upon within the Effective Altruism community, and attempting to convert them to Anti-Racism™, Anti-Sexism™, and Anti-Bigotry™ applause lights with no clear content underneath them, all that's happening is the creation of a motte-and-bailey, ripe for future abuse.
There were versions of the above proposal which were not contentless and empty, which staked out clear and specific positions, and which I would've been glad to see, enthusiastically supported, and considered concrete progress for the community. It is indeed true that EA as a whole can do better, and that there exist new norms and new commitments that would represent an improvement over the current status quo.
But by just saying "hey, [thing] is bad! We're going to create social pressure to be vocally Anti-[thing]!" you are making the world worse, not better. Now, there is a List Of Right-Minded People Who Were Wise Enough To Sign The Thing, and all of the possible reasons to have felt hesitant to sign the thing are compressible to "oh, so you're NOT opposed to bigotry, huh?"
Similarly, if four-out-of-five signatories of The Anti-Racist Pledge think we should take action X, but four-out-of-five non-signatories think it's a bad idea for various pragmatic or logistical reasons, it's pretty easy to imagine that being rounded off to "the opposition is racist."
(I can imagine people saying "we won't do that!" and my response is "great—you won't. Are you claiming no one will? Because at the level of 1000+ person groups, this is how this always goes.")
The best possible outcome from this document is that everybody recognizes it as a basically meaningless non-thing, and nobody really pays attention to it in the future, and thus having signed it means basically nothing. This is also a bad outcome, though, because it saps momentum for creating and signing useful versions of such a pledge. It's saturating the space, and inoculating us against progress of this form; the next time someone tries to make a pledge that actually furthers equity and equality, the audience will be that much less likely to click, and that much less willing to believe that anything useful will result.
The road to hell is paved with good intentions. This is clearly a good intention. It does not manage to avoid being a pavestone.
I would support that.