Self-Criticism Serves a Crucial but Limited Role
This isn't intended as a criticism of effective altruism. Someone is welcome to add the 'criticism of effective altruism' tag if they like, but that isn't my intent. The marginal value of further criticisms of EA declines as more essays are submitted to contests offering cash prizes for the best criticisms of EA.
Those who most often respond to these incentives are themselves participants in EA. That's no surprise: they have the insider knowledge of EA needed to win prizes for the best criticism, and they have a stake in keeping the criticism constructive because they genuinely care whether EA improves.
I agree it was the right move for EA as a movement to start incentivizing better criticisms of itself, since the sum total of criticism from outside actors had proven inadequate. Yet the very fact that this became necessary proves the point: EA needs to do much more than only criticize itself, or even make the best of others' criticisms.
Self-criticism may be a necessary component of the real goal, changing EA for the better, but it's not sufficient, and its current extent is excessive.
EA should reflect on what it's missing by hyper-focusing on self-criticism. The proof is in how all the outside criticism of EA in the world has failed to stymie or significantly change the movement.
Outside Criticism of EA Has Mostly Achieved Nothing
If someone read a cross-section of the most hyperbolic criticisms of EA published during the last decade, they might get the impression that EA is totally crazy, evil, and/or extremely dangerous. Yet when those in EA ask the authors of such op-eds what should be done differently, they tend not to have satisfactory answers.
The main approach taken by those who hate and fear EA, criticizing it to death, has failed. EA taking self-criticism to a similarly logical conclusion would look something like Will MacAskill spending the next two years researching and writing a book about decision-theoretic approaches to not becoming a supervillain by mistake.
Even those who respect EA in general, and would only prefer the movement pursue some of its goals differently, almost never know how to efficiently provoke change in it. Only those already in EA, working together, have ever been able to put recommended changes into practice. Doing so requires a deep(er) understanding of the subject matter, whether on a domain-general level (e.g., the high-level methodologies of EA as a whole) or on a domain-specific level (e.g., the applied methodologies common to one focus area).
More must be done. I can't fully answer the question of what else EA needs to do to solve its own problems, but as a starting point, I suggest redirecting more of that time and effort into proposing solutions as well.
This post was originally published on July 4th, 2022; as of July 22nd, it has been significantly edited for readability and clarity.