I'm a 40-year-old Rationalist first, FIRE advisor second, EA third. Working, two young children, very busy. Based in Spain. Active on the Spanish-speaking EA Slack.
To me, this piece reads as advocacy for multi-issue positioning, despite the initial framing. Both the amount of arguing for one side and the positioning make me think so.
Your analysis seems to me to be very sensitive to how much salience a given issue has on either axis. Put another way: supposing you're weighting your chances by the area of your diagrams, I think you may have drawn the regions too close to the origin, especially in the multi-issue one. Suppose that, in your multi-issue diagram, people who care about L won't support you unless your position on L is at least 50% of the way to the current extreme. Or suppose there is a "top right" quadrant where you can't really make progress, because if you take your position on L to the extreme, you lose the support not only of the anti-L crowd but also of the moderates on L. That is, you've drawn a coordinate system, but at the same time you've assumed a binary of positions, only L or not-L. Thus, measured by area, it may very well be that A is, in reality, bigger than B.
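To make the area point concrete, here is a toy calculation. The thresholds (pro-L support only kicks in at 50% of the way to the extreme, moderates drop off past 90%) are numbers I'm inventing purely for illustration, since I don't have the exact definitions behind the original figures:

```python
# Toy illustration of the "area" argument above. All numbers are invented
# for illustration; positions on each axis are normalized to [0, 1].

# Multi-issue case: assume (hypothetically) that pro-L supporters only back
# you once your L-position reaches 0.5, and that going past 0.9 alienates
# the moderates-on-L as well. The usable range on the L axis is then only:
usable_L = 0.9 - 0.5          # 0.4 instead of the full 1.0

# If the naive multi-issue picture treats the whole unit square as reachable,
# the usable area under these assumptions shrinks to:
naive_multi_area = 1.0 * 1.0
usable_multi_area = usable_L * 1.0   # width on L times the full range on the other axis

# Single-issue case: you take no position on L at all, so the L constraint
# never bites and the whole range on your own issue stays usable:
single_issue_area = 1.0

print(f"naive multi-issue area:  {naive_multi_area:.2f}")
print(f"usable multi-issue area: {usable_multi_area:.2f}")
print(f"single-issue area:       {single_issue_area:.2f}")
# Under these made-up thresholds the multi-issue region loses 60% of its
# apparent area, which is the sense in which A may end up bigger than B.
```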
Complicating things further, much of the time there is not a single issue L you have to care about, but a memeplex of which L is just one part. By taking a position on L, not only are you committing yourself to a position on the whole memeplex, you're also assumed by default to hold such positions unless you actively communicate otherwise (risking antagonizing your pro-L supporters). With this in mind, your point that "Adopting a Single-Issue Policy Provides a Principled Way to Stay Out of Other Issues" becomes extremely important.
In summary, I fear that, under a slightly more complicated framing of the question, the single-issue approach becomes simpler and easier to defend, and thus better.
Let's suppose we agree this is so, as a working hypothesis.
How do you propose that a community catering to the aesthetic tastes of its majority would avoid evaporative cooling of group beliefs? This is a grave concern of mine, along with O'Sullivan's Curse, itself related to group polarization.
EA should avoid using AI art for non-research purposes?
Ethical concerns aside, I'd rather EA not place itself on yet another political axis.
I'm concerned about the extent (and this is a fully general argument, I'm aware) of the recent phenomenon where some highly-connected, highly-concerned, highly-online group is able to unilaterally polarize the discussion of <insert-topic-here>, simply by taking a certain stance and being very vocal about it, plus or minus associating that viewpoint with one of the big political camps. This forces everybody else into one of two camps, because the matter is formulated so as to leave no possible middle ground or position of neutrality.
To this I say we should answer "mu", as in "I explicitly reject your formulation of the question as invalid". I say we should carve out a middle ground even if one doesn't seem possible. And wherever there is a supposed binary of "do or don't", I say we should explicitly reject that binary. You can do X and mean Y, with Y being <thing-supposedly-associated-with-X>. Or you can do X and mean not-Y. Or you can do X and mean Z, with Z orthogonal to Y. The same when you do not-X.
Having said all that, please use AI art, or do not use AI art, and ignore the resulting noise. For if "this causes noise" is to be the main driver of all your decisions, you've already abdicated all power to the noise-makers.
EDIT: I voted "strong disagree" (as in "we should do it if it makes sense to do it"), though I don't see it reflected in this comment.
After further reflection, I realize I embedded an implicit assumption in my reasoning: there is a default, neutral position, and it is "what everybody else is doing". I've always thought the rule that "every blog post must have an image, by decree of the SEO gods" was silly, but we have to deal with it. Around me, I've seen a strong move towards AI-generated images instead of clip-art collections. This reflects simple economic logic: image collection sites are more expensive than AI (the paid ones), less useful than AI (the free ones), or both. Deviating from what is quickly becoming a de facto norm will tend to group us with one or another political camp. Thus, the safest norms are almost always "everybody is doing this", "it's the price, stupid", or both.
That's why my own approach is "FIRE [Financial Independence, Retire Early] first", in which one first plans for a frugal retirement (which, for the USA, requires way less than $1M, possibly less than half of that, so it's highly achievable, and depends mainly on the strength of your frugal muscles, not on above-average earning power). That takes about 7 to 10 years, which can be shortened to 5 if you work hard or are lucky. That amount is then set apart in case your life takes a wrong left turn.
Then you keep working, and either donate everything (since you're already set for life), or at least as high a percentage as you're comfortable with.
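A back-of-the-envelope sketch of the arithmetic behind the "7 to 10 years" and "way less than $1M" claims, with inputs (5% real return, 4% withdrawal rate, the savings rates) that are my own illustrative assumptions, not recommendations:

```python
def years_to_fi(savings_rate, real_return=0.05, withdrawal_rate=0.04):
    """Years of saving, starting from zero, until annual spending is covered
    by withdrawal_rate applied to the portfolio. Income is normalized to 1,
    so annual spending = 1 - savings_rate."""
    spending = 1.0 - savings_rate
    target = spending / withdrawal_rate      # nest egg needed for FI
    portfolio, years = 0.0, 0
    while portfolio < target:
        portfolio = portfolio * (1 + real_return) + savings_rate
        years += 1
    return years

for rate in (0.5, 0.7, 0.8):
    print(f"savings rate {rate:.0%}: ~{years_to_fi(rate)} years to FI")
# ~17, ~9 and ~6 years respectively: the 7-10 year figure corresponds to
# strong (roughly 70%) frugal muscles.

# Nest-egg size for a given spending level, under the same 4% rule:
for spending in (20_000, 30_000, 40_000):
    print(f"spending ${spending:,}/yr -> target ${spending / 0.04:,.0f}")
# $500k, $750k and $1M: frugal spending is what keeps the target well under $1M.
```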
As for the Average Joe... his most limiting resource isn't money at all, but willpower and other cognitive resources. Fortunately, it's not like the Average Joe is an EA, or vice versa.
In any case, consider that my answer to "how much to put into your own financial security vs. donating": not in terms of splitting a wage, but in terms of bypassing the question entirely.
EA is a movement that should be striving for excellence. Merely being “average” is not good enough. What matters most is whether EA is the best it could reasonably be, and if not, what changes can be made to fix that.
There is a lot of content packed into that "the best". An org, movement, or even a person can only be "the best" on a limited number of measures before you're asking for a solution to an optimization problem with too many constraints, or, to put it more simply, for superpowers.
Should EA be ethical? Sure. Wholesome? Maybe, why not. A place for intellectual and epistemic humility? Very important. Freedom of intellectual debate? Very useful for arriving at the truth. A safe space? Depending on what you mean by that. A place completely free from interpersonal drama? That would be nice, certainly. Can all of this be achieved at the same time? No, I don't think so. Some values do funge against others to some extent. I hope I don't need to offer specific examples.
I'm worried about this (and other recent developments) in EA. Calling for a more perfect world is, by itself, good. But asking for optimizations on one front frequently means, implicitly, de-prioritizing other causes, if only because the proposed optimizations take up a good chunk of the limited collective time, attention span, and capacity to communicate and discuss intelligently.
Do I think changes can be made to EA to make it more ethical, with less misconduct (including sexual misconduct), etc.? Yes, certainly. Do I think this will have a cost? Yes; there is no such thing as a free lunch. Do I think this will cause, all things considered, more or less suffering in the world? I'm not sure. What EA is unquestionably "the best" at is identifying opportunities to do the most good at the margin, and while all improvements are changes, not all changes are improvements. I think any changes to community composition, governance, etc. (the more sweeping, the worse in expectation) will, in the best of possible worlds, be neutral-to-distracting for the main goal of doing the most good with available resources, and in the worst, actively harmful. Thus, my proposal is that any proposed change should pass the bar not only of improving the situation it purports to improve (and articles with practical examples of what to do, how to do it, and how it all turned out are certainly useful for that), but also of making a reasonable case that it is at least neutral with respect to the main mission (doing good better, if I may quote).
So, am I advocating for an "abandon all hope, everyone for themselves" policy? Not at all. I'm merely stating, if in a roundabout way, that "average" ability, as an organization, to keep your members safe, sane, and wholesome is good. Quite probably good enough. And this is key: since you cannot optimize for everything at the same time, one must find a compromise for most things that are not the main mission. I think sexual misconduct is one of those many, many things.
(Edit: Vasco Grilo's comment says it better, and in fewer words.)
Since temporalis hasn't answered the literal question, I think I can.
The literal text of the OP, as cited, says "Everyone that was accused of assault was banned from the club". That, to me, does not sound like the qualifiers you offer here, where "we talked to both sides and to relevant witnesses and found the accusers to be credible". That would be better summarized as "Everybody who was credibly accused of assault was banned", and even then, a full explanation of how an accusation was found credible should follow, and not long after, if we are to believe it was a more-or-less trial-like process, and not a more-or-less witch-hunt-like process.
As for the direct, object-level questions you ask: yes, it's very helpful and informative, and I don't see any major errors. In fact, the "additional funds required" and "extra time to FIRE" charts are very useful and, to my knowledge, haven't been done before. Congrats!
As for something missing: something I sometimes get asked to calculate is how great the impact is of donating now vs. donating later. For example, what if I want to grow my nest egg extra-quickly for 6-10 years, or until I'm 60-80% of the way to FI (amounts to be filled in, of course), before donating? Would that lower my total donations much? That is, the reverse of "how much delay for this amount of donations?".
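A toy version of the comparison I have in mind, where every input (income normalized to 1, spending of 0.5, 5% real return, the 4% rule) is my own illustrative assumption rather than anything from the post:

```python
HORIZON = 30                  # working years considered
RETURN = 0.05                 # assumed real return
SPENDING = 0.5                # fraction of income spent
FI_TARGET = SPENDING / 0.04   # 12.5 income-years under a 4% withdrawal rate

def total_donated(donate_before, switch_fraction):
    """Donate `donate_before` of income until the nest egg reaches
    `switch_fraction` of the FI target, then donate the whole surplus (0.5).
    Whatever isn't spent or donated is saved. Returns nominal income-years
    donated over the horizon (no discounting, leftover nest egg ignored)."""
    nest_egg, donated = 0.0, 0.0
    for _ in range(HORIZON):
        rate = donate_before if nest_egg < switch_fraction * FI_TARGET else (1 - SPENDING)
        donated += rate
        nest_egg = nest_egg * (1 + RETURN) + (1 - SPENDING - rate)
    return donated

steady = total_donated(donate_before=0.10, switch_fraction=1.0)
delayed = total_donated(donate_before=0.00, switch_fraction=0.8)
print(f"donate 10% from day one, surplus after FI:  {steady:.1f} income-years")
print(f"donate nothing until 80% FI, surplus after: {delayed:.1f} income-years")
# Under these made-up numbers the nominal totals end up close (about 7.0 vs
# 7.5 income-years); the real difference is timing, which is exactly what a
# discount rate, or the kind of chart requested above, would put a number on.
```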
As general commentary / small things / nitpicks:
As another early retiree - at least, I was one for a while, before I un-retired (hopefully temporarily) to pursue an expensive startup project as a funder - I think you underestimate the power of a FIRE portfolio's income. By the time most of us are ready to "pull the plug", the usual question is not "How probable is it that I never run out of money?" but "How much extra time did I spend working, given that this safety margin is, with the benefit of hindsight, obviously excessive?". Thus, most FIRE types should have more than enough to maintain their donation rate, and probably increase it.
See, for example, https://www.mrmoneymustache.com/2022/07/18/never-run-out-of-money/
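A minimal sketch of why the margin tends to be excessive, under smoothed, purely illustrative assumptions (constant 5% real return, 3.5% withdrawal rate):

```python
withdrawal = 0.035    # 3.5% of the initial portfolio per year, inflation-adjusted
real_return = 0.05    # assumed average real return

for years in (10, 20, 30):
    balance = 1.0     # nest egg at retirement, normalized to 1
    for _ in range(years):
        balance = balance * (1 + real_return) - withdrawal
    print(f"after {years} years: {balance:.2f}x the starting portfolio")
# Roughly 1.2x, 1.5x and 2.0x: as long as the withdrawal rate sits below the
# expected real return, the "excessive safety margin" keeps compounding, which
# is what makes maintaining or raising a donation rate feasible. No
# sequence-of-returns risk is modeled here.
```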
Besides voicing my general agreement with the OP, I'd like to bring a recent reflection by Scott Alexander (from https://www.astralcodexten.com/p/if-its-worth-your-time-to-lie-its) into this.
"If one side [does X] to make all of their arguments sound 5% stronger, then over long enough it adds up. Unless they want to be left behind, the other side has to make all of their arguments 5% stronger too. Then there’s a new baseline - why not 10%? Why not 20%? This mechanism might sound theoretical when I describe it this way, but go to any space where corrections are discouraged, and you will see exactly this."
In the original post by Scott, "does X" is "lies". But I think the use of adjectives, especially when, as pointed out elsewhere, they are used just to indicate a general valence and not a specific mistake, may be akin to these "small lies to make your arguments sound 5% stronger". And I furthermore think this is why it doesn't matter whether the adjective is "bad" or "wonderful".