I am an attorney in a public-sector position not associated with EA, although I cannot provide legal advice to anyone. My involvement with EA so far has been mostly limited to writing checks to GiveWell and other effective charities in the Global Health space, as well as some independent reading. I have occasionally read the Forum and was looking for ideas for year-end giving when the whole FTX business exploded . . .
As someone who isn't deep in EA culture (at least at the time of writing), I may be able to offer a perspective on how the broader group of people with sympathies toward EA ideas might react to certain things. I'll probably make some errors that would be obvious to other people, but sometimes a fresh set of eyes can help bring a different perspective.
This would benefit from stating a bottom line up front (BLUF): e.g., "Using Shapira's Doom Train analytic framework, I estimate a 31% p(doom). However, after adjustments -- especially for the views of superforecasters and AI insiders -- my adjusted p(doom) is 2.76%."
More substantively, I'd suggest your outcome is largely driven by the Bayes factors -- on the stated factors, I think the possible range of outcomes is only 0% to 9%. And my guess is that you might have chosen greater or lesser factors depending on where your own analysis ended up, so the practical range of plausible outcomes is even narrower.
That's one reason I recommend the BLUF here -- someone who doesn't take the 24 minutes to read the whole thing needs to understand how much of the titular p(doom) estimate is driven by the Bayes factors vs. the Doom Train methodology itself.
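To make the role of the Bayes factors concrete, here is the implied arithmetic -- a sketch that assumes the two headline numbers are related by a standard odds-form Bayes update:

```latex
% Sketch of the implied update, assuming the BLUF numbers are related
% by a standard odds-form Bayes update (31% prior, 2.76% posterior).
\[
\text{prior odds} = \frac{0.31}{1 - 0.31} \approx 0.449,
\qquad
\text{posterior odds} = \frac{0.0276}{1 - 0.0276} \approx 0.0284
\]
\[
\text{implied combined Bayes factor} \approx \frac{0.0284}{0.449} \approx \frac{1}{16}
\]
```

That's a roughly sixteen-fold downward update, which is exactly why I think a reader needs to see the Bayes factors' role before the Doom Train details.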
I think this critique is stronger as applied to other posts in which Vasco's comment runs a more significant risk of derailing the original poster's topic and intended discussion. Here, I think Vasco's point can be understood as somewhat complementary to the original idea. If dairy is not that bad, then the possibility that anti-dairy advocacy could have undesirable downstream effects on other animals may be an additional reason for deprioritizing such advocacy. In contrast, I think posting a comment like this in (e.g.) a global-health thread runs an elevated risk of the "discussion . . . descending into a discussion about moral weights, or the effect of every single intervention on nematodes."
It's unclear how demanding this promise is -- I find it considerably more vague than the GWWC 10% Pledge in terms of how much sacrifice is expected, but let's assume for the time being that it is ~equally demanding as the 10% Pledge. It's taken many years and FTEs to get GWWC to ~10,000 pledgers, a rate of progress that makes me think gaining promisers would be considerably more difficult than your model assumes.
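As a rough illustration of that rate comparison (a sketch: the ~10,000 figure is from above, but every other number is a hypothetical placeholder rather than your model's actual assumption):

```python
# Back-of-envelope comparison of an assumed promiser uptake rate against
# GWWC's observed pledge rate. Only the ~10,000 total is from the comment
# above; all other inputs are hypothetical placeholders.
gwwc_pledgers = 10_000           # ~10,000 pledgers to date
gwwc_years = 15                  # hypothetical: years of active recruitment
gwwc_rate = gwwc_pledgers / gwwc_years            # ~667 pledgers/year

assumed_promisers = 100_000      # hypothetical: what a model might assume
assumed_years = 5                # hypothetical horizon
assumed_rate = assumed_promisers / assumed_years  # 20,000 promisers/year

print(f"GWWC observed rate: ~{gwwc_rate:.0f} pledgers/year")
print(f"Implied model rate: ~{assumed_rate:,.0f} promisers/year "
      f"(~{assumed_rate / gwwc_rate:.0f}x faster)")
```

If the implied multiple is large, the model is assuming recruitment that dramatically outpaces the best analogue we have.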
It's true that there is a theoretical "benefit to those who join the promise" in that they obtain the ability to ask other promisers for material resources. However, for those who currently have enough food, water, peace, and shelter, there is zero marginal benefit to pledging now as one could always defer pledging until one had a need. One could perhaps get around this with an open season and a registry (e.g., people can only promise from Jan 1 to Jan 15 of each year, or else they can't claim the promise until the next year)? But even then, this system needs a balance between people who have excess resources and people who need basic resources, or ~everyone will likely get frustrated and give up. I'm not sure you'd get that.
The idea of closed communities of promisers in the postscript is interesting, although there would be awkwardness about who is allowed / not allowed into the group, whether people would be screened on various underwriting criteria, and so on.
Picture it. The year is 2035 (9 years after the RSI near-miss event triggered the first Great Revolt). You ride your bitchin' electric scooter to the EA-adjacent community center where you and your friends co-work on a local voter awareness campaign, startup idea, or just a fun painting or whatever. An intentional community.
One could think of religious congregations as a sort of rough analogue here. At least in theory, they have both member-service and broader-benefit objectives (of course, your opinion on the extent to which this is true may depend on the congregation and religion in question). While something that near-exclusively benefits the broader community may get external funding (e.g., the church soup kitchen), at least in the US everything else is probably being paid for by member/attendee donations.
And in a sense, the self-funding mechanism provides something of a check on concerns that a membership-based democratic organization will weight its members' welfare too much. If self-funding is predominant, then the members have implicitly decided that the personal benefits they derive from the organization, plus their estimate of its broader altruistic achievements, justify the expense.
In contrast, I would be hesitant to draw too many conclusions from EA Norway's ability to attract non-member/supporter funding. As a practical matter, "EA org in a small country" may be a pseudo-monopoly in the sense that having multiple organizations in the same ecological niche may not be healthy or sustainable. External funder decisions could merely reflect the reality that the niche is occupied adequately enough, rather than a belief that the EA Norway approach would outcompete alternative approaches. That's relevant insofar as other meta functions may have a larger organizational carrying capacity than "EA org in a small country" does.
If I'm reading Patrick's comment correctly, there are two different ideas going on:
These effects should be, in theory, somewhat separate -- one could envision a nationally focused org without membership/democracy, or a big transnational group with it. Do you think your list of advantages is more about localness or more about being democratic?
(I express no opinion on whether ACE's recommendations in 2025 are being influenced by "woke ideology" in a way a meaningful number of donors would find objectionable, so I wrote the comment below about an evaluator more generically.)
Pressuring an organization to commit to flagging cases in which "woke ideology" (or similar controversial factor) upgraded or downgraded a classification might be more viable. That's imperfect, but so is the idea of a secondary organization trying to identify and flag those cases.
An evaluator's best defense against claims of bias might be that it's a private organization that can consider whatever it wants (as long as it is sufficiently transparent about that, so would-be donors are not misled). I could respect that, but I think that rationale would affect the extent to which other community actors should defer to the evaluator absent flagging. For instance, when effective-giving organizations defer to an evaluator to decide which organizations can receive donations on their website, they are implicitly ratifying the evaluator's idiosyncrasies. That strikes me as more problematic than the direct effect of the evaluator's recommendations -- it closes off third-party opportunities for disfavored organizations, gives one organization's views on a controversial topic too much weight, and makes interorganizational cooperation unreasonably difficult.
Current language for Movement Grants is: "However, we are not able to fund groups or projects that: . . . . Conflict with our commitment to representation, equity, and inclusion." That is indeed softer than the requirements language in the 2021 Forum post.
This is plausible, but not obvious, to me:
I don't know which effect would be stronger, but I don't think you can assume (1) predominates.
I think your argument would be stronger for most object-level charities than for a charity evaluator. I'd think the target audience for the latter is a smaller group of people who are predisposed to be sympathetic to the cause. The key win would be getting someone excited enough to donate; the shared real-world outcome for everyone from moderately supportive through strongly opposed is that the person won't defer to the org's recommendations. What follows is an oversimplified model.
If "woke signaling" moves someone from moderately supportive to unsympathetic, that isn't great, but the counterfactual loss in donations is still $0. But moving someone from moderately supportive to highly supportive has more concrete value if it triggers a counterfactual donation. If there are more people at moderately supportive who would respond positively to "woke signaling" than there are people at highly supportive who would respond negatively, it could be a strategic move.
I asked ChatGPT what the average marketing spend of auto manufacturers is (it said 7-8%) and what the average fundraising spend of the largest US charities is (it said ~10%, which is consistent with my intuition). While I'm not endorsing these percentages as optimal for auto manufacturers or non-EA charities -- much less advocating that they be applied to EA charities -- they could provide some sort of ballpark starting point.
Automotive marketing, as I understand it, is largely about creating vague positive brand associations that will pay off when the consumer is ready to make a purchase decision. That's a viable strategy in part because there aren't many differences between (e.g.) a Ford and a GM truck. It's not obvious to me that would-be EA donors would respond well to that kind of campaign, which may limit the extent to which automakers' marketing budgets and strategies serve as a useful guide here.