A lightly edited final version of this post can be found here

Hey everyone, this is an old draft from 2022 for a post I’ve been hand-wringing over posting in any form. But it was the most popular draft amnesty idea from last year that I didn’t follow through on, and I’ve finally reached the point where I think even something basic like this post is good enough to get my contribution to this discussion out there:

Well before I regularly visited the EA Forum, I remember that one of the first debates I ran into on it was the set of criticisms, from 2021, of ACE’s then-recent social justice activities. The anonymous user “Hypatia” published criticisms of some of these activities, including accusations of promoting unrigorous, vague anti-racist statements, and of canceling a speaker who was critical of BLM. Some people involved with ACE contested how these events were characterized, and as of today much of the debate seems to have cooled down, with no one having changed their mind very much.

One of the criticisms Hypatia brought up was that ACE was evaluating organizations, in part, based on how good it thought their values were, beyond just their effectiveness at helping animals. Hypatia pointed out that it seemed valuable to have an organization that evaluated pure animal welfare effectiveness without assuming other values on the part of donors, so that donors could decide for themselves what else mattered to them. At the time, this seemed mostly reasonable to me, and I couldn’t think of much wrong with the reasoning. One sort of pushback I’ve seen from ACE, which Leah Edgerton brought up in her interview with Spencer Greenberg, is to deny the premise: looking at aspects of the internal culture of these organizations is useful to predict harder to measure aspects of how healthy an organization is, and how much good it can ultimately do for animals. Another piece of pushback, which I have not seen but suspect would be common sense to many people, is to bite the bullet: yes, as an organization we are willing to look at things other than pure animal welfare. If an animal rights organization were going around setting factory farmers’ houses on fire, a good evaluation organization would be perfectly reasonable in deciding not to rank it at all, even if it did effectively help animals.

Both tactics seem respectable in theory, but in practice they carry a burden of proof ACE may not meet. Is failure to meet the norms ACE prioritizes really so predictive of eventual failure? Hypatia is certainly skeptical in some cases, and I find it hard to believe that these social justice norms are precisely the ones you would look for in an organization if your sole goal were predicting stability. And what about the arson example? It is a proof of concept that conflicts of values are sometimes strong enough to justify disqualification, but these organizations don’t seem to be doing anything a fraction as serious. After listening to the Edgerton interview, I thought about this debate for the first time in a while, and found that I actually had something to add that, as far as I know, hasn’t been mentioned yet. It seems to me like ACE’s strongest defense, and more generally a consideration EA cause evaluators should weigh more heavily.

One of my favorite recent (as of my initial drafting) posts on EA culture from the forum was “Unsurprising things about the EA movement that surprised me” by Ada-Maaria Hyvarinen. In it, she raises the very relatable point that

“In particular, there is no secret EA database of estimates of the effectiveness of every possible action (sadly). When you tell people effective altruism is about finding effective, research-based ways of doing good, it is a natural reaction to ask: ‘so, what are some good ways of reducing pollution in the Baltic Sea/getting more girls into competitive programming/helping people affected by [current crisis that is on the news]’ or ‘so, what does EA think of the effectiveness of [my favorite charity]’. Here, the honest answer is often ‘nobody in EA knows’, and it is easy to sound dismissive by adding ‘and we are not going to find out anytime soon, since it is obvious that the thing you wanted to know about is not going to be the most effective thing anyway’.”

Maybe it is obvious to some people, but Hypatia’s reaction to ACE looking at values other than animal effectiveness makes the most sense in worlds where this point is not true: worlds in which all EA organizations are sort of like Charity Navigator, and seek, as a primary part of their mission, to evaluate as comprehensive a list of charities as possible. Saying that ACE should stick to animal effectiveness amounts to asking for an organizational model in which every cause evaluator is engaged in a different effective altruism. Insofar as other measures of how good an organization is don’t fit the specific version of EA an evaluator is dedicated to, they are simply none of its business. ACE doesn’t look at the impact of its organizations on the global poor, GiveWell doesn’t look at the impact of its organizations on animals, and neither asks how good or bad their organizations are for squishier, harder to measure values like promoting a more tolerant, welcoming movement culture. In the real world, where EA evaluators aren’t like Charity Navigator, no one is checking the impact of GiveDirectly on chickens, because it isn’t a candidate for the best charity in the world according to the chicken-focused version of effective altruism.

I cannot think of any reason to like this world, no justification beyond “it is not my problem”. The alternative solution, the one that seems to be the default expectation of many EA organizations right now, is a “buyer beware” mentality, in which people looking to donate to a recommended cause can personally decide against it based on their own research into how well the organization fits their values.

It seems to me that a world in which individual donors must work out for themselves whether organizations are good fits for their values, possibly just by checking how an organization talks about itself or obvious things about its approach (whether it sets houses on fire), is strictly worse than a world in which EA evaluators are expected to, in some sense, finish the job: to consider lots of possible values that might go into deciding where to donate, and to carefully investigate all of them for their most promising choices. Admittedly, this point has a complicated relationship to what ACE’s initiatives actually look like. ACE is, after all, moving money through grants, not just evaluations. That is a more complicated issue, though I think considerations similar to the ones I bring up here will vindicate some approach that looks at values other than just animal welfare (regardless of whether this looks like the stuff ACE is currently looking at, or should look different in some way).

ACE could also be more upfront about separating out the results of these evaluations so that donors can more easily weigh them for themselves. It could also look into a wider range of values, or different ones if you dislike ACE’s picks for independent reasons. There is a ton of room for improvement, but here is roughly how my ranking of approaches for EA charity evaluators currently shakes out, from best to worst:

  1. There are charity evaluators for every single important value: global health, animal welfare, existential risk, social justice, and everything else donors have a reason to care about. They all do their research as broadly as Charity Navigator and as deeply as effective altruist charity evaluators. A different, independent evaluator aggregates these results and can give you rankings based on whatever weights you specify for each value (see the sketch after this list).
  2. There are separate charity evaluators for different types of values. They all look at the top picks of one another, and give thorough feedback about how they interact with the values the reviewing organization is most interested in.
  3. An independent charity evaluator is dedicated to doing research into the top causes of different evaluators, and providing reports of how these organizations interact with values other than the ones each evaluator prioritizes.
  4. Each charity evaluator organization evaluates only the top organizations in its own field, but investigates how each of these organizations interact with values other than the ones the evaluator is meant to most prioritize, and reports on this.
  5. Each charity evaluator looks into the top organizations in its own field, looks at how these organizations relate to other values it thinks are important, and publishes recommendations that incorporate both (being transparent about how it weighs different values).
  6. Each charity evaluator only looks at one value, like animal welfare. It finds top charities in this field, ranks them, and just shows this to the public.
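
To make the aggregation step in option 1 concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the charity names, the values, the 0–10 scores, and the weights are all made up for illustration. The point is just that once specialized evaluators publish per-value scores, combining them into donor-specific rankings is mechanically simple.

```python
# Hypothetical per-value scores (0-10) that specialized evaluators
# might publish. All names and numbers are invented for illustration.
SCORES = {
    "Charity A": {"animal_welfare": 9.0, "global_health": 2.0, "movement_culture": 6.0},
    "Charity B": {"animal_welfare": 4.0, "global_health": 8.0, "movement_culture": 7.0},
    "Charity C": {"animal_welfare": 7.0, "global_health": 5.0, "movement_culture": 3.0},
}

def rank_charities(scores: dict, weights: dict) -> list:
    """Rank charities by the donor's weighted sum of per-value scores.

    Values the donor assigns no weight to simply drop out of the sum.
    """
    def weighted(charity_scores: dict) -> float:
        return sum(weights.get(value, 0.0) * score
                   for value, score in charity_scores.items())
    return sorted(scores, key=lambda name: weighted(scores[name]), reverse=True)

# A donor who cares mostly about animals and a little about movement culture:
print(rank_charities(SCORES, {"animal_welfare": 0.7, "movement_culture": 0.3}))
# -> ['Charity A', 'Charity C', 'Charity B']
```

The hard part, of course, is not this arithmetic but producing credible per-value scores in the first place; distributing that research burden is exactly what the options above differ on.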

ACE seemed to be in an uncomfortable middle ground between the 4th and 5th best options. Arguably it is worse than that: it only looks at a handful of values other than animal welfare, and if Edgerton’s testimony is representative, it does not even concern itself with these other values beyond their instrumental value to animal welfare. I find this latter claim hard to believe, and it is not necessary to justify ACE’s practices (or some idealized version of those practices), but if I do believe this claim about how ACE considers other values, I think they have reason to go even further.

Even if its practices are subpar, I think there is reason to expect them to be strictly better than option 6, which I see as the default for charity evaluators. Meanwhile, option 1 is pretty much impossible. Options 2 and 3 both seem possible, but 2 is simply not done and would require a broad culture shift within EA, while 3 is easier but does require a new organization. Something like it happens in a less focused way through scattered forum posts and some global priorities research, but not in the most thorough way it could. All of these seem pretty good to me in the grand scheme of things, especially compared to 6, and thinking about them gives me the impression that there is a great deal of room for impactful entrepreneurship in this field.

I’m not sure why this point doesn’t seem to be much discussed. One possibility is that it just isn’t a good idea. For instance, maybe how charities fare on values other than the ones they target is simply harder to evaluate than how they fare on their target values, or maybe when this impact is easier to measure, it is so obvious that reports from a charity evaluator aren’t necessary. Taken together, this could mean that the resources that would go into a project like this wouldn’t be worth it.

That said, the fact that ACE did come to different conclusions about these charities when it looked at values not directly related to animal welfare makes me think this is not so obvious. Another possibility is that, although I haven’t really heard discussion about it, it really is something people talk about, or that these organizations take part in, and I just don’t notice. I can certainly think of organizations that do things like this, but mostly they are grant-giving organizations like Open Philanthropy, which I don’t think quite fit what I’m talking about here. Regardless, I thought these thoughts were worth bringing up in case they really do touch on a factor that is neglected, not just in the ACE debate, but in charity evaluation more broadly.

Comments

Thanks for this interesting perspective on how to balance different values within the work of evaluations, Devin. Considering you drafted this in 2022, we do want to note that a lot has changed at ACE in the last three years, not least of which has been a shift to new leadership. Since early 2022, ACE has transitioned to a new Executive Director, Programs Director, Charity Evaluations Manager, Movement Grants Manager, Operations Director, and Communications Director. 

That said, ACE continues to assess organizational health as part of our charity evaluations—we assess whether any aspects of an organization’s governance or work environment pose a risk to its effectiveness or stability, thereby reducing its potential to help animals. Furthermore, bad actors and toxic practices could negatively affect the reputation of the broader animal advocacy movement, which is highly relevant for a growing social movement, as well as advocates’ wellbeing and their willingness to remain in the movement. You can read more about our reasoning here and about our current evaluation criteria here.

Thanks for your thought-provoking piece. We are continually refining our evaluation methods, so we will further consider your points about the kinds of instrumental information we might want to gather and how we could do so in a pragmatic way.

Thanks, Elisabeth

Thanks for the response, I appreciate it!

Looking at aspects of the internal culture of these organizations is useful to predict harder to measure aspects of how healthy an organization is, and how much good it can ultimately do

This also could've helped with other orgs over the years where the "culture" stuff turned out to carry important signal, e.g. FTX and Leverage Research.

Executive summary: Charity evaluators like ACE should aim to assess not just their primary focus (e.g., animal welfare) but also other relevant values, to provide a fuller picture for donors and improve decision-making in effective altruism.

Key points:

  1. The debate over ACE’s evaluation methods highlights a tension between prioritizing pure animal welfare and considering broader values like social justice or organizational culture.
  2. Some argue that ACE should focus solely on effectiveness in animal welfare, while others defend its broader approach as useful for predicting an organization’s overall impact.
  3. Many charity evaluators operate in silos, ignoring cross-cutting impacts; a better system would involve coordination between evaluators to assess organizations from multiple value perspectives.
  4. An ideal system would either feature comprehensive evaluators assessing all major values or independent aggregators synthesizing findings from specialized evaluators.
  5. While ACE’s current approach is imperfect, it is still preferable to narrowly focused evaluations that ignore externalities and broader ethical considerations.
  6. The EA community may benefit from new initiatives or organizations dedicated to filling these evaluation gaps, though practical challenges in assessing indirect impacts remain a key obstacle.

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
