Introduction
I’m Zach, the new CEO of the Centre for Effective Altruism (CEA). As I step into my role, I want to explain the principles that I think make EA special and share how CEA will continue to promote them.
In this post, I will:
- Highlight the principles that I think are core to EA, and explain why CEA will continue to promote them above and beyond any single cause area or set of cause areas.
- Explain what being principles-first means in practice for CEA[1].
- Explain how encouraging people to act on EA principles can still lead to some prioritization decisions between causes, how CEA has navigated those decisions in the past, and what factors influence those decisions.
- Share a little bit about my background and how I’ve personally engaged with these principles.
CEA will continue a “principles-first” approach to EA
In my role at CEA, I embrace an approach to EA that I (and others) refer to as “principles-first”. This approach doubles down on the claim that EA is bigger than any one cause area. EA is not AI safety; EA is not longtermism; EA is not effective giving; and so on. Rather than recommending a single, fixed answer to the question of how we can best help others, I think the value of EA lies in asking that question in the first place and the tools and principles EA provides to help people approach that question.
Four core principles that I and others think characterize the EA approach to doing good are[2]:
- Scope sensitivity: Saving ten lives is more important than saving one, and saving a thousand lives is a lot more important than saving ten.
- Scout mindset: We can better help others and understand the world if we think clearly and orient towards finding the truth, rather than trying to defend our own ideas and being unaware of our biases.
- Impartiality: With the resources we choose to devote to helping others, we strive to help those who need it the most without being partial to those who are similar to us or immediately visible to us. (In practice, this often means focusing on structurally neglected and disenfranchised groups, like people in low-income countries, animals, and future generations[3].)
- Recognition of tradeoffs: Because we have limited time and money, we need to prioritize when deciding how we might improve the world.
CEA has historically taken a principles-first approach, and I don’t expect to make big changes to this aspect of CEA’s mission[4]. With that being said, I recognize that being principles-first may mean something different to different people, and it may mean something different to me than it did to CEA’s previous CEO. In talking with staff who have been around CEA for longer than I have, I didn’t find consensus on how principles-first was interpreted. Instead of trying to come up with a comprehensive history for this post, I want to focus on some specific actions CEA has taken in the recent past that reflect this approach, as well as provide clarity on how I interpret a principles-first approach and where I still feel uncertain about how CEA will promote EA principles in the future.
Why principles-first?
CEA will continue to promote these core principles and nurture a community based on them. While we’ll sometimes prioritize between causes (see examples and reasoning), cause-specific work won’t be CEA’s main focus.
I think EA principles are impactful and worth promoting, and CEA is one of the best-suited organizations to promote them[5]. I also think cause-specific field-building can be impactful, and I don’t feel confident in sweeping claims about how either a principles-first or cause-specific approach is much better than the other. I think it makes sense for organizations trying to do good via community-building and field-building, like CEA, 80,000 Hours, and (to some extent) Open Philanthropy, to take a variety of approaches in a community-building portfolio. We can argue about the specific allocations across that portfolio—and we have—but it seems extremely likely to me that promoting core EA principles and nurturing a community of people who take those principles seriously should be part of the portfolio.
Here are some benefits of the principles-first approach:
- Promoting EA principles has inspired and empowered thousands of people to be more altruistic and impactful with their careers and donations. EA’s core principles have served as a beacon for many sincere, talented people to pivot significant energy towards doing good in the world. It turns out that “how can I do more good with my career?” and “how can I do more good with my donations?” are questions that people actually ask. I worry that outreach focused only on specific causes would never catch the eye of people who are asking the big-picture questions that EA’s frameworks and principles try to help answer. More generally, principles that can draw thousands of people to common ground for the sake of helping others are hard to reproduce and worth protecting.
- EA principles, and a community of talented people who take them seriously, are adaptable. Our knowledge of the world and the environment around us will inevitably change, and it’s valuable to have a group of people who can reprioritize as we learn more. If we lose a focus on EA principles, I think we risk losing our ability to notice what others may be missing. I think the EA community’s scout mindset and attention to neglected problems are behind some of our most impactful achievements, such as prioritizing campaigns for farmed animal welfare and an early focus on pandemics and AI safety. Looking ahead at possible wild futures, I think a community focused purely on making AI safe, for example, would be significantly less capable of tackling other potential challenges posed by emerging technology, such as post-AGI governance or digital sentience.
- Promoting principles that draw people from many causes allows for a productive cross-pollination of ideas and changing of minds. Drawn together by a wish to help others, EA spaces can enable connections between people who wouldn’t meet otherwise, but who can benefit from one another. For example, AI safety advocates have sought advice from experienced animal-welfare advocates to inform potential approaches to regulation and campaigns for labs to voluntarily implement safety protocols[6]. It seems unlikely these groups would have collaborated without EA. I also think the epistemics of the community benefit from people with different backgrounds meeting while sharing principles like scout mindset. Seeing people around us hold the same values but come to different conclusions invites us to challenge our own cause prioritization in a way that, say, attracting people to work on malaria purely via anti-malaria campaigns doesn’t. In a testament to the impact of a commitment to a scout mindset, hundreds of people have engaged with others aspiring to do good, pressure-tested one another's ideas, and pivoted their work’s focus as a result[7].
With that being said, I don’t want to make it seem like “what should CEA do?” or “what should EA be?” are questions with obvious answers.
There are reasons to think that it could be better to shift more resources to specific causes or, for example, advocate for greater splintering of different parts of EA:
- Existing problems with the EA community: There are criticisms of the EA community that may be warranted. For example, I think criticisms about diversity issues in EA, echo chambers, and conflicts of interest have merit. If you believe these shortcomings can’t or won’t be addressed, that may be an argument to shift towards focusing on specific causes instead of trying to create an alternative community focused on the same principles[8].
- Downsides of interconnectedness: Having a large interconnected community may create shared risk between projects and people that would otherwise not be tied together. For example, a scandal involving someone working on AI safety can end up harming the credibility of animal welfare activists, and someone focused on AI may be criticized for not being vegan.
- Concerns about how some people approach these principles (e.g. maximization): Other arguments against focusing on principles-first EA include concerns that EA can lead to a perilous focus on maximization or encourage unsustainable and unhealthy dedication to work.
- Benefits of a narrow focus: It may just be the case that certain causes are much more important to work on directly. And if so, there are benefits to focusing on one cause to build expertise and relevant relationships rather than emphasizing principles.
I’m sympathetic to these concerns about a principles-first approach and the case for spending more resources on building specific fields. In particular, I believe there are real concerns about the EA community. But I believe we should improve EA, not abandon it. I don't see the community or core EA principles as fatally flawed. I also don’t see a clear high-impact, non-EA community that exists without flaws. I want to stay open to criticisms like those above and guide CEA to improve what it can, and I’m grateful to others who also contribute to making our community better.
Overall, I think the benefits of a principles-first approach outweigh the concerns. I feel good about honestly saying, “Yes, cause-specific efforts can be very valuable, but so can a principles-first approach. Both should exist, and I’m focusing on the latter.”
What exactly does principles-first mean for CEA?
CEA’s mission is to nurture a community of people who are thinking carefully about the world’s most pressing problems and taking impactful action to solve them.
We currently enact our mission via five main efforts[9]. We think these programs all promote EA principles more than they promote any specific answer to how to do the most good, and we’ll continue to prioritize this approach in the future.
| CEA program | Examples that demonstrate a commitment to principles |
| --- | --- |
| Events: We run conferences like EA Global and support community-organized EAGx conferences. We also run some bespoke events for subject matter experts (see more below). | EA Global and EAGx admissions weigh how well applicants understand EA principles and put those principles into practice. This means we end up accepting applicants from a range of causes[10], including non-standard EA causes if the applicant can make the case for their work’s impact. We also platform a lot of cause-agnostic content, like cause-prioritization workshops and skill- or career-stage-based meetups. |
| Groups: We fund and advise hundreds of local effective altruism groups, ranging from university groups to national groups. We also run virtual introductory EA programs. | Our Groups program supports EA groups that engage with members who prioritize a variety of causes. |
| Online: We build and moderate the EA Forum, an online hub for discussing the ideas of effective altruism. We also produce the Effective Altruism Newsletter. | We don’t approve or reject EA Forum posts based on cause prioritization, and we curate content on the EA Forum and EA Newsletter that is relevant to a variety of causes. The EA Forum runs events like Career Conversations Week and the Donation Election, which encourage engagement with EA principles and don't pre-suppose an answer. |
| Community Health: We aim to prevent and address interpersonal and community problems that can prevent community members and projects from doing their best work. | This work supports individuals, projects, and organizations across the EA space and across cause areas. |
| Communications: We work to communicate about EA principles, ideas, and work with a variety of audiences and stakeholders. This involves working with the media, advising and assisting communicators in the EA community, and supporting the creation of content about EA. | We support communications at organizations across cause areas and dedicate part of our content focus to highlighting EA principles. |
Sometimes we’ll prioritize some causes over others
While we want CEA’s work to be principles-first, I don’t think it makes sense for CEA’s work to be principles only. Part of what makes EA special is that it goes beyond a group of people thinking about how to do good—it’s a group of people doing good. We want to encourage a journey from learning about EA principles to applying these principles to concrete problems. And insofar as we introduce concrete problems, it’s inevitable that we run into tricky questions about what causes we prioritize.
Moreover, I think there are ample reasons to want CEA to be an ally for people working directly on priority causes, even as we continue to have a principles-first focus. Both approaches emphasize solving problems that can save or improve the lives of people and animals. We have a lot to learn from cause-area experts, whether they explicitly engage with EA or not, about what interventions are most promising in their field, what talent and projects would help the most, and in what ways we could harm the field. And cause-area experts can benefit by conveying their ideas to a wider audience and attracting donations and talented people to work on important issues. I worry that in the past the EA community has been too insular and perhaps dismissive of non-EA expertise, and I’d be excited to see more humility in the future.
Cause prioritization examples
In light of CEA deciding not to focus solely on principles, I’ll give some examples of the cause-prioritization decisions CEA has made recently:
- EAGs are an opportunity for attendees to learn more about specific causes. How should we distribute object-level sessions across cause areas?
- For EA Globals in 2023, 33% of our content on the three main stages[11] ended up covering cross-cause issues (growing effective altruism, cause prioritization, skills, etc.). Of the cause-specific content, 64% was focused on existential risk reduction, 15% was on animal welfare, and 21% was on global health and development (a rough overall breakdown follows this list).
- CEA has the events team with the most capacity in the EA ecosystem, which means we can enable high-value events that object-level experts either couldn’t run or couldn’t run nearly as well without us. If we think cause-specific events are more valuable than another ‘meta EA’ event, which cause-area specific events should we support?
- Our Partner Events Team has supported events like two Summits on Existential Security and an Effective Giving Summit.
- We also experimented with an EAG in the Bay Area focused on Global Catastrophic Risks[12].
- After introducing EA principles in the EA intro program, we want to highlight concrete problems in the world to ground the application of EA principles and emphasize the value of actually doing things. Which areas should we spotlight?
- In the EA intro program syllabus, the first three weeks explain differences in impact via global health and development examples and radical empathy via animal welfare readings. The next three weeks explain the “most important century” thesis, longtermism, and risks from AI. The final weeks emphasize the importance of thinking for yourself, less common causes, and putting these ideas into practice.
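As a rough back-of-the-envelope on the main-stage figures above (assuming the cause-specific categories together account for the remaining 67% of main-stage content):
- Existential risk reduction: roughly 0.67 × 0.64 ≈ 43% of all main-stage content
- Global health and development: roughly 0.67 × 0.21 ≈ 14%
- Animal welfare: roughly 0.67 × 0.15 ≈ 10%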
The examples above demonstrate that when CEA has prioritized between causes, AI safety has received more attention than other areas. While I’ve only been full-time in this role for a few months[13] and don’t yet have a clear perspective on what I think the “correct” balance of attention should be between specific causes going forward, I do expect AI safety to continue to receive the most attention (though I wouldn’t be surprised if the relative weighting of causes looked different). At the same time, I sometimes worry this can go too far, and I expect we’ll experiment with different approaches. It is important to me that people working across all core EA causes find value in engaging with the EA community and feel that their work is valued.
Factors that shape CEA’s cause prioritization
To understand how CEA prioritizes (and, for example, why AI safety currently receives more attention than other specific causes), here are some of the factors that weigh into our cause-prioritization (insofar as we’re not just promoting cross-cause tools)[14].
- The opinions of CEA staff: I want to actively encourage CEA staff to be thoughtful about their own cause prioritization and have opinions about how they can accomplish the most good with the time they’re spending on their career at CEA. CEA staff’s constant judgment calls influence CEA’s programs. An informal 2023 survey of CEA staff suggests that staff, on average, thought that there were around five key priorities, with mitigating existential risk selected the most, followed by AI existential security. We also shared a post about where CEA staff donated in 2023.
- Our funders: The reality is that the majority of our funding comes from Open Philanthropy’s Global Catastrophic Risks Capacity Building Team, which focuses primarily on risks from emerging technologies. While I don’t think it’s necessary for us to share the exact same priorities as our funders, I do feel there are some constraints based on donor intent, e.g. I would likely feel it is wrong for us to use the GCRCB team’s resources to focus on a conference that is purely about animal welfare. There are also practical constraints insofar as we need to demonstrate progress on the metrics our funders care about if we want to be able to successfully secure more funding in the future. I’m interested in doing more to support a broader array of causes (e.g. running more events targeted at animal welfare or global health and development), though I expect there to be some barriers in terms of different willingness-to-pay for community building from different funders, team bandwidth, and in some cases staff interest. Over time, I’d like to see CEA diversify its funding to better reflect a principles-first approach.
- With that being said, there have been significant changes in staffing and funding practices at both Open Philanthropy and CEA, and I think it’s uncertain how Open Philanthropy will approach funding CEA in the future (e.g. if the funding continues to come from one grantmaking portfolio or if it will be spread out). We expect this to be an active topic of conversation before our next funding cycle.
- The views of people who have thought a lot about cause prioritization: CEA has historically shown some deference to heavily-engaged people who serve key roles in organizations that embrace EA principles and relevant cross-cause or cause-specific experts[15]. I feel significant ambivalence about this approach. On the one hand, CEA doesn’t have deep in-house expertise in cause prioritization, and I think deferring to an aggregate of well-informed experts can represent an appropriate degree of humility. On the other hand, I worry that this creates an echo chamber. For example, people could point to surveys from the Meta Coordination Forum to justify focusing more on existential risk, which then means there’s a disproportionate emphasis placed on existential risk when inviting attendees to future Meta Coordination Forums, creating a self-reinforcing cycle. Ultimately, I don’t think resolving how much weight to put on this factor is essential, because both this point and the ones mentioned above suggest CEA will emphasize existential risks more than other causes.
Some argue that we should instead mirror back the cause prioritization of the community as a whole, e.g. based on community surveys. I think this is wrong. Not only does that presuppose that there should be equal weighting of views between people who have not necessarily engaged equally with the question of what causes are worth prioritizing (and, as is discussed above in this post, the cause prioritization of people engaging with EA is liable to change), but it also presupposes that CEA exists solely to serve the EA community. I view the community as CEA’s team, not its customers. While we often strive to collaborate and to support people in their engagement with EA, our primary goal is having a positive impact on the world, not satisfying community members (though oftentimes the two are intertwined).
As the factors above demonstrate, we care about (and are incentivized to care about) prioritizing existential risk reduction work when we need to prioritize between causes[16]. But prioritizing between causes isn’t at the heart of CEA’s mission: to promote EA principles and nurture the community of people who take those ideas seriously. As a result, you can still expect the Forum, groups, and events to support EA principles across a range of causes.
The role of principles in my path through EA
On a personal note, I’m excited about CEA sticking to its principles-first approach. I may have never started working in effective altruism or related causes if there weren’t an “EA community” that spanned multiple causes and nurtured a spirit of truth-seeking.
A friend of mine spent years trying to get me involved in AI safety without mentioning anything about EA. I was confused why he seemed to care so much about it, and I wasn’t particularly compelled by the arguments I heard.
After years of trying and failing to have me focus on AI safety, my friend told me that Open Philanthropy was hiring. I had never heard of Open Philanthropy (or EA!). But I was drawn to the team’s dedication to finding new ways to maximize impact while scaling up its charitable giving, largely with a focus on global health and development. At the time, I was working at a for-profit start-up after a prior stint as a management consultant, and I wasn’t particularly interested in any specific part of global health and development (in fact, I explicitly told the recruiter I wasn’t interested in a role that would make me choose a sole subject to become an expert in). I was, however, compelled by the prospect of a career explicitly oriented around helping others as much as I could.
After getting the Open Philanthropy researcher role, I had the opportunity to explore a variety of causes. In addition to my time as a researcher, I managed grantmaking programs across both human-centered health and development and farm animal welfare. I also spent time on operations and communications work that cut across causes.
During this time at Open Philanthropy, I began engaging more with AI safety. I initially had significant reservations. I saw AI safety as a delusion of privileged tech bros in the Bay Area focusing on theoretical risks that felt close to home to them and made their work seem important, unlike more distant harms faced by the global poor or animals in cages. More recently, I’ve started to take AI safety very seriously (much to the delight of my friend who had been pushing me toward AI many years ago). But that took time and the existence of EA. What ultimately made the difference for me was spending many hours talking with a community of people who had a different perspective on cause prioritization from mine but with whom I shared a commitment to key principles for determining how to best help others. It mattered that those I disagreed with weren’t just engineers looking to get wealthy, and instead were people who shared my values and were often vegans who donated 10% of their income to the global poor.
I think my journey demonstrates how EA principles can resonate with some people who may not be interested in specific causes. There are people who might have the opposite experience—bouncing off abstract or philosophical arguments while finding themselves excited about specific causes—but I think building and nurturing a community around EA principles creates a compelling beacon for many people, as it did for me.
With my updated cause prioritization, I hope CEA’s work helps humanity navigate advanced AI, but I want to be clear that this is not the only reason I’m promoting EA principles. I still feel uncertain about how to compare causes, and I also continue to believe that people inspired by EA principles make valuable contributions to animal welfare, human health, and in other places where moral progress is needed. I’m excited to do what I can to ensure the EA community is a place where people doing impactful work across multiple causes feel like they can find value and their work is celebrated.
Serving as CEA’s new CEO is an exciting opportunity to continue to protect and advocate for principles that played such an important role in my life. I’m grateful to work alongside others who share my ethical commitments, and I look forward to developing and refining programs that will nurture the community of people engaging with these principles.
Acknowledgments
I want to give a particularly large thank you to Michel Justen, who played a very significant role in drafting, editing, and coordinating feedback for this post. I also want to thank Max Dalton, Will MacAskill, Eli Rose, James Snowden, Lewis Bollard, Emma Richter, and the many CEA staff who helped refine this post and its underlying ideas.
- ^
This is in part a response to calls for EA organizations to have a transparent scope.
- ^
This list of principles isn’t totally exhaustive. For example, CEA’s website lists a number of “other principles and tools” below these core four principles and “What is Effective Altruism?” lists principles like “collaborative spirit”, but many of them seem to be ancillary or downstream of the core principles. There are also other principles like integrity that seem both true and extremely important to me, but also seem to be less unique to EA compared to the four core principles (e.g. I think many other communities would also embrace integrity as a principle).
Also, these principles are not unique to CEA. Others have used similar principles to describe EA, like Peter Wildeford here.
- ^
However, not all EA-related work has to be motivated by work on one of these populations. In particular, some people working on GCRs believe their efforts can be justified based purely on the impact on present-day humans.
- ^
There was a chance that CEA would make a pivot with a new CEO, but I’m excited about continuing to embrace a principles-first approach.
- ^
For the sake of efficiency, I won’t argue for that claim in depth in this post. For now, I’ll simply say that this seems clear to me given that this has long been CEA’s mission and that CEA already has programs and staff committed to it.
- ^
For a public example, see here. I’m also aware of other conversations that have happened privately.
- ^
Based on this 2019 EA survey data indicating 42% of survey respondents had prioritized a different cause compared to when they first joined EA, it seems likely that the number of people who have changed causes is at least in the thousands.
- ^
Though if you think the EA community has issues but also has potential, its issues might actually be a reason to dedicate more resources to developing and improving the community.
- ^
What programs we feature may change, but this is unlikely to happen in the near future (i.e., in 2024). You can see data on these programs on our dashboard.
- ^
A notable exception to this was the 2024 EAG in the Bay Area focused on Global Catastrophic Risks. Our reasons for running that, laid out here, were our funder’s priorities, excitement about experimenting with cause-specific events, and the historic attendee pool of Bay Area events. The 2025 Bay Area EAG doesn’t have a cause-specific theme.
- ^
The amount of cross-cause content increases if you take into account non-main stage content, like meetups, workshops, and speed meetings.
- ^
We recently reviewed this event here.
- ^
I started in mid-February and took an extended period of pre-scheduled leave after joining.
- ^
I roughly agree with most heuristics in CEA’s Approach to Moderation and Content Curation, which details our approach to tasks like curating content and splitting content for our introductory materials. But they were written before my time and it’s likely there will be some places where I diverge.
- ^
We think that there are drawbacks to each of these groups (e.g. “cause prioritization experts” may be selected for preferring esoteric conclusions and arguments, highly-engaged community members have been selected to agree with current EA ideas), but they seem to converge to a significant degree.
- ^
I think there’s an important question about how much attention to pay to different existential risks. I’m personally inclined to prioritize AI significantly more than other existential risks (and I also care about AI for reasons that are not purely motivated by existential risk), though I suspect many others would disagree with my weightings.
Hi Zachary,
First off, I want to thank you for taking what was obviously a substantial amount of time to reply (and also to Sarah in another comment that I haven't had time to reply to). This is, fwiw, already well above the level of community engagement that I've perceived from most previous heads of CEA.
On your specific comments, it's possible that we agree more than I expected. Nonetheless, there are still some substantial concerns they raise for me. In typical Crocker-y fashion, I hope you'll appreciate that me focusing on the disagreements for the rest of this comment doesn't imply that they're my entire impression. Should you think about replying to this, know that I appreciate your time, and I hope you feel able to reply to individual points without being morally compelled to respond to the whole thing. I'm giving my concerns here as much for your and the community's information as with the hope of a further response.
> I view transparency as part of the how, i.e. I believe transparency can be a tool to achieve goals informed by EA principles, but I don’t think it’s a goal in itself.
In some sense this is obviously true, but I believe it's gerrymandering what the difference between 'what' and 'how' actually is.
For example, to my mind 'scout mindset' doesn't seem any more central a goal than 'be transparent'. In the post by Peter you linked, his definition of it sounds remarkably like 'be transparent', to wit: 'the view that we should be open, collaborative, and truth-seeking in our understanding of what to do'.
One can imagine a world where we should rationally stop exploring new ideas and just make the best of the information we have (this isn't so hard to imagine if it's understood as a temporary measure to firefight urgent situations), and where major charities can make substantial decisions without explanation and this tends to produce trustworthy and trusted policies - but I don't think we live in either world most of the time.
In the actual world, the community doesn't really know, for example, with what weighting CEA prioritises longtermist causes over others; how it prioritises AI vs other longtermist causes; how it runs admissions at EAGs; why some posts get tagged as ‘community’ on the forum, and therefore effectively suppressed, while similar ones stay at the top level; why the ‘community’ tag has been made admin-editable-only; what the region pro rata rates CEA uses when contracting externally are; what your funding breakdown looks like (or even the absolute amount); what the inclusion criteria for 'leadership' forums are, or who the attendees are; or many many other such questions people in the community have urgently raised. And we don't have any regular venue for being able to discuss such questions and community-facing CEA policies and metrics with some non-negligible chance of CEA responding - a simple weekly office hours policy could fix this.
> confidentiality seems like an obvious good to me, e.g. with some information that is shared with our Community Health Team
Confidentiality is largely unrelated to transparency. If in any context someone speaks to someone else in confidence, there have to be exceptionally good reasons for breaking that confidence. None of what I'm pointing at in the previous paragraph would come close to asking them to do that.
> Amy Labenz (our Head of Events) has stated, we want to avoid situations where we share so much information that people can use it to game the admissions process.
I think this statement was part of the problem... We as a community have no information on which to evaluate the statement, and no particular reason to take it at face value. Are there concrete examples of people gaming the system this way? Is there empirical data showing some patterns that justify this assertion (and comparing it to the upsides)? I know experienced EA event organisers who explicitly claim she's wrong on this. As presented, Labenz's statement is in itself a further example of lack of transparency that seems not to serve the community - it's a proclamation from above, with no follow-up, on a topic that the EA community would actively like to help out with if we were given sufficient data.
This raises a more general point - transparency doesn't just allow the community to criticise CEA, but enables individuals and other orgs to actively help find useful info in the data that CEA otherwise wouldn't have had the bandwidth to uncover.
> I think transparency may cause active harm for impactful projects involving private political negotiations or infohazards in biosecurity
These scenarios get wheeled out repeatedly for this sort of discussion (Chris Leong basically used the same ones elsewhere in this thread), but I find them somewhat disingenuous. For most charities, including all core-to-the-community EA charities, this is not a concern. I certainly hope CEA doesn't deal in biosecurity or international politics - if it does, then the lack of transparency is much worse than I thought!
> Transparency is also not costless, e.g. Open Philanthropy has repeatedly published pieces on the challenges of transparency
All of the concerns they list there apply equally to all the charities that Givewell, EAFunds etc expect to be transparent. I see no principled reason in that article why CEA, OP, EA Funds, GWWC or any other regranters should expect so much more transparency than they're willing to offer themselves. Briefly going through their three key arguments:
'Challenge 1: protecting our brand' - empirically I think this is something CEA and EV have substantially failed to do in the last few years. And in most of the major cases (continual failure for anyone to admit any responsibility for FTX; confusion around Wytham Abbey - the fact that that was 'other CEA' notwithstanding; PELTIV scores and other elitism-favouring policies; the community health team not disclosing allegations against Owen (or more politic-ly 'a key member of our organisation') sooner; etc) the bad feeling was explicitly about a lack of transparency. I think publishing some half-baked explanations that summarised the actual thinking at the time (rather than only in response to these issues later being exposed by critics) would a) have given people far less to complain about, and b) possibly have generated (kinder) pushback from the community that might have averted some of the problems as they eventually manifested. I have also argued that CEA's historical media policy of 'talk as little as possible to the media' both left a void in media discussion of the movement that was filled by the most vociferous critics and generally worsened the epistemics of the movement.
'Challenge 2: information about us is information about grantees' - this mostly doesn't apply to CEA. Your grantees are the community and community orgs, both groups of whom would almost certainly like more info from you. (it also does apply to nonmeta charities like Givedirectly, who we nonetheless expect to gather large amounts of info on the community they're serving - but in that situation we think it's a good tradeoff)
'Challenge 3: transparency is unusual' - this seems more like a whinge than a real objection. Yes, it's a higher standard than the average nonprofit holds itself to. The whole point of the EA movement was to encourage higher standards in the world. If we can't hold ourselves to those raised standards, it's hard to have much hope that we'll ever inspire meaningful change in others.
> I also think it’s possible to have impartiality without scope sensitivity. Animal shelters and animal sanctuaries strike me as efforts that reflect impartiality insofar as they value the wellbeing of a wide array of species, but they don’t try to account for scope sensitivity
This may be quibbling, but I would consider focusing on visible subsets of the animal population (esp pets) a form of partiality. This particular disagreement doesn't matter much, but it illustrates why I don't think gesturing towards principles that are really not that well defined is that helpful for giving a sense of what we can expect CEA to do in future.
> “While we often strive to collaborate and to support people in their engagement with EA, our primary goal is having a positive impact on the world, not satisfying community members (though oftentimes the two are intertwined).”
I think this is politician-speak. If AMF said 'our primary goal is having a positive impact on the world rather than distributing bednets' and used that as a rationale to remove their hyperfocus on bednets, I'm confident a) that their impact on the world would become much less positive, and b) that Givewell would stop recommending them for that reason. Taking a risk on choosing your focus and core competencies is essential to actually doing something useful - if you later find out that your core competencies aren't that valuable then you can either disband the organisation, or attempt a radical pivot (as Charity Science's founders did on multiple occasions!).
> I think this was particularly true during the FTX boom times, when significant amounts of money were spent in ways that, to my eyes, blurred the lines between helping the community do more good and just plain helping the community. See e.g. these posts for some historical discussion ... We have made decisions that may make our events less of a pleasant experience (e.g. cutting back on meals and snack variety)
I think this along with the transparency question is our biggest disagreement and/or misunderstanding. There's a major equivocation going on here between exactly *which* members of the community you're serving. I am entirely in favour of cutting costs at EAGs (the free wine at one I went to tasted distinctly of dead children), and of reducing all-expenses-paid forums for 'people leading EA community-building'. I want to see CEA support people who actually need support to do good - the low-level community builders with little to no career development, esp in low or middle income countries whose communities are being starved; the small organisations with good track records but such mercurial funding; all the talented people who didn't go to top 100 universities and therefore get systemically deprioritised by CEA. These people were never major beneficiaries of the boom, but were given false expectations during it and have been struggling in the general pullback ever since.
> For example, for events, our primary focus is on metrics like how many positive career changes occur as a result of our events, as opposed to attendee satisfaction.
I think the focus would be better placed on why attendees are satisfied or dissatisfied. If I go to an event and feel motivated to work harder at what I'm already doing, or build a social network that makes me feel enough better about my life that I counterfactually make or keep a pledge, these things are equally important. There's something very patriarchal about CEA assuming they know better what makes members of the community more effective than the members of the community do. And, as with any metric, 'positive career changes' can be gamed, or could just be the wrong thing to focus on.
> I think if anyone was best able to make a claim to be our customers, it would be our donors. Accountability to the intent behind their donations does drive our decision-making, as I discussed in the OP.
If both CEA and its donors are effectiveness-minded, this shouldn't really be a distinction - per my comments about focus above, serving CEA's community is about the most effective thing an org with a community focus can do, and so one would hope the donors would favour it. But also, this argument would be stronger if CEA only took money from major donors. As is, as long as CEA accepts donations from the community, sometimes actively solicits it, and broadly requires it (subject to honesty policy) from people attending EAGs - then your donors are the community and hence, either way, your customers.