Introduction
I’m Zach, the new CEO of the Centre for Effective Altruism (CEA). As I step into my role, I want to explain the principles that I think make EA special and share how CEA will continue to promote them.
In this post, I will:
- Highlight the principles that I think are core to EA, and explain why CEA will continue to promote them above and beyond any single cause area or set of cause areas.
- Explain what being principles-first means in practice for CEA[1].
- Explain how encouraging people to act on EA principles can still lead to some prioritization decisions between causes, how CEA has navigated those decisions in the past, and what factors influence those decisions.
- Share a little bit about my background and how I’ve personally engaged with these principles.
CEA will continue a “principles-first” approach to EA
In my role at CEA, I embrace an approach to EA that I (and others) refer to as “principles-first”. This approach doubles down on the claim that EA is bigger than any one cause area. EA is not AI safety; EA is not longtermism; EA is not effective giving; and so on. Rather than recommending a single, fixed answer to the question of how we can best help others, I think the value of EA lies in asking that question in the first place and in the tools and principles EA provides to help people approach it.
Four core principles that I and others think characterize the EA approach to doing good are[2]:
- Scope sensitivity: Saving ten lives is more important than saving one, and saving a thousand lives is a lot more important than saving ten.
- Scout mindset: We can better help others and understand the world if we think clearly and orient towards finding the truth, rather than trying to defend our own ideas and being unaware of our biases.
- Impartiality: With the resources we choose to devote to helping others, we strive to help those who need it the most without being partial to those who are similar to us or immediately visible to us. (In practice, this often means focusing on structurally neglected and disenfranchised groups, like people in low-income countries, animals, and future generations[3].)
- Recognition of tradeoffs: Because we have limited time and money, we need to prioritize when deciding how we might improve the world.
CEA has historically taken a principles-first approach, and I don’t expect to make big changes to this aspect of CEA’s mission[4]. With that being said, I recognize that being principles-first may mean something different to different people, and it may mean something different to me than it did to CEA’s previous CEO. In talking with staff who have been around CEA for longer than I have, I didn’t find consensus on how principles-first was interpreted. Rather than trying to come up with a comprehensive history for this post, I want to focus on some specific actions that CEA has taken in the recent past that reflect this approach, clarify how I interpret a principles-first approach, and note where I still feel uncertain about how CEA will approach promoting EA principles in the future.
Why principles-first?
CEA will continue to promote these core principles and nurture a community based on them. While we’ll sometimes prioritize between causes (see examples and reasoning), cause-specific work won’t be CEA’s main focus.
I think EA principles are impactful and worth promoting, and CEA is one of the best-suited organizations to promote them[5]. I also think cause-specific field-building can be impactful, and I don’t feel confident in sweeping claims about how either a principles-first or cause-specific approach is much better than the other. I think it makes sense for organizations trying to do good via community-building and field-building, like CEA, 80,000 Hours, and (to some extent) Open Philanthropy, to take a variety of approaches in a community-building portfolio. We can argue about the specific allocations across that portfolio—and we have—but it seems extremely likely to me that promoting core EA principles and nurturing a community of people who take those principles seriously should be part of the portfolio.
Here are some benefits of the principles-first approach:
- Promoting EA principles has inspired and empowered thousands of people to be more altruistic and impactful with their careers and donations. EA’s core principles have served as a beacon for many sincere, talented people to pivot significant energy towards doing good in the world. It turns out that “how can I do more good with my career?” and “how can I do more good with my donations?” are questions that people actually ask. I worry that outreach only for specific causes would never catch the eye of people who are asking the big-picture questions that EA’s frameworks and principles try to help answer. More generally, principles that can draw thousands of people to common ground for the sake of helping others are something to protect and hard to reproduce.
- EA principles, and a community of talented people who take them seriously, are adaptable. Our knowledge of the world and the environment around us will inevitably change, and it’s valuable to have a group of people who can reprioritize as we learn more. If we lose a focus on EA principles, I think we risk losing our ability to notice what others may be missing. I think the EA community’s scout mindset and attention to neglected problems are behind some of our most impactful achievements, such as prioritizing campaigns for farmed animal welfare and an early focus on pandemics and AI safety. Looking ahead at possible wild futures, I think a community focused purely on making AI safe, for example, would be significantly less capable of tackling other potential challenges posed by emerging technology, such as post-AGI governance or digital sentience.
- Promoting principles that draw people from many causes allows for a productive cross-pollination of ideas and changing of minds. Drawn together by a wish to help others, EA spaces can enable connections between people who wouldn’t meet otherwise, but who can benefit from one another. For example, AI safety advocates have sought advice from experienced animal-welfare advocates to inform potential approaches to regulation and campaigns for labs to voluntarily implement safety protocols[6]. It seems unlikely these groups would have collaborated without EA. I also think the epistemics of the community benefit from people with different backgrounds meeting while sharing principles like scout mindset. Seeing people around us hold the same values but come to different conclusions invites us to challenge our own cause prioritization in a way that, say, attracting people to work on malaria purely via anti-malaria campaigns doesn’t. In a testament to the impact of a commitment to a scout mindset, hundreds of people have engaged with others aspiring to do good, pressure-tested one another's ideas, and pivoted their work’s focus as a result[7].
With that being said, I don’t want to make it seem like “what should CEA do?” or “what should EA be?” are questions with obvious answers.
There are reasons to think that it could be better to shift more resources to specific causes or, for example, advocate for greater splintering of different parts of EA:
- Existing problems with the EA community: There are criticisms of the EA community that may be warranted. For example, I think criticisms about diversity issues in EA, echo chambers, and conflicts of interest have merit. If you believe these shortcomings can’t or won’t be addressed, that may be an argument to shift towards focusing on specific causes instead of trying to create an alternative community focused on the same principles[8].
- Downsides of interconnectedness: Having a large interconnected community may create shared risk between projects and people that would otherwise not be tied together. For example, a scandal involving someone working on AI safety can end up harming the credibility of animal welfare activists, and someone focused on AI may be criticized for not being vegan.
- Concerns about how some people approach these principles (e.g. maximization): Other arguments against focusing on principles-first EA include concerns that EA can lead to a perilous focus on maximization or encourage unsustainable and unhealthy dedication to work.
- Benefits of a narrow focus: It may just be the case that certain causes are much more important to work on directly. And if so, there are benefits to focusing on one cause to build expertise and relevant relationships rather than emphasizing principles.
I’m sympathetic to these concerns about a principles-first approach and the case for spending more resources on building specific fields. In particular, I believe there are real concerns about the EA community. But I believe we should improve EA, not abandon it. I don't see the community or core EA principles as fatally flawed. I also don’t see a clear high-impact, non-EA community that exists without flaws. I want to stay open to criticisms like those above and guide CEA to improve what it can, and I’m grateful to others who also contribute to making our community better.
Overall, I think the benefits of a principles-first approach outweigh the concerns. I feel good about honestly saying, “Yes, cause-specific efforts can be very valuable, but so can a principles-first approach. Both should exist, and I’m focusing on the latter.”
What exactly does principles-first mean for CEA?
CEA’s mission is to nurture a community of people who are thinking carefully about the world’s most pressing problems and taking impactful action to solve them.
We currently enact our mission via five main efforts[9]. We think these programs all promote EA principles more than they promote any specific answer to how to do the most good, and we’ll continue to prioritize this approach in the future.
| CEA program | Examples that demonstrate a commitment to principles |
| --- | --- |
| Events: We run conferences like EA Global and support community-organized EAGx conferences. We also run some bespoke events for subject matter experts (see more below). | EA Global and EAGx admissions weigh how well applicants understand EA principles and put those principles into practice. This means we end up accepting applicants from a range of causes[10], including non-standard EA causes if the applicant can make the case for their work’s impact. We also platform a lot of cause-agnostic content, like cause-prioritization workshops and skill- or career-stage-based meetups. |
| Groups: We fund and advise hundreds of local effective altruism groups, ranging from university groups to national groups. We also run virtual introductory EA programs. | Our Groups program supports EA groups that engage with members who prioritize a variety of causes. |
| Online: We build and moderate the EA Forum, an online hub for discussing the ideas of effective altruism. We also produce the Effective Altruism Newsletter. | We don’t approve or reject EA Forum posts based on cause prioritization, and we curate content on the EA Forum and EA Newsletter that is relevant to a variety of causes. The EA Forum runs events like Career Conversations Week and the Donation Election, which encourage engagement with EA principles and don’t pre-suppose an answer. |
| Community Health: We aim to prevent and address interpersonal and community problems that can prevent community members and projects from doing their best work. | This work supports individuals, projects, and organizations across the EA space and across cause areas. |
| Communications: We work to communicate about EA principles, ideas, and work with a variety of audiences and stakeholders. This involves working with the media, advising and assisting communicators in the EA community, and supporting the creation of content about EA. | We support communications at organizations across cause areas and dedicate part of our content focus to highlighting EA principles. |
Sometimes we’ll prioritize some causes over others
While we want CEA’s work to be principles-first, I don’t think it makes sense for CEA’s work to be principles only. Part of what makes EA special is that it goes beyond a group of people thinking about how to do good—it’s a group of people doing good. We want to encourage a journey from learning about EA principles to applying these principles to concrete problems. And insofar as we introduce concrete problems, it’s inevitable that we run into tricky questions about what causes we prioritize.
Moreover, I think there are ample reasons to want CEA to be an ally for people working directly on priority causes, even as we continue to have a principles-first focus. Both approaches emphasize solving problems that can save or improve the lives of people and animals. We have a lot to learn from cause-area experts, whether they explicitly engage with EA or not, about what interventions are most promising in their field, what talent and projects would help the most, and in what ways we could harm the field. And cause-area experts can benefit by conveying their ideas to a wider audience and attracting donations and talented people to work on important issues. I worry that in the past the EA community has been too insular and perhaps dismissive of non-EA expertise, and I’d be excited to see more humility in the future.
Cause prioritization examples
In light of CEA deciding not to focus solely on principles, I’ll give some examples of the cause-prioritization decisions CEA has made recently:
- EAGs are an opportunity for attendees to learn more about specific causes. How should we distribute object-level sessions across cause areas?
  - For EA Globals in 2023, 33% of our content on the three main stages[11] ended up covering cross-cause issues (growing effective altruism, cause prioritization, skills, etc.). Of the cause-specific content, 64% was focused on existential risk reduction, 15% on animal welfare, and 21% on global health and development.
- CEA has the events team with the most capacity in the EA ecosystem, which means we can enable high-value events that object-level experts either couldn’t run or couldn’t run nearly as well without us. If we think cause-specific events are more valuable than another ‘meta EA’ event, which cause-area specific events should we support?
  - Our Partner Events Team has supported events like two Summits on Existential Security and an Effective Giving Summit.
  - We also experimented with an EAG in the Bay Area focused on Global Catastrophic Risks[12].
- After introducing EA principles in the EA intro program, we want to highlight concrete problems in the world to ground the application of EA principles and emphasize the value of actually doing things. Which areas should we spotlight?
  - In the EA intro program syllabus, the first three weeks explain differences in impact via global health and development examples and radical empathy via animal welfare readings. The next three weeks explain the “most important century” thesis, longtermism, and risks from AI. The final weeks emphasize the importance of thinking for yourself, less common causes, and putting these ideas into practice.
The examples above demonstrate that when CEA has prioritized between causes, AI safety has received more attention than other areas. While I’ve only been full-time in this role for a few months[13] and don’t yet have a clear perspective on what I think the “correct” balance of attention should be between specific causes going forward, I do expect AI safety to continue to receive the most attention (though I wouldn’t be surprised if the relative weighting of causes looked different). At the same time, I sometimes worry this can go too far, and I expect we’ll experiment with different approaches. It is important to me that people engaging across all core-EA causes can find value and feel like their work is valued when they engage with the EA community.
Factors that shape CEA’s cause prioritization
To understand how CEA prioritizes (and, for example, why AI safety currently receives more attention than other specific causes), here are some of the factors that weigh into our cause prioritization (insofar as we’re not just promoting cross-cause tools)[14].
- The opinions of CEA staff: I want to actively encourage CEA staff to be thoughtful about their own cause prioritization and have opinions about how they can accomplish the most good with the time they’re spending on their career at CEA. CEA staff’s constant judgment calls influence CEA’s programs. An informal 2023 survey of CEA staff suggests that staff, on average, thought that there were around five key priorities, with mitigating existential risk selected the most, followed by AI existential security. We also shared a post about where CEA staff donated in 2023.
- Our funders: The reality is that the majority of our funding comes from Open Philanthropy’s Global Catastrophic Risks Capacity Building Team, which focuses primarily on risks from emerging technologies. While I don’t think it’s necessary for us to share the exact same priorities as our funders, I do feel there are some constraints based on donor intent, e.g. I would likely feel it is wrong for us to use the GCRCB team’s resources to focus on a conference that is purely about animal welfare. There are also practical constraints insofar as we need to demonstrate progress on the metrics our funders care about if we want to be able to successfully secure more funding in the future. I’m interested in doing more to support a broader array of causes (e.g. running more events targeted at animal welfare or global health and development), though I expect there to be some barriers in terms of different willingness-to-pay for community building from different funders, team bandwidth, and in some cases staff interest. Over time, I’d like to see CEA diversify its funding to better reflect a principles-first approach.
- With that being said, there have been significant changes in staffing and funding practices at both Open Philanthropy and CEA, and I think it’s uncertain how Open Philanthropy will approach funding CEA in the future (e.g. if the funding continues to come from one grantmaking portfolio or if it will be spread out). We expect this to be an active topic of conversation before our next funding cycle.
- The views of people who have thought a lot about cause prioritization: CEA has historically shown some deference to heavily-engaged people who serve key roles in organizations that embrace EA principles and relevant cross-cause or cause-specific experts[15]. I feel significant ambivalence about this approach. On the one hand, CEA doesn’t have deep in-house expertise in cause prioritization, and I think deferring to an aggregate of well-informed experts can represent an appropriate degree of humility. On the other hand, I worry that this creates an echo chamber. For example, people could point to surveys from the Meta Coordination Forum to justify focusing more on existential risk, which then means there’s a disproportionate emphasis placed on existential risk when inviting attendees to future Meta Coordination Forums, creating a self-reinforcing cycle. Ultimately, I don’t think resolving how much weight to put on this factor is essential, because both this point and the ones mentioned above suggest CEA will emphasize existential risks more than other causes.
Some argue that we should instead mirror back the cause prioritization of the community as a whole, e.g. based on community surveys. I think this is wrong. Not only does that presuppose that there should be equal weighting of views between people who have not necessarily engaged equally with the question of what causes are worth prioritizing (and, as is discussed above in this post, the cause prioritization of people engaging with EA is liable to change), but it also presupposes that CEA exists solely to serve the EA community. I view the community as CEA’s team, not its customers. While we often strive to collaborate and to support people in their engagement with EA, our primary goal is having a positive impact on the world, not satisfying community members (though oftentimes the two are intertwined).
As the factors above demonstrate, we care about (and are incentivized to care about) prioritizing existential risk reduction work when we need to prioritize between causes[16]. But prioritizing between causes isn’t at the heart of CEA’s mission: to promote EA principles and nurture the community of people who take those ideas seriously. As a result, you can still expect the Forum, groups, and events to support EA principles across a range of causes.
The role of principles in my path through EA
On a personal note, I’m excited about CEA sticking to its principles-first approach. I may have never started working in effective altruism or related causes if there weren’t an “EA community” that spanned multiple causes and nurtured a spirit of truth-seeking.
A friend of mine spent years trying to get me involved in AI safety without mentioning anything about EA. I was confused why he seemed to care so much about it, and I wasn’t particularly compelled by the arguments I heard.
After years of trying and failing to have me focus on AI safety, my friend told me that Open Philanthropy was hiring. I had never heard of Open Philanthropy (or EA!). But I was drawn to the team’s dedication to finding new ways to maximize impact while scaling up its charitable giving, largely with a focus on global health and development. At the time, I was working at a for-profit start-up after a prior stint as a management consultant, and I wasn’t particularly interested in any specific part of global health and development (in fact, I explicitly told the recruiter I wasn’t interested in a role that would make me choose a sole subject to become an expert in). I was, however, compelled by the prospect of a career explicitly oriented around helping others as much as I could.
After getting the Open Philanthropy researcher role, I had the opportunity to explore a variety of causes. In addition to my time as a researcher, I managed grantmaking programs across both human-centered health and development and farm animal welfare. I also spent time on operations and communications work that cut across causes.
During this time at Open Philanthropy, I began engaging more with AI safety. I initially had significant reservations. I saw AI safety as a delusion of privileged tech bros in the Bay Area focusing on theoretical risks that felt close to home to them and made their work seem important, unlike more distant harms faced by the global poor or animals in cages. More recently, I’ve started to take AI safety very seriously (much to the delight of my friend who had been pushing me toward AI many years ago). But that took time and the existence of EA. What ultimately made the difference for me was spending many hours talking with a community of people who had a different perspective on cause prioritization from mine but with whom I shared a commitment to key principles for determining how to best help others. It mattered that those I disagreed with weren’t just engineers looking to get wealthy, and instead were people who shared my values and were often vegans who donated 10% of their income to the global poor.
I think my journey demonstrates how EA principles can resonate with some people who may not be interested in specific causes. There are people who might have the opposite experience—bouncing off abstract or philosophical arguments while finding themselves excited about specific causes—but I think building and nurturing a community around EA principles creates a compelling beacon for many people, as it did for me.
With my updated cause prioritization, I hope CEA’s work helps humanity navigate advanced AI, but I want to be clear that this is not the only reason I’m promoting EA principles. I still feel uncertain about how to compare causes, and I also continue to believe that people inspired by EA principles make valuable contributions to animal welfare, human health, and other areas where moral progress is needed. I’m excited to do what I can to ensure the EA community is a place where people doing impactful work across multiple causes feel like they can find value and their work is celebrated.
Serving as CEA’s new CEO is an exciting opportunity to continue to protect and advocate for principles that played such an important role in my life. I’m grateful to work alongside others who share my ethical commitments, and I look forward to developing and refining programs that will nurture the community of people engaging with these principles.
Acknowledgments
I want to give a particularly large thank you to Michel Justen, who played a very significant role in drafting, editing, and coordinating feedback for this post. I also want to thank Max Dalton, Will MacAskill, Eli Rose, James Snowden, Lewis Bollard, Emma Richter, and the many CEA staff who helped refine this post and its underlying ideas.
[1] This is in part a response to calls for EA organizations to have a transparent scope.

[2] This list of principles isn’t totally exhaustive. For example, CEA’s website lists a number of “other principles and tools” below these core four principles, and “What is Effective Altruism?” lists principles like “collaborative spirit”, but many of them seem to be ancillary or downstream of the core principles. There are also other principles, like integrity, that seem both true and extremely important to me, but that also seem less unique to EA compared to the four core principles (e.g. I think many other communities would also embrace integrity as a principle). Also, these principles are not unique to CEA. Others have used similar principles to describe EA, like Peter Wildeford here.

[3] However, not all EA-related work has to be motivated by work on one of these populations. In particular, some people working on GCRs believe their efforts can be justified based purely on the impact on present-day humans.

[4] There was a chance that CEA would make a pivot with a new CEO, but I’m excited about continuing to embrace a principles-first approach.

[5] For the sake of efficiency, I won’t argue for that claim in depth in this post. For now, I’ll simply say that it seems clear to me given that this has long been CEA’s mission and that CEA already has programs and staff committed to this mission.

[6] For a public example, see here. I’m also aware of other conversations that have happened privately.

[7] Based on this 2019 EA survey data indicating that 42% of survey respondents had prioritized a different cause compared to when they first joined EA, it seems likely that the number of people who have changed causes is at least in the thousands.

[8] Though if you think the EA community has issues but also has potential, its issues might actually be a reason to dedicate more resources to developing and improving the community.

[9] What programs we feature may change, but this is unlikely to happen in the near future (i.e., in 2024). You can see data on these programs on our dashboard.

[10] A notable exception to this was the 2024 EA Global focused on Global Catastrophic Risks in the Bay Area. Our reasons for running that event, laid out here, were our funder’s priorities, excitement about experimenting with cause-specific events, and the historic attendee pool of Bay Area events. The 2025 Bay Area EAG doesn’t have a cause-specific theme.

[11] The amount of cross-cause content increases if you take into account non-main-stage content, like meetups, workshops, and speed meetings.

[12] We recently reviewed this event here.

[13] I started in mid-February and took an extended period of pre-scheduled leave after joining.

[14] I roughly agree with most heuristics in CEA’s Approach to Moderation and Content Curation, which details our approach to tasks like curating content and splitting content for our introductory materials. But they were written before my time, and it’s likely there will be some places where I diverge.

[15] We think that there are drawbacks to each of these groups (e.g. “cause prioritization experts” may be selected for preferring esoteric conclusions and arguments, and highly-engaged community members have been selected to agree with current EA ideas), but they seem to converge to a significant degree.

[16] I think there’s an important question about how much attention to pay to different existential risks. I’m personally inclined to prioritize AI significantly more than other existential risks (and I also care about AI for reasons that are not purely motivated by existential risk), though I suspect many others would disagree with my weightings.
I wish I could be as positive as everyone else, but there are some yellow flags for me here.
Firstly, as Zachary said, these seem to be exactly the same principles CEA has stated for years. If nothing about them is changing, then it doesn't give much reason to think that CEA will improve in areas it has been deficient to date. To quote probably-not-Albert-Einstein, ‘Insanity is doing the same thing over and over again and expecting different results.’
Secondly, I find the principles themselves quite handwavey, and more like applause lights than practical statements of intent. What does 'recognition of tradeoffs' involve doing? It sounds like something that will just happen rather than a principle one might apply. Isn't 'scope sensitivity' basically a subset of the concerns implied by 'impartiality'? Is something like 'do a counterfactually large amount of good' supposed to be implied by impartiality and scope sensitivity? If not, why is it not on the list? If so, why does 'scout mindset' need to be on the list, when 'thinking through stuff carefully and scrupulously' is a prerequisite to effective counterfactual actions? On reading this post, I'm genuinely confused about what any of this means in terms of practical expectations about CEA's activities.
Thirdly, 'I view the community as CEA’s team, not its customers' sounds like a way of avoiding ever answering criticisms from the EA community, and really doesn't gel with the actual focuses of CEA:
Lastly, I really really wish 'transparency' would make the list again (am I crazy? I feel like it was on a CEA list in some form in the early days, and then was removed). I think there are multiple strong reasons for making transparency a core principle:
I am very positive about the new batch of Effective Ventures trustees and the direction of independence CEA and other EV projects have taken, and I strongly hope that my concerns here turn out to be misplaced.
Note: I had drafted a longer comment before Arepo's comment; given the overlap, I cut the parts they already covered and posted the rest here rather than in a new thread.
I agree with Arepo that both halves of this claim seem wrong. Four of CEA's five programs, namely Groups, Events, Online, and Community Health, have theories of change that directly route through serving the community. This is often done by quite literally providing them with services that are free, discounted, or just hard to acquire elsewhere. Sure, they are serving the community in order to have a positive impact on the wider world, but that's like saying a business provides a service in order to make a profit; true but irrelevant to the question of whether the directly-served party is a customer.
I speculate that what's going on here is:
I'm sympathetic to both impulses, but if taken too far they leave the CEA <-> EA community relationship at an impasse and make the name 'CEA' a real misnomer. Regardless of preferred language, I hope that CEA will rediscover its purpose of nurturing and supporting the EA community by providing valuable services to its members[1] - a lower bar than 'make these eternal critics happy' - and I believe the short descriptions of those four teams quoted below already clearly point in that direction.
For me, this makes the served members customers, in the same sense that a parishioner is a customer of their church. Most businesses can't make all prospective customers happy either! But if that fact makes them forget that their continued existence is contingent upon their ability to serve customers, then they are truly lost.
As I hope comes across, I do not think this is at all radical. But if CEA cannot or will not do this, I think it should change its name.
Note: I'm no longer at CEA, thoughts my own.
I feel kind of confused about the point you are making here. CEA is the Centre for Effective Altruism, not the Center for Effective Altruists. This is fairly different from many community building organizations; e.g. Berkeley Seniors' mission is to help senior citizens in Berkeley per se (rather than advance some abstract idea which seniors residing in Berkeley happen to support).
I can't tell if you
I am not AGB, but it's clear that a huge fraction of the power that CEA has comes from it being perceived as a representative of the EA community, and from the community empowering it to solve coordination problems between its members. That power is given conditional on CEA acting on behalf of the people who invested that power.
Sure, maybe CEA accepted those resources (and the expectations that came with them) with the goal of doing the most good, but de facto CEA as an institution basically only exists because of its endorsement by the EA community, and the post as written seems to me to basically deny that power relationship and responsibility.
Lightcone in its stewardship of LW is in a very similar position. Our goal with LW is to develop an art of rationality and reduce existential risk, but as an institution we are definitely also responsible for optimizing for the goals of the other stakeholders who have invested in LessWrong (like the authors, commenters, Eliezer who founded the site, and the broader rationality community which has invested in LessWrong as a kind of town square). People would be really pissed if we banned long-term contributors to LW, even if we thought it was best by our own lights, and rightfully so. They have invested resources which make them a legitimate stakeholder in the commons that we are administering.
(there is some degree to which we do have leeway here because there is widespread buy-in for something like "Well Kept Gardens Die by Pacifism", but that leeway comes from the fact that there is widespread buy-in for discretion-based moderation, and that buy-in does not exist for all forms of possible changes to LW)
Thanks! For what it's worth, the thing you are describing seems consistent with describing EAs as "teammates" (I also think that sports teams are successful ~entirely because of the work of their constituent team members) but I concede that the term is vague.
[Edit: further explained and qualified in a new comment below.]
Agreed, although I would note that the application varies from function to function.
For instance, I don't think it runs EAGs or funds EAGxes through power granted by the community. So I think CEA has considerably more room to do what it thinks best by its own lights when dealing with its events than in, e.g., operating the community health team.
I would put other core community infrastructure in a similar bucket as community health, at least to the extent it constitutes a function where coordination of effort is an important factor and CEA can be seen as occupying the field. For example, it makes sense to coordinate a single main Forum, a single sponsor of university groups at a particular university, etc.
Huh, EAG feels like one of the most obvious community-institutions. Like, it's the central in-person gathering event of the EA community, and it's exactly the kind of thing where you want to empower an organization to run a centrally controlled version of it, because having a Schelling-event is very valuable.
But of course, in empowering someone to do that, CEA accepts some substantial responsibility to organize the event with the preferences of the community in mind. Like, EAG is really hard to organize if you are not in an "official EA-representative" position, and a huge fraction of the complexity comes from managing that representation.
I could have been clearer that different CEA functions are on a continuum in their relationship with the community, rather than sounding more binary at points. Also, my view that CEA has more freedom around EAGs than certain other functions doesn't mean I would assign no meaningful constraints.
That being said, I think the "desirability of empowering an organization to run a centrally controlled" function is probably necessary but not sufficient to rely on the community-empowerment narrative. Here, there are various factors that pull me toward finding a weaker obligation on CEA's part -- the obligation not to unfairly or inappropriately appropriate for its own objectives the assembling of many EAs in one city at one time in a way that deprives other actors of their opportunity to make a play for that external/community resource. In other words, I see a minimum duty to manage that resource in an interoperable and cooperative manner... but generally not a duty to allocate CEA's own resources and programming decisions in a way that lines up with community preferences.
I don't think there is anything that prevents an organization from running a conference, even a top-notch conference, by its own lights and without necessarily surrendering a significant amount of control to the community. One plausible narrative here is that CEA put on a top-notch conference that others couldn't or didn't match (backing from Open Phil and formerly FTXFF doubtless would help!) and that the centralizing elements are roughly the natural result of what happens when you put on a conference that is much better than the alternatives. In this narrative, there would be no implied deal that makes CEA largely the agent of the community in running EAG.
That strikes me as at least as plausible on its face as a narrative in which the community "empower[ed]" CEA to run a conference with centralizing tendencies as long as the community retained sizable influence regarding how it is run. And given my desire to incentivize orgs to organize (and funders to fund) top-notch conferences, as well as a default toward the proper response to a conference you don't like being to organize your own, I am inclined to make the natural-result narrative my starting point.
At the same time, I recognize the coordination work associated with EAGs -- although I would specifically emphasize the coordination value of having a bunch of EAs in about the same place at the same time away from their day jobs. To me, that's the main resource that is necessarily shared, in the sense of being something that can by its nature only happen 2-3 times per year, and is of community origin (rather than a CEA resource). I would take a fairly hard line against CEA actions that I judged to be an unfair or inappropriate grab at that resource. So while I would not impose the same duty you imply, I would give CEA a choice between that duty and a duty to run EAGs in an interoperable and cooperative manner.
Under that alternate duty, I would expect CEA to play nice with people and orgs who want to plan their own speakers and events that happen during the days of (or just before/after) the EAG. I would also expect CEA to take reasonable efforts to present its attendees with an option to opt-in to Swapcard with people who are not EAG attendees but are attending one of the other, non-CEA events. Failing to do these kinds of things would constitute a misuse of CEA's dominant position that deprives other would-be actors the ability to tap into the collective community resource of co-location in space and time, and deprives the individual community members of free choice.
On the other hand, the alternative duty would not generally extend to deferring to the community's preference on cause-area coverage for functions organized by CEA. Or to CEA's decisions about who to provide travel grants or admittance to its own events. CEA choosing to de-emphasize cause area X in its own event planning, or employ a higher bar for travel grants for people working in cause area X, does not logically preclude the community from doing these things itself. To the extent the community finds it difficult to perform these functions (or delegate another org to do so), that would update me toward the natural-result narrative and away from viewing CEA as a delegate who primarily exercises the community's power.
In contrast, my implied model for university groups is that the maximum healthy carrying capacity is usually one group per university due to a limited resource (student interest/attention) that is independent of CEA or any other org. Interoperability or co-existence is impractical, as the expected result would be failure of both/all groups from stretching the resource too thin. Moreover, starting a university group is within the operational capabilities of a number of actors (most non-EA student groups do not receive much in the way of external support, so the barriers to entry are pretty low). This raises the need for coordination among numerous potential actors. Under those circumstances, the empowerment/cooperation narrative is pretty convincing.
And many of the reasons I'm relatively more inclined to give CEA a freer hand on EAGs are lacking with the Forum. There are reasons a variety of conferences would be desirable (even if you want a single flagship conference), while the positive side of the ledger for multiple fora is more marginal. The speech on the Forum isn't CEA's own, so I'm much less worried that expectations of community control of fora would reduce CEA's incentives and ability to speak its own message. The examples of topics on which I would defer to CEA's ability to use its own resources to pursue its own mission don't have good analogues in the Forum context. There are many actors who could pull off running a central forum -- the LW code could be forked, servers are fairly cheap, and the moderation lift would be manageable for a relatively small group of volunteers.
A thing you might not know is that I was on the founding team of the EA Global series (and was in charge of EA Global for roughly the first two years of its existence). This of course doesn't mean I am right in my analysis here, but it does mean that I have a lot of detailed knowledge about the kind of community negotiations that were going on at the time.
I agree with a bunch of the arguments you made, but my sense is that when creating EA Global, CEA leaned heavily on its coordinating role within the community (which I think made sense).
Indeed, CEA took over the EA Summit from Leverage explicitly because both parties thought it was pretty important to have a centralized annual EA conference.
I didn't know that, and adding in historical facts could definitely move me away from my starting point! For example, they could easily update me more toward thinking that (1) CEA would need to more explicitly disclaim intent to run the semi-official coordinating event, (2) it would need to provide some advance notice and a phase-out to allow other actors to stand up their own conferences that sought to fulfill a centralizing function; and (3) it would have a broader affirmative obligation to cooperate with any actor that wanted to stand up an alternative to EAG.
That's fair; I didn't really explain that footnote. Note the original point was in the context of cause prioritisation, and I should probably have linked to this previous comment from Jason, which captured my feeling as well:
It seems possible, though far from obvious, that CEA's funding base is so narrow it's forced to focus on that target, in order to ensure the organisation's survival from that direction. This was something I thought Zach covered nicely:
Thanks! That context is helpful.
There’s a distinction between what an organization wants to achieve and how it wants to achieve it. The principles described in the original post are related to the what. They help us identify a set of shared beliefs that define the community we want to cultivate.
I think there’s plenty of room for disagreement and variation over how we cultivate that community. Even as CEA’s mission remains the same, I expect the approach we’ll use to achieve that mission will vary. It’s possible to remain committed to these principles while also continuing to find ways to improve CEA’s effectiveness.
I view transparency as part of the how, i.e. I believe transparency can be a tool to achieve goals informed by EA principles, but I don’t think it’s a goal in itself. Looking at the spectrum of approaches EA organizations take to doing good, I’m glad that there’s room in our community for a diversity of approaches. I think transparency is a good example of a value where organizations can and should commit to it at different levels to achieve goals inspired by EA principles, and as a result I don’t think it’s a principle that defines the community.
For example, I think it’s highly valuable for GiveWell to have a commitment to transparency in order for them to be able to raise funds and increase trust in their charity evaluations, but I think transparency may cause active harm for impactful projects involving private political negotiations or infohazards in biosecurity. Transparency is also not costless, e.g. Open Philanthropy has repeatedly published pieces on the challenges of transparency. I think it’s reasonable for different individuals and organizations in the EA community to have different standards for transparency, and I’m happy for CEA to support others in their approach to doing good at a variety of points along that transparency spectrum.
When it comes to CEA, I think CEA would ideally be more transparent and communicating with the community more, though I also don’t think it makes sense for us to have a universal commitment to transparency such that I would elevate it to a “core principle.” I think different parts of our work deserve different levels of transparency. For example:
I feel quite strongly that these principles go beyond applause lights and are substantively important to EA. Instead of going into depth on all of the principles, I’ll point out that many others have spent effort articulating the principles and their value, e.g. here, here, and here.
To briefly engage with some of the points in your comment and explain how I see these principles holding value:
I think it’s important to view the quote from the original post in the context of the following sentence: “While we often strive to collaborate and to support people in their engagement with EA, our primary goal is having a positive impact on the world, not satisfying community members (though oftentimes the two are intertwined).” I believe the goals of engaged community members and CEA are very frequently aligned, because I believe most community members strive to have a positive impact on the world. With that being said, if and when having a positive impact on the world and satisfying community members does come apart, we want to keep our focus on the broader mission.
I worry some from the comments in response to this post that people are concerned we won’t listen to or communicate with the community. My take is that as “teammates,” we actually want to listen quite closely to the community and have a two-way dialogue on how we can achieve these goals. With that being said, based on the confusion in the comments, I think it may be worth putting the analogy around “teammates” and “customers” aside for the moment. Instead, let me say some concrete things about how CEA approaches engagement with the community:
I understand the primary concern posed in this comment to be more about balancing the views of donors, staff, and the community about having a positive impact on the world, rather than trading off between altruism and community self-interest. To my ears, some phrases in the following discussion make it sound like the community's concerns are primarily self-interested: "trying to optimize for community satisfaction," "just plain helping the community," "make our events less of a pleasant experience (e.g. cutting back on meals and snack variety)," and "don’t optimize for making the community happy" (for EAG admissions).
I don't doubt that y'all get a fair number of seemingly self-interested complaints from not-satisfied community members, of course! But I think modeling the community's concerns here as self-interested would be closer to a strawman than a steelman approach.
CEA receives many fewer resources from its donors than from the community. Again, CEA would not really have a job without the community. An organization like CEA would totally exist without your big donors (like, the basic institution of having an "EA leadership organization" requires a few hundred k per year, which you would be able to easily fundraise from a very small fraction of the community, and even at the current CEA burn-rate the labor-value of the people who are substantially directing their life based on the broader EA community vastly eclipses the donations to CEA).
Your donors seem obviously much less important of a stakeholder than the community which is investing you with the authority to lead.
Hi Zachary,
First off, I want to thank you for taking what was obviously a substantial amount of time to reply (and also to Sarah in another comment that I haven't had time to reply to). This is, fwiw, already well above the level of community engagement that I've perceived from most previous heads of CEA.
On your specific comments, it's possible that we agree more than I expected. Nonetheless, there are still some substantial concerns they raise for me. In typical Crocker-y fashion, I hope you'll appreciate that me focusing on the disagreements for the rest of this comment doesn't imply that they're my entire impression. Should you think about replying to this, know that I appreciate your time, and I hope you feel able to reply to individual points without being morally compelled to respond to the whole thing. I'm giving my concerns here as much for your and the community's information as with the hope of a further response.
> I view transparency as part of the how, i.e. I believe transparency can be a tool to achieve goals informed by EA principles, but I don’t think it’s a goal in itself.
In some sense this is obviously true, but I believe it's gerrymandering what the difference between 'what' and 'how' actually is.
For example, to my mind 'scout mindset' doesn't seem any more central a goal than 'be transparent'. In the post by Peter you linked, his definition of it sounds remarkably like 'be transparent', to wit: 'the view that we should be open, collaborative, and truth-seeking in our understanding of what to do'.
One can imagine a world where we should rationally stop exploring new ideas and just make the best of the information we have (this isn't so hard to imagine if it's understood as a temporary measure to firefight urgent situations), and where major charities can make substantial decisions without explanation and this tends to produce trustworthy and trusted policies - but I don't think we live in either world most of the time.
In the actual world, the community doesn't really know, for example, with what weighting CEA prioritises longtermist causes over others; how it prioritises AI vs other longtermist causes; how it runs admissions at EAGs; why some posts get tagged as ‘community’ on the forum, and therefore effectively suppressed, while similar ones stay at the top level; why the ‘community’ tag has been made admin-editable-only; what regional pro rata rates CEA uses when contracting externally; what your funding breakdown looks like (or even the absolute amount); what the inclusion criteria for 'leadership' forums are, or who the attendees are; or many, many other such questions people in the community have urgently raised. And we don't have any regular venue for being able to discuss such questions and community-facing CEA policies and metrics with some non-negligible chance of CEA responding - a simple weekly office hours policy could fix this.
> confidentiality seems like an obvious good to me, e.g. with some information that is shared with our Community Health Team
Confidentiality is largely unrelated to transparency. If in any context someone speaks to someone else in confidence, there have to be exceptionally good reasons for breaking that confidence. None of what I'm pointing at in the previous paragraph would come close to asking them to do that.
> Amy Labenz (our Head of Events) has stated, we want to avoid situations where we share so much information that people can use it to game the admissions process.
I think this statement was part of the problem... We as a community have no information on which to evaluate the statement, and no particular reason to take it at face value. Are there concrete examples of people gaming the system this way? Is there empirical data showing some patterns that justify this assertion (and comparing it to the upsides)? I know experienced EA event organisers who explicitly claim she's wrong on this. As presented, Labenz's statement is in itself a further example of lack of transparency that seems not to serve the community - it's a proclamation from above, with no follow-up, on a topic that the EA community would actively like to help out with if we were given sufficient data.
This raises a more general point - transparency doesn't just allow the community to criticise CEA, but enables individuals and other orgs to actively help find useful info in the data that CEA otherwise wouldn't have had the bandwidth to uncover.
> I think transparency may cause active harm for impactful projects involving private political negotiations or infohazards in biosecurity
These scenarios get wheeled out repeatedly for this sort of discussion (Chris Leong basically used the same ones elsewhere in this thread), but I find them somewhat disingenuous. For most charities, including all core-to-the-community EA charities, this is not a concern. I certainly hope CEA doesn't deal in biosecurity or international politics - if it does, then the lack of transparency is much worse than I thought!
> Transparency is also not costless, e.g. Open Philanthropy has repeatedly published pieces on the challenges of transparency
All of the concerns they list there apply equally to all the charities that Givewell, EAFunds etc expect to be transparent. I see no principled reason in that article why CEA, OP, EA Funds, GWWC or any other regranters should expect so much more transparency than they're willing to offer themselves. Briefly going through their three key arguments:
'Challenge 1: protecting our brand' - empirically I think this is something CEA and EV have substantially failed to do in the last few years. And in most of the major cases (continual failure for anyone to admit any responsibility for FTX; confusion around Wytham Abbey - the fact that that was 'other CEA' notwithstanding; PELTIV scores and other elitism-favouring policies; the community health team not disclosing allegations against Owen (or more politic-ly 'a key member of our organisation') sooner; etc) the bad feeling was explicitly over lack of transparency. I think publishing some half-baked explanations that summarised the actual thinking at the time (rather than in response to it later being exposed by critics) would a) have given people far less to complain about, and b) possibly have generated (kinder) pushback from the community that might have averted some of the problems as they eventually manifested. I have also argued that CEA's historical media policy of 'talk as little as possible to the media' both left a void in media discussion of the movement that was filled by the most vociferous critics and generally worsened the epistemics of the movement.
'Challenge 2: information about us is information about grantees' - this mostly doesn't apply to CEA. Your grantees are the community and community orgs, both groups of whom would almost certainly like more info from you. (It also applies to non-meta charities like GiveDirectly, whom we nonetheless expect to gather large amounts of info on the community they're serving - but in that situation we think it's a good tradeoff.)
'Challenge 3: transparency is unusual' - this seems more like a whinge than a real objection. Yes, it's a higher standard than the average nonprofit holds itself to. The whole point of the EA movement was to encourage higher standards in the world. If we can't hold ourselves to those raised standards, it's hard to have much hope that we'll ever inspire meaningful change in others.
> I also think it’s possible to have impartiality without scope sensitivity. Animal shelters and animal sanctuaries strike me as efforts that reflect impartiality insofar as they value the wellbeing of a wide array of species, but they don’t try to account for scope sensitivity
This may be quibbling, but I would consider focusing on visible subsets of the animal population (especially pets) a form of partiality. This particular disagreement doesn't matter much, but it illustrates why I don't think gestures towards principles that are really not that well defined are that helpful for giving a sense of what we can expect CEA to do in future.
> “While we often strive to collaborate and to support people in their engagement with EA, our primary goal is having a positive impact on the world, not satisfying community members (though oftentimes the two are intertwined).”
I think this is politicianspeak. If AMF said 'our primary goal is having a positive impact on the world rather than distributing bednets' and used that as a rationale to remove their hyperfocus on bednets, I'm confident a) that their effect on the world would become much less positive, and b) that GiveWell would stop recommending them for that reason. Taking a risk on choosing your focus and core competencies is essential to actually doing something useful - if you later find out that your core competencies aren't that valuable, you can either disband the organisation or attempt a radical pivot (as Charity Science's founders did on multiple occasions!).
> I think this was particularly true during the FTX boom times, when significant amounts of money were spent in ways that, to my eyes, blurred the lines between helping the community do more good and just plain helping the community. See e.g. these posts for some historical discussion ... We have made decisions that may make our events less of a pleasant experience (e.g. cutting back on meals and snack variety)
I think this, along with the transparency question, is our biggest disagreement and/or misunderstanding. There's a major equivocation going on here over exactly *which* members of the community you're serving. I am entirely in favour of cutting costs at EAGs (the free wine at one I went to tasted distinctly of dead children), and of reducing all-expenses-paid forums for 'people leading EA community-building'. I want to see CEA support people who actually need support to do good - the low-level community builders with little to no career development, especially in low- or middle-income countries whose communities are being starved; the small organisations with good track records but mercurial funding; all the talented people who didn't go to top 100 universities and therefore get systemically deprioritised by CEA. These people were never major beneficiaries of the boom, but they were given false expectations during it and have been struggling in the general pullback ever since.
> For example, for events, our primary focus is on metrics like how many positive career changes occur as a result of our events, as opposed to attendee satisfaction.
I think the focus would be better placed on why attendees are satisfied or dissatisfied. If I go to an event and feel motivated to work harder at what I'm already doing, or build a social network that makes me feel good enough about my life that I counterfactually make or keep a pledge, those things are equally important. There's something very paternalistic about CEA assuming it knows better than members of the community what makes them more effective. And, like any metric, 'positive career changes' can be gamed, or could just be the wrong thing to focus on.
> I think if anyone was best able to make a claim to be our customers, it would be our donors. Accountability to the intent behind their donations does drive our decision-making, as I discussed in the OP.
If both CEA and its donors are effectiveness-minded, this shouldn't really be a distinction - per my comments about focus above, serving CEA's community is about the most effective thing an org with a community focus can do, and so one would hope the donors would favour it. But also, this argument would be stronger if CEA only took money from major donors. As it is, as long as CEA accepts donations from the community, sometimes actively solicits them, and broadly requires them (subject to an honesty policy) from people attending EAGs - then your donors are the community and hence, either way, your customers.
(I work on the Forum but I am only speaking for myself.)
To respond to some bits related to the Forum:
If you're referring to "why" as in, what criteria are used for determining when to tag a post as "Community", that is listed on the Community topic page. If you're referring to "why" as in, how does that judgement happen, this is done by either the post author or a Forum Facilitator (as described here).
We provided a brief explanation in this Forum update post. The gist is that we would like to prevent misuse (i.e. people applying it to posts because they wanted to move them down, or people removing it from posts because they wanted to move them up).
Thank you for flagging your interest in this information! In general we don't publicly post about every small technical change we make on the Forum, as it's hard to know what people are interested in reading about. If you have additional questions about the Forum, please feel free to contact us.
In general, our codebase is open source, so you're welcome to look at our PR descriptions. It's true that those can sometimes be sparse - feel free to comment on the PR if you have questions about it.
If you have questions for the Forum team, you're welcome to contact us at any time. I know that we have not been perfect at responding but we do care about being responsive and do try to improve. You can DM me directly if you don't get a response; I am happy to answer questions about the Forum. I also attend multiple EAG(x) conferences each year and am generally easy to talk to there - I take a shift at the CEA org fair booth (if I am not too busy volunteering), and fill my 1:1 slots with user interviews asking people for feedback on the Forum. I think most people are excited for others to show an interest in their work, and that applies to me as well! :)
I personally disagree that it would be better for CEA to have a goal that bakes a specific solution into its overarching aim. I think it is often better to focus on outcomes rather than specific solutions. In the specific case of the Forum team, having an overarching goal that is about having a positive impact means that we feel free to do work that is unrelated to the Forum if we believe it will be impactful. This can take the shape of, for example, a month-long technical project for another organization that has no tech team. I think if our goal were more like "have a positive impact by improving the EA Forum" that would be severely limiting.
I also personally disagree that this is "politicianspeak", in the sense that I believe the quoted text is accurate, will help you predict our future actions, and highlights a meaningful distinction. I'll refer back to an example from my other long comment: when we released the big Forum redesign, the feedback from the community was mostly negative, and yet I believe it was the right thing to do from an impact perspective (as it gave the site a better UX for new users). I think there are very few examples of us making a change to the Forum that the community overall disagrees with, but I think it is both more accurate for us to say that "our primary goal is having a positive impact on the world", and better for the world that that is our primary goal (rather than "community satisfaction").
While you raise a worthwhile point in that it probably would have been slightly better for the top-level post to have a paragraph on ethical side constraints, I feel that the rest of this comment is quite misguided (and that some points likely stem from an incomplete understanding of the top-level post).
CEA (and the EA movement as a whole) has been lacking in direction ever since Max stood down.
Having a clearly stated direction is an improvement in and of itself. It improves coordination and allows people to provide feedback on the direction of the community.
The shift in direction is that CEA is shifting further towards finding people who are (or could be) deeply committed to these principles and helping them deepen their understanding of them vs. shoveling as many people towards particular high-priority cause areas as possible.
The concretization of these principles is laid out in much more detail in resources that both of us are familiar with. There was no need for Zachary to go into more detail here, because this post is going the other way: pulling out general principles from specific discussions, norms, and practices within the community.
The mission is obviously more important than us. That should be uncontroversial.
I suspect that more EAs should dedicate their efforts to improving the health of the community and that this would increase the overall impact, but at the end of the day, the mission should come first[1].
In any case, counting up the number of activities CEA runs that achieve impact indirectly through the community is not particularly relevant to answering the question of whether CEA's first duty is to the mission or the community.
It would have made sense for there to be a bit more discussion of ethical side-constraints, but including transparency in the list of core principles would honestly just be weird, because transparency isn't distinctly EA. Beyond that, the importance of transparency is significantly complicated by the concept of infohazards in areas like biosecurity or AI safety. I really don't see it as CEA's role to take a side in these debates. I think it makes sense for CEA to embrace transparency as a key organisational value, but it's not a core principle of EA in general, and we should accept that different orgs will occupy different positions on the spectrum.
I'm not claiming that I've personally always lived up to this standard, but this should be the ideal.
Hey Chris :)
I'm not sure if you mean this question to be covered in the rest of your reply? If not, could you say concretely what you think I misunderstood? If so, I respectfully disagree that I misunderstood it:
Maybe I'm less familiar with the resources than you think? I know huge amounts have been written on these notions, but I know of nothing that would fix my problem of 'I don't see how stating these principles gives me any meaningful information about CEA's future behaviour'.
I think that's entirely consistent with what I've said. An organisation that aims to effect Y via X cannot afford to relegate X to an afterthought, or largely ignore the views of people strongly involved with X.
I'm concerned that 'infohazards' get invoked far too often, especially to deflect concerns about (non)transparency. In CEA's case in particular, it doesn't seem like they deal with biohazards or AI safety at a level necessitating high security, and even if they do have some kind of black ops program dealing with those things that they're not telling us about, that isn't the transparency I'm concerned about. Just a general commitment to sharing info guiding key decisions about the community with the community, such as
I work on the Forum team, but this comment only represents my personal views and not those of CEA. Also, I am responding to this comment in particular because it mentions the Forum by name. I may respond to other comments if I have time but no promises.
First off, I want to say thank you for your comment. I think the Forum serves as an important space for organizations to get feedback from the community and I’m happy that it’s doing so here. I will also say that I think writing clearly is hard, and I am not a particularly good writer, so I am happy to clarify if anything I say is unclear.
My understanding of the phrase “I view the community as CEA’s team, not its customers” is that CEA’s ultimate goal is to improve the world, and increasing the satisfaction of the EA community (or alternatively, satisfying any particular request an individual might have) is not the ultimate goal. I believe the purpose of laying this out is to be transparent and help readers understand and predict how CEA will act. My guess is that very often we will be improving the world by doing things that satisfy the EA community.
For the Forum in particular, user feedback is a vital input into how we prioritize our work. We gather this information via user interviews (such as at events, by reaching out to specific groups of people while developing features, and by broadly offering to do user interview calls, as in my Forum profile), by including links to feedback forms when testing things out and launching new features, publishing posts and quick takes about our work, running various surveys including the annual Forum user survey, and even directly messaging users via the Forum to ask them questions. I genuinely believe that feedback is a gift, and I’m so grateful for people who take the time to provide it to us.
If you take one thing away from my comment, please remember that we love feedback - there are multiple ways to contact us listed here, including an anonymous option. You’re welcome to contact us with suggestions, questions, bug reports, feedback[1], etc. (I can only really speak for the Forum team, but I would guess other teams feel similarly.)
Earlier this year we implemented the ability to import Google Docs to the Forum and people gave us lots of positive feedback about that. I think most of the work on the Forum will be somewhere between “making the community happy” and “the community is mostly neutral, maybe a small subset are happy” - if you look at the features in our latest update post, I think basically all of them have been either requested by users or people have given us purely positive feedback on them[2]. One example of a change to the Forum that the EA community might have voted against is the big Forum redesign in 2023 - as you can see, we mostly got negative feedback about it. However, when I’ve interviewed users new to the site, I overwhelmingly get positive feedback about the design. It’s clear to me that having a skilled designer improve the site's usability was the right choice.
This reflects how I view my own work - to do good by supporting the EA community, which does not always mean that we should do what they would vote for[3].
I think some of the disagreement is that people interpret the terms “team” and “customers” differently. In some ways we do treat Forum users as customers - for example, our engineers rotate being on-call to respond to customer service requests. We think this is worth their time because we feel that our users provide significant value for the world, not because our end goal is a high customer satisfaction score, but the result is basically the same. As I referenced earlier, our team functions similarly to other tech teams. So for example, when we are building a feature for group organizers, we will do many user interviews with group organizers. Thinking about my own experience as a customer, oftentimes websites will use dark patterns, compromise UX, prioritize engagement/addictiveness, and literally outright lie, all in order to maximize their profit. I am happy that we do not treat our users as customers in any of these ways. One slightly different way of thinking about “customer” is more like “customer service”, where an organization should strive to satisfy any individual who files a complaint. Honestly I think the Forum team is pretty good at this given our small size, but I would like us to be able to prioritize issues that users report relative to the value of our other potential work and not automatically file customer service reports in the highest priority bucket.
I like the term “team” because that emphasizes that we all broadly have the same goal (improving the world) and I am happy for Forum users to act in service of that goal (even if they criticize my work), in the same way that I appreciate when users give me feedback about the Forum in a way that reflects understanding of that shared goal (like, “I have this suggestion for you, though I’m guessing that this wouldn’t affect many people so it’s probably low priority”). In practice, much of the way that the Forum makes progress on that goal is by “empowering [people] to work for/fundraise for or otherwise support charities.” Another aspect of “team” I like is that this implies collaboration and transparency, since we have shared goals (so it would be against my interests to lie), whereas I think it’s entirely normal/expected for a company to mislead its customers[4]. “Team” means that we respect your time more than other websites (that treat you like customers) do, because we believe your time is valuable (for the world) and we want you to use it well, because we have shared goals. When someone answers my inactive user feedback form saying that they use the Forum less now because they are focused on doing good directly via their job, I don’t feel like I have “lost a customer”. I feel happy that they are presumably correctly valuing their time and doing more good (although I hope they still occasionally return to contribute back to the community).
A point that multiple commenters reference is about how CEA handles criticism. In my opinion, someone who is on the same team as you is much more likely to take your criticism seriously than any entity to which you are a customer. For example, if I complain to a company about their shady business practices, I expect them to completely ignore me or possibly lie to me, but certainly not to actually consider my point. If you complain to the Forum team about something we are doing that you consider morally dubious, we actually engage with it (at least internally - we have not always done as well as I would like at responding publicly, and I hope we improve on this in the future.)
Given this, I personally disagree that we “relegate the EA community to an afterthought” and that we “largely ignore the views of people strongly involved with EA”, and I disagree that we implied that we plan to do these things in the future. In my opinion, viewing the EA community as CEA’s “team” does not preclude us from caring about our effect on the community, nor does it mean that we no longer want to nurture and support the community, nor does it imply that we will ignore criticism, nor does it mean that we don’t care about people’s opinion of our work. I would go so far as to say those are more important for a teammate to care about than a company to care about.
I believe the purpose of Zach’s post was to explain that CEA will focus on EA principles rather than specific cause areas, and that it was not meant to communicate anything about CEA’s principles as an organization. Personally I am quite pro-transparency and hope to post more about my work than has been the case in the past.
To respond to some specific points:
Including critical feedback! Every time I talk to a user I emphasize that critical feedback is especially useful for us, because people are biased towards saying nice things to us (at least to our face - I think this is less the case online).
I actually don’t know of any particular requests or feedback after the fact that we got about site performance improvements, but I am confident that it was worth doing. Improving site speed is one of the most evidence-based ways for a site to decrease their bounce rate and improve their SEO ranking. This type of issue, which either minorly inconveniences many people or disproportionately impacts people who are not Forum users but would have been, is hard to justify working on purely based on the goal of “community satisfaction”, but makes more sense under the goal of “improving the world”.
Not that customers normally get to unilaterally decide on what a company does via a vote.
To be clear, I think any organization has incentives against being 100% transparent, and I don’t think CEA is at the ideal level of transparency. But when I compare my time working in for-profit companies to my time working at CEA, it’s pretty stark how much more the people at CEA care about communicating honestly. For example, in a previous for-profit company, I was asked to obfuscate payment-related changes to prevent customers from unsubscribing, and no one around me had any objection to this.
Thanks for sharing your experience of working on the Forum Sarah. It's good to hear that your internal experience of the Forum team is that it sees feedback as vital.
I hope the below can help with understanding the type of thing which can contribute to an opposing external impression. Perhaps some types of feedback get more response than others?
AFAICT I have done this twice, once asking a yes/no question about unclear forum policy and once about a Forum team post I considered mildly misleading. The first got no response, the other got a response which was inaccurate, which was unfortunate, though I certainly assume it was unintentionally so.
I want to be clear that I do not think I am entitled to get a response. I think the Forum team is entitled to decide it should focus on analytics not individuals, for example. I basically thought it had, and so mentally wrote off those pathways. But your comment paints a surprisingly different picture and repeatedly pushes these options, so it didn't feel right to say that I disagree without disclosing a big part of why I disagree.
Looking to public, and frankly far more important, examples of this, the top comment on CEA's last fundraising attempt is highly critical of the Forum / Online team's direction and spend. At time of writing the comment has 23/2 agree/disagree votes and more karma than the top level post it's under. This seems like the kind of thing one prioritises responding to if trying to engage, and 10 months ago Ben West responded "I mostly want to delay a discussion about this until the post fully dedicated to the Forum". That post never came out[1]. So again my takeaway was that the Forum team didn't value such engagement.
As someone who directionally agrees with the quoted sentiments, this was helpful in clarifying part of what's going on here. I personally think that CEA has been opaque for the last few years, for better or for worse[2]. Others I have heard from think the same[3]. So I naturally interpret a post which is essentially a statement of continuity as a plan to continue down this road. Arepo makes a similar point in the 2nd paragraph of their first comment. But if you think CEA, or at least your team, has been responsive in the past, the same statement of continuity is not naturally interpreted that way.
To the best of my knowledge. If it did, please link to it as a response to the comment! This type of thing is hard to search for, but I did spend ~5 minutes trying.
Since I've pushed CEA to be more responsive here and elsewhere, I want to note that distance is helpful in some contexts. I am unsurprised to hear that the Forum redesign in 2023 got negative feedback from entrenched users but positive feedback from new users, for example; seems a common pattern with design changes.
Long comment, so pulling out the relevant quote:
(Again: only speaking for myself, and here in particular I will avoid speaking about or for other people at CEA when possible.)
Yup, I think it’s very reasonable for people outside of CEA to have a different impression than I do. I certainly don’t fault anyone for that. Hopefully hearing my perspective was helpful.
I’m really sorry that our team didn’t properly respond to your messages. There are many factors that could affect whether or not any particular message got a response. We currently have a team assistant who has significantly improved how we manage incoming messages, so if you sent yours before she joined, I would guess someone dropped it by accident. As an engineer I know I have not always lived up to my own standards in terms of responding in a timely manner and I do feel bad about that. While I still think we do pretty good for our small size, I’m guessing that overall we are not at where I would personally like for us to be.
Hmm I currently don’t recall any post about Forum fundraising. I think we considered fundraising for the Forum, but I don’t remember if any significant progress was made in developing that idea. In my opinion, Ben and Oscar wrote multiple detailed replies to that comment, though I am sympathetic to the take that they did not quite respond to Nuno’s central point. I think this is just a case of, things sometimes fall through the cracks, especially during times of high uncertainty as was the case in this example. I feel optimistic that, with more stability and the ability to plan for longer futures, CEA will do better.
I also want to differentiate between public and internal engagement. I read Nuno’s writing and discussed it with my colleagues. At the time I didn’t necessarily think I would have better answers than Ben so I didn’t feel the need to join the public conversation, but at this point I probably do have better answers. I’ll just broadly say that, I agree that marginal value is what matters, as do others on my team. We do analyze the marginal impact of our Forum work. I would be excited to write more about it publicly but it will take a fair amount of work to make it clear and comprehensible for the Forum audience (up to my personal standards). Interestingly, Nuno’s points push me against taking the time to communicate publicly / be more open. Every hour I spend on writing a comment (and it can take me hours - I am not particularly good at writing, my training is in software engineering) is an hour that I don’t know how to value in the marginal impact analysis, so it defaults to being worth $0[1]. I strongly feel responsible for using EA/charitable money well, so using my work time to do something that I ultimately won’t put any value on is difficult.
I don’t disagree with this. I personally would prefer that we had communicated publicly more in the past, and I think ideally CEA would be more open about our work.
I’ll just note that the point of this post was not to lay out all of CEA’s upcoming plans, nor explain how CEA will change, nor even to talk about CEA’s organizational values or principles. I believe Zach has more posts planned, but he is also very busy.
Apologies - to clarify, I don’t think I said that CEA or my team has been responsive in the past. I’m guessing that on average CEA and my team have been below my personal bar. I feel that the Forum team aims to be responsive, and it is good to continue to have that goal, and to continue to do better relative to that goal (such as by getting help from our team assistant). My dissertation about “team”, similarly, doesn’t mean that we have been great about following through on all the ideals that “team” implies. I just think that it is an accurate description of our goals, and what I personally aspire to do. Based on Zach’s comment, I’m optimistic that CEA will do better.
I'm open to suggestions here. Perhaps transparency can be modeled as worth a fraction of the overall value CEA (or the Online Team, or the Forum) produces? But surely there are diminishing returns at some point - I would be surprised if I should be spending 50% of my work time on activities that are primarily valued via "transparency". I'm worried that this is so subjective that I would just use it to justify spending as much time as I would like on these activities. If I was allowed to ignore cost effectiveness I would naturally be more open.
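To make the shape of this concrete, here is a minimal toy sketch of the "fraction of overall value, with diminishing returns" idea. Everything in it is an illustrative assumption on my part (the logarithmic curve, the base_value and scale numbers are made up); it is not a model anyone at CEA actually uses.

```python
import math

def transparency_value(hours_per_week: float,
                       base_value: float = 1000.0,
                       scale: float = 0.10) -> float:
    """Toy model: transparency work is worth some fraction (scale) of the
    team's base weekly output, with logarithmic diminishing returns in
    hours spent. base_value and scale are made-up illustrative numbers."""
    return base_value * scale * math.log1p(hours_per_week)

def marginal_value(hours_per_week: float, delta: float = 1.0) -> float:
    """Approximate value of one additional hour of transparency work."""
    return transparency_value(hours_per_week + delta) - transparency_value(hours_per_week)

# Print the total and marginal value at a few weekly time budgets.
for h in [0, 1, 2, 5, 10, 20]:
    print(f"{h:>2} h/week: total = {transparency_value(h):6.2f}, "
          f"next hour = {marginal_value(h):5.2f}")
```

Under any curve of this shape, the first couple of hours dominate and the twentieth hour is worth very little, which matches my intuition that 50% of my work time would be too much. The genuinely hard, subjective part is choosing scale (how much of our value flows through being transparent), which this sketch simply assumes.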
Thanks for taking the time to respond.
I think we’re pretty close to agreement, so I’ll leave it here except to clarify that when I’ve talked about engaging/engagement I mean something close to ‘public engagement’; responses that the person who raised the issue sees or could reasonably be expected to see. So what you’re doing here, Zach elsewhere in the comments, etc.
CEA discussing internally is also valuable of course, and is a type of engagement, but is not what I was trying to point at. Sorry for any confusion, and thanks for differentiating.
Huh? That wasn't CEA's decision, they just fiscally sponsored Wytham.
IIRC it was done under the name 'CEA' when that name covered both the current org and what is now 'Effective Ventures'. It was done at the instigation of a trustee of CEA-EV who, since they were the same legal entity, was also a trustee of CEA-CEA (I believe it's still true that they're currently the same organisation, CEA-CEA's plans to spin off notwithstanding). I can't find the initial announcement from CEA, but the justification was to host EA events and conferences there. Since by far the primary EA-event-and-conference-hosting organisation is CEA-CEA, it seems likely they were the primary beneficiary of the purchase.
I'm not really sure whether this technically qualifies as 'only fiscally sponsoring Wytham' (I doubt there's a simple yes-no answer to the question), but there's clearly a lot of entanglement with the organisation and people who a) are supposed to represent the EA community and b) benefited from the project. Even/especially if this entanglement is all perfectly innocent and well thought through, greater transparency would have made that more obvious and prevented much of the consequent muckraking of the movement by its critics.
I think it's super reasonable for people to be confused about this. EV is a ridiculously confusing entity (or rather, set of entities), even without the name change and overlapping names.
I wouldn't consider Wytham to have ever been a part of the project that's currently known as CEA. A potential litmus test I'd use is "Was Wytham ever under the control of CEA's Executive Director?" To the best of my knowledge, the answer is no, though there's a chance I'm missing some historical context.
This comment also discusses this distinction further.
I'm nigh-certain that Wytham was never under the control of CEA's Executive Director.
I think that this litmus test is pretty weak, though, as a response to Arepo's suggestion that CEA was the primary beneficiary of Wytham. However, I also think that this suggestion is mistaken. I believe that CEA hosted <10% of the events at Wytham (maybe significantly less; I don't know precisely, and am giving 10% as a round threshold that I'm relatively confident using as an upper bound).
Agreed.
Regarding some of the specific points you've made:
• I agree that it would be great to get the community more involved in thinking through what the forum should look like.
• Wytham Abbey was an independently run project that they just fiscally sponsored.
• I agree that funding sources should be public (although perhaps not individual donations below a certain amount).
• Unsurprised PELTIV backfired.
• I would love to see regular community office hours, though if these end up seeing low demand, or it's just the same folks over and over, I think it would be reasonable for them to decide to discontinue this.
Regarding some of the other things, I honestly don't see them as the highest priority, especially right now.
I wouldn't say they're all top priority right now either fwiw. What I'd like is some kind of public commitment to stuff like this as at least nice-to-haves, rather than something they seem to feel no obligation about at all. That's all any of these 'principles' can be - a directional statement about culture. But CEA has been around for over a decade, with an average annual budget that must be well into the millions, so even 'not top priority' concerns could easily have been long since addressed if they'd had a historical interest in doing so.
I'm not sure I agree with that characterisation of Wytham Abbey. It was orchestrated by one of the trustees of the org on behalf of the org, with intended beneficiaries being more or less a subset of the org's proxy beneficiaries. And this was done under their current moniker, which per agb/Jason's comment elsewhere in this discussion, is highly misleading - especially when they're involved in projects like this. Consequently, when Wytham Abbey became a PR disaster, it helped bring the whole movement into disrepute. Arguably the main lesson was just 'don't use the public face of EA for black box projects', but I think the backup lesson was 'if you do, at least show enough of your working to prove to reasonable critical observers that it isn't a backdoor way of giving the trustees a summer home.'
I guess I want CEA to focus very heavily on figuring out their overall strategy, including community engagement and then communicating their overall decisions.
Conference cost breakdowns feels like an unnecessary distraction at this point, so long as they satisfy the auditor.
I agree that absolute transparency is not ideal. That said, there is a version of transparency (i.e 'reasoning transparency') that is a somewhat distinct EA value.
That would make more sense.
This doesn't sound right to me. If you want to focus on the customer analogy, the funders are paying CEA to provide impact according to their impact metrics. CEA engages with the subset of the EA community that it thinks will lead to impact according to its own theory of change and/or the ToC of the funder(s). Target groups can differ based on the ToC of the project, which is why you see people engaging on the forum but being rejected from EAGs.
I think there is much room for criticism when looking more closely at the ToCs, which is more to your next point:
Both GiveWell and GWWC want to shift donation money to effective charities, which is why they have to make a compelling case to donors. Transparency seems to be a good tool for this. The analogy here would be CEA making the case to be funded for their work. Zach has written a bit about how they engage with funders.
I personally think there is a good case to be made to try for broader meta-funding diversification, which would necessitate more transparency around impact measurement. The EA Meta Funding Landscape Report asks some good questions. However, I can also see that the EV of this might be lower than that of engaging with a smaller set of funders. Transparency and engaging with a broad audience can be pretty time-consuming and thus lower the cost-effectiveness of your approach.
(All opinions are my own and don't reflect those of the organisations I'm affiliated with.)
Right, the community isn't the ultimate beneficiary of CEA's work. It's roughly analogous to donors who receive GiveWell advice -- the ToC works instrumentally through the community/GW donors but impact is derived from positive effects on ultimate beneficiaries (generally children in Africa). Somewhat analogously, an object-level org creates impact through its employees, but employees are not beneficiaries of the org.
That undermines the first motivation I gave for transparency, but I don't think it really touches on the other four. And as you say, it only undermines the first to the extent that we don't think it would be better for them to get more diverse funding.
I think, if only for feedback-loop reasons, it would be far better for CEA to get more of its funding from the community - if they're struggling to do so, that could be considered an important form of feedback in itself.
I feel like this proves too much. GiveWell's potential donors could make exactly the same claim, but GiveWell has repeatedly reinforced its belief that greater transparency is necessary to have high credence that the organisation in question is doing a good job. The fact that CEA's outputs are less concrete, less measurable, and less directly tied to human welfare makes me think, if anything, that tight feedback loops matter more here than for GiveWell evaluands.
The "team" metaphor is ambiguous, and I think an accurate interpretation of it doesn't answer many questions.
The community isn't the team in the sense that CEA is the manager. The only plausible rationale for that would be a mandate from the community, and I think we can exclude that based on the community not being the "customers."
Thus, CEA seems to be in a leaderless co-worker type relationship, or a leaderless sports team co-member relationship, with other EAs and EA orgs. That's a loose sort of team, and often an ineffective one [add sports metaphor appropriate to your culture here as mine would be US-centric.] For those sorts of teams to be effective, there generally has to be a lot of give and take from a position of rough equality.
There are also "teams" where everyone kinda does their own thing with relatively little coordination. I'm thinking somewhat of toddlers engaged in mostly parallel play rather than truly playing together. A valid model, but they are unlikely to build a really cool tower of blocks that way!
This poses some interesting questions, and I've thought about them a bit, although I'm still a bit confused.
Let's start with the definition on effectivealtruism.org, which seems broadly reasonable:
So what EA does is:
So, basically, we are a company with one department that builds solar panels and another that runs photovoltaic power stations using those panels. Both are related but distinct. If the solar panels are faulty, this will affect the power station; but if the power station is built by cutting down old-growth forest, the solar panel division is not at fault. Still, it will affect the reputation of the whole organisation, which will affect the solar engineers.
But going back to the points, we could add some questions:
1.a seems pretty straightforward: If we have different groups working on this, then the less biased ones (using a scout mindset and being scope sensitive) and the ones using decision-making theories that recognize trade-offs and counterfactuals will fare better. Here, the principles logically follow from the requirements. If you want to make the best solar cells, you'll have to understand the science behind them.
1.b Here, we can see that EA is based on the value of impartiality, but it is not a prerequisite for a group that wants to do good better. If I want to do the most good for my family, then I'm not impartial, but I still could use some of the methods EAs are using.
2.a Could be done in many different ways. We could commit massive fraud to generate money that we then donate based on the principles described in 1.
In conclusion, I would see EA as:
Those two values seem to me to reflect the boundaries that the movement's founders, the most engaged actors, and the biggest funders want to see.
Some people are conducting local prioritisation research, which might sometimes be worthwhile from an impartial standpoint, but giving up on impartiality would radically change the premise of EA work.
Having worked in startups and finance, I can imagine that there might be ways to implement EA ideas cost-effectively without honesty, integrity, and compassion. Aside from the risks of this approach, I would also see dropping this value as leading to a very different kind of movement. If we're willing to piss off the neighbours of the power plant, then this will affect the reputation of the solar researchers.
In describing the history of EA, we could include the different tools and frameworks we have used, such as ITN. But these don't need to be the ones we'll use in the future, so I see everything else as being downstream from the definition above.
Re-reading Will MacAskill's Defining Effective Altruism from 2019, I saw that he used a similar approach that resulted in four claims:
He didn't include integrity and collaborative spirit. However, he posted in 2017 that these two are among the guiding principles of CEA and other organisations and key people.
I started writing a comment, but the length got out of hand and life happened. So I'm just going to state the summary for now, but may develop into a full post (or series of quick takes) later.
It is likely that the funder perspective has both the direct influence on CEA's cause prio identified above and various less direct effects. For instance, what CEA can get funded for likely has some effect on who is interested in working at CEA. Therefore, the views of CEA staff are less-than-independent of funder views, which in turn influence which cause prio experts they are inclined to defer to.
Because of CEA's outsize role with groups, EAGs, and other activities, there's a risk that its incentives could bias[1] the community's cause prio. X-risk didn't become more important in an abstract sense because SBF started handing out money (or less important when he was exposed), and GHD wouldn't become more important if some other billionaire did so. And although differences across cause areas in willingness to pay for talent development make sense, we should be careful not to let the effects of that differential willingness affect our perceptions of the abstract importance of cause areas or how to allocate non-talent resources (e.g., money, influence). Suggested mitigations would be stuff like: providing more explicit communication of the practical and tactical reasons that influence CEA's relative focus on cause areas, especially early on in a new community member's interactions with CEA.
What steps should be taken to safeguard principles-first EA from the potential biasing effects of funder decisions? I didn't get too far into this part, but would likely conclude that this task calls for (1) identifying core community infrastructure, and (2) publishing legible guardrails to enshrine and protect the principles-first approach. For instance, I view it as inconsistent with principles-first EA for community builders to be evaluated on the basis of what cause areas the people they influence ultimately choose to engage with. If funders want to focus on (say) x-risk community building, that's fine . . . but they should call it that and not present it as being "general EA" so to speak. That way, the target audience understands that they are getting a specific, influenced perspective that is based on EA, and does not walk away with an imbalanced understanding of EA itself.
I use "bias" in the sense of "to cause distortion", without any sinister or untoward implication.
I really love that you are upfront about your funders having a particular cause focus that could come into conflict with your own focus on principles. I look forward to seeing this discussion evolve - it feels very honest and very EA. Happy to see you leading CEA.
Thanks Zach. Like others, I'm excited to see that CEA will continue to take a principles-first approach to EA.
There's one point I'd be interested in you saying more about. In the post you express qualified support for CEA's cause prioritization being influenced by CEA's staff, CEA's funders and "people who have thought a lot about cause prioritization," but reject the idea that CEA should "mirror back the cause prioritization of the community as a whole."
I'm curious whether this means only that you reject the idea that CEA's cause prioritization should be entirely based on the unweighted views of the community, or whether you think that the weighted views of the community (giving more weight to those who have thought about cause prioritization more) should at least somewhat influence CEA's decisions, or somewhere in between.
I think the weighted views of the community should likely inform CEA's cause prioritization, though I think it should be one data point among many. I do continue to worry a bit about self-fulfilling prophecies. If EA organizations make it disproportionately easy for people prioritizing certain causes to engage (e.g. by providing events for those specific causes, or by heavily funding employment opportunities for those causes) then I think it becomes murkier how to account for weighted cause prioritization because cause prioritization is both an input and an output.
Thanks for clarifying!
I share this concern about weighting community views by engagement. That said, it seems plausible to me that the engagement-weighted views of the community are, of the options presented, the least selected for [the set of views predominant among EA leadership]. True, CEA (and its donors, and respected people who have thought about cause prioritisation a lot) can influence the views of highly engaged EAs in various ways. But I would expect CEA staff, donors, and select experts to be more strongly selected for a narrower set of views.
I'm encouraged by your principles-first focus, Zach, and I'm glad you're at the helm of CEA. Thanks for all you're doing.
Wow, what a strong opening salvo. Not only do I appreciate the honesty and clear direction, but this approach also makes sense to me. I also like the writing style, with short snappy sentences largely devoid of jargon. I haven't really engaged with/understood "CEA" as an organisation before, but I think I understand far better now what it's about.
Thank you for writing this up! I was happy to hear you're taking this approach at your EAG London opening talk and now see it in writing.
One point that stands out is that the principles published on effectivealtruism.org also include a "collaborative spirit" that is missing from your list:
In the footnote, you write:
CEA created the website effectivealtruism.org, and my understanding was that it used a collaborative approach to getting input from different stakeholders and was also published after the list of principles on CEA's website. Maybe I'm wrong here, but I would find it helpful to know more about the decision process behind the principle selection.
I expect disagreement about the principles, but an approach focussed on principles (which I support) could be more powerful when there is broader stakeholder consensus on what they are. In your EAG London speech, you talked about CEA taking a stewardship role for the EA community, which I interpreted as hearing members' perspectives when making community-wide decisions. When you write, "I view the community as CEA’s team, not its customers." this sounds similar.
While CEA can have its own principles that differ, for example, from national and regional EA groups, a more consensus-based approach could help promote the brand across different target groups.
Thanks for the clarity Zach. I am particularly enthusiastic about embracing cause neutrality and epistemic principles. This approach not only broadens our perspective but also ensures that we address the most pressing global challenges with less bias.
I strongly relate to the philosophy here and I’m thrilled CEA is going to continue to be devoted to EA principles. EA’s principles will always be dear to me and a big part of my morality, but I’ve felt increasingly alienated from the community as it seemed to become only about technical AI Safety. I ended up going in my own direction (PauseAI is not an EA org) largely because the community was so reluctant to consider new approaches to AI Safety and Open Phil refused to fund it, a development that shocked and saddened me. I hope CEA will show strong leadership to keep the spirit of constant reevaluation of how we can do good alive. Imo having a preference as a community for only knowledge work and only associating with elite circles, as happened with technical AI Safety, is antithetical to EA scouty impact-focused thinking.
Wait, what makes PauseAI "not EA" exactly? I'm extremely surprised to hear that claim: people post promoting it on here, it has clear connections to a central EA goal, a founder with a background in EA. It might represent a minority view in the community, but so does "we should prioritise animal welfare above X-risk and development", but I've never thought of people who think that as "not EA".
It's not EA because it's for anyone who wants to PauseAI for any reason and does not share all the EA principles. It's just about pausing AI and it's a coalition.
I personally still identify with EA principles and I came to my work at PauseAI through them, but I increasingly dislike the community and find it a drag on my work. That, combined with PauseAI being open to all comers, makes me want distance from the community and to keep a healthy distance between PauseAI and EA. More and more I think that the cost of remaining engaged with EA is too high because of how demanding EAs are and how little they contribute to what I'm doing.
My 2 cents, Holly, is that while you're pointing at something that's especially acute for PauseAI, this is affecting AI Safety in general.
The majority of people entering the Safety community space in Australia & New Zealand now are NOT coming from EA.
Potentially ~ 75/25!
And honestly, I think this is a good thing.
Oh yeah, this issue affects all of AI Safety public outreach and communications. On the worst days it just seems like EA doesn’t want to consider this intervention class regardless of how impactful it would be because EAs aesthetically prefer desk work. It has felt like a real betrayal of what I thought the common EA values were.
That sucks :(
But hammers do like nails :/
I am inclined to see a moderate degree of EA distancing more as a feature than a bug. There are lots of reasons to pause and/or slow down AI, many of which have much larger (and politically influential) national constituencies than AI x-risk can readily achieve. One could imagine "too much" real or perceived EA influence being counterproductive insofar as other motivations for pausing / slowing down AI could be perceived with the odor of astroturf.
I say all that as someone who thinks there are compelling reasons that are completely independent of AI safety grounds to pause, or at least slow down, on AI.
What are those non-AI safety reasons to pause or slow down?
All the near-term or current harms of AI that EAs ridicule as unimportant, like artists feeling ripped off or not wanting to lose their jobs. Job loss in general. Democratic reasons, i.e. people just don't want their lives radically transformed, even if the people doing the transforming think that's irrational. Fear and distrust of AI corporations.
These would all be considered wrong reasons in EA but PauseAI welcomes all.
Plus some downstream consequences of the above, like the social and political instability that seems likely with massive job loss. In past economic transformations, we've been able to find new jobs for most workers, but that seems less likely here. People who feel they have lost their work and associated status/dignity/pride (and that it isn't coming back) could be fairly dangerous voters and might even be in the majority. I also have concerns about fair distribution of gains from AI, having a few private companies potentially corner the market on one of the world's most critical resources (intelligence), and so on. I could see things going well for developing countries, or poorly, in part depending on the choices we make now.
My own take is that civil, economic, and political society has to largely have its act together to address these sorts of challenges before AI gets more disruptive. The disruptions will probably be too broad in scope and too rapid for a catch-up approach to end well -- potentially even well before AGI exists. I see very little evidence that we are moving in an appropriate direction.
Thank you! Your transparency is admirable, and very encouraging from an EA leader—very keen to hear more from you in the future!
Great post! Thanks, I really appreciate you clarifying the community's direction. The community has been rudderless for too long, and I'm extremely pleased to see a clear direction. I would love to see a follow-up post on what a principles-first approach will mean in practice in more detail.
I like this principles-first approach! I think it's really valuable to have a live discussion that starts from "How do we do the most good?", even if I am kind of all-in on one cause. (Kind of: I think most causes tie together: making the future turn out well.) I think it'd be a valuable use of the time of you folks to try and clarify and refine your approach, philosophy, and incentives further, using the comments here as one input.
Have you decided yet whether to run another GCR-focused EAG?
We're not currently planning to run another GCR-focused EAG, but we do plan on continuing to investigate what other types of events we could run, including cause-specific (e.g. GCR-focused) events.