

  • We’ve renamed our team to Community Health and Special Projects, in part to reflect our scope extending beyond what’s often considered to be “community health.” 
  • Since our last forum update, we’ve started working closely with Anu Oak and Łukasz Grabowski as contractors. Chana Messinger has been acting as interim team lead, while Nicole Ross has been focused on EV US board duties. Charlotte Darnell has also recently accepted a role on our team.
  • In response to reports of sexual misconduct by Owen Cotton-Barratt, an external investigation into our team’s response is underway, as well as an internal review.
  • Other key proactive projects we’ve been working on include the Gender Experiences project and the EA Organizations Reform project.
  • We are in the early stages of considering some significant strategic changes for our team. We’ve highlighted two examples of possible changes below, one being a potential spin-out of CEA and/or EV and another being a pivot to focus more on the AI safety space.
  • As a reminder, if you’ve experienced anything you’re uncomfortable with in the community or if you would like to report a concern, you can reach our team’s contact people (currently Julia Wise and Catherine Low) via this form (anonymously if you choose). 

The Community Health team is now Community Health and Special Projects

We decided to rename our team to better reflect the scope of our work. We’ve found that when people think of our team, they mostly think of us as working on topics like mental health and interpersonal harm. While these areas are a central part of our work, we also work on a wide range of other things, such as advising on decisions with significant potential downside risk, improving community epistemics, advising programs working with minors, and reducing risks in geopolitically sensitive areas.

We see these other areas of work as contributing to our goal: to strengthen the ability of EA and related communities to fulfil their potential for impact, and to address problems that could prevent that. However, those areas of work can be quite disparate, and so “Special Projects” seemed an appropriate name to gesture towards “other miscellaneous things that seem important and may not have a home somewhere else.”[1] 

We hope that this might go some way to encouraging people to report a wider range of concerns to our team.  

Our scope of work is guided by pragmatism: we aim to go wherever there are important community-related gaps not covered by others and to make sure the highest-priority gaps are filled. Where it seems better than the counterfactual, we sometimes try to fill those gaps ourselves. That means our scope is both very broad and not always clearly defined, and that there will be plenty of things we don’t have the capacity or the right expertise to cover fully. If you’re thinking of working on something you think we might have some knowledge about, the meme we want to spread is “loop us in, but don’t assume it’s totally covered or uncovered.” If we can be helpful, we’ll give advice, recommend resources, or connect you with others interested in similar work.

Team changes

Here’s our current team: 

  • Nicole Ross (Head of Community Health and Special Projects)
  • Julia Wise (Community Liaison) 
  • Catherine Low (Community Health Associate) 
  • Chana Messinger (Interim Head and Community Health Analyst) 
  • Eve McCormick (Community Health Project Manager and Senior Assistant) 

In November 2022, Nicole took a step back from leading the team in order to focus on EV US board duties in response to the FTX crisis. In her place, Chana stepped into the role of Interim Head of Community Health and Special Projects. We anticipate that Chana will remain in this role for another 1-6 weeks (90% confidence interval). During this time, Nicole is dividing her time between some ongoing board duties and thinking about our team’s strategy, including potential pivots (see below). 

We’ve also started working closely with Anu Oak (Project Coordinator and Assistant) and Łukasz Grabowski (Project Manager) as contractors. Anu joined us in late October 2022 as Catherine’s assistant and has since taken on responsibility for several of our team’s internal systems. Łukasz came on board in February 2023 and has been collaborating with other team members on various projects, including the Gender Experiences project (see below). 

Charlotte Darnell has recently accepted a role on our team to help with interpersonal problems in the community alongside Catherine and Julia. Charlotte comes to us from CEA’s Events Team.  

External investigation and internal review

As mentioned here, EV UK and EV US have commissioned an independent, external investigation into reports of sexual misconduct by Owen Cotton-Barratt, including our team’s response to those reports. This investigation is currently underway. The main point person for this investigation from EV is Zach Robinson, Interim CEO of EV US.

Separately from the independent, external investigation, Chana has been overseeing an internal review within our team, with support from Ben West (Interim Managing Director of CEA).[2]

Our goals for this review have been to reflect on our response to the reports about Owen’s conduct, and to identify whether there are any systematic changes we should make to our casework.

This process has included:

  • Julia and Nicole writing retrospectives, and members of the team discussing takeaways and updates from them, with support from Ben West.
  • Writing up anonymized versions of past cases to get perspectives on our processes from people outside the team and having calls with those people to discuss. 
  • Chana speaking to several HR professionals, ombudspeople and an employment lawyer to get their perspectives on the team’s work more broadly.
  • Catherine looking over the reports we received about Owen to give another perspective on how it could have been handled. 
  • Relevant members of the team discussing and consolidating their overall updates together.

Next steps:

  • Continued work consolidating and thinking through updates.
  • We will communicate process changes in a future update. 

Gender Experiences Project

In February, we announced our project to get a better understanding of the experiences of women and gender minorities in the EA community. This work is being carried out by Catherine Low, Anu Oak, and Łukasz Grabowski. Charlotte Darnell will be joining this project soon. 

So far, we have analysed data from a variety of existing sources such as EAG/x survey responses, the annual EA Survey run by Rethink Priorities, and Community Health team case records. We’re now exploring ways to gather new information, including working with Rethink Priorities on questions to include in their next survey.

We’re drafting a post with some of our findings so far and hope to publish it soon.

EA Organizations Reform Project

Like many others in the EA community, we have been thinking about ways the community should potentially change or reform. One effort in this direction is the project Julia is currently leading. This project aims to build a task force of 3-5 people from across EA organizations and projects, which will look into areas where those organizations might reform their practices and produce recommendations about steps that seem promising. The task force plans to interview people who are familiar with other fields about best practices that EA might currently be missing.

Some areas the task force is likely to consider:

  • Board composition
  • Conflict of interest policies
  • Whistleblower protection / support

Please see this post to suggest people the task force could talk to.


Casework

We are continuing with our reactive casework as usual, where we respond to requests for support or advice from community members.

Some examples of casework:

  • Handling cases involving interpersonal harm in the community. This often involves one or more of:
    • Listening to people talk through what they have experienced
    • Talking to people who have made others uncomfortable about how to improve their behavior
    • Restricting people who have caused harm from attending CEA events
    • Informing other EA groups, projects, or organizations about known problems 
  • Supporting individual community members who are dealing with personal or interpersonal problems, such as a mental health-related struggle or a conflict between multiple community members.

Catherine is the primary team member focusing on this area of our work at the moment, while Julia focuses on projects such as the organizational reform project. Charlotte will also be focusing on casework.

Other updates on our work

  • Łukasz has produced a team tool for mapping the overlap and interrelationships between the boards, staff, and advisors at EA organizations, to help us do risk assessment around the interrelatedness of EA projects. We hope that this will help draw our attention to conflicts of interest and high levels of interdependence, so we can help better manage risk. 
  • In collaboration with Chana, Victoria Brook contracted with us to produce this sequence of tutorials for tools to help people approach collaborative truth-seeking. The sequence includes tutorials for using Guesstimate, Loom, Excalidraw, Squiggle and Polis. 
  • Our work advising on projects with significant potential downside risks is continuing as usual. For example, we often advise on projects operating in less established and/or more sensitive fields, such as those relating to policy or involving minors, helping decision-makers to weigh up risks against the positives.  
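As a rough illustration of the kind of overlap mapping described above, the sketch below finds people affiliated with more than one organization. This is only a hypothetical sketch: the organization names, the `affiliations` data, and the `shared_people` helper are all made up for illustration, and the actual tool’s design isn’t described in this post.

```python
from itertools import combinations

# Hypothetical example data: org -> set of people affiliated with it
# (board members, staff, or advisors). Names are illustrative only.
affiliations = {
    "Org A": {"Alice", "Bob", "Carol"},
    "Org B": {"Bob", "Dan"},
    "Org C": {"Carol", "Dan", "Erin"},
}

def shared_people(affiliations):
    """Return, for each pair of organizations, the people they share."""
    overlaps = {}
    for (org1, people1), (org2, people2) in combinations(affiliations.items(), 2):
        common = people1 & people2  # set intersection: people in both orgs
        if common:
            overlaps[(org1, org2)] = common
    return overlaps

print(shared_people(affiliations))
```

Pairs with many shared people (or people appearing across many pairs) would be candidates for a closer look at conflicts of interest and interdependence.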

Potential strategic changes

We’re in the early stages of considering some potentially significant strategic changes for our team. 

Firstly, as discussed a little elsewhere, we have been considering whether to spin out of CEA and/or Effective Ventures. This may grant us a useful kind of independence but might make some kinds of coordination more difficult. We will be gathering more information and considering these and other trade-offs over the next few months. 

Secondly, we have been considering whether our team should pivot more of our resources towards work in the AI safety space. Actions we might take there would often be analogous to our existing work in the EA community more broadly. For example, we could potentially receive and investigate concerns about individuals in the AI safety space, provide support for the well-being of people working in the space, and assist with coordination between relevant actors to make plans for “crunch time”. We will likely be investigating what this pivot could look like more deeply over the coming months. 

How to contact us (including anonymously) 

If you’ve experienced anything you’re uncomfortable with in the community or would like to report a concern, you can reach our team’s contact people (currently Julia Wise and Catherine Low) via this form. We know a form isn't the warmest, but this helps us manage the inflow of messages, and you can remain anonymous if you choose to be. Alternatively, you can email our contact people at community.contact.people@centreforeffectivealtruism.org, or you can contact us individually (our individual forms are linked here). You can also contact the whole team at community.health.special.projects@centreforeffectivealtruism.org.

You might contact us because you just want to pass on some information, or you might want to have a call to talk through your situation or just have someone listen to your concerns.

If you’d like to have an anonymous, real-time conversation with us instead of a call, we may be able to facilitate that e.g. through Google Chat with your anonymous email address. If this is your preference, let us know and we can explore the options.

  1. ^

    We’re aware that Rethink Priorities also has a Special Projects team. Our current belief, and that of Rethink Priorities, is that this won’t cause much confusion. 

  2. ^

    Chana has not been reporting to Nicole throughout the duration of conducting this internal review, in part to mitigate conflicts of interest.






Thanks for this update and for all the work you're doing!

A potential pivot toward AI safety feels pretty significant, especially for such a core "big tent EA" team. Is it correct to interpret this as a reflection of the team's cause prioritization? Or is this instead because of (1) particularly poor community health in the non-EA AI safety community relative to other causes, (2) part of a plan to spin off into other EA-adjacent spaces, or (3) something else?

I’m a little worried that use of the word pivot was a mistake on my part in that it maybe implies more of a change than I expect; if so, apologies.

I think this is best understood as a combination of

  • Maybe this is really important, especially right now [which I guess is indeed a subset of cause prioritization]
  • Maybe there are unusually high leverage things to do in that space right now
  • Maybe the counterfactual is worse - it’s a space with a lot of new energy, new organizations, etc., and so a lot more opportunity for remaking old mistakes, not a lot of institutional knowledge, and so on. 
    • I think this basically agrees with your point (1), but as a hypothesis, not a conclusion
    • In addition, there is an unusual amount of money and power flowing around this space right now, and so it might warrant extra attention
    • This is a small effect, but we’ve received some requests from within this space to pay more attention to it, which seems like some (small) evidence

On expanding to AI safety: Given all of the recent controversies, I’d think very carefully before linking the reputations of EA and AI safety more than they are already linked. If the same group were responsible for community health for both, and it either made a mistake or a correct but controversial decision, there would be a greater chance of the blowback affecting both communities, rather than just one.

Maybe community health functions being independent of CEA would make this less of an issue. I guess it’s plausible, but also maybe not? Might also depend on whether any new org has EA in the name?

I think that the root cause is that there is no AI safety field-building coordinating committee which would naturally end up taking on such a function. Someone really needs to make that happen (I'm not the right person for this).

This would have the advantage of allowing the norms of the communities to develop somewhat separately. It would sacrifice some operational efficiencies, but I think this is one of those areas where it is better not to skimp.

We decided to rename our team to better reflect the scope of our work. We’ve found that when people think of our team, they mostly think of us as working on topics like mental health and interpersonal harm. While these areas are a central part of our work, we also work on a wide range of other things, such as advising on decisions with significant potential downside risk, improving community epistemics, advising programs working with minors, and reducing risks in areas with high geopolitical risk.

Hmm, it's good that you guys are giving an updated public description of your activities. But it seems like the EA community let some major catastrophes pass through previously, and now the team that was nominally most involved with managing risk, rather than narrowing its focus to the most serious risks, is broadening to include the old stuff, new stuff, all kinds of stuff. This suggests to me that EA needs some kind of group that thinks carefully about what the biggest risks are, and focuses on just those ones, so that the major catastrophes are avoided in future - some kind of risk management / catastrophe avoidance team.

It seems very plausible to me that EA should have more capacity on risk management. That question is one of the things this taskforce might dig into.

Fwiw, I think we have different perspectives here - outside of epistemics, everything on that list is there precisely because we think it’s a potential source of some of the biggest risks. It’s not always clear where risks are going to come from, so we look at a wide range of things, but we are in fact trying to be on the lookout for those big risks. Thanks for flagging that it doesn’t seem like we are; I’m not sure if this comes from miscommunication or a disagreement about where big risks come from.

Maybe another point of discrepancy is that we primarily think of ourselves as looking for high-impact gaps - places where someone should be doing something but no one is - and risks are a subset of that but not the entirety.

(To be clear I also agree with Julia that it’s very plausible EA should have more capacity on this)

Yeah, I'm not trying to stake out a claim on what the biggest risks are.

I'm saying assume that some community X has team A that is primarily responsible for risk management. In one year, some risks materialise as giant catastrophes - risk management has gone terribly. The worst. But the community is otherwise decently good at picking out impactful meta projects. Then team A says "we're actually not just in the business of risk management (the thing that is going poorly), we also see ourselves as generically trying to pick out high impact meta projects. So much so that we're renaming ourselves as 'Risk Management and cool meta projects'". And to repeat, we (impartial onlookers) think that many other teams have been capable of running impactful meta projects. We might start to wonder whether team A is losing their focus, and losing track of the most pertinent facts about the strategic situation.

My understanding was that community health to some extent carries the can for catastrophe management, along with other parts of CEA and EA orgs. Is this right? I don't know whether people within CEA think anyone within CEA bears any responsibility for any part of the past year's catastrophes. (I don't know as in I genuinely don't know - it's not a leading statement.) Per Ryan's comment, the actions you have announced here don't seem at all appropriate given the past year's catastrophes.

I imagine that, for a number of reasons, it's not a good idea to put out an official, full CHSP List of Reasonably-Specific, Major-to-Catastrophic Risks complete with current and potential evaluation and mitigation measures. And your inability to do so likely makes it difficult to fully brief the community about your past, current, and potential efforts to manage those kinds of risks.

My guess is that a sizable fraction of the major-to-catastrophic risks center around a fairly modest number of key leaders, donors, and organizations. If that's so, there might be benefit to more specifically communicating CHSP's awareness of that risk cluster and high-level details about possible strategies to improve performance in that specific cluster (or to transition responsibility for that cluster elsewhere).

I'm curious about how CHSP's practical ability to address "concerns about individuals in the AI safety space" might compare to its abilities in EA spaces. Particularly, it seems that the list of practical things CHSP could do about a problematic individual in the non-EA AI safety space could be significantly more limited than for someone in the EA space (e.g., banning them from CEA events).

I think as an overall gloss, it’s absolutely true that we have fewer levers in the AI Safety space. There are two sets of reasons why I think it’s worth considering anyway:

  1. Impact - in a basic kind of “high importance can balance out lower tractability” way, we don’t want to only look where the streetlight is, and it’s possible that the AI Safety space will seem to us sufficiently high impact to aim some of our energy there
  2. Don’t want to underestimate the levers - we have fewer explicit moves to make in the broader AI Safety space (e.g. disallowing people from events), but there is both a high overlap with EA and my guess is that some set of people in a new space will appreciate people who have thought about community management a lot giving thoughts / advice / sharing models and so on.

But both of these could be insufficient for a decision to put more of our effort there, and it remains to be seen.

Thanks; that makes sense. I think part of the background is about potential downsides of an EA branded organization -- especially one that is externally seen (rightly or not) as the flagship EA org -- going into a space with (possibly) high levels of interpersonal harm and reduced levers to address it. I don't find the Copenhagen Interpretation of Ethics as generally convincing as many here do. Yet this strikes me as a case in which EA could easily end up taking the blame for a lot of stuff it has little control over.

I'd update more in favor if CHSP split off from CEA and EVF, and even more in favor if the AI-safety casework operation could somehow have even greater separation from EA.

As part of potential changes, will there be a review into the conflicts of interest the Community Health team faces and could face in the future? For example, one potential conflict of interest to me is that some Community Health team members are Fund Advisors for EA Funds (which is also part of Effective Ventures).

Why do you think this is a noteworthy conflict?

The example I highlighted is striking to me, given how much information flows through the community health team (much of which may not be representative, or may be false or even exaggerated). There is also a selection bias in the information they receive (i.e., mostly negative, as the CH team deals with reports, and you report someone when something bad has happened).

For example, say you're socially unaware and accidentally make someone feel uncomfortable at an EA event without realising it. If the person you made feel uncomfortable mentioned this to the community health team, would the team then mention it to the grantmakers when you apply for an EA Funds grant? Would the grantmakers believe the worst in you (given the CH team suffers selection bias)? Why does the fact you accidentally made someone feel uncomfortable at an event matter when it comes to applying for a grant?

Now there are dozens of other situations you could come up with and a dozen ways to mitigate this, but the easiest option seems to be removing yourself from that role entirely.


More broadly, I think the CH team doesn't know how to handle conflicts of interest and has even overestimated its ability to do so previously (e.g., Julia Wise thinking she could handle the Owen Cotton-Barratt situation herself instead of bringing in external professionals). I think the CH team should look into other places where conflicts of interest could arise in the future (by team members holding too many positions). The EA Funds may or may not be one such place.