

As the Effective Altruism community has grown, efforts have emerged to promote and support the growth of EA groups in universities and cities around the world. Presently, three organisations support EA groups on a large scale. This article outlines the background of each organisation and clarifies the functional roles and division of labour between them. We hope the post will give the wider EA community a better understanding of the EA group support landscape, while helping people involved in organising EA groups to identify the best ways to get assistance in different contexts.

 

Key Facts:

  • Click here to see our online resources combined in one place

  • All groups can approach CEA, LEAN and EAF to inquire about support

  • Some of EAF’s resources and events are only conveniently accessible to German speakers living in the German-speaking area (Germany, Austria, and Switzerland).

Joint Support

The following are jointly provided by CEA, LEAN and EAF:

 

  • The EA Organisers’ Facebook group

  • The Local EA Groups newsletter, aimed at supporting organisers and sharing relevant news and opportunities. New groups are added automatically, but email groupnewsletter@eahub.org if you think you have been left off the mailing list

  • The annual Local EA Groups Survey. This year’s survey contains sections for both group members and organisers. Click to participate!

 

CEA: Local Groups Support

The Centre for Effective Altruism (CEA) is a non-profit organisation which helps to grow and maintain the effective altruism community. Its mission is to create a global community of people who have made helping others a core part of their lives, and who use evidence and scientific reasoning to figure out how to do so as effectively as possible.


Before the Centre for Effective Altruism merger, Giving What We Can and EA Build (part of EA Outreach) supported local groups independently. CEA has since brought EA Outreach and Giving What We Can together into a single organisation, centralising its support for local EA groups, which is currently provided by CEA's Local Groups Coordinator, Harri Besceli. A small minority of EA groups brand themselves as GWWC or 80,000 Hours groups; these are supported by CEA in the same way as other EA-branded groups.

 

CEA's Local Groups Support currently includes one-to-one mentoring, funding for EA groups, and resources and materials collected in the Effective Altruism Groups' Google Drive folder.

 

The support offered is currently being reviewed and updated. A new funding process, opportunities for receiving mentoring, suggested projects for EA groups and an EA Groups page on effectivealtruism.org will be announced by the end of August.

 

You can contact CEA's Local Groups Coordinator at harri.besceli@centreforeffectivealtruism.org.

 

Effective Altruism Foundation: Outreach

The Effective Altruism Foundation (EAF, Stiftung für Effektiven Altruismus) is an effective altruist project incubator founded in Switzerland in 2013. It supports local groups in the German-speaking area (Germany, Austria, and Switzerland) in the following ways:

 

  • German-language resources, e.g. a local group guide, leaflets, presentation slides, event flyer templates, and more

  • Speakers for EA introduction talks

  • Support for group organizers through a Facebook group, personal advice, and 1-2 in-person local group meetups per year

  • A list of all groups and events in the German-speaking area

 

In addition, EAF supports the German-speaking EA movement with a German EA landing page, PR and media relations, social media, tax-deductible donation regranting to all EA charities, EAGx conferences, and a German EA newsletter.

 

You can contact EAF’s Local Groups Coordinator at marcello.veronese@ea-stiftung.org.

 

LEAN: The Local Effective Altruism Network

LEAN is a Rethink Charity project, originally set up by Tom Ash in 2014. LEAN's objective is to promote Effective Altruism globally by initiating new local EA groups and supporting existing ones. Our outreach strategy has involved starting new EA presences by contacting registered EAs in locations with no known EA representation. LEAN also assumed responsibility for an older network of EA groups previously served by the now-defunct THINK project. Since its inception, LEAN has directly initiated or facilitated over 200 groups and presences around the world. LEAN supports local groups by providing:

 

  • Public profiles for Google searches, and visibility on the Map of EAs

  • Free websites, EA email addresses, and use of Meetup.com

  • Conference calls, bringing organisers together for knowledge transfer

  • Guides, ‘How-To’s and the EA Wiki

  • One-to-one feedback and support

  • Regular fundraisers (such as Living on Less) for groups to participate in

In addition to these services, LEAN has recently launched a Mentoring Programme in concert with CEA. LEAN will continue to build on its expertise in group management strategy, and use it to publish further guides and resources for organisers.

You can contact LEAN’s Manager at richenda@eahub.org.

Comments (4)



thanks

Thanks for aggregating this information, Richenda! One quick bucket of thoughts around EA groups + universities:

  1. How are LEAN/CEA/EAF thinking about university chapters? Have they been an effective way of building a local community? Are there any university-focused plans going forwards?
  2. Are there other movements trying a university-focused strategy? Could we partner/learn from them? I'm thinking about something like Blockchain Education Network (see https://blockchainedu.org/ and https://medium.com/@rishipr/fa2543cdcbd8).

Thanks Richenda!

Hi Rhys,

Yes, universities are especially good environments in which to start EA groups, for a number of reasons (lots of young people with plenty of free time who are actively seeking out new ideas, experiences and activities; plenty of infrastructural support from institutions and student unions; a captive audience; etc.)

We are very mindful of the differences between local groups and University groups. Internally we work on building expertise about these differences, and customising the support and advice we give based on the nature of the group in question.

We have also drawn on the expertise of other successful student-based movements. For example, the Secular Student Alliance has some excellent group growth and management guides, which we pass on as recommended reading (while giving full credit, of course).

Awesome. Thanks Richenda—I'm looking into Secular Student Alliance now!
