Considering the current trajectory of the EA community, we have identified a number of key conclusions, each pointing to a need that we believe .impact is well placed to address:

 
  1. As the community grows, it’ll likely become increasingly important to communicate EA ideas clearly and avoid inaccurate messaging.

  2. Community builders (local groups included) would benefit from a greater variety of resources, with a higher standard of shareability, digestibility and appeal.

  3. The success of local groups is highly dependent on the appeal of the group leader.

  4. The median EA donation ($330) was pretty low. There could be various reasons for this, but we can only really pin down an explanation when .impact conducts the next EA Survey. If EAs think they should donate more but don’t, there could be a fundamental disconnect between belief and action. Do we need new incentives or additional prompting to donate?

 

Community Building and Coordination

 

Many EAs have come across the movement via articles, but those who would be put off by lengthy text have few alternative opportunities to encounter effective altruism. The instances where we have branched out into various media and outreach have proved valuable in driving more people towards the community, e.g. Peter Singer’s TED talk, the Sam Harris podcast with Will MacAskill, EAG, the pledge drive and so on.

 

There is an abundance of existing content that .impact intends to refine into concise, enjoyable media, making the information clearer whilst being careful not to oversimplify it. We will focus on resources that make it easy for the community to share EA ideas through engaging, digestible material.

 

These resources will also aid the Local EA Network. Through LEAN, we have seen that a local group is more likely to succeed with a charismatic group leader. However, the combination of charisma, enthusiasm about EA and willingness to put in the work is scarce. LEAN therefore gives guidance on how to lead, what kinds of events to run and how to have an impact, so that a group’s sustained success no longer depends solely on one individual’s character. In other words, we give group leaders the training to succeed. Likewise, videos and other resources that introduce EA in a fun and appealing way can serve as a supplement where charisma is lacking.

 

As the community grows, it’s becoming more important to have a clear message and avoid miscommunication. For example, we need to ensure that group leaders are up to speed on the key concepts of EA, and that it’s easy to learn the necessary points via engaging resources:

 
  • Engaging, digestible videos: explaining key concepts, transferring charisma, circumventing jargon, and shortening the learning experience without oversimplifying.

  • Podcasts or interviews: this may involve reaching out to existing popular podcasts or creating our own, depending on further research.

  • Infographics, memes, handouts: creating explanatory resources for newcomers (particularly useful for LEAN groups to distribute, and something they requested in our LEAN survey).

  • Resources for groups to measure impact: feedback forms, tips on useful metrics and so on.

  • An interactive EA flowchart to navigate the various resources, so users can direct their own experience of learning about EA. The learning experience would be filterable by time and format (3-second meme, 3-minute video, 10-minute article, etc.) and by intention (introductory learning, short action, in-depth learning, long-term commitment, etc.). Behavioural analytics on the flowchart would also show which resources are most popular with particular audiences.

 

The interactive EA flowchart is a good example of our ethos: we want to create a fun learning experience that appeals to a variety of users and eases communication about EA, while internally collecting behavioural-analytics data that can inform our future strategy.
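As a rough illustration of what such a flowchart might look like under the hood, here is a minimal sketch of one possible data model, in TypeScript. Everything here is hypothetical: the type names, fields and tag values are our illustration of the idea, not a finalised .impact design.

```typescript
// Hypothetical data model for the interactive EA flowchart.
// Resources are tagged by format, time cost and intention, so the
// flowchart can filter the catalogue to match what a user wants.

type Format = "meme" | "video" | "article" | "podcast";
type Intention = "introductory" | "short-action" | "in-depth" | "long-term";

interface ResourceNode {
  id: string;
  title: string;
  url: string;
  format: Format;
  minutes: number;         // rough time commitment
  intentions: Intention[]; // goals this resource serves
  next: string[];          // ids of suggested follow-on resources
}

// A behavioural-analytics event: which resource a user moved to and
// from where, so popular paths per audience can be identified later.
interface ChoiceEvent {
  sessionId: string;
  fromNode: string | null; // null when entering the flowchart
  toNode: string;
  timestamp: number;
}

// Suggest resources that fit the user's available time and intention.
function suggest(
  catalogue: ResourceNode[],
  maxMinutes: number,
  intention: Intention
): ResourceNode[] {
  return catalogue.filter(
    (r) => r.minutes <= maxMinutes && r.intentions.includes(intention)
  );
}
```

The design idea the sketch captures is that tagging each resource by time cost and intention lets the same catalogue serve both a 3-second and a 3-hour visit, while every navigation step doubles as an analytics data point.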

 

Our move to create and distribute high-quality resources represents a long-term approach to (a) involving more people in the community and (b) strengthening the commitment of existing community members.

 
 

Impact Missions, Peer-to-Peer Fundraising and Matching Donations

 

As a means of increasing and coordinating the impact of the EA community, we will be leading Impact Missions throughout the year.

 

These Impact Missions could take any form, from the whole community working together to change a particular policy to a coordinated effort to translate EA materials into other languages. The intention is to make waves through concentrated, coordinated effort. Our peer-to-peer fundraisers are one iteration of this.

 

As part of our immediate impact, .impact is taking over a project that was previously (very successfully) run by Charity Science: peer-to-peer fundraising campaigns. There are two main reasons for this:

 
  • .impact has great potential to develop these fundraising activities, particularly given that we support ~300 Local Groups and a growing network of SHIC Clubs.

  • Charity Science has decided to focus on their direct poverty project, Charity Science Health.

 

We intend to provide platforms for fundraising, as well as to come up with fun and innovative campaigns. We will encourage both LEAN and SHIC groups to take part, thereby:

 
  • Increasing people’s commitment to their respective local groups and EA as a whole.

  • Creating an opportunity to bond as a group, and to learn from other EA groups.

  • Giving groups an active dimension, and providing an alternative to lecture/discussion meetups.

  • Drawing in a new crowd, some of whom may only take part in the fundraiser, while others will likely go on to engage further with EA.

  • Creating at least one clear metric by which groups can measure their success, and hopefully nudging them towards more data collection (something we want to encourage regardless).
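To make the last point concrete, here is a minimal sketch of the kind of tally a group could keep, again in TypeScript and again purely hypothetical (the record shape and function are our illustration, not an existing .impact tool):

```typescript
// Hypothetical per-group fundraiser tally: total raised and distinct
// donor count are simple, comparable metrics a group can report.

interface Donation {
  group: string; // e.g. "EA London" (hypothetical example)
  donor: string;
  amountUsd: number;
}

interface GroupTally {
  totalUsd: number;
  donors: number;
}

function tallyByGroup(donations: Donation[]): Map<string, GroupTally> {
  const tallies = new Map<string, GroupTally>();
  const donorsSeen = new Map<string, Set<string>>();
  for (const d of donations) {
    const t = tallies.get(d.group) ?? { totalUsd: 0, donors: 0 };
    const seen = donorsSeen.get(d.group) ?? new Set<string>();
    t.totalUsd += d.amountUsd;
    if (!seen.has(d.donor)) {
      seen.add(d.donor);
      t.donors += 1;
    }
    tallies.set(d.group, t);
    donorsSeen.set(d.group, seen);
  }
  return tallies;
}
```

Even two numbers per group per fundraiser (total raised, distinct donors) would give groups a baseline to compare year on year.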

 

Our first fundraiser and Impact Mission will be a peer-to-peer winter fundraiser, ‘Season’s Givings’. We are currently gathering matching funds for this project, and are seeking people who have (or would like to gain) experience with fundraising; please do contact us if you can help us make progress with either of these goals.

 

Donate to .impact

 

We are currently fundraising for our 2017 operations. If you are interested in helping .impact continue and scale, please get in touch.

 

Email: georgiedotimpact@gmail.com

Chat: calendly.com/georgiedotimpact

 

See .impact update 1 of 3 here, and 2 of 3 here.

Comments


[anonymous], quoting the post:

The instances where we have branched out into various media and outreach have proved valuable in driving more people towards the community, e.g. Peter Singer’s TED talk, the Sam Harris podcast with Will MacAskill, EAG, the pledge drive and so on.

The examples here are the best outcomes that were generated by people who spent quite a bit of time developing a following. I don't think they're representative of what media-based outreach looks like on average.

As some useful data points: CEA isn’t currently trying to promote EA through media outreach except in cases where a) the audience is large and promising and b) we have access to a platform that lets us dig into the issues in depth (e.g. podcasts). This is because we’ve consistently failed to see much of a return from mass-media-style stories about EA and are worried about putting EA in front of a large audience where we can’t dig into the ideas in depth.

Since I’ve been at CEA, we launched a major media campaign around Will’s book, with mixed results (it’s unclear whether it was worth it), and attempts to promote GWWC through media outreach don’t seem to have been particularly successful. This mirrors my experience in my previous job, where we worked with multiple outside PR firms on projects with little to show for it.

Georgie (post author):
We don’t plan on doing any mass media. I can see how the bit you quoted might be related to mass media, but hopefully the rest of the post clarifies that our focus will be on resources for LEAN, since our LEAN survey showed significant demand for this.

Bernadette, quoting the post:
The median EA donation ($330) was pretty low. There could be various reasons for this, but we can only really pin down an explanation when .impact conducts the next EA Survey.

According to the reports, the first survey, run in 2014 (i.e. reported in 2015), found a median donation of $450 in 2013, with 766 people reporting their donations.

The next survey, run in 2015 (i.e. reported in 2016), found a median donation of $330 in 2014, with 1,341 people reporting their donations.

Repeating the survey has gathered more data and actually produced a lower estimate. I’m interested in how the third survey will help us understand this better.

Georgie:
Me too! We’re in the process of creating the survey now and will be distributing it in January. This is one thing we’re going to address, and if you have suggestions about specific questions, we’d be interested in hearing them.

Bernadette:
Unless you have a specific hypothesis that you are testing, I think a survey is the wrong methodology for answering this question. If you actually want to explore the reasons why (and expect there will not be a single answer), then you need qualitative research.

If you do pursue questions on this topic in a survey format, you are likely to get misleading answers unless you have the resources to test and refine your question methodology very rigorously. Since you will essentially be asking people whether they are failing to do something they have said is good to do, there will be all sorts of biases at play, and it will be very difficult to write questions that function the way you expect them to. To the best of my knowledge, question testing didn’t happen at all with the first survey; I don’t know whether any happened with the second.

I appreciate that the survey takes a vast amount of people’s resources, and is done for good reasons. I hate sounding like a doom-monger, but there are pitfalls here and significant limitations to surveys as a research method. I think the EA community risks falling into a trap on this topic: thinking dubious data is better than none, when actually false data can literally cost lives. As previously, I would strongly suggest getting professional involvement.

Georgie:
Ah, sorry Bernadette, I misunderstood your first question!

I think ‘pin down an explanation’ was probably too strong on my part, because I definitely don’t think it’d be conclusive, and I do hope that we get some more qualitative research into this.

We do have professionals working on the survey this year (is that what you meant by professional involvement?) and I’ve sent your comment to them. They’re far better placed to analyse this than me!

Bernadette:
Thanks Georgie - I see where we were misunderstanding each other! That’s great - research like this is quite hard to get right, and I think it’s an excellent plan to involve people with experience and knowledge in the design and execution as well as the analysis. (My background is medical research as well as clinical medicine, and a depressing amount of research - including randomised clinical trials - is never able to answer the important question because of fundamental design choices. Unfortunately, knowing this fact isn’t enough to avoid the pitfalls. It’s great that EA is interested in data, but it’s vital that we generate and analyse good data well.)

Eric:
Please include a question about race. At the Effective Animal Advocacy Symposium this past weekend at Princeton, the 2015 EA Survey was specifically called out for neglecting to ask a question about the race of the respondents.

Georgie:
Thanks Eric, we spoke to Garrett about this too :)
