
It is often said that EA is talent-constrained, and the movement is experimenting with ways to create new leaders for the most pressing cause areas.

Since the launch of Naming What We Can, our linguistically talented team has conducted an in-depth analysis in search of the most promising and neglected opportunities within this space.

Our analysis shows that even though incubation programs are considered outside EA to be one of the most promising ways to foster new talent, there is not a single incubation program within the EA ecosystem.

Project Plan

  • The first, and hardest, part of the program is finding strong applicants. The incubator participants would stay in the incubator facility for ~18 years, where they would learn about EA and develop domain expertise.
  • The new people incubated at our incubator will be called X-risk-Men.
  • At some point in their growth, each X-risk-Man would realize they have an EA superpower. Some will be super-forecasters. Others will be able to create QALYs out of thin air. In the most extreme cases, some might even be able to discuss Roko's Basilisk without putting everyone close to them in danger.
  • When their incubation is over, the X-risk-Men will all be automatically accepted to Charity Entrepreneurship (or, as it has been renamed by Naming What We Can, Charity Entreprenreurooshrimp) and start super-effective charities.

As others have noted, “EA should focus on being a really good place for a relatively small group of unusual people to try to be extremely impactful”. And there is nothing weirder than superpowers, which definitely have not gone mainstream.

Impact estimation

Overall, we think the impact of the project will be net negative in expectation (see our Guesstimate model). That is because we think the impact is likely to be somewhat positive, but there is a very small tail risk that we will cause the termination of the EA movement. However, as we are risk-averse, we can mostly ignore high tails in our impact assessment, so there is no need to worry.
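For intuition, here is a minimal sketch of how a small tail risk can flip a typically positive impact to net negative in expectation. This is not the actual Guesstimate model; all probabilities and impact numbers below are made up for illustration.

```python
# Toy expected-impact calculation (illustrative numbers only, not the Guesstimate model).
p_tail = 0.001               # hypothetical probability of terminating the EA movement
impact_typical = 100         # hypothetical impact in the ordinary, "somewhat positive" case
impact_tail = -1_000_000     # hypothetical impact if the tail event occurs

expected_impact = (1 - p_tail) * impact_typical + p_tail * impact_tail
print(expected_impact)       # -900.1: positive almost all of the time, yet net negative in expectation
```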

Call to Action

In order to begin the incubation program, we need local EA groups to identify members who have unusual talents or are otherwise strange. We expect that very few EAs are strange, so this may be difficult, but with effort, we think that most EA groups (and maybe some EA orgs) can identify at least one such member and nominate them in a comment below.

Once all members are chosen, we will secretly clone them and raise the clones in the new actual incubator. 


Many thanks to David Manheim, Guy Raveh, Omri Sheffer, Edo Arad, and Yuval Shapira for contributing to this important new project, as well as many members of EA Israel (some of whom have actually been through the trials in the Israeli desert).
 

Comments (8)



EA Switzerland is sending the whole team - we are short, so we fit the description of a "relatively small group of people" very well.

I see that in your Guesstimate you used Expected Impact instead of Expected Value. Could you please make your spreadsheets public so we can criticize them? I am not attacking you as a person, only your intelligence. It's for the good of the future of humanity.

 

We have a potential new member, Monica, who is an actual tram conductor. I am tired of philosophers telling me what to do, so we are recruiting experts into the community. She should be able to resolve the trolley problem in under 10,000 words with only one game theory matrix.

EA Czechia is sending Hana Kalivodová. She can shoot rays of excitement from her eyes; EA needs people who can inspire others.

EA Israel is sending Asaf Ifergan - he cooks well in natural environments, which can be very useful in the case of a global food shortage.

I'm up for the challenge and already sharpening my knife.

EA Sweden nominates Lowe Lundin - he looks like your everyday Swede, but don't let the blonde locks and innocent blue eyes fool you. This guy is a wandering encyclopedia who eats enough for three people and has the heart of four people.

I read a blog post by Abraham Lincoln once, and I think the core point was that EA is talent-overhung instead of talent-constrained.

Since this removes the core factor of impact from the project, it rounds most expected values down to 0, which is an improvement. You can thank me in the branches that would have otherwise suffered destruction by tail risk.

Interesting! Our basis for EA being talent constrained is also by Abraham Lincoln - our first citation might have been outdated. Thank you for this, it's a great improvement!
