
Our programs exist to have a positive impact on the world, rather than to serve the effective altruism community as an end goal. This unfortunately means EAs will sometimes be disappointed because of decisions we’ve made — though if this results in the world being a worse place overall, then we’ve clearly made a mistake. This is one of the hard parts about how EA is both a community and a professional space.

Naturally, people want to know things like:

  • Why didn’t I get admitted to a conference, when EA is really important to me and I’m taking actions inspired by EA?
  • Why didn’t my friend get admitted to a conference, when they seem like a good applicant?

We can understand why people would often like feedback on what they could have done differently or what they can try next time to get a better result. Or they just want to know what happened. When we have a specific idea about what would improve someone’s chances (like “you didn’t give much detail on your application, could you add more information?”) we’ll often give it. 

But we get thousands of applications and we don’t think it’s the best use of our staff’s time to give specific feedback about all of them. Often we don’t have constructive feedback to give.

Many of the things that go into a decision are not easy to pin down — how well we think you understand EA, how we think you’ll add to the social environment, how much we think you’ll benefit from the event given the program we’ve prepared, etc. These things are subjective, and in a lot of cases, reasonable people could disagree about what call to make. There are also cases where we’ll just make mistakes (by our own standards), sometimes in favor of an applicant and sometimes against them.
 

How we communicate about our programs

In responding to public discussion of our programs, sometimes we’ve gotten more in the weeds than we think was ideal. We’ve provided rebuttals or more information about some points but not others, which makes people understandably confused about how much information to expect from us and what the full picture is. It also uses a lot of our staff time. As the EA community grows, we need to adjust how we handle communications with the community.
 

What you should expect from us going forward:

  • When we think there are significant updates to our programs that the community should know about, or when there seems to be widespread confusion or misunderstanding about a particular topic, we’ll likely write a post (like this one).
  • We’ll likely be less involved in the comments or extended public back-and-forth.
  • We’ll read much of the public feedback about our programs, and will definitely read feedback you send directly to us (unless it’s extraordinarily long). We won’t respond in-depth to much of the feedback, though.
  • Our programs are a work in progress, and we’ll take feedback into account as we try to improve them.
     

What we hope you’ll do:

  • Feel free to express your criticisms, observations, and advice about our programs. This could be publicly if you think that’s best, or by writing to us directly. Our contact form can be filled out anonymously. Or you can reach specific programs at groups@centreforeffectivealtruism.org, forum@effectivealtruism.org, or hello@eaglobal.org, or community health’s form.
  • In general, we think it’s a good idea to fact-check public criticisms before publishing. If you send us something to fact-check, we’ll try to do so.
  • If you think we’ve missed important considerations or information on a decision, like about someone’s application, you can send us additional information.
     

On events specifically:

  • We try to run a range of events that serve the breadth of the community. 
  • We recognize that there are people who are really dedicated to doing good, but whose approach isn’t a good fit for all our events.
  • For example, people working all kinds of jobs and donating have historically been the lifeblood of EA. CEA wouldn’t exist without these people. But EA Global is mostly geared toward people who are making other kinds of decisions.
  • There are a lot of options besides EAG. The EAGx conference series serves about twice as many people as the EAG conferences, has broader admissions standards, and takes place in a much wider variety of places (including virtually). Giving What We Can has virtual events year-round connecting members. A lot of the action is in local groups around the world. Some groups have organized unconferences and other more social events. The EA Forum and other online spaces are useful spaces to swap research, ideas, and advice. Virtual programs provide discussion spaces for thousands of people each year.

More info about events admissions.

Comments (3)



There’s an underlying tension where CEA struggles to be both “top down” and “bottom up” at the same time:

Apply now, and please err on the side of applying!

But we get thousands of submissions so “it’s not the best use of our time to give feedback [on rejections]…these things are subjective.”

(When I say “you” further in this comment, I am referring to CEA generally and not the author specifically.) Tl;dr see four recommendations below.

From the “bottom up” vantage point, CEA wants to grow the community. You want more people working for and funding EA interventions. You appreciate that diverse worldviews bring to light overlooked areas, and you can’t predict where breakthroughs come from. You recognize the benefits of broadening the EA tent to include a wider range of talent, from management to line workers.

From the “top down” vantage point, CEA wants high quality programs and events: a position of thought leadership, a high epistemic standard, optimal 1-1 networking, and influence over policy, talent and funding to do the most good. Therefore, you reasonably choose to gate conferences, classes and programs. You don’t want to dilute the core principles or attract bad actors.

It feels to me like CEA chooses to apply the “top down” or “bottom up” lens as appropriate for itself in most situations. However, this can be confusing or misleading to the bulk of the EA community who works alongside CEA but is unaffiliated with it. 

As I’ve previously commented in Open EA Global, EA is a personal choice/identity. It’s not a job title you earn or a license you receive from a licensing board. So when people are turned away from events or courses - ones they are excited to attend or pay for - with unsigned form letters and vague calls-to-action, it feels like a value-based judgment even when it is not.

I pursued four unrelated programs[1] and all of these applications were lengthy and personal. I felt like CEA took my trust and willingness to provide detailed personal information for granted. I’ve encountered only a few organizations (graduate degree programs come to mind) that came close to asking for that level of qualitative detail - and in those cases I found more clarity on what the bar was and what the upside could be. So I can see where feelings of frustration or dissatisfaction are compounded when folks don’t receive commensurate feedback.

Slight aside, based on my professional expertise in branding and customer service, it’s not obvious that the risks of “bad actors” attending events/programs outweigh the risks of “disgruntled ex-EAs” diluting the EA brand in other venues. It’s been suggested that more public criteria for applications might allow people to “game the system,” but you can also motivate good actors to do what’s necessary to qualify next time. I’m not sure the visible cost of bad seeds at an event is higher than the invisible cost of people being turned away who might sour on EA altogether. If not handled with care, rejected applicants become vectors of negative publicity throughout the nonprofit landscape and beyond. Measuring and understanding this could be a useful research project.

Here are four actionable recommendations to improve this process (in no particular order):

  • Don’t call everything an “application.” As silly as it sounds, just calling something an application sets expectations that the process will be painful and there will be a harsh judgment of “accept/reject.” EAs could, for instance, more softly “request an EAG slot,” “petition to join a seminar,” etc.
  • Don’t encourage all folks to apply to high-bar programs. If you already have high rejection rates, casting a wider net alienates more people. You can still use targeted outbound communications to nudge relevant folks.
  • Allow EAs to vouch for other EAs. By the author’s admission, the staff often makes mistakes - there’s just too much information and too few resources (time, staff) to process it. Rely on the trust of others in the community to make the process smoother. If someone is motivated enough to get several references, we probably want them involved.
  • Make CEA assets (EAG keynotes, Virtual Program syllabi) more discoverable. E.g. dozens of EAs present hour-long keynotes at EAG conferences, but for most of these folks, only a few thousand people will ever see them! If CEA cultivates this content and makes it more prominent across owned assets and communications, there’s an alternate path for EAs to engage without needing to apply to constrained programs.

Thanks for listening. I acknowledge that gatekeeping EA programs is a thankless job. I’m interested in making the process better both as a marketing leader with relevant experience and as a community member who sees opportunities for CEA to overcome its own hurdles to doing more good! Feel free to take or leave any of my suggestions and DM me if you want to dig deeper on anything.

  1. ^

    Intro EA Virtual Program and 80k Hours Advising (accepted), EAG DC and EAGx Berlin (rejected).

To reiterate my views from previous discussions: while it does seem impossible (or not worth it) to give everyone individualized feedback, this is not really the point. The point is the ability of the community to understand CEA policies and to oversee them, point out problems when those occur, and "express [our] criticisms, observations, and advice" as you wrote. In this case, that means having info about the admissions criteria that you've so far declined to give, for fear of people gaming the system.

So this post seems to signal a transition from "we will not tell you what our policies are, but we'll at least publicly engage with your criticism" to the even less transparent "we will just not tell you what our policies are". I strongly think this is the wrong direction to move in.

If CEA does not want to be driven by the community, I think they should consider whether they should present themselves as representatives of the community. For instance, consider their ownership of effectivealtruism.org and the branding of their events as Effective Altruism Global (as opposed to, say, adding "Centre for" to each of these).
