Building effective altruism
Growing, shaping, or otherwise improving effective altruism as a practical and intellectual project

Quick takes

15 · 8d
Running EA Oxford Socials: What Worked (and What Didn't)

After someone reached out to me about my experience running EA socials for the Oxford group, I sent him a summary and was encouraged to share it more widely. As such, here's a brief rundown of what I found from a few terms of hosting EA Oxford socials.

The Power of Consistency

Every week at the same time, we would host an event. I strongly recommend this, or at least some other consistent schedule, as it lets people form a routine around your events and can help create EA-aligned friend groups. Regardless of the event we were hosting, we had a solid core of about five people who were there basically every week, which was very helpful. We tended to have 15 to 20 people per event, with fewer at the end of term as people got busy finishing tutorials.

Board Game Socials

Board game socials worked the best of the types of socials I tried. No real structure was necessary: just have a few strong EAs to set the tone, so it really feels like "EA board games," and then let people play. The games act as a natural conversation starter. I especially recommend casual games; "Codenames" and "Coup" were particular favorites at my socials, but I can imagine many others working too. Deeper games have a place as well, but they generally weren't the main draw. In the first two terms, we would just hold one of these every week. They gave people a way to talk about EA in a more casual environment than the discussion groups or fellowships.

"Lightning Talks"

We also ran "Lightning Talks" quite effectively: basically EA PowerPoint nights. As this was in Oxford, we could typically get at least one EA-aligned researcher or practitioner there every week we did it (which was every other week), and the rest of the time was filled with community-member presentations (typically 5-10 minutes). These seemed to be best at re-engaging people who had signed up once but had lost contact with the group.
30 · 1mo
Make your high-impact career pivot: online bootcamp (apply by Sept 14)

Many accomplished professionals want to make a bigger difference with their career, but don't always know how to turn their skills into real-world impact. We (the Centre for Effective Altruism) have just launched a new, free, 4-day online career bootcamp designed to help with that.

How it works:
* Runs Sept 20–21 & 27–28 (weekends) or Oct 6–9 (weekdays)
* Online, 6–8 hours/day for 4 days
* For accomplished professionals (most participants mid-career, 5+ years' experience, but not a hard requirement)

What you'll get:
* Evaluate your options: identify high-impact career paths that match your skills and challenge blind spots
* Build your network: meet other experienced professionals pivoting into impact-focused roles
* Feedback on CVs: draft, get feedback, and iterate on applications
* Make real progress: send applications, make introductions, or scope projects during the bootcamp itself

Applications take ~30 mins and close Sept 14. If you're interested yourself, please do apply! And if anyone comes to mind — colleagues, university friends, or others who've built strong skills and might be open to higher-impact work — we'd be grateful if you shared this with them.
53 · 2mo · 5
I am sure someone has mentioned this before, but… For the longest time, and to a certain extent still, I have found myself deeply blocked from publicly sharing anything that wasn't significantly original. Whenever I found an idea already expressed anywhere, even as a footnote on an underrated 5-karma post, I would be hesitant to write about it, since I thought I wouldn't add value to the "marketplace of ideas." In this abstract conception, the "idea is already out there," so the job is done and the impact is set in place. I have talked to several people who feel similarly: people with brilliant thoughts and ideas who proclaim to have "nothing original to write about" and therefore refrain from writing.

I have come to realize that some of the most worldview-shaping and actionable content I have read and seen was not the presentation of a uniquely original idea, but often a better-presented, better-connected, or even just better-timed presentation of existing ideas. I now think of idea-sharing as a much more concrete but messy contributor to impact, one that requires the right people to read the right content in the right way at the right time, maybe even often enough, sometimes even from the right person on the right platform, and so on.

All of that to say: the impact of your idea-sharing goes far beyond the originality of your idea. If you have talked to several cool people in your network about something and they found it interesting and valuable to hear, consider publishing it! Relatedly, there are many more reasons to write other than sharing original ideas and saving the world :)
4 · 3d · 2
Community > Epistemics

Community is more important to EA than epistemics. What drives EA's greater impact isn't just reasoning, but collaboration. Twenty "90% smart" people are much more likely to identify more impactful interventions than two "100% smart" people. I may be biased by how I found EA—working alone on "finding the most impactful work" before stumbling into the EA community—but this is the point: EA isn't unique for asking, "How can I use reason to find the most impactful interventions?" Others ask that too. EA is unique because it gathers those people and facilitates funding and coordination, enabling far more careful and comprehensive work.
51 · 3mo
1. If you have social capital, identify as an EA.
2. Stop saying Effective Altruism is "weird", "cringe", and full of problems so often.

And yes, "weird" has negative connotations for most people. Self-flagellation once helped highlight areas needing improvement. Now overcorrection has created hesitation among responsible, cautious, and credible people who might otherwise publicly identify as effective altruists. As a result, the label increasingly belongs to those willing to accept high reputational risks or to use it opportunistically, weakening the movement's overall credibility.

If you're aligned with EA's core principles, thoughtful in your actions, and have no significant reputational risks, then identifying openly as an EA is especially important. Normalising the term matters. When credible and responsible people embrace the label, they anchor it positively and prevent misuse.

Offline, I was early to criticise Effective Altruism's branding and messaging. Admittedly, the name itself is imperfect. Yet at this point it is established and carries public recognition. We can't discard it without losing valuable continuity and trust.

If you genuinely believe in the core ideas and engage thoughtfully with EA's work, openly identifying yourself as an effective altruist is a logical next step. Specifically, if you already have a strong public image, align privately with EA values, and have no significant hidden issues, then you're precisely the person who should step forward and put skin in the game. Quiet alignment isn't enough. The movement's strength and reputation depend on credible voices publicly standing behind it.
30 · 3mo · 5
The book "Careless People" starts as a critique of Facebook — a key EA funding source — and unexpectedly lands on AI safety, x-risk, and global institutional failure. I just finished Sarah Wynn-Williams' recently published book. I had planned to post earlier — mainly about EA’s funding sources — but after reading the surprising epilogue, I now think both the book and the author might deserve even broader attention within EA and longtermist circles. 1. The harms associated with the origins of our funding The early chapters examine the psychology and incentives behind extreme tech wealth — especially at Facebook/Meta. That made me reflect on EA’s deep reliance (although unclear how much as OllieBase helpfully pointed out after I first published this Quick Take) on money that ultimately came from: * harms to adolescent mental health, * cooperation with authoritarian regimes, * and the erosion of democracy, even in the US and Europe. These issues are not new (they weren’t to me), but the book’s specifics and firsthand insights reveal a shocking level of disregard for social responsibility — more than I thought possible from such a valuable and influential company. To be clear: I don’t think Dustin Moskovitz reflects the culture Wynn-Williams critiques. He left Facebook early and seems unusually serious about ethics. But the systems that generated that wealth — and shaped the broader tech landscape could still matter. Especially post-FTX, it feels important to stay aware of where our money comes from. Not out of guilt or purity — but because if you don't occasionally check your blind spot you might cause damage. 2. Ongoing risk from the same culture Meta is now a major player in the frontier AI race — aggressively releasing open-weight models with seemingly limited concern for cybersecurity, governance, or global risk. Some of the same dynamics described in the book — greed, recklessness, detachment — could well still be at play. And it would not be comple
39 · 4mo · 2
Productive conference meetup format for 5-15 people in 30-60 minutes

I ran an impromptu meetup at a conference this weekend, where 2 of the ~8 attendees told me that they found this an unusually useful/productive format and encouraged me to share it as an EA Forum shortform. So here I am, obliging them:

* Intros… but actually useful
  * Name
  * Brief background or interest in the topic
  * 1 thing you could possibly help others in this group with
  * 1 thing you hope others in this group could help you with
  * NOTE: I will ask you to act on these imminently, so you need to pay attention, take notes, etc.
  * [Facilitator starts and demonstrates by example]
* Round of any quick wins: anything you heard where someone asked for some help and you think you can help quickly, e.g. a resource, idea, or offer? Say so now!
* Round of quick requests: anything where anyone would like to arrange a 1:1 later with someone else here, or request anything else?
* If 15+ minutes remain:
  * Brainstorm whole-group discussion topics for the remaining time. Quickly gather 1-5 topic ideas in less than 5 minutes.
  * Vote on each proposed topic by show of hands.
  * Discuss the most popular topics for 8-15 minutes each. (It might just be one topic.)
* If less than 15 minutes remain:
  * Quickly pick one topic for group discussion yourself.
  * Or just finish early? People can stay and chat if they like.

Note: the facilitator needs to actually facilitate, including cutting off lengthy intros or any discussions that get started during the 'quick wins' and 'quick requests' rounds. If you have a group over 10, you might need to divide into subgroups for the discussion part. I think we had around 3 quick wins, 3 quick requests, and briefly discussed 2 topics in our 45-minute session.
46 · 5mo
Here are my rules of thumb for improving communication on the EA Forum and in similar spaces online:

* Say what you mean, as plainly as possible.
* Try to use words and expressions that a general audience would understand.
* Be more casual and less formal if you think that means more people are more likely to understand what you're trying to say.
* To illustrate abstract concepts, give examples.
* Where possible, try to let go of minor details that aren't important to the main point someone is trying to make. Everyone slightly misspeaks (or mis... writes?) all the time. Attempts to correct minor details often turn into time-consuming debates that ultimately have little importance. If you really want to correct a minor detail, do so politely, and acknowledge that you're engaging in nitpicking.
* When you don't understand what someone is trying to say, just say that. (And be polite.)
* Don't engage in passive-aggressiveness or code insults in jargon or formal language. If someone's behaviour is annoying you, tell them it's annoying you. (If you don't want to do that, then you probably shouldn't try to communicate the same idea in a coded or passive-aggressive way, either.)
* If you're using an uncommon word, or using a word that also has a more common definition in an unusual way (such as "truthseeking"), please define that word as you're using it and — if applicable — distinguish it from the more common way the word is used.
* Err on the side of spelling out acronyms, abbreviations, and initialisms. You don't have to spell out "AI" as "artificial intelligence", but an obscure term like "full automation of labour" or "FAOL" that was made up for one paper should definitely be spelled out.
* When referencing specific people or organizations, err on the side of giving a little more context, so that someone who isn't already in the know can more easily understand who or what you're talking about. For example, instead of just saying "MacAskill" or "Will", say "Will MacAskill".