Building effective altruism
Growing, shaping, or otherwise improving effective altruism as a practical and intellectual project

Quick takes

Praise for Sentient Futures

By now, I have had the chance to meet most of the staff at Sentient Futures, and I think they capture the best that EA has to offer, both in their organisational goals and in their culture. They are kind, compassionate, impartial, and frugal: the qualities I feel the movement compromised on in recent years in the pursuit of saving us from AI. I really hope this kind of culture becomes more prominent in the 4th wave of EA[1], with similar organisations popping up in the coming months and years.

PS: I have friends at the org, so this obviously makes me biased. :)

1. ^ The 3rd wave is described in Ben West's post. If you go with that framing, then what I'm describing would be the 5th wave.
Not sure who needs to hear this, but Hank Green has published two very good videos about AI safety this week: an interview with Nate Soares and a SciShow explainer on AI safety and superintelligence. Incidentally, he appears to have also come up with the ITN framework from first principles (h/t @Mjreard). Hopefully this is auspicious for things to come?
MrBeast just released a video about “saving 1,000 animals”—a well-intentioned but inefficient set of interventions (e.g. shooting vaccines at giraffes from a helicopter, relocating wild rhinos before they fight each other to the death, covering bills for people to adopt rescue dogs from shelters, transporting lions by plane, and more). It’s great to see a creator of his scale engaging with animal welfare, but there’s a massive opportunity here to spotlight interventions that are orders of magnitude more impactful.

Given that he’s been in touch with people from GiveDirectly for past videos, does anyone know if there’s a line of contact to him or his team? A single video or mention highlighting effective animal charities—like those recommended by Animal Charity Evaluators (e.g. The Humane League, Faunalytics, Good Food Institute)—could reach tens of millions of viewers and potentially shift public perception toward impact-focused giving for animals.

If anyone’s connected or has thoughts on how to coordinate outreach, this seems like a high-leverage opportunity. (I really have no idea how this sort of thing works, but it seemed worth a quick take—feel free to let me know if I’m totally off base here.)
The "areas of expertise" feature on Swapcard, and to a lesser extent the "areas of interest" feature, seems off. Many people list 5+ areas as their expertise. This is not only implausible, but it also dilutes the filtering feature. Some people also don't put in anything (I think?), which means they will be left out of my search even if they would be relevant to talk to. Suggested improvement: make it compulsory to add at least one area of expertise, but cap it at 3, so people don't just put in everything.
I am sure someone has mentioned this before, but…

For the longest time, and to a certain extent still, I have found myself deeply blocked from publicly sharing anything that wasn’t significantly original. Whenever I found an idea already existing anywhere, even as a footnote on an underrated 5-karma post, I would be hesitant to write about it, since I thought that I wouldn’t add value to the “marketplace of ideas.” In this abstract conception, the “idea is already out there”—so the job is done, the impact is set in place. I have talked to several people who feel similarly; people with brilliant thoughts and ideas, who proclaim to have “nothing original to write about” and therefore refrain from writing.

I have come to realize that some of the most worldview-shaping and actionable content I have read and seen was not the presentation of a uniquely original idea, but often a better-presented, better-connected, or even just better-timed presentation of existing ideas. I now think of idea-sharing as a much more concrete but messy contributor to impact, one that requires the right people to read the right content in the right way at the right time; often enough, even from the right person on the right platform.

All of that is to say: the impact of your idea-sharing goes far beyond the originality of your idea. If you have talked to several cool people in your network about something and they found it interesting and valuable to hear, consider publishing it! Relatedly, there are many more reasons to write other than sharing original ideas and saving the world. :)
1. If you have social capital, identify as an EA.
2. Stop saying, so often, that effective altruism is "weird", "cringe", and full of problems. And yes, "weird" has negative connotations for most people.

Self-flagellation once helped highlight areas needing improvement. Now overcorrection has created hesitation among responsible, cautious, and credible people who might otherwise publicly identify as effective altruists. As a result, the label increasingly belongs to those willing to accept high reputational risks or to use it opportunistically, weakening the movement’s overall credibility.

If you're aligned with EA’s core principles, thoughtful in your actions, and carry no significant reputational risks, then identifying openly as an EA is especially important. Normalising the term matters. When credible and responsible people embrace the label, they anchor it positively and prevent misuse.

Offline, I was early to criticise effective altruism’s branding and messaging. Admittedly, the name itself is imperfect. Yet at this point, it is established and carries public recognition. We can't discard it without losing valuable continuity and trust.

If you genuinely believe in the core ideas and engage thoughtfully with EA’s work, openly identifying as an effective altruist is a logical next step. Specifically, if you already have a strong public image, align privately with EA values, and have no significant hidden issues, then you're precisely the person who should step forward and put skin in the game. Quiet alignment isn’t enough. The movement’s strength and reputation depend on credible voices publicly standing behind it.
Make your high-impact career pivot: online bootcamp (apply by Sept 14)

Many accomplished professionals want to make a bigger difference with their career, but don’t always know how to turn their skills into real-world impact. We (the Centre for Effective Altruism) have just launched a new, free, 4-day online career bootcamp designed to help with that.

How it works:
* Runs Sept 20–21 & 27–28 (weekends) or Oct 6–9 (weekdays)
* Online, 6–8 hours/day for 4 days
* For accomplished professionals (most participants are mid-career with 5+ years’ experience, but this is not a hard requirement)

What you’ll get:
* Evaluate your options: identify high-impact career paths that match your skills and challenge blind spots
* Build your network: meet other experienced professionals pivoting into impact-focused roles
* Get feedback on CVs: draft, get feedback, and iterate on applications
* Make real progress: send applications, make introductions, or scope projects during the bootcamp itself

Applications take ~30 mins and close Sept 14. If you’re interested, please do apply! And if anyone comes to mind—colleagues, university friends, or others who’ve built strong skills and might be open to higher-impact work—we’d be grateful if you shared this with them.
Here are my rules of thumb for improving communication on the EA Forum and in similar spaces online:

* Say what you mean, as plainly as possible.
* Try to use words and expressions that a general audience would understand.
* Be more casual and less formal if you think that means more people are likely to understand what you're trying to say.
* To illustrate abstract concepts, give examples.
* Where possible, try to let go of minor details that aren't important to the main point someone is trying to make. Everyone slightly misspeaks (or mis... writes?) all the time. Attempts to correct minor details often turn into time-consuming debates that ultimately have little importance. If you really want to correct a minor detail, do so politely, and acknowledge that you're engaging in nitpicking.
* When you don't understand what someone is trying to say, just say that. (And be polite.)
* Don't engage in passive-aggressiveness or code insults in jargon or formal language. If someone's behaviour is annoying you, tell them it's annoying you. (If you don't want to do that, then you probably shouldn't try to communicate the same idea in a coded or passive-aggressive way, either.)
* If you're using an uncommon word, or using a word that also has a more common definition in an unusual way (such as "truthseeking"), please define that word as you're using it and—if applicable—distinguish it from the more common way the word is used.
* Err on the side of spelling out acronyms, abbreviations, and initialisms. You don't have to spell out "AI" as "artificial intelligence", but an obscure term like "full automation of labour" or "FAOL" that was made up for one paper should definitely be spelled out.
* When referencing specific people or organizations, err on the side of giving a little more context, so that someone who isn't already in the know can more easily understand who or what you're talking about. For example, instead of just saying "MacAskill" or "Will", say "Will MacAskill".