Building effective altruism
Growing, shaping, or otherwise improving effective altruism as a practical and intellectual project

Quick takes

24
22d
5
Why don’t EA chapters exist at very prestigious high schools (e.g., Stuyvesant, Exeter, etc.)? It seems like a relatively low-cost intervention (especially compared to something like Atlas), and these schools produce unusually strong outcomes. There’s also probably less competition than at universities for building genuinely high-quality intellectual clubs (this could totally be wrong).
34
2mo
1
Dwarkesh (of the famed podcast) recently posted a call for new guest scouts. Given how influential his podcast is likely to be in shaping discourse around transformative AI (among other important things), this seems worth flagging and applying for — at least for students or early-career researchers in bio, AI, history, econ, math, or physics who have a few extra hours a week. The role is remote, pays ~$100/hour, and expects ~5–10 hours/week. He’s looking for people who are deeply plugged into a field (e.g. grad students, postdocs, or practitioners) with high taste. Beyond scouting guests, the role also involves helping assemble curricula so he can rapidly get up to speed before interviews. More details are in the blog post; link to apply (due Jan 23 at 11:59pm PST).
8
9d
1
Whenever I talk about Effective Altruism (EA) to someone new, I talk about EA-the-Movement and EA-the-Philosophy. EA-the-Movement draws a specific kind of person (quantitative, techy, philosophical) and has selected a few causes it has determined to be the most effective. EA-the-Philosophy is about asking whether our donations and volunteering are going to places that get the most bang for our buck — and those questions can be applied to anything we care about. It's a way of easing people into our way of thinking without insisting that they join our particular group or adopt our priorities. I find this distinction especially useful when someone finds the quantitative framing or strong recommendations of EA-the-Movement off-putting, or when they have negative prior associations with the movement. I think it's worth making people who are doing good in some way more effective, even if it doesn't end up getting them to do what we'd consider the most good. Although if someone spends enough time thinking with the EA Philosophy, it might end up leading them straight back to the EA Movement.
9
13d
2
The Forum should normalize public red-teaming for people considering new jobs, roles, or project ideas. If someone is seriously thinking about a position, they should feel comfortable posting the key info — org, scope, uncertainties, concerns, arguments for — and explicitly inviting others to stress-test the decision. Some of the best red-teaming I’ve gotten hasn’t come from my closest collaborators (whose takes I can often predict), but from semi-random thoughtful EAs who notice failure modes I wouldn’t have caught alone, or who think differently enough to instantly spot things that would have taken me longer to figure out. Right now, a lot of this only happens at EAGs or in private docs, which feels like an information bottleneck. If many thoughtful EAs are already reading the Forum, why not use it as a default venue for structured red-teaming? Public red-teaming could:

* reduce unilateralist mistakes,
* prevent coordination failures (I’ve almost spent serious time on things multiple people were already doing — reinventing the wheel is common and costly).

Obviously there are tradeoffs — confidentiality, social risk, signaling concerns — but I’d be excited to see norms shift toward “post early, get red-teamed, iterate publicly,” rather than waiting for a handful of coffee chats.
11
21d
EAGx and Summit events are coming up, and we're looking for organizers for more!

Applications for EAGxCDMX (Mexico City, 20–22 March), EAGxNordics (Stockholm, 24–26 April), and EAGxDC (Washington DC, 2–3 May) are all open! These will be the largest regional-focused events in their respective areas, and are aimed at serving those already engaged with EA or doing related professional work. EAGx events are networking-focused conferences designed to foster strong connections within their regional communities. If you’d like to apply to join the organizing team for a 2026 Bay Area EAGx (date and venue to be confirmed, targeting August–September), please apply via this form. Full details can be found here.

We also have applications or direct registrations open for EA Summits in Helsinki (28 Feb), Hong Kong (7 March), and Jakarta (19 April), with more to be announced soon. Summits welcome existing EA community members, but they also include more introductory content, making them a great way for newer, EA-curious professionals to learn about EA and explore potential opportunities. Please keep them in mind to recommend to friends and colleagues who you think could benefit from in-person exposure to EA ideas and the real people behind them.

If you are interested in hosting an EAGx or Summit in your city, or want to nominate an area for consideration, please fill out this form!
45
4mo
5
Not sure who needs to hear this, but Hank Green has published two very good videos about AI safety this week: an interview with Nate Soares and a SciShow explainer on AI safety and superintelligence. Incidentally, he appears to have also come up with the ITN framework from first principles (h/t @Mjreard). Hopefully this is auspicious for things to come?
24
3mo
14
Hey y'all, My TikTok algorithm recently presented me with this video about effective altruism, with over 100k likes and (TikTok claims) almost 1 million views. This isn't a ridiculous amount, but it's a pretty broad audience to reach with one video, and it's not a particularly kind framing of EA. As far as criticisms go, it's not the worst — it starts with Peter Singer's thought experiment and takes the moral imperative seriously as a concept — but it also frames several EA and EA-adjacent activities negatively, saying EA, quote, "has an enormously well funded branch ... that is spending millions on hosting AI safety conferences." I think there's a lot to take from it. The first is in relation to @Bella's recent argument that EA should be doing more to actively define itself. This is what happens when it doesn't. EA is legitimately an interesting topic to learn about because it asks an interesting question — that's what I assume drew many of us here to begin with. It's interesting enough that when outsiders make videos like this, even when they're not the picture we'd prefer,[1] they will capture the attention of many. This video is a significant impression, but it's not the end-all-be-all, and we should seek to define ourselves lest we be defined by videos like it. The second is about zero-sum attitudes and leftism's relation to EA. In the comments, many views like this were presented. @LennoxJohnson really thoughtfully grappled with this a few months ago, when he described his journey from a zero-sum form of leftism and a focus on structural change toward greater sympathy with the orthodox EA approach. But I don't think we can necessarily depend on similar reckonings happening to everyone, all at the same time. With this, I think there's a much less clear solution than the PR problem, as I think on the one hand that EA sometimes doesn't grapple enough with systemic change, but on the other hand that society would be
55
8mo
5
I am sure someone has mentioned this before, but… For the longest time, and to a certain extent still, I have found myself deeply blocked from publicly sharing anything that wasn’t significantly original. Whenever I found an idea existing anywhere, even as a footnote on an underrated 5-karma post, I would be hesitant to write about it, since I thought I wouldn’t add value to the “marketplace of ideas.” In this abstract conception, the “idea is already out there” — so the job is done, the impact is set in place. I have talked to several people who feel similarly; people with brilliant thoughts and ideas who proclaim to have “nothing original to write about” and therefore refrain from writing. I have come to realize that some of the most worldview-shaping and actionable content I have read and seen was not the presentation of a uniquely original idea, but often a better-presented, better-connected, or even just better-timed presentation of existing ideas. I now think of idea-sharing as a much more concrete but messy contributor to impact — one that requires the right people to read the right content in the right way at the right time, maybe even often enough, sometimes even from the right person on the right platform. All of that to say: the impact of your idea-sharing goes far beyond the originality of your idea. If you have talked to several cool people in your network about something and they found it interesting and valuable to hear, consider publishing it! Relatedly, there are many more reasons to write than sharing original ideas and saving the world :)