Building effective altruism
Building EA
Growing, shaping, or otherwise improving effective altruism as a practical and intellectual project

Quick takes

7
5d
Learnings from a day of walking conversations

Yesterday, I did 7 one-hour walks with Munich EA community members. Here's what I learned and why I would recommend it to similarly extroverted community members:

Format
* Created an info document and 7 one-hour Calendly slots and promoted them via our WhatsApp group
* One hour worked well as a default timeframe - 2 conversations could have been shorter while others could have gone longer
* Scheduling more than an hour with someone unfamiliar can feel intimidating, so I'll keep the 1-hour format
* Walked approximately 35km throughout the day and painfully learned that street shoes aren't suitable - got blisters that could have been prevented with proper hiking boots

Participants
* Directly invited two women to ensure diversity, resulting in 3/7 non-male participants
* Noticed that people from timeslots 1 and 3 spontaneously met for their own 1-1 while I was busy with timeslot 2
* Will actively encourage more member-initiated connections next time to create a network effect

Conversations
* My prepared document helped skip introductions and jump straight into meaningful discussion
* Tried balancing listening vs. talking, succeeding in some conversations while others turned into them asking me more questions
* Expanded beyond my usual focus on career advice, offering a broader menu of discussion topics
* This approach reached people who initially weren't interested in career discussions
* One participant was genuinely surprised their background might be impactful in ways they hadn't considered
* Another wasn't initially interested in careers but ended up engaging with the topic after natural conversation flow
* 2 of 7 people shared personal issues where I focused on empathetic listening and sharing relevant parts of my own experience
* The remaining 5 discussions centered primarily on EA concepts and career-related topics

Results
* Received positive feedback suggesting participants gained eithe
15
1mo
1
Mini EA Forum Update We've updated the user menu in the site header! 🎉 I'm really excited, since I think it looks way better and is much easier to use. We've pulled out all the "New ___" items to a submenu, except for "New question" which you can still do from the "New post" page (it's still a tab there, as is linkpost). And you can see your quick takes via your profile page. See more discussion in the relevant PR. Let us know what you think! 😊 Bonus: we've also added Bluesky to the list of profile links, feel free to add yours!
32
2mo
2
EA Awards

1. I feel worried that the ratio of the amount of criticism one gets for doing EA stuff to the amount of positive feedback one gets is too high.
2. Awards are a standard way to counteract this.
3. I would like to explore having some sort of awards thingy.
4. I currently feel most excited about something like: a small group of people solicit nominations and then choose a short list of people to be voted on by Forum members, and then the winners are presented at a session at EAG BA.
5. I would appreciate feedback on:
   1. whether people think this is a good idea
   2. how to frame this - I want to avoid being seen as speaking on behalf of all EAs
6. Also, if anyone wants to volunteer to co-organize with me, I would appreciate hearing that.
44
4mo
11
I'm currently facing a career choice between a role working on AI safety directly and a role at 80,000 Hours. I don't want to go into the details too much publicly, but one really key component is how to think about the basic leverage argument in favour of 80k. This is the claim that goes: well, in fact I heard about the AIS job from 80k. If I ensure even two (additional) people hear about AIS jobs by working at 80k, isn't it possible that going to 80k could be even better for AIS than doing the job could be?

In that form, the argument is naive and implausible. But I don't think I know what the "sophisticated" argument that replaces it is. Here are some thoughts:

* Working in AIS also promotes growth of AIS. It would be a mistake to only consider the second-order effects of a job when you're forced to by the lack of first-order effects.
* OK, but focusing on org growth full-time surely seems better for org growth than having it be a side effect of the main thing you're doing.
* One way to think about this is to compare two strategies for improving talent at a target org: "try to find people to move into roles in the org, as part of cultivating a whole overall talent pipeline into the org and related orgs" versus "put all of your full-time effort into having a single person, i.e. you, do a job at the org". It seems pretty easy to imagine that the former would be a better strategy?
* I think this is the same intuition that makes pyramid schemes seem appealing (something like: "surely I can recruit at least 2 people into the scheme, and surely they can recruit more people, and surely the norm is actually that you recruit a tonne of people"), and it's really only by looking at the mathematics of the population as a whole that you can see it can't possibly work, and that actually it's necessarily the case that most people in the scheme will recruit exactly zero people ever.
* Maybe a pyramid scheme is the extreme of "what if literally everyone in EA work
2
4d
The recent pivot by 80,000 Hours to focus on AI seems (potentially) justified, but the lack of transparency and input makes me wary. https://forum.effectivealtruism.org/posts/4ZE3pfwDKqRRNRggL/80-000-hours-is-shifting-its-strategic-approach-to-focus

TL;DR: 80,000 Hours, a once cause-agnostic, broad-scope introductory resource (with career guides, career coaching, online blogs, and podcasts), has decided to focus on upskilling and producing content centred on AGI risk, AI alignment, and an AI-transformed world.

According to their post, they will still host the backlog of content on non-AGI causes, but may not promote or feature it. They also say roughly 80% of new podcasts and content will be AGI-focused, and other cause areas such as Nuclear Risk and Biosecurity may have to be covered by other organisations.

Whilst I cannot claim in-depth knowledge of the norms around such shifts, or of AI specifically, I would set aside the actual claims behind the shift and instead focus on the potential friction in how the change was communicated. To my knowledge (please correct me), no public information or consultation was provided beforehand, and I had no forewarning of this change. Organisations such as 80,000 Hours may not owe this degree of openness, but since openness is a value heavily emphasised in EA, it feels slightly alienating.

Furthermore, the actual change may not be so dramatic, but it has left me grappling with the thought that other large organisations could pivot just as quickly. This isn't necessarily inherently bad, and it has the advantageous signalling of being 'with the times' and 'putting our money where our mouth is' in terms of cause-area risks. However, in an evidence-based framework, surely at least some heads-up would go a long way in reducing short-term confusion or gaps.

Many introductory programs and fellowships use 80k resources, sometimes as embeds rather than as standalone resources. Despite claimi
70
9mo
4
David Rubenstein recently interviewed Philippe Laffont, the founder of Coatue (probably worth $5-10b). When asked about his philanthropic activities, Laffont basically said he's been too busy to think about it, but wanted to do something someday. I admit I was shocked. Laffont is a savant technology investor and entrepreneur (including in AI companies), and it sounded like he literally hadn't put much thought into what to do with his fortune.

Are there concerted efforts in the EA community to get these people on board? Like, is there a google doc with a six-degrees-of-separation plan to get dinner with Laffont? The guy went to MIT and invests in AI companies. It just wouldn't be hard to get in touch. It seems like increasing the probability that he aims some of his fortune at effective charities would justify a significant effort here. And I imagine there are dozens or hundreds of people like this.

Am I missing some obvious reason this isn't worth pursuing or is likely to fail? Have people tried? I'm a bit of an outsider here, so I'd love to hear people's thoughts on what I'm sure seems like a pretty naive take!

https://youtu.be/_nuSOMooReY?si=6582NoLPtSYRwdMe
62
9mo
12
I quit. I'm going to stop calling myself an EA, and I'm going to stop organizing EA Ghent, which, since I'm the only organizer, means that in practice it will stop existing. It's not just because of Manifest; that was merely the straw that broke the camel's back. In hindsight, I should have stopped after the Bostrom or FTX scandal. And it's not just because they're scandals; it's because they highlight a much broader issue within the EA community regarding whom it chooses to support with money and attention, and whom it excludes.

I'm not going to go to any EA conferences, at least not for a while, and I'm not going to give any money to the EA fund. I will continue working for my AI safety, animal rights, and effective giving orgs, but will no longer be doing so under an EA label. Consider this a data point on what choices repel which kinds of people, and whether that's worth it.

EDIT: This is not a solemn vow forswearing EA forever. If things change, I would be more than happy to join again.

EDIT 2: For those wondering what this quick take is reacting to, here's a good summary by David Thorstad.
111
2y
11
GET AMBITIOUS SLOWLY

Most approaches to increasing agency and ambition focus on telling people to dream big and not be intimidated by large projects. I'm sure that works for some people, but it feels really flat for me, and I consider myself one of the lucky ones. The worst-case scenario is that big inspiring speeches get you really pumped up to Solve Big Problems but you lack the tools to meaningfully follow up.

Faced with big dreams but unclear ability to enact them, people have a few options:

* try anyway and fail badly, probably too badly for it to even be an educational failure
* fake it, probably without knowing they're doing so
* learned helplessness, possibly systemic depression
* be heading towards failure, but too many people are counting on you, so someone steps in and rescues you. They consider this net negative and prefer the world where you'd never started to the one where they had to rescue you.
* discover more skills than they knew. Feel great, accomplish great things, learn a lot.

The first three are all very costly, especially if you repeat the cycle a few times.

My preferred version is the ambition snowball, or "get ambitious slowly". Pick something big enough to feel challenging but not much more, accomplish it, and then use the skills and confidence you learn to tackle a marginally bigger challenge. This takes longer than immediately going for the brass ring and succeeding on the first try, but I claim it is ultimately faster and has higher EV than repeated failures.

I claim EA's emphasis on doing The Most Important Thing pushed people into premature ambition and everyone is poorer for it. Certainly I would have been better off hearing this 10 years ago.

What size of challenge is the right size? I've thought about this a lot and don't have a great answer. You can see how things feel in your gut, or compare to past projects. My few rules:

* stick to problems where failure will at least be informative. If you can't track reality well eno