Harry Taussig

211 karma · Joined · Ardmore, PA 19003, USA


This is great, I really appreciate you writing it. I just took vacation for a couple months and basically did what Alice said. Any readers feel free to DM me if you'd like to discuss these feelings + what you might do about it :))

Thanks for writing :)

I see the "narcissism of small differences" dynamics already coming up subtly between EA sub-groups. I see some resentment toward the Bay Area rationalists and similar circles.

Also, I found the tech firm example helpful, and wouldn't be surprised if other social movements became increasingly guarded against or dismissive of EA's aims, because its philosophy is so captivating and its outreach to top-talent students is so aggressive.

I wonder how you imagine EA outreach looking different. Do you think it should be slower?

I'm not sure exactly what I think, but I want it to be the case, and have the intuition, that it's best for us to teach students everything their university should have taught them. Part of that is how to make a difference in the world using an "EA mindset," but it's also emotional intelligence, how to collaborate without hierarchy, how to hold multiple mindsets usefully, and how to understand and work with oneself.

I have not! 

But I would guess that the closest you can get is doing user interviews (or surveys, though I don't think you could get many people to fill them out) several months out, and just asking people how they think the event affected them and how counterfactual they think that impact was. I think people have good enough insight here for this to get you most of the valuable information. My first EAG was the difference between me working at an EA org and being a software engineer. My most recent EAG did almost nothing for me, on reflection, even though I made new connections and rated it very highly.

I think just asking this directly probably gets us closer than trying to assign a portion of the impact to each particular event, even though I agree that in reality the picture is much more complicated than this.

And if anyone has ideas on how to do better impact analysis on events than this, PLEASE tell me. But I think this is already a huge improvement on my sense of what the default impact analysis for EA events is, and anything more complicated won't give us much more information.

I totally agree here if we're talking about giving people the best experience, which is a lot of what we want to do: facilitating friendships that will support people long-term in their motivation and in making big career or life decisions that could be quite impactful.

I also worry about feedback loops here, and how it's easiest to optimize for people giving you good reviews at the end of your event, which means optimizing for people's happiness over everything else.

I'd be very excited about events and retreats that more consistently do follow-ups 1-12 months after the event, so we can see what really impacted and supported people. I'm guessing a lot of it is vibes, but it could be a lot less than I currently think (my position is currently similar to yours). There are big, impactful wins to be had that optimizing for people's well-being will likely not get us to.

For more on this, you can check out similar thoughts in the forum post on why CFAR didn't go as well as planned, or Andy Matuschak's thoughts on "enabling environments."

Just want to say that I really appreciate this post and keep coming back to it :)

Thank you for writing this! It helped me understand my negative feelings toward long-termist arguments so much better.

In talking to many EA university students and organizers, I've found that lots of them have serious reservations about long-termism as a philosophy, but not as a practical project, because long-termism as a practical project usually means "don't die in the next 100 years," which is something we can pretty clearly make progress on (important, since the usual objection is that maybe we can't influence the long-term future).

I've been frustrated that in the intro fellowship and in EA conversations, we take such a strange path to something so intuitive: let's try to avoid billions of people dying this century.

The case for working on animal welfare over AI / X-risk

Thanks for writing this post. :)

I like how you accept that a low-commitment reading group is sometimes the best option. 

I think one of the ways reading groups go wrong is when you don't put in the intentional effort or accountability to get everyone to actually read, but you still expect them to, even though you're unsurprised when they don't. Then, because you wish they had read, you still run the discussion as if they're prepared, and you get into the awkward situation you talked about where people don't speak, since they don't want to blatantly reveal they haven't done the reading.

I love and appreciate these suggestions! I'll be stealing the idea of copying readings into Google Docs and am super excited about it.
