I'm a doctor working towards the dream that every human will have access to high-quality healthcare. I'm a medic and director of OneDay Health, which has launched 53 simple but comprehensive nurse-led health centers in remote rural Ugandan villages. A huge thanks to the EA Cambridge student community in 2018 for helping me realise that I could do more good by focusing on providing healthcare in remote places.
Understanding the NGO industrial complex, and how aid really works (or doesn't) in Northern Uganda
Global health knowledge
As much as I mostly agree with this, I selfishly want to soak up a little of your love and "entertainment" from time to time. I'm keen to keep the forum vibrant and nurturing to our souls, and it's going to be hard to avoid fun entirely.
So when the gratuitous, meaningless fun hits the forum, I might not be reporting to @Toby Tremlett🔹 and the fun police.... ;)
I really like this take on EA as an intellectual movement, and agree that EA could focus more on “the mission of making the transition to a post-AGI society go well.”
As important as intellectual progress is, I don’t think it defines EA as a movement. The EA movement is not (and should not be) dependent on continuous intellectual advancement and breakthroughs for success. When I look at your 3 categories for the “future” of EA, they seem to refer more to our relevance as thought leaders than to what we actually achieve in the world. Not everything needs to be intellectually cutting edge to be doing-lots-of-good. I agree that EA might be somewhat “intellectually adrift”, and yes the forum could be more vibrant, but I don’t think these are the only metrics for EA success or progress - and maybe not even the most important ones.
Intellectual progress moves in waves and spikes - times of excitement and rapid progress, then lulls. EA made exciting leaps over 15 years in the thought worlds of development, ETG, animal welfare, AI and biorisk. Your post-AGI ideas could herald a new spike which would be great. My positive spin is that in the meantime, EAs are “doing” large scale good in many areas, often without quite the peaks and troughs of intellectual progress.
My response to your “EA as a legacy movement set to fade away” would be that this is true only insofar as legacy depends on intellectual progress. Which it does, but it also depends on how your output machine is cranking. I don't think we have stalled to the degree your article seems to make out. On the “doer” front I think EA is progressing OK, and it could be misleading/disheartening to leave that out of the picture.
Here’s a scattergun of examples which came to mind where I think the EA/EA-adjacent doing machine is cranking pretty well, in both real-world progress and the public sphere, over the past year or two. They probably aren't even the most important.
1. Rutger Bregman going viral with the launch of “The School for Moral Ambition”
2. Lewis Bollard’s Dwarkesh podcast, TED talk and public fundraising.
3. Anthropic at the frontier of AI building and the public sphere, with ongoing EA influence
4. The shrimp Daily show thing…
5. GiveWell raised $310 million last year NOT from OpenPhil, the most ever.
6. Impressive progress on reducing factory farming
7. 80,000 hours AI video reaching 7 million views
8. Lead stuff
9. CE incubated charities gaining increasing prominence and funding outside of EA, with many sporting multi-million dollar budgets and producing huge impact
10. Everyone should have a number 10....
Yes, we need to look for the next big cause areas and intellectual leaps forward, while we also need thousands of people committed to doing good in the areas they have already invested in behind this. There will often be years of lag time between ideas and doers implementing them. And building takes time. Most of the biggest NGOs in the world are over 50 years old. Even OpenAI, in a fast-moving field, was founded 10 years ago. Once people have built career capital in AI/animal welfare/ETG or whatever, I think we should be cautious about encouraging those people on to the next thing too quickly, lest we give up hard-fought leverage and progress. That said, your new cause areas might be a relatively easy pivot, especially for philosophers/AI practitioners.
I appreciate your comment: “‘Are you saying that EA should just become an intellectual club? What about building things!’ Definitely not - let’s build, too!”
But I think building/doing deserves more than a short comment as we assess EA progress.
I agree with your overall framing, and I know you can’t be too balanced or include too many caveats in a short post, but I think that as well as considering the intellectual frontier, we should keep “how are our doers doing?” front and center in any assessment of the general progress/decline of EA.
I agree with this comment in general, and think $100 would be a relatively small amount. For both EV and PR reasons though, I would think $1000ish would be reasonable.
If we were looking for PR firms to compete for a logo or a brand or similar, then 10k might make sense, or even more. But the competition is labeled as a "Meme" prize, which signals to me rougher, lower-effort work - though hopefully still a fun and thought-provoking meme with some longevity and sticking power.
I really doubt a competition with any prize pool has more than a 5% chance of producing a meme with close to the strength of p(doom) or 1984, but am happy to be pointed to examples which might show otherwise.
Thanks for the wonderful insight. I'm 38 and have lived with my wife for the last 12 years in the EA hub of Northern Uganda. Although yes, it's the perfect place to deeply understand and work on solving tricky development issues (come live with us!), I'll admit there are a few reasons why people might not want to move here permanently, including most of those you listed ;).
Although our experience has been that if you live somewhere long enough, the place can become home and then you get some of the best of both worlds....
People interested in global health will benefit from subscribing to @David Nash's amazing monthly roundup of the best writing on global development. He has an uncanny knack for selecting quality stuff, and I always find an interesting article I wouldn't have seen otherwise.
He also does a great job of breaking the topics up so we can focus on our own area of interest (aid/growth/governance/trade/health/education etc.)
https://gdea.substack.com/subscribe
My only criticism might be that there's a slightly disproportionate focus on economic growth, but hey we've all got our hobby horses ;)
Completely unsolicited plug, BTW. Even when I met up with David in person he didn't even pay for my coffee ;)
This comment surprised me: "Advocacy is riskier than the average grant".
Yes, it might be riskier than the average GHD or animal welfare grant, but I would have guessed that advocacy would be less risky than technical AI safety grants. You've illustrated some ways that advocacy may have caused real-world changes. I doubt many technical AI safety grants can concretely point to a way they may have made the world even a little safer from AI.
Isn't MIRI now largely an advocacy organisation as well, having emerged from its previous technical work?
Strong upvote - great job (unusual for me for an AI safety post, ha). I think within very specific domains like this, there's no reason at all why you can't do cost-effectiveness comparisons for AI safety. I would loosely estimate this to have similarish validity to many global health comparisons.
I love Rational Animations and show their videos to groups of friends - very subjectively, I think you might have underrated their quality adjuster. But I'm still gobsmacked they have spent over 4 million dollars. If 80k can continue to produce videos even 1/10th as good as their first one for 100k (about the same cost as each Rational Animations video), I would probably rather put my money there.
I also think a 12x quality adjustment might be a bit of an overshoot for Robert Miles. Subjectively rating each view on one channel as being worth over 10x the value of a view on another is a pretty big call, and I can imagine that if I were another video producer I might squint a bit....
This especially is a great take in a competitive job market, which I hadn't thought about before - as much as it might be hard for people with certain personality traits.
"Aim to be a spikier candidate - i.e., someone with some chance of being a fantastic hire, but lower confidence of being an average hire. If you’re getting to the mid-stages of many processes, there’s a chance you’re seen as a ‘good but not great’ candidate across the park. I see many very well-meaning, well-intentioned applicants like this - clearly value-aligned with AIM but without any standout traits that get me excited about their potential as a founder. Like with dating, it’s better to be a perfect fit for one role than a decent fit for every role: it’s much better to be a 2/10 for 5 hiring managers, and a 10/10 for 1 hiring manager, than a 6/10 for every process you go through. Don’t just aim to tick the boxes in your application submissions; highlight what makes you more unique in terms of your experience, knowledge, or approach to working. Can you bring a novel angle to the test task you’ve been presented, that might fall flat, but might also make you stand out?"