Welcome to the EA Forum bot site. If you are trying to access the Forum programmatically (either by scraping or via the API), please use this site rather than forum.effectivealtruism.org.

This site has the same content as the main site, but is run in a separate environment to avoid bots overloading the main site and affecting performance for human users.
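
For programmatic access, here is a minimal sketch in Python of what a request against this site might look like. The host below is a placeholder for this site's actual URL, and the query shape follows the Forum's public GraphQL schema as I understand it; treat the endpoint path and field names as assumptions and verify them before relying on this.

import requests

# Placeholder host: substitute this site's actual URL.
BOT_SITE = "https://forum-bots.example.org"

# Fetch the five most recent posts via the GraphQL endpoint,
# assumed to be served at /graphql here as on the main site.
query = """
{
  posts(input: {terms: {view: "new", limit: 5}}) {
    results {
      title
      pageUrl
    }
  }
}
"""

response = requests.post(f"{BOT_SITE}/graphql", json={"query": query})
response.raise_for_status()
for post in response.json()["data"]["posts"]["results"]:
    print(post["title"], "-", post["pageUrl"])

Querying this mirror rather than the main site keeps bot traffic from affecting performance for human readers, which is the point of the separate environment described above.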

Quick takes

I wanted to flag an upcoming Netflix limited series, The Altruists, which dramatises the collapse of FTX and centres on Sam Bankman-Fried and Caroline Ellison. Filming wrapped late last year, and the series is expected to release in 2026.

Regardless of how carefully or poorly the show handles the facts, the title and premise alone are likely to renew the public association between effective altruism, crypto, and the FTX collapse. Given Netflix's reach, this will almost certainly shape first impressions for many people encountering EA-adjacent ideas for the first time. It seems worth thinking early about what's likely to follow.

This won't primarily be about factual accuracy. Even a relatively balanced dramatisation will compress nuance and foreground irony, because that is how narrative television works. A Netflix drama will travel faster and wider than any later attempts at nuance. Silence may be read as evasiveness, while reactive defensiveness would likely make things worse.

I don't have a fully formed proposal for how the community should respond, but it seems worth beginning the conversation early, before others frame it for us. I'd be interested to hear how others are thinking about this.
(Half-baked and maybe just straight-up incorrect about people's orientations.) I worry a bit that groups thinking about the post-AGI future (e.g., Forethought) will not want to push for something like super-optimized flourishing, because this will seem weird and possibly uncooperative with factions that don't like the vibe of super-optimization. This might happen even if these groups do believe in their hearts that super-optimized flourishing is the best outcome.

It is very plausible to me that the situation is "convex", in the sense that it is better for the super-optimizers to optimize fully with their share of the universe while the other groups do what they want with their share (with rules to prevent extreme suffering, pessimization, etc.). This approach might be better for all groups than aiming for a more universal middle ground that leaves everyone disappointed: a universe that is not very optimized for flourishing but is still super weird and unfamiliar.

It would be very sad if we missed out on optimized flourishing because we were trying not to seem weird or uncooperative.
Are there any good online groups for software engineers interested in EA? I joined the EA software engineers Discord, but it's not very active. I'm looking for an active space where I could get advice.
The economist Tyler Cowen linked to my post on self-driving cars, so it ended up getting a lot more readers than I ever expected. I hope that more people now realize that, at the very least, self-driving cars are not an uncontroversial, uncomplicated AI success story. In discussions around AGI, people often say things along the lines of 'deep learning solved self-driving cars, so surely it will be able to solve many other problems'. In fact, the lesson to draw is the opposite: self-driving is too hard a problem for the current cutting edge in deep learning (and deep reinforcement learning), and this should make us think twice before cavalierly proclaiming that deep learning will soon be able to master even more complex, more difficult tasks than driving.