
Welcome to the EA Forum bot site. If you are trying to access the Forum programmatically (either by scraping or via the API), please use this site rather than forum.effectivealtruism.org.

This site has the same content as the main site, but is run in a separate environment to avoid bots overloading the main site and affecting performance for human users.

Quick takes

Linch (20h):
What are people's favorite arguments/articles/essays trying to lay out the simplest possible case for AI risk/danger? Every single argument for AI danger/risk/safety I’ve seen seems to overcomplicate things. Either they have too many extraneous details, or they appeal to overly complex analogies, or they seem to spend much of their time responding to insider debates. I might want to try my hand at writing the simplest possible argument that is still rigorous and clear, without being trapped by common pitfalls. To do that, I want to quickly survey the field so I can learn from the best existing work as well as avoid the mistakes they make.
Dwarkesh (of the famed podcast) recently posted a call for new guest scouts. Given how influential his podcast is likely to be in shaping discourse around transformative AI (among other important things), this seems worth flagging and applying for (at least for students or early-career researchers in bio, AI, history, econ, math, or physics who have a few extra hours a week). The role is remote, pays ~$100/hour, and expects ~5–10 hours/week. He’s looking for people who are deeply plugged into a field (e.g. grad students, postdocs, or practitioners) with high taste. Beyond scouting guests, the role also involves helping assemble curricula so he can rapidly get up to speed before interviews. More details are in the blog post; link to apply (due Jan 23 at 11:59pm PST).
According to someone I chatted to at a party (not normally the optimal way to identify top new cause areas!), fungi might be a worrying new source of pandemics because of climate change. Apparently human body temperature has acted as a thermal barrier that prevented fungi from infecting humans, but as fungi adapt to higher ambient temperatures, they become better able to overcome that barrier. This article has a bit more on this: https://theecologist.org/2026/jan/06/age-fungi Purportedly, this is even scarier than a pathogen you can catch from people, because you can catch this from the soil. I suspect that if this were, in fact, the case, I would have heard about it sooner. Interested to hear comments from people who know more about it than me, or have more capacity than me to read up about it a bit.
I was a bit worried for the last 3 weeks that the Forum had gone quiet... Then I came back after a 5-day Ugandan internet blackout and there are lots of fantastic front-page posts. Great job everyone!!!
Quick link-post highlighting Toner quoting Postrel’s dynamist rules + her commentary. I really like the dynamist rules as a part of the vision of the AGI future we should aim for:

“Postrel does describe five characteristics of ‘dynamist rules’: I see some overlap with existing ideas in AI policy:

* Transparency, everyone’s favorite consensus recommendation, fits well into a dynamist worldview. It helps with Postrel’s #1 (giving individuals access to better information that they can act on as they choose), #3 (facilitating commitments), and #4 (facilitating criticism and feedback). Ditto whistleblower protections.
* Supporting the development of a third-party audit ecosystem also fits—it helps create and enforce credible commitments, per #3, and could be considered a kind of nestable framework, per #5.
* The value of open models in driving decentralized use, testing, and research is obvious through a dynamist lens, and jibes with #1 and #4. (I do think there should be some precautionary friction before releasing frontier models openly, but that’s a narrow exception to the broader value of open source AI resources.)

Another good bet is differential technological development, aka defensive accelerationism—proactively building technologies that help manage challenges posed by other technologies—though I can’t easily map it onto Postrel’s five characteristics. I’d be glad to hear readers’ ideas for other productive directions to push in.”