We've got a Draft Amnesty Week coming up soon (October 13-19). During Draft Amnesty, people publish drafts that have been lying around forever, but they also write new, draftier posts.
If they do... what do you want them to write? What would you like to read on the EA Forum?
PS: This question has had some great answers before; see here for February's version.
When answering in this thread, I suggest putting each idea in a different answer, so that comment threads don't get too confusing and ideas can be voted on separately.
If you see an answer here describing a post you think has already been written, please lend a hand and link it here.
A few suggestions for possible answers:
- A question you would like someone to answer: “How, historically, did AI safety become an EA cause area?”
- A type of experience you would like to hear about: “I’d love to hear about the experience of moving from consulting into biosecurity policy. Does anyone know anyone like this who might want to write about their experience?”
- A gap in an argument that you'd like someone to fill.
If you have loads of ideas, consider writing an entire "posts I would like someone to write" post.
Why put this up before Draft Amnesty Week?
If you see a post idea here that you think you might be well placed to write, Draft Amnesty Week might be a great time to post it. During Draft Amnesty Week, your posts don't have to be thoroughly thought through, or even fully drafted. Bullet points and missing sections are allowed, so the bar for posting is lower. More details.
A proposal for an "Anonymity Mediator" ("AM") in EA. This would be a person whose main role is to strip identity from sensitive information. For example, if person A has information about an EA (person B) enabling dangerous work at a big AI lab, person A could contact the AM and share extremely minimal information in a highly secure way (ideally in person, with no devices). The AM would then be able to alert the people who perhaps should know, with minimal chance of person A's identity being revealed. I would love to see a post proposing such a role, examining whether it seems helpful (community issues, information security, etc.), and, if so, suggesting a way to make progress on funding and finding such a person.
I have thought for a long time that the EA power centers lack curiosity about, and appropriate respect for, the potential of the EA community, and rely on a pretty specific set of markers to decide who should be given the time of day. My impression is that there is a pretty small "nerve center" that sets priorities and considers how the EA community might help address those priorities along the paths that it sees.
This seems to me to limit the power of EA significantly: if more perspectives and ideas were taken seriously, and resources were dedicated in those directions rather than just to relatively narrow agents of the "nerve center", we might be able to accomplish a lot more. Right now it seems pretty sad that EA is often identified with the people who hold power in it, rather than with its more basic and important idea of maximizing good with the resources that we have.
I suppose there have probably been posts along these lines, but a post on "democratizing EA funding and power as maximizing epistemic hygiene and reach" would be appreciated.
I get the impression that posts along these lines pop up from time to time; see e.g. the one Ben linked below. Personally, I'd like to see much more (cross-)cause prioritisation than we have today, but this is constrained by funding rather than by interest or talent.
I would like to see somebody argue that YIMBYism / abundance shouldn't be considered an EA priority. (Hard mode, given we don't own our donors' money: YIMBYism / abundance shouldn't receive OP money).
A combined guide for EA talent on moving to stable democracies, and a call to action for EA hubs in such countries to explore facilitating such moves. I know there are people working on making critical parts of the EA ecosystem less US-centric. I may be missing other work in this direction, but I think this is a good time for EA hubs in e.g. Switzerland and the Nordics to see whether they can help make EA more resilient for when it might be needed, in possibly rough times ahead. Perhaps also preparing for sudden influxes of people, or facilitating more rapid support in case things start to change quickly.
I would love it if someone wrote about strategies to prevent AI Safety from becoming politicized.
I’d be happy to see more writing on:
I'm hopeful of seeing more fun, welcoming, and wholesome posts that can uplift the spirit of the community. I would point to this post as an example.
I'd like to see a serious re-examination of the evidence underpinning GiveWell's core recommendations, focusing on
I did this for one intervention in "GiveWell should fund an SMC replication", and @Holden Karnofsky did a version of it in "Minimal-trust investigations", but I think these investigations are worth doing multiple times over the years, from multiple parties. It's a lot of work, though, so I see why it doesn't get done too often.
I'd love someone to write about how a donor who feels most comfortable giving to GiveWell's top charities fund should approach donating to animal charities. I know options exist, such as Animal Charity Evaluators' Movement Grants, the Giving What We Can Effective Animal Advocacy fund, or the EA Animal Welfare Fund.
However, these all feel somewhat different in flavour from GiveWell's top charities fund, in that they seem more opportunistic, smaller, or more actively managed, in contrast to GiveWell's larger, more established, and typically more stable charities. This makes it much harder for smaller donors to understand how different theories of change are being considered, or to keep track of the money's impact.
I am curious to read more about the EA community's current takes on humanity's epistemic resilience as AI grows. In other words, I'm wondering: what are the risks that our capacity for curiosity, agency, critical thinking, sourcing and vetting information, and evaluative decision-making might deteriorate as AI usage increases? How big, tractable, and neglected are these risks, especially as AI systems may reduce our incentives to develop or use these skills?
My intuition is that this could create challenges even with aligned AI and without direct misuse: we humans could disempower ourselves voluntarily out of mere laziness or lost skills. The risk could be aggravated if, following the "Intelligence Curse" logic, the "powerful actors" see no reason to keep humans epistemically capable. It could also threaten AI alignment if our capacity to make informed decisions about AI governance diminishes.
I'm only now learning the EA ways, and hopefully in time I will be able to evaluate for myself whether this is a valid issue or whether I'm just doomsaying. However, if I imagine that for AI to go well we need both AI aligned with humans and humans prepared for AI, my impression is that current EA efforts lean more towards the former than the latter. Is my estimate sensible?
I'm far from claiming to have conclusive evidence, but I've made some observations that feed the subjective impression above. I draw them from reflecting on the information bubble I'm building around myself as I delve into effective altruism.
For example, as I searched for skilled volunteering opportunities, I reviewed 20 AI orgs through EA-related opportunities boards (EA, 80,000 Hours, ProbablyGood, AISafety, BlueDot Impact, Consultants for Impact). I tried to be impartial, though if I had any bias, it was toward preferring work on epistemic resilience issues. Of these organizations, I found 4 that tackle the issue more or less explicitly, focusing on the human side, compared with 16 that seem to mainly address the AI side.
I also followed 80,000 Hours' problem profiles and AI articles, the BlueDot Impact Future of AI course, the EA Forum digest, and several AI newsletters over the past 1-2 months, supplemented with some quick googling, and extracted 5 more or less explicit mentions of preparing humans for AI. While I didn't count precisely, the proportion of articles focusing on AI-side problems (e.g., compute, AI rights, alignment) seemed subjectively much higher. Of those 5, 2 specifically tackle intentional misuse; the remaining 3 address more general changes in cognitive patterns, including but not limited to malevolent usage, e.g., Michael Gerlich's 2025 study "AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking". I am asking about the latter 3: broader implications for our thinking, without bad intentions as a key risk factor.
Does the EA community have any views on our readiness to use AI without degrading? Is my impression sensible that the EA community leans more towards the AI side of the issue than the human side? Is this a problem worth exploring further? Are there any drafts on the topic waiting to be published?