I graduated from Georgetown University in December 2021 with degrees in economics and mathematics and a minor in philosophy. There, I founded and helped lead Georgetown Effective Altruism. Over the last few years, I've interned at the Department of the Interior, the Federal Deposit Insurance Corporation, and Nonlinear.
Blog: aaronbergman.net
I think I'm more bullish on digital storage than you.
Most alignment work today exists as digital bits: arXiv papers, lab notes, GitHub repos, model checkpoints. Digital storage is surprisingly fragile without continuous power and maintenance.
SSDs store bits as charges in floating-gate cells; when unpowered, charge leaks, and consumer SSDs may start losing data after a few years. Hard drives retain magnetic data longer, but their mechanical parts degrade; after decades of disuse they often need clean-room work to spin up safely. Data centres depend on air-conditioning, fire suppression, and regular maintenance.
In a global collapse where grids are down for years, almost all unmaintained digital archives eventually succumb to bit-rot, corrosion, fire, or physical decay.
This is true, but the fundamental value proposition of digital storage is extremely cheap and easy replication (~free on both fronts for a few GB of PDFs). So it's true that a single physical device won't last very long by default, but the data itself "simply" needs to hop from device to device (and ideally exist on many devices at any one time) indefinitely - "simply" being the crux: how hard or how common will that hopping be? (A rough sketch of what I mean is at the end of this comment.)
By analogy, you might think that human aging/death poses a major problem for the continued existence of the information necessary to recreate a human (i.e. DNA plus a few other sources of info), but in fact people's ability to reproduce makes destroying that information much, much harder.
Of course, a key consideration is how bad the setback is. At the limit you're right, because our ability to make the tools for replicating information would no longer exist, but at least naively it would take quite a lot to destroy society's ability to manufacture flash drives and computers for long enough that all the info dies out via the mechanisms you describe.
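To make the "hopping" concrete, here's a minimal sketch of the kind of periodic copy-and-verify loop I have in mind. The paths are made up and the pure-Python hashing is just illustrative; real tools like rsync or parity archives would do this better:

```python
import hashlib
import shutil
from pathlib import Path

# Hypothetical locations - any mounted drives would do.
SOURCE = Path("~/alignment-archive").expanduser()
MIRRORS = [Path("/mnt/usb-a/archive"), Path("/mnt/usb-b/archive")]

def sha256(path: Path) -> str:
    """Hash a file so bit-rot on any copy is detectable."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def refresh_mirrors() -> None:
    """Re-copy any file whose mirrored copy is missing or corrupted."""
    for src in SOURCE.rglob("*.pdf"):
        want = sha256(src)
        rel = src.relative_to(SOURCE)
        for mirror in MIRRORS:
            dst = mirror / rel
            if not dst.exists() or sha256(dst) != want:
                dst.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(src, dst)  # hop the bits onto a fresher device

if __name__ == "__main__":
    refresh_mirrors()
```

Run something like this against whatever drives happen to be around every year or two, and a few GB of PDFs stays alive indefinitely at close to zero marginal cost - which is the asymmetry I'm pointing at versus any single device's shelf life.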
Wanted to pull this comment thread out to ask: is there a good list of AI safety papers/blog posts/URLs anywhere for this?
(I think local digital storage in many locations probably makes more sense than paper but also why not both)
Lightcone and Alex Bores (so far)
Edit: to say a tiny bit more: LessWrong seems instrumentally good and important, and rationality is a positive influence on EA. Lightcone doesn't have the vibes of "best charity" to me, but when I imagine my ideal funding distribution, it's the immediate example of "most underfunded org" that comes to mind. This is obviously related to Coefficient no longer supporting rationality community building. Remember, we are donating on the margin - and approximately the margin created by Coefficient Giving!
Super cool - it's a bit hectic, and I substantively disagree with one of the "fallacies" the fallacy evaluator flagged on this post, but I'll definitely be using this going forward
Thanks for the highlight! Yeah, I would love better infrastructure for trying to really figure out what the best uses of money are. I don't think it has to be as formal/quantitative as GiveWell. To quote myself from a recent comment (bolding added):
At some level, implicitly ranking charities [eg by donating to one and not another] is kind of an insane thing for an individual to do - not in an anti-EA way (you can do way better than vibes/guessing randomly) but in a "there must be better mechanisms/institutions for outsourcing donation advice than GiveWell and ACE and ad hoc posts/tweets/etc and it's really hard and high stakes" way.
Like what I would love is a lineup of 10-100 very highly engaged and informed people (could create the list simply by number of endorsements/requests) who talk about their strategy and values in a couple pages and then I just defer to them (does this exist?)
I did something related but haven't updated it in a couple years! If there's a good collection of AI safety papers/other resources/anything anywhere it would be very easy for me to add it to the archive for people to download locally, or else I could try to collect stuff myself
1. ClusterFree
2. Center for Reducing Suffering
3. Arthropoda Foundation
4. Shrimp Welfare Project
5. Effective Altruism Infrastructure Fund
6. Forethought Foundation
7. Wild Animal Initiative
8. Center for Wild Animal Welfare
9. Animal Welfare Fund
10. Aquatic Life Institute
11. Longview Philanthropy's Emerging Challenges Fund
12. Legal Impact for Chickens
13. The Humane League
14. Rethink Priorities
15. Centre for Enabling EA Learning & Research
16. MATS Research
I used AI for advice this time (unlike last year), specifically Claude-Opus-4.5 and Gemini-3-Pro-Preview. I didn't take either of their suggested rankings wholesale, of course, but they both gave me a pretty decent starting point to work with - enough that I'm pretty sure I'd endorse either's list as a marginal improvement to the vote distribution
Both were prompted with a list of my values and takes in addition to roughly 200k tokens scraped from the relevant posts list.
More substantively, my final list was probably based on a combination of not-really-all-that-amazing-but-better-than-nothing heuristics: what I feel like EA at large is underfunding, what I would be most excited to see, what tentatively aligns with my literally endorsed beliefs about the nature of suffering, and what cause areas make theoretical sense to be near the Pareto frontier.
Honestly I think I would (will?) almost certainly adjust my list if I look(ed) into it for just a few additional hours
At some level, implicitly ranking charities [eg by donating to one and not another] is kind of an insane thing for an individual to do - not in an anti-EA way (you can do way better than vibes/guessing randomly) but in a "there must be better mechanisms/institutions for outsourcing donation advice than GiveWell and ACE and ad hoc posts/tweets/etc and it's really hard and high stakes" way.
Like what I would love is a lineup of 10-100 very highly engaged and informed people (could create the list simply by number of endorsements/requests) who talk about their strategy and values in a couple pages and then I just defer to them (does this exist?)
I do not accept premise 2:
For some small amount of intense suffering, there is always some sufficiently large amount of moderate suffering such that the intense suffering is preferable.
To be clear, I think this premise is one way of distilling and clarifying the (or 'a') crux of my argument, and if I wind up convinced that the whole argument is wrong, it will probably be because I've been convinced of premise 2 or something very similar
Wow, this is super exciting and thanks so much to the judges! ☺️
An interesting dynamic around this competition was that the promise of the extremely cracked + influential judging team reading (and implicitly seriously considering) my essay was a much stronger incentive for me to write and improve it than the money (which is very nice, don't get me wrong).[1]
I’m not sure what the implications of this are, if any, but it feels useful to note this explicitly as a type of incentive that could be used to elicit writing/research in the future
Insofar as I'm not totally deluding myself, I mean the altruistic impact of having some chance of shaping the judges' views, as opposed to the possibility of seeming mildly clever to some high-status figures
I strongly endorse this and think that there are some common norms that stand in the way of actually-productive AI assistance.
Both of these are reasonable, but we could really use some sort of social technology for saying "yes, this was AI-assisted, you can tell, I'm not trying to trick anyone, but also I stand by all the claims made in the text as though I had done the token generation myself."