Founder and organizer of EA Eindhoven, EA Tilburg and their respective AI safety groups.
BSc. Biomedical Engineering > community-building gap year on an Open Phil grant > MSc. Philosophy of Data and Digital Society. Interested in many cause areas, but increasingly focusing on AI governance and field building for my own career.
How does one robustly set oneself up during one's studies and early career to contribute meaningfully to making transformative AI go well?
How can we increase the world's capacity for more people to work on the most pressing problems?
Community building and setting up new (university) groups.
Interstellar travel will probably doom the long-term future
Some quick thoughts: by the time we've colonized numerous planets and cumulative galactic x-risks are starting to seriously add up, I expect there to be von Neumann probes traveling at a significant fraction of the speed of light (c) in many directions. Causal influences propagate at c, so if we have probes whose separation grows at nearly 2c, that suggests extinction risk could be permanently reduced to zero. In such a scenario, most of the value in our future lightcone could still be extinguished, but not all of it.
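A quick sanity check on the "nearly 2c" framing (illustrative numbers of my own, using standard relativistic velocity addition): in our frame the gap between two probes launched in opposite directions can grow at almost 2c, yet neither probe ever sees the other recede faster than c,
\[ w = \frac{u + v}{1 + uv/c^2} \approx 0.99995\,c \quad \text{for } u = v = 0.99\,c, \]
and what matters for the argument is just that each probe keeps receding from any given point in the colonized volume at close to c.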
A very long-term consideration is that as the expansion of the universe accelerates, the universe fragments into a growing number of causally isolated islands. For example, in 100-150 billion years the Local Group will be causally isolated from the rest of the universe, protecting it from galactic x-risks happening elsewhere.
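For rough scale (my own back-of-envelope, assuming a Hubble constant of roughly 70 km/s/Mpc): the distance beyond which recession is currently faster than light is about
\[ \frac{c}{H_0} \approx \frac{3\times 10^{5}\ \text{km/s}}{70\ \text{km/s/Mpc}} \approx 4300\ \text{Mpc} \approx 14\ \text{billion light-years}, \]
and with accelerating expansion, anything that ends up beyond the cosmic event horizon (of the same order of magnitude) is permanently out of causal reach.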
I guess this trades off with your 6th conclusion (interstellar travel should be banned until galactic x-risks and galactic governance are solved). Getting governance right before we can build von Neumann probes at >0.5c is obviously great, but once we can build them it's a lot less clear whether waiting is good or bad.
Thinking out loud here, so if any of this seems off, let me know!
Not really an answer to your questions, but I think this guide to SB 1047 gives a good overview of some related aspects.
How many safety-focused people have left OpenAI since the board drama now? I count seven, but I might be missing some: Ilya Sutskever, Jan Leike, Daniel Kokotajlo, Leopold Aschenbrenner, Cullen O'Keefe, Pavel Izmailov, and William Saunders.
This is a big deal. A bunch of the voices that could raise safety concerns at OpenAI when things really heat up are now gone. Idk what happened behind the scenes, but they evidently judged that now was a good time to leave.
Possible effective intervention: guaranteeing that if these people break their NDAs, all their legal fees will be covered. No idea how sensible this is, so agree/disagree voting encouraged.
Interesting post. I've always wondered how sensitive the views and efforts of the EA community are to the arbitrary historical process that led to its creation and development. Are there any in-depth explorations that try to answer this question?
Or, since thinking about alternative history can only get us so far, are there any examples of EA-adjacent philosophies or movements throughout history? E.g. Mohism, a Chinese philosophy from 400 BC, sounds like a surprisingly close match in some ways.
Right, so even with near-c von Neumann probes heading out in all directions, a vacuum collapse or some other galactic x-risk propagating at c would only allow civilization to survive as a thin spherical shell of space riding a perpetually migrating wave front, while the extinction zone quickly eats up the center of the colonized volume.
Such a civilization could still contain many planets and stars if it gets a decent head start before a galactic x-risk occurs and travels at near c without being slowed down much by stops to produce and accelerate more von Neumann probes. Yeah, that's a lot of ifs.
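To put a very rough number on the head-start point (my own idealized sketch, assuming a threat front expanding at exactly c and probes cruising at constant speed v): a probe that is a distance d ahead of the threat's origin when the threat starts is overtaken only after covering an additional distance
\[ \Delta = \frac{v\,d}{c - v}, \]
which for v = 0.99c is about 99 times the head start and grows without bound as v approaches c.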
The 20 billion ly estimate seems accurate, so cosmic expansion only protects against galactic x-risks on very long timescales. And without very robust governance, it's doubtful we'd even get to that point.