On hot take 2, this relies on the risks from each star system being roughly independent, so breaking that assumption seems like a good solution; but then each star system being highly correlated seems bad for liberalism, diversity of forms of flourishing, and so forth. Maybe some amount of regularity and conformity is the price we need to pay for galactic security.
I think liberalism is unfortunately on a timer that will almost certainly expire pretty soon, no matter what we do.
Either we technologically regress as the human population falls and more anti-democratic civilizations win outright via the zero/negative-sum games being played, or we create AIs that replace us, and given the incentives plus the sheer difference in power, those AIs by default create something closer to a dictatorship for humans. In particular, value alignment is absolutely critical in the long run for AIs that can take every human job.
Modern civilization is not stable at all.
Acausal trade/cooperation may end up being crucial here too once civilisation is spread across distances where it is hard or impossible to interact normally.
Yeah, assuming no FTL, acausal trade/cooperation is necessary if you want anything like a unified galactic/universal polity.
Interstellar travel will probably doom the long-term future
A lot of my disagreement stems from thinking that most galactic-scale disasters either aren't actually x-risks (like the von Neumann probe scenario), because they can be defended against, or they require some shaky premises about physics to come true.
Changing the universe's physical constants is an example of the latter.
Also, in most modern theories of time travel you only get self-consistent outcomes, so the classic portrayals of using time travel to destroy the universe through paradoxical inputs wouldn't work; the paradox would almost certainly be prevented beforehand.
The biggest uncertainty here is how much acausal trade lets us substitute for the vast distances that make traditional causal governance impossible.
For those unaware of acausal trade, it basically replaces direct communication with predicting what the other party wants; if you can run vast numbers of simulations, you can build very, very good predictive models of what the other wants, such that both of you can trade without requiring any communication. This is necessary for realistic galactic empires/singletons to exist:
https://www.lesswrong.com/w/acausal-trade
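As a concrete (and very simplified) toy, here is a sketch of the core logic in a one-shot prisoner's dilemma. This is my own illustration, not something from the linked article; the payoff numbers and the perfectly accurate `mirror` predictor are assumptions of the toy.

```python
# Toy sketch of cooperation without communication: the agent never exchanges messages
# with its counterpart, but it can simulate the counterpart well enough to know the
# counterpart reasons exactly the same way, so whatever it picks, the other picks too.

PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 4, ("D", "D"): 1}

def decide(predict_other):
    """Pick the action with the best payoff, given a predictive model of the other party."""
    best_action, best_value = None, float("-inf")
    for my_action in ("C", "D"):
        other_action = predict_other(my_action)   # run the simulation; no messages sent
        value = PAYOFF[(my_action, other_action)]
        if value > best_value:
            best_action, best_value = my_action, value
    return best_action

# A perfectly accurate model of a counterpart that reasons identically to me:
# it ends up choosing whatever I choose.
mirror = lambda my_action: my_action

print(decide(mirror))  # prints "C": both sides cooperate despite zero communication
```

Real acausal trade replaces the toy `mirror` with expensive simulations of (or proofs about) the other party, but the structure is the same: act on a prediction rather than a message.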
I don't have much of an opinion on the question, but if it's true that acausal trade can basically substitute wholly for the communication that is traditionally necessary to suppress rebellions in empires, then most galactic/universe-scale risks are pretty easily avoidable, because we don't have to roll the dice on every civilization doing its own research that may lead to x-risk.
The main unremovable advantages of AIs over humans will probably be in the following two areas:
1. A serial speed advantage of 50-1000x, with my median in the 100-500x range, and more generally the ability to run slower or faster to do proportionally more work, albeit with tradeoffs at either extreme of running very slow or very fast.
2. The ability for compute/software improvements to convert directly into more researchers with essentially zero serial time, unlike basically all biological reproduction (about the only cases that even come close are the days-to-hours doubling times of flies and some bacteria/viruses, but those are doing much simpler jobs, and it's uncertain whether you could add more compute/learning capability without slowing their doubling time).
This is the mechanism by which you can get way more AI researchers very fast, while human researchers don't increase proportionally.
Humans probably do benefit, assuming AI is useful enough to automate, say, AI research, but the lack of these two properties fundamentally prevents anything like an explosion in human research, unlike AI research.
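A toy back-of-envelope sketch of the asymmetry (every number below is an assumption picked for illustration, not a forecast):

```python
# Back-of-envelope sketch: all constants are illustrative assumptions, not claims.
# The point is only the shape of the curves: compute growth converts directly into
# more (and faster) AI researcher-equivalents, while a human workforce grows slowly.

human_researchers = 30_000          # assumed current AI-research workforce
human_growth_per_year = 0.05        # assumed ~5%/yr growth via hiring and training

ai_researcher_equivalents = 1_000   # assumed starting stock once AIs can do the job at all
compute_growth_per_year = 3.0       # assumed ~3x/yr effective compute/software growth
serial_speedup = 100                # assumed ~100x serial speed advantage (middle of my range)

for year in range(6):
    humans = human_researchers * (1 + human_growth_per_year) ** year
    ais = ai_researcher_equivalents * serial_speedup * compute_growth_per_year ** year
    print(f"year {year}: humans ~{humans:,.0f}, AI researcher-equivalents ~{ais:,.0f}")
```

Under these made-up numbers the AI researcher pool multiplies by hundreds within a few years while the human pool grows by a few percent per year, which is the explosion-versus-no-explosion gap described above.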
Consequentialists should be strong longtermists
I disagree, mostly due to the "should" wording: believing in consequentialism doesn't obligate you to adopt any particular discount rate or any particular discount function. These are basically free parameters, so discount rates are independent of consequentialism.
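To spell that out (my own framing, not the original claim's): consequentialism only says to rank actions by some aggregate of their consequences over time, and the time-weighting function is left open:

$$V(a) = \sum_{t=0}^{\infty} d(t)\,u_t(a)$$

Zero discounting ($d(t) = 1$) supports strong longtermism, exponential discounting ($d(t) = \delta^t$ with $\delta < 1$) mostly doesn't, and both are equally consistent with maximizing $V$.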
Bioweapons are an existential risk
I'll just repeat @weeatquince's comment, since he already covered the issue better than I did:
With current technology probably not an x-risk. With future technology I don’t think we can rule out the possibility of bio-sciences reaching the point where extinction is possible. It is a very rapidly evolving field with huge potential.
AGI by 2028 is more likely than not
While I think AGI by 2028 is reasonably plausible, there are way too many factors that have to go right in order to get AGI by 2028, and this is true even if AI timelines are short.
To be clear, I do agree that if we don't get AGI by the early 2030s at the latest, AI progress will slow down, but I don't have nearly enough credence in the supporting arguments to put my median at 2028.
The basic reason for the trend continuing so far is that NVIDIA et al have diverted normal compute expenditures into the AI boom.
I agree that the trend will stop, and it will stop around 2027-2033 (my widest uncertainty lies here), and once that happens the probability of having AGI soon will go down quite a bit (if it hasn't happened by then).
I think this is kind of a crux, in that I currently think the only possible galactic-scale risks are ones where our standard model of physics breaks down in a deep way; once you can get at least one Dyson swarm up, you are virtually invulnerable to extinction methods that don't involve us being very wrong about physics.
This is always a tail risk of interstellar travel, but I would not say that interstellar travel will probably doom the long-term future as stated in the title.
A better title would be "interstellar travel poses unacknowledged tail risks."
I agree that if there's an x-risk that isn't defensible (for the sake of argument), then acausal trade relies on every other civilization choosing to acausally trade in a manner where the parent civilization can prevent x-risk. But the good news is that a lot of the more plausible (in a relative sense) x-risks have a light-speed limit, which, given that we are probably alone in the observable universe (via the logic of dissolving the Fermi paradox), means that humanity is the only civilization that really has to do acausal trade.
And a key worldview crux: conditional on humanity becoming a spacefaring civilization, I expect superintelligence that takes over the world to come first, because it's much easier to develop AI good enough to develop space sufficiently than it is for humans to go spacefaring alone.
And AI progress is likely to be fast enough that there's very little time for rogue spacefarers to get outside the parent civilization's control.
The Dissolving the Fermi Paradox paper is here:
https://arxiv.org/abs/1806.02404