This post was published for Draft Amnesty Day, so it's less polished than the typical EA Forum post.
Epistemic status: in the spirit of Cunningham's Law.[1]
GiveWell estimates that $300 million in marginal funding would result in ~30,000 additional lives saved; that's roughly $10,000 per life, or very roughly $0.50 per day of life.
If you believe that there's a higher than 10% chance of extinction via AGI[2], that means that delaying AGI by one day gives you 10% · 10¹⁰[3] ≈ 10⁹ expected life-days, equivalent to ~$0.5B in GiveWell marginal dollars (as a rough order of magnitude).
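To make the arithmetic explicit, here is a minimal back-of-envelope sketch in Python. The ~20,000 days (~55 years) of extra life per life saved is my own assumption, chosen so that the $0.50-per-life-day figure follows from the ~$10,000-per-life estimate; the other numbers are the rough figures used above.

```python
# Back-of-envelope version of the estimate above.
# All figures are rough order-of-magnitude inputs, not GiveWell's own modelling.

givewell_funding = 300e6   # $300M in marginal funding
lives_saved = 30_000       # ~30,000 additional lives saved
days_per_life = 20_000     # assumed ~55 years of extra life per life saved

cost_per_life = givewell_funding / lives_saved      # ~$10,000 per life
cost_per_life_day = cost_per_life / days_per_life   # ~$0.50 per life-day

p_doom = 0.10        # assumed chance of extinction via AGI (footnote 2)
population = 1e10    # ~10^10 humans (footnote 3)

# Expected life-days bought by delaying AGI by one day,
# and their GiveWell-marginal-dollar equivalent.
expected_life_days = p_doom * population * 1        # ~10^9 life-days
givewell_equivalent = expected_life_days * cost_per_life_day

print(f"Cost per life-day: ${cost_per_life_day:.2f}")            # ~$0.50
print(f"Value of a one-day delay: ${givewell_equivalent:,.0f}")  # ~$500,000,000
```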
Potential disagreements and uncertainties:
- Delaying AGI is, in expectation, going to make lives in the pre-AGI world worse.
To me, this seems negligible compared to the risk of dying, unless you put the 0-point of a “life worth living” very high (e.g. you think ~half the current global population would be better off dead). If the current average value of a life is X, for an AGI transformation to take it to 2X the AGI would need to be extremely powerful and extremely aligned.
- Under longtermism, the value of current lives saved is negligible compared to the value of future lives that are more likely to exist, so the only thing that matters is whether the particular method by which you delay AGI reduces x-risks.[4]
I would guess that delaying AGI probably reduces the probability of x-risks by default, by giving more time for a “short reflection” and for the field of AI Alignment to develop.
- Delaying AGI is not tractable, e.g. regulation doesn’t work.
It seems to me that lots of people believe excessive regulation raises prices and slows down industries and processes. I don’t understand why that wouldn’t apply to AI in particular, when the same arguments do apply to nuclear power, healthcare, and other safety-sensitive, highly technical areas. And there are areas where differential technological development has happened in practice (e.g. human cloning and embryo DNA editing).
- There's significantly less than a 1% risk from AGI for lives that morally matter.
It's possible, and it's probably my main uncertainty, but I think it would require both narrow person-affecting views and a lot of certainty about AI timelines or consequences.
Proposals:
- Signal boost “Instead of technical research, more people should focus on buying time” and “Ways to buy time” from Akash
- Ride the current wave of AI skepticism from people worried about AI being racist, or about being replaced and left unemployed. Lobby for significantly more government involvement to slow down progress (like the FDA does in medicine).
- In general, focus less on technical / theorem-proving alignment work, and less on hoping that AI capabilities companies won’t be tempted to gamble billions of lives on a chance of becoming trillionaires just because some EA engineers start working there.
Curious to hear your thoughts!
[1] The best way to get the right answer on the Internet is not to ask a question; it's to post the wrong answer. (Wikipedia)
[2] If you believe it's ~100%, just multiply by 10; if you believe it's ~1%, just divide by 10.
[3] The human population is roughly 10^10.
[4] Extinction, unrecoverable collapse/stagnation, or flawed realization.
Oh no, I'm sorry if that's the case!
I'm unsure whether deletion is the right response to bad posts (which this one likely is!), rather than explaining why the post is bad, so that others can understand that it's wrong (and that the forum thinks it's wrong, which I guess could be just as important!).
For context, I'm not a longtermist. I'm just worried about global catastrophic risks, since a billion people is a lot of people, and the marginal cost per life saved according to GiveWell seems relatively high (~$10k/life).
My personal current career trajectory hinges a bit on this :/
Like, is it more likely for me to (help) influence AI timelines, or to (help) influence billions of dollars in capital?
Is that the same as “There's significantly less than a 1% risk from AGI for lives that morally matter” (which I agree is my main uncertainty), or is it a different consideration?
What would make friends and not enemies? In a conflict between e.g. workers/artists and AI companies that want to stay unregulated, can you avoid making enemies while helping one side?
I am mostly worried about real people in the real world who (maybe) face a real, large risk. I think a marginal GiveWell dollar might help us real people less than lowering those risks would.