I'm a biologist in my 30s working on cures for chronic lung diseases. I've followed AI developments closely over the past 3 years. Holy smokes, it's moving fast. But I have neither the technical skills nor the policy conviction to do anything about AI safety.
And I have signed the Giving What We Can pledge 🔸.
If superintelligence is coming soon and goes horribly, then I won't be around to help anyone in 2040. If superintelligence is coming soon and goes wonderfully, then no one will need my help that badly in 2040.
Those two extreme scenarios both push me to donate aggressively to global health in the near term, while I still can.
Does anyone else feel this way? Does anyone in a similar scenario to me see things differently?
Regarding Jackson's comment, I agree that 'dumping' money last-minute is a bit silly. Spending at a higher rate (and saving less) doesn't seem so crazy, which is what it seems you were considering.
My guess is that the modal outcome from AGI (and eventual ASI) is human disempowerment/extinction. Less confidently, I also suspect that most worlds where things go 'well' look weird and not much like business as usual. For example, if we eventually have a sovereign ASI implement some form of coherent extrapolated volition, I'm pretty unsure how (we would want) this to interact with individuals' capital. [Point 2 of this recent shortform feels adjacent, discussing CEV based on population rather than wealth.]