I'm a biologist in my 30s working on cures for chronic lung diseases. I've followed AI developments closely over the past 3 years. Holy smokes it's moving fast. But I have neither the technical skills nor the policy conviction to do anything about AI safety.
And I have signed the Giving What We Can pledge 🔸.
If superintelligence is coming soon and goes horribly, then I won't be around to help anyone in 2040. If superintelligence is coming soon and goes wonderfully, then no one will need my help that badly in 2040.
Those two extreme scenarios both push me to donate aggressively to global health in the near term, while I still can.
Does anyone else feel this way? Does anyone in a similar scenario to me see things differently?
I thought about this a few years ago and have a post here.
I agree with Caleb's comment that it's necessary to consider what a post-superintelligence world would look like, and whether capital could be usefully deployed in it. This post might be of interest.
My own guess is that it's most likely that capital won't be useful and that more aggressive donating makes sense.
Hello!
I'm glad you found my comment useful! I'm sorry if it came across as scolding; I interpreted Tristan's original post as being aimed at advising giant mega-donors like Open Philanthropy, more so than individual donors. In my book, anybody donating to effective global health charities is doing a very admirable thing -- especially in these dark days when the US government seems to be trying to dismantle much of its foreign aid infrastructure.
As for my own two cents on how to navigate this situation (especially now that artificial intellige…