I'm a biologist in my 30s working on cures for chronic lung diseases. I've followed AI developments closely over the past 3 years. Holy smokes, it's moving fast. But I have neither the technical skills nor the policy conviction to do anything about AI safety.
And I have signed the Giving What We Can pledge 🔸.
If superintelligence is coming soon and goes horribly, then I won't be around to help anyone in 2040. If superintelligence is coming soon and goes wonderfully, then no one will need my help that badly in 2040.
Those two extreme scenarios both push me to donate aggressively to global health in the near term, while I still can.
Does anyone else feel this way? Does anyone in a similar situation see things differently?
Hello!
I'm glad you found my comment useful! I'm sorry if it came across as scolding; I interpreted Tristan's original post as aimed at advising giant mega-donors like Open Philanthropy, more so than individual donors. In my book, anybody donating to effective global health charities is doing a very admirable thing -- especially in these dark days when the US government seems to be trying to dismantle much of its foreign aid infrastructure.
As for my own two cents on how to navigate this situation (especially now that artificial intelligence feels much more real and pressing to me than it did a few years ago), here are a bunch of scattered thoughts (FYI, these bullets have kind of a "sorry, I didn't have time to write you a short letter, so I wrote you a long one" vibe):
However, unless we very soon get a nightmare-scenario "fast takeoff," where AI recursively self-improves and seizes control of the future over the course of hours to weeks, there will probably be a transition period where approximately human-level AI is rapidly transforming the economy and society, but ordinary people like us can still substantially influence the future. There are a couple of ways we could hope to influence the long-term future:
For a couple of examples of interventions that sit midway along the spectrum from GiveWell-style global health work to AI safety research, and which focus on shaping the AGI transition period, consider Dario Amodei's vision of what an aspirational AGI transition might look like and what it would take to bring it about: