This is a very short post mentioning some recent developments that make me hopeful for the future of AI safety work. Most of them relate to increased attention to AI safety concerns. I think this is likely to be good, but you might disagree.
- Eliezer Yudkowsky was invited to give a TED talk and received a standing ovation.
- The NSF announced a $20 million request for proposals for empirical AI safety research.
- 46% of Americans are concerned about extinction from AI; 69% support a six-month pause in AI development.
- AI safety concerns have received increased media coverage.
- ~700 people applied for AGI Safety Fundamentals in January.
- FLI’s open letter has received 27,572 signatures to date.
Remember: The world is awful. The world is much better. The world can be much better.
Hi Fai,
I agree on the point about non-human animals, but I think we should account for future beings too. 1,000 years ago, we had not fully realised how much humanity (or post-humanity) could flourish, because it was not clear that settling the galaxy etc. was possible, so I think the expected utility of the future has increased a lot (as long as you think the future is positive). If we decrease existential risk, we can increase that expected utility even further. In other words:
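As a rough sketch (my notation, with $p$ and $V$ as illustrative symbols rather than a claim about any particular model): let $p$ be the probability that we avoid existential catastrophe, and $V$ the value of the long-term future conditional on survival. Then

$$
\mathbb{E}[\text{value of the future}] = p \cdot V.
$$

Realising that settling the galaxy is possible raised our estimate of $V$; reducing existential risk raises $p$. Either way the product grows, assuming $V > 0$.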