I believe AI safety is a big problem for the future, and more people working on it would likely increase the chance that it gets solved, but I think the third component of ITN, neglectedness, might need to be reevaluated.
I mainly formed my basic views around 2015, when the AI revolution was portrayed as a fight against killer robots. Nowadays, more details are communicated, such as bias problems, systems optimizing for values other than humans' (e.g., ad clicks), and killer drones.
It is possible that the field only went from very neglected to somewhat neglected, or that the news I received through my echo chamber was itself biased. In any case, I would like to know more.
Based on talking to various researchers, I'd say there are fewer than 50 people doing promising work on existential AI safety, and fewer than 200 thinking about AI safety full-time in any reasonable framing of the problem.
If you think that AI safety is 10x as large a problem as, say, biorisk, and returns are logarithmic, we should allocate 10x the resources to AI safety as to biorisk. And biorisk is still larger than most causes. So it's fine for AI safety not to be quite as neglected as the most neglected causes.
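To make the logarithmic-returns step explicit (a minimal sketch; s_i is the scale of cause i, r_i the resources it receives, R the total budget, all notation of my own rather than anything from the original argument):

```latex
\[
  \max_{r_1,\dots,r_n} \; \sum_i s_i \log r_i
  \quad\text{subject to}\quad \sum_i r_i = R
  \;\;\Longrightarrow\;\;
  \frac{s_i}{r_i} = \lambda
  \;\;\Longrightarrow\;\;
  r_i = \frac{s_i}{\sum_j s_j}\, R .
\]
```

At the optimum each cause's share of resources is proportional to its scale, so a cause 10x the size gets 10x the budget, which is the comparison made above.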
Which leads to the question of how we can get more people to produce promising work in AI safety. There are plenty of highly intelligent people out there who are capable of doing this work, yet almost none of them do. Popularizing AI safety might contribute to it indirectly, since it could convince geniuses with the potential to work on AI safety to actually start. It could also be an incentive problem: maybe potential AI safety researchers think they can make more money in other fields, or maybe there ...