My intuition is that there are heaps of very talented people interested in AI Safety but 1/100 of the jobs.
A second intuition I have is that the rejected talent WON'T spill over much into other cause areas (biorisk, animal welfare, whatever) and may even spill over into capabilities!
Let's also assume more companies working towards AI Safety is a good thing (I'm not super interested in debating this point).
How do we get more AI Safety companies off the ground?
As shown in this table, 0% of CE staff (including me) identify AI as their top cause area. I think reasons vary across the team but cluster around something close to epistemic scepticism. My personal perspective is in line with that.