This course sounds cool! Unfortunately, there doesn't seem to be much relevant material out there.
This is a stretch, but I think there's probably some cool computational modeling to be done with human value datasets (e.g., 70,000 responses to variations on the trolley problem). What kinds of universal human values can we uncover? https://www.pnas.org/doi/10.1073/pnas.1911517117
For digestible content on technical AI safety, Robert Miles makes good videos. https://www.youtube.com/c/robertmilesai
I think NOVAH may have been inspired by Betsy Levy Paluck's research into using radio dramas to reduce racial prejudice. https://sparq.stanford.edu/solutions/radio-soaps-stop-hate
From what I understand, the MacArthur Foundation was one of the main funders of nuclear security research, including at the Carnegie Endowment for International Peace, but it massively reduced its funding of nuclear projects and no large funder has replaced it. https://www.macfound.org/grantee/carnegie-endowment-for-international-peace-2457/
(I've edited this comment; I got confused between the MacArthur Foundation and the various Carnegie philanthropic efforts.)
I'm no more informed about NIST than you are, but I would offer the following framework:
1. If your comment is taken into account, FANTASTIC.
2. If your comment is not taken into account, how much do you learn from engaging deeply with US policy and generating your own ideas about how to improve it? If you're considering pivoting into AI governance/evals, this could be a great learning opportunity. If that's not relevant to you, then commenting probably has less value.
This is a really complex space with lots of moving parts; very cool to see how you've compiled and analyzed everything! I haven't finished going through your report yet, but it looks awesome :)