I'd like to distill AI Safety posts and papers, and I'd like to see more distillations generally. Ideally, posts and papers would meet the following criteria:
- Potentially high-impact for more people to understand
- Use a lot of jargon or are otherwise complex and difficult to understand
- Not as well known as they should be (in the AI x-risk space)
What posts meet these criteria?
I see you already volunteer on aisafety.info! From working on that project, these are some areas I think could benefit from being made more accessible (on our platform or otherwise - we're working on these but could definitely use the help, and I'd be really happy to see them worked on anywhere).
I realize these are categories rather than specific documents, but there's just so much to be worked on! These are purely my own views; I haven't run this past anyone else on the team, who I suspect have more thoughts. For anyone stumbling across this who'd like to help with the project but isn't familiar with it, we have a prioritized list of content we would like to cover on the site but don't have yet.