I'd like to distill AI Safety posts and papers, and I'd like to see more distillations generally. Ideally, suggested posts and papers would meet the following criteria:
- Potentially high-impact for more people to understand
- Uses a lot of jargon or is generally complex and difficult to understand
- Not as well known as you think they should be (within the AI X-risk space)
What posts meet these criteria?
I'm working on a related distillation project; I'd love to chat so we can coordinate our efforts! (riley@wor.land)