This is a linkpost for https://www.lesswrong.com/posts/kMmNdHpQPcnJgnAQF/prediction-augmented-evaluation-systems
The first section is a decent summary:
It's common for groups of people to want to evaluate specific things. Here are a few examples I'm interested in:
- The expected value of projects or actions within projects
- Research papers, on specific rubrics
- Quantitative risk estimates
- Important actions that may get carried out by artificial intelligences
I think predictions could be useful in scaling and amplifying such evaluation processes. Humans, and later AIs, could predict the results of intensive evaluations. There has been previous discussion on related topics, but I thought it would be valuable to consider a specific model here called "prediction-augmented evaluation processes." This is a high-level concept that could be used to help frame future discussion.
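
To make the basic flow concrete, here is a minimal sketch (not the post's actual mechanism) of how such a process might be wired up: predictors forecast the score an intensive evaluation would assign to each item, only a random sample of items actually gets the expensive evaluation, and predictors are scored against those resolved judgments. All names (`Item`, `submit_prediction`, `resolve_sample`, `predictor_error`) are hypothetical and chosen for illustration; the sketch assumes Python 3.10+.

```python
# Illustrative sketch of a prediction-augmented evaluation process.
# Hypothetical names throughout; a stand-in random "judge" replaces
# whatever intensive evaluation (rubric review, risk analysis, etc.) is used.

import random
from dataclasses import dataclass, field
from statistics import mean


@dataclass
class Item:
    name: str
    predictions: dict[str, float] = field(default_factory=dict)  # predictor -> forecast score
    judged_score: float | None = None  # set only if the item gets the intensive evaluation


def submit_prediction(item: Item, predictor: str, forecast: float) -> None:
    """Record a predictor's forecast of the eventual evaluation score."""
    item.predictions[predictor] = forecast


def resolve_sample(items: list[Item], judge, sample_fraction: float = 0.2) -> None:
    """Intensively evaluate a random subset of items; the rest stay unresolved."""
    sample = random.sample(items, max(1, int(len(items) * sample_fraction)))
    for item in sample:
        item.judged_score = judge(item)


def predictor_error(items: list[Item], predictor: str) -> float:
    """Mean absolute error of a predictor's forecasts on the items actually judged."""
    errors = [
        abs(item.predictions[predictor] - item.judged_score)
        for item in items
        if item.judged_score is not None and predictor in item.predictions
    ]
    return mean(errors) if errors else float("nan")


if __name__ == "__main__":
    items = [Item(f"project-{i}") for i in range(10)]
    for item in items:
        submit_prediction(item, "alice", random.uniform(0, 10))
        submit_prediction(item, "bob", random.uniform(0, 10))
    # Stand-in for the expensive evaluation step.
    resolve_sample(items, judge=lambda item: random.uniform(0, 10))
    print("alice MAE:", predictor_error(items, "alice"))
    print("bob MAE:", predictor_error(items, "bob"))
```

The point of sampling only a fraction of items for the intensive evaluation is that predictor track records can then stand in for full evaluations on the rest, which is where the scaling benefit would come from.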