Hi everyone! We, Ought, have been working on Elicit, a tool for expressing beliefs as probability distributions. This is an extension of our previous work on delegating reasoning. We’re experimenting with breaking the reasoning process in forecasting down into smaller steps and building tools that support and automate those steps.
In this specific post, we’re exploring the dynamics of Q&A with distributions by offering to make a forecast for a question you want answered. Our goal is to learn:
- Whether people would appreciate delegating predictions to a third party, and what types of predictions they want to delegate
- Whether a distribution can more efficiently convey information (or convey different types of information) than text-based interactions
- Whether conversing in distributions isolates disagreements or assumptions that may be obscured in text
- How to translate the questions people care about or think about naturally into more precise distributions (and what gets lost in that translation)
We also think that making forecasts is quite fun. In that spirit, you can ask us (mainly Amanda Ngo and Eli Lifland) to forecast any continuous question that you want answered. Just make a comment on this post with a question, and we’ll make a distribution to answer it.
Some examples of questions you could ask:
- When will I be able to trust a virtual personal assistant to make important decisions for me?
- I live in the US. How much happier will I be if I move to Germany?
- How many EA organizations will be founded in 2021?
- I live in New York. When will I be able to go to the gym again?
- In 2021, what percentage of my working hours will I spend on things that I would consider to be forecasting or forecasting-adjacent?
We’ll spend at most an hour on each one, so you should expect about that much rigor and information density. If there’s context on you or the question that we won’t be able to find online, you can include it in the comment to help us out.
We’ll answer as many questions as we can from now until Monday 8/3. We expect to spend about 10-15 hours on this, so we may not get to all the questions. We’ll post our distributions in the comments below. If you disagree or think we missed something, you can respond with your own distribution for the question.
We’d love to hear people’s thoughts and feedback on outsourcing forecasts, expressing beliefs as probability distributions, or Elicit generally as a tool. If you’re interested in more of what we’re working on, you can also check out the competition we’re currently running on LessWrong to amplify Rohin Shah’s forecast on when the majority of AGI researchers will agree with safety concerns.
If a question like the one from Grace et al.'s 2016 survey (note: I cannot find the exact question) were replicated in August 2025 (with a high response rate, etc.), what would the unweighted average of the 50th percentiles from the following groups be?
1. AI experts, similar to Grace et al.'s original survey
2. Economists, eg IGM economist panels
3. Attendees of the Economics of AI conference
4. Superforecasters
5. Top 100 users on Metaculus
6. Historians
7. Neuroscientists
8. Long-termist philosophers
9. Respondents to the EA Survey
10. Employees of Ought
This is a lot of questions, so just pick whichever one you're most excited to answer and/or think is the best reference class! :)
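For concreteness, the aggregation the question describes (take each group's 50th percentile, then average those medians with equal weight per group) could be sketched like this. The group names and numbers below are made-up placeholders to illustrate the computation, not actual forecasts:

```python
from statistics import mean, median

# Hypothetical per-group answers (predicted year of AGI); these values
# are placeholders for illustration only, not real survey data.
group_answers = {
    "AI experts": [2045, 2060, 2075],
    "Historians": [2080, 2100, 2150],
    "Superforecasters": [2050, 2070, 2090],
}

# 50th percentile (median) within each group, then an unweighted
# average across groups -- each group counts equally regardless of
# how many respondents it has.
group_medians = {group: median(xs) for group, xs in group_answers.items()}
unweighted_avg = mean(group_medians.values())
```

Note that because the average is unweighted, a small group (say, ten Ought employees) moves the aggregate exactly as much as a large survey population would.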
My predictions for:
1. AI researchers
2. Historians
Notes:
1. I chose AI researchers so that I could use Grace et al. as directly as possible, and I chose historians because I expected them to differ the most from AI researchers
2. I worked on this for about 30 min, so it's pretty rough. To make it better, I'd:
a. dig into Grace et al. more (first the data, then the methods) to learn more about how to interpret the results/what they tell us about the answer to Linch's question
b. read other expert surveys re: when will AGI come (I think AI Impac...