Hi everyone! We, Ought, have been working on Elicit, a tool for expressing beliefs as probability distributions. This is an extension of our previous work on delegating reasoning. We’re experimenting with breaking the reasoning process in forecasting down into smaller steps and building tools that support and automate those steps.
In this post, we’re exploring the dynamics of Q&A with distributions by offering to make forecasts for questions you want answered. Our goal is to learn:
- Whether people would appreciate delegating predictions to a third party, and what types of predictions they want to delegate
- Whether a distribution can convey information more efficiently (or convey different kinds of information) than a text-based interaction
- Whether conversing in distributions isolates disagreements or assumptions that may be obscured in text
- How to translate the questions people care about or think about naturally into more precise distributions (and what gets lost in that translation)
We also think that making forecasts is quite fun. In that spirit, you can ask us (mainly Amanda Ngo and Eli Lifland) to forecast any continuous question that you want answered. Just make a comment on this post with a question, and we’ll make a distribution to answer it.
Some examples of questions you could ask:
- When will I be able to trust a virtual personal assistant to make important decisions for me?
- I live in the US. How much happier will I be if I move to Germany?
- How many EA organizations will be founded in 2021?
- I live in New York. When will I be able to go to the gym again?
- In 2021, what percentage of my working hours will I spend on things that I would consider to be forecasting or forecasting-adjacent?
We’ll spend at most an hour on each question, so expect a corresponding level of rigor and information density. If there’s context about you or your question that we won’t be able to find online, include it in your comment to help us out.
We’ll answer as many questions as we can from now until Monday 8/3. We expect to spend about 10-15 hours on this, so we may not get to all the questions. We’ll post our distributions in the comments below. If you disagree or think we missed something, you can respond with your own distribution for the question.
We’d love to hear people’s thoughts and feedback on outsourcing forecasts, expressing beliefs as probability distributions, or Elicit as a tool more generally. If you’re interested in more of what we’re working on, you can also check out the competition we’re currently running on LessWrong to amplify Rohin Shah’s forecast of when the majority of AGI researchers will agree with safety concerns.
I have a spreadsheet of different models, the timelines each one implies, and how much weight I put on each model. The result is 18% by the end of 2026. Then I consider various sources of evidence and update upward to 38% by the end of 2026. If it doesn't happen by 2026 or so, I think it'll probably take a while longer, so my median is around 2040.
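As a rough illustration of how a weighted combination of timeline models could produce a number like this, here is a minimal sketch in Python. The model names, weights, and per-model probabilities are hypothetical placeholders, not the contents of the actual spreadsheet:

```python
# Hypothetical sketch of combining timeline models by weight.
# Model names, weights, and per-model probabilities are illustrative
# placeholders, not the contents of the actual spreadsheet.
models = {
    # name: (weight on this model, P(happens by end of 2026) under it)
    "compute-anchored":    (0.5, 0.20),
    "trend-extrapolation": (0.3, 0.15),
    "expert-survey":       (0.2, 0.18),
}

# The combined estimate is the weight-normalized average across models.
total_weight = sum(w for w, _ in models.values())
p_by_2026 = sum(w * p for w, p in models.values()) / total_weight
print(f"Combined P(by end of 2026) = {p_by_2026:.0%}")  # 18% with these numbers
```

With these made-up inputs the mixture happens to land at 18%; the point is only the mechanics of the weighting, not the specific numbers.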
The most highly weighted model in my spreadsheet takes compute to be the main driver of progress and uses a flat distribution over orders of magnitude (OOMs) of compute. Since it's implausible that the flat distribution extends more than 18 or so OOMs beyond where we are now, and since we're going to get 3-5 more OOMs in the next five years, that yields roughly 20% (about 4 of the 18 OOMs).
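Here is a minimal sketch of that calculation, assuming a uniform prior over how many more OOMs of compute are needed, capped at 18 OOMs as the comment suggests. The function name and the exact values in the loop are mine, chosen for illustration:

```python
# Uniform ("flat") prior over how many more OOMs of compute are needed,
# capped at roughly 18 OOMs beyond today's level, per the comment above.
MAX_OOMS = 18.0

def p_enough_compute(ooms_gained: float, ceiling: float = MAX_OOMS) -> float:
    """P(the required compute falls within the OOMs we gain), uniform prior."""
    return min(ooms_gained, ceiling) / ceiling

# The comment expects 3-5 more OOMs of compute over the next five years.
for gain in (3.0, 4.0, 5.0):
    print(f"{gain:.0f} more OOMs -> {p_enough_compute(gain):.0%}")
# Prints roughly 17%, 22%, and 28%, i.e. on the order of 20%.
```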
The biggest upward update from the bits of evidence comes from the trends embodied in transformers (e.g. GPT-3) and also, to some extent, in AlphaGo, AlphaZero, and MuZero: strip out all that human knowledge and specialized architecture, just build a fairly simple neural net and make it huge, and it does better and better the bigger you make it.
Another big upward update is... well, just read this comment. It did not give me a new picture of what was going on so much as confirm the picture I already had. The fact that it is so highly upvoted and draws so few objections suggests that the same goes for many people in the community. Now there's common knowledge.