We're Ought. We're going to answer questions here on Tuesday August 9th at 10am Pacific. We may get to some questions earlier, and may continue answering a few more throughout the week.
About us:
- We're an applied AI lab, taking a product-driven approach to AI alignment.
- We're 10 people right now, roughly split between the Bay Area and the rest of the world (New York, Texas, Spain, UK).
- Our mission is to automate and scale open-ended reasoning. We're working on making AI as helpful for supporting reasoning about long-term outcomes, policy, alignment research, AI deployment, etc. as it is for tasks with clear feedback signals.
- We're building the AI research assistant Elicit. Elicit's architecture is based on supervising reasoning processes rather than outcomes, an implementation of factored cognition (see the sketch after this list). This is better for supporting open-ended reasoning in the short run and better for alignment in the long run.
- Over the last year, we built Elicit to support broad reviews of empirical literature. We're currently expanding to deep literature reviews, then other research workflows, then general-purpose reasoning.
- We're hiring for full-stack, devops, ML, product analyst, and operations manager roles.
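Since people often ask what we mean by "supervising reasoning processes", here's a minimal sketch of the factored cognition idea. This is illustrative only, not Elicit's actual code: `decompose`, `answer_leaf`, and the hard-coded subquestions are hypothetical placeholders standing in for model (or human) calls. The point is that every intermediate step is a small, human-inspectable unit that can be supervised directly, rather than only the final answer.

```python
# A minimal sketch of factored cognition (illustrative, not Ought's implementation).
# Instead of asking one end-to-end call to produce an answer and supervising only
# the outcome, we decompose a question into subquestions, answer each in isolation,
# and compose the results. Every step in the tree can be reviewed by a human.
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class Node:
    """One step in the reasoning tree: a question, its sub-steps, and its answer."""
    question: str
    children: list[Node] = field(default_factory=list)
    answer: str = ""


def decompose(question: str) -> list[str]:
    """Hypothetical decomposition step; in practice a model proposes subquestions."""
    if "compare" in question.lower():
        return [
            "What does study A claim, and on what evidence?",
            "What does study B claim, and on what evidence?",
            "Where do the two studies disagree, and why?",
        ]
    return []  # no decomposition found: treat as a leaf question


def answer_leaf(question: str) -> str:
    """Hypothetical leaf answerer; in practice a model (or a human) answers directly."""
    return f"[answer to: {question}]"


def solve(question: str, depth: int = 0, max_depth: int = 2) -> Node:
    """Recursively factor a question into subquestions and compose their answers."""
    node = Node(question)
    subquestions = decompose(question) if depth < max_depth else []
    if not subquestions:
        node.answer = answer_leaf(question)
        return node
    node.children = [solve(sq, depth + 1, max_depth) for sq in subquestions]
    # Composition is itself a small, supervisable step.
    node.answer = " ".join(child.answer for child in node.children)
    return node


if __name__ == "__main__":
    tree = solve("Compare the evidence in study A and study B.")
    # The whole reasoning tree, not just the final answer, is available for review.
    for child in tree.children:
        print(child.question, "->", child.answer)
    print("Composed:", tree.answer)
```

The design choice this is meant to illustrate: because the reasoning is broken into explicit steps, you can give feedback on (and trust or distrust) the process itself, which matters most for open-ended questions where there's no clear outcome signal to train against.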
We're down to answer basically any question, including questions about our mission, theory of change, work so far, future plans, Elicit, relation to other orgs in the space, and what it's like to work at Ought.
Capabilities like automated reasoning and improved literature search have the potential to reinforce or strengthen the effects of confirmation bias. For example, people can more easily find research to support their beliefs, or generate new reasons to support their beliefs. Have you done much thinking about this? Is it possible this risk outweighs the benefits of tools like Elicit? How might this risk be mitigated?
Great question! Yes, this is definitely on our minds as a potential harm of Elicit.
Of the people who end up with one-sided evidence right now, we can probably form two loose groups: those who land on it accidentally while trying to reason well, and those who are deliberately looking for support for a pre-existing view.

For the first group – the accidental ones – we're aiming to make good reasoning as easy as (and ideally easier than) finding one-sided evidence. Work we've done so far:
- We have a “possible critiques” feature in Elicit