We're Ought. We're going to answer questions here on Tuesday August 9th at 10am Pacific. We may get to some questions earlier, and may continue answering a few more throughout the week.
About us:
- We're an applied AI lab, taking a product-driven approach to AI alignment.
- We're 10 people right now, roughly split between the Bay Area and the rest of the world (New York, Texas, Spain, UK).
- Our mission is to automate and scale open-ended reasoning. We're working on making AI as helpful for supporting reasoning about long-term outcomes, policy, alignment research, AI deployment, and similar questions as it is for tasks with clear feedback signals.
- We're building the AI research assistant Elicit. Elicit's architecture is based on supervising reasoning processes rather than outcomes, an implementation of factored cognition (a rough sketch of the idea follows this list). This is better for supporting open-ended reasoning in the short run and better for alignment in the long run.
- Over the last year, we built Elicit to support broad reviews of empirical literature. We're currently expanding to deep literature reviews, then other research workflows, then general-purpose reasoning.
- We're hiring for full-stack, devops, ML, product analyst, and operations manager roles.
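
To make "supervising reasoning processes, not outcomes" a bit more concrete, here is a minimal Python sketch of factored cognition. It's an illustration only, not Elicit's actual implementation: the `ask_model` helper and the `decompose`/`answer` functions are hypothetical stand-ins for whatever language-model calls you would use.

```python
# Minimal, hypothetical sketch of factored cognition / process-based supervision.
# `ask_model` is a stand-in for any language-model call; nothing here is Elicit's code.

from typing import List


def ask_model(prompt: str) -> str:
    """Placeholder for a single language-model call."""
    raise NotImplementedError


def decompose(question: str) -> List[str]:
    """Break a question into smaller sub-questions (one per line)."""
    subs = ask_model(f"List the sub-questions needed to answer: {question}")
    return [line.strip() for line in subs.splitlines() if line.strip()]


def answer(question: str) -> str:
    """Answer a question by composing answers to its sub-questions.

    Every intermediate step (the decomposition, each sub-answer, the final
    composition) is explicit, so a human can supervise the reasoning process
    itself rather than only judging the final outcome.
    """
    sub_questions = decompose(question)
    sub_answers = [ask_model(sq) for sq in sub_questions]
    steps = "\n".join(f"Q: {q}\nA: {a}" for q, a in zip(sub_questions, sub_answers))
    return ask_model(f"Given these intermediate results:\n{steps}\n\nAnswer: {question}")
```

The point of structuring things this way is that each step can be inspected and improved independently, instead of optimizing end-to-end against a single outcome signal.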
We're down to answer basically any question, including questions about our mission, theory of change, work so far, future plans, Elicit, relation to other orgs in the space, and what it's like to work at Ought.
I agree!
This sounds to me like almost the most general problem-solving capability someone could aim for: something that can do many different things without ever going outside its intended use case.
As a naive example, couldn't someone use "high-quality reasoning" to plan how to build military robotics? (The examples I'm actually worried about are more like "use high-quality reasoning to create paperclips," but I'm happy to stick with yours.)
In other words, I'm not really worried about a chess robot being used for other things [update: wait, AlphaZero seems to be more general-purpose than expected], but I wouldn't feel as safe with something intentionally meant for "high-quality reasoning".
[again, just sharing my concern, feel free to point out all the ways I'm totally missing it!]