We're Ought. We're going to answer questions here on Tuesday August 9th at 10am Pacific. We may get to some questions earlier, and may continue answering a few more throughout the week.
About us:
- We're an applied AI lab, taking a product-driven approach to AI alignment.
- We're 10 people right now, roughly split between the Bay Area and the rest of the world (New York, Texas, Spain, UK).
- Our mission is to automate and scale open-ended reasoning. We're working to make AI as helpful for supporting reasoning about long-term outcomes, policy, alignment research, AI deployment, and similar questions as it is for tasks with clear feedback signals.
- We're building the AI research assistant Elicit. Elicit's architecture is based on supervising reasoning processes rather than outcomes, an implementation of factored cognition. This is better for supporting open-ended reasoning in the short run and better for alignment in the long run.
- Over the last year, we built Elicit to support broad reviews of empirical literature. We're currently expanding to deep literature reviews, then other research workflows, then general-purpose reasoning.
- We're hiring for full-stack, devops, ML, product analyst, and operations manager roles.
We're down to answer basically any question, including questions about our mission, theory of change, work so far, future plans, Elicit, relation to other orgs in the space, and what it's like to work at Ought.
Yay, I was really looking forward to this! <3
My first question [meant to open a friendly conversation even though it is phrased in a direct way] is "why do you think this won't kill us all?"
Specifically, it sounds like you're doing a really good job creating an AI that is capable of planning through complicated, vague problems. That's exactly what we're afraid of, no?
ref
My next questions would depend on your answer here, but I'll guess a few follow-ups in sub-comments.
Epistemic Status: I have no idea what I'm talking about, just trying to form initial opinions
You're welcome!
Beyond that, we believe that factored cognition could scale to lots of knowledge work. Anywhere the tasks are fuzzy, open-ended, or have long feedback loops, we think Elicit (or our next product) could be a fit: journalism, think tanks, policy work.