Three Epoch employees – Matthew Barnett, Tamay Besiroglu, and Ege Erdil – have left to launch Mechanize, an AI startup aiming for broad automation of ordinary labour:
Today we’re announcing Mechanize, a startup focused on developing virtual work environments, benchmarks, and training data that will enable the full automation of the economy.
We will achieve this by creating simulated environments and evaluations that capture the full scope of what people do at their jobs. ...
Currently, AI models have serious shortcomings that render most of this enormous value out of reach. They are unreliable, lack robust long-context capabilities, struggle with agency and multimodality, and can’t execute long-term plans without going off the rails.
To overcome these limitations, Mechanize will produce the data and evals necessary for comprehensively automating work. Our digital environments will act as practical simulations of real-world work scenarios, enabling agents to learn useful abilities through RL. ...
The explosive economic growth likely to result from completely automating labor could generate vast abundance, much higher standards of living, and new goods and services that we can’t even imagine today. Our vision is to realize this potential as soon as possible.
I started a new company with @egeerdil2 and @tamaybes that's focused on automating the whole economy. We're taking a big bet on our view that the main value of AI will come from broad automation rather than from "geniuses in a data center".
The Mechanize website is scant on detail. It seems broadly bad that alumni from a safety-focused AI org have left to form a company that accelerates AI timelines (and that presumably builds on, or uses, evals developed at Epoch).
It seems noteworthy that Epoch AI retweeted the announcement, wishing the departing founders the best of luck, which feels like a tacit endorsement of the move.
Habryka wonders whether Mechanize would have to pay Epoch for use of its benchmark suite.
Links
- Official Twitter announcement
- See also this shortform on LessWrong
To be honest, I don't necessarily think it's as bad as people claim, though I still don't think it was a great action relative to the available alternatives; at best, it is not the most useful thing one could choose to do for making AI safe.
One of my core issues, and a big crux here, is that I don't really believe you can succeed at automating the whole economy with cheap robots without also allowing actors to speed up the race to superintelligence/superhuman AI researchers considerably.
And if we put any weight on misalignment, we should be automating AI safety, not AI capabilities, so this is quite bad.
Jaime Sevilla admits that he supports Mechanize's effort for selfish reasons:
https://x.com/Jsevillamol/status/1913276376171401583
Edit: @Jaime Sevilla has stated that he won't go to Mechanize and will stay at Epoch; sorry for any confusion.
My personal take is that there are pretty reasonable arguments that what we have seen in AI/ML since 2015 suggests AI will be a big deal. I like the way I have seen Yoshua Bengio talk about it: "over the next few years, or a few decades". I share the view that either of those possibilities is reasonable. People who are highly confident that something like AGI is going to arrive over the next few years are more confident in this than I am, but I think that view is within the bounds of reasonable interpretation of the evidence. I think it is also within the...