Three Epoch employees – Matthew Barnett, Tamay Besiroglu, and Ege Erdil – have left to launch Mechanize, an AI startup aiming for broad automation of ordinary labour:
Today we’re announcing Mechanize, a startup focused on developing virtual work environments, benchmarks, and training data that will enable the full automation of the economy.
We will achieve this by creating simulated environments and evaluations that capture the full scope of what people do at their jobs. ...
Currently, AI models have serious shortcomings that render most of this enormous value out of reach. They are unreliable, lack robust long-context capabilities, struggle with agency and multimodality, and can’t execute long-term plans without going off the rails.
To overcome these limitations, Mechanize will produce the data and evals necessary for comprehensively automating work. Our digital environments will act as practical simulations of real-world work scenarios, enabling agents to learn useful abilities through RL. ...
The explosive economic growth likely to result from completely automating labor could generate vast abundance, much higher standards of living, and new goods and services that we can’t even imagine today. Our vision is to realize this potential as soon as possible.
I started a new company with @egeerdil2 and @tamaybes that's focused on automating the whole economy. We're taking a big bet on our view that the main value of AI will come from broad automation rather than from "geniuses in a data center".
The Mechanize website is scant on detail. It seems broadly bad that alumni of a safety-focused AI org have left to form a company that accelerates AI timelines (and which presumably builds on evals developed at Epoch).
It seems noteworthy that Epoch AI retweeted the announcement, wishing the departing founders best of luck – which feels like a tacit endorsement of the move.
Habryka wonders whether Mechanize would have had to pay Epoch to use its benchmark suite.
Links
- Official Twitter announcement
- See also this shortform on LessWrong
I see the belief that AGI will arrive by 2030 primarily as a social phenomenon, because the evidence we have for it today is less compelling than the evidence we had for it in 2015. Back then, AGI by 2030 was a little more plausible: 2030 was 15 years away, and who knows what can happen in 15 years.
Now that 2030 is a little less than five years away, AGI by 2030 is a less plausible prediction than it was in 2015: there's less time left, and it's clearer that it won't happen.
I don't think the reasons people believe AGI will arrive by 2030 are primarily based on evidence; they are primarily a sociological phenomenon. People were ready to believe this regardless of the evidence, going back to Ray Kurzweil's The Age of Spiritual Machines in 1999 and Eliezer Yudkowsky's "End-of-the-World Bet" in 2017. People don't really pay attention to whether the evidence is good or bad; they ignore obvious evidence and arguments against near-term AGI, and they mostly choose to ignore or attack people who express disagreement, tuning instead into the relentless drumbeat of people agreeing with them. This is sociology, not epistemology.
Don't believe me? Talk to me again in 5 years and send me a fruit basket. (Or just kick the can down the road and say AGI is coming in 2035...)
Expert opinion has changed? First, expert opinion is not itself evidence; it's people's opinions about evidence. What evidence are the experts basing their beliefs on? That seems way more important than someone just saying a number based on an intuition.
Second, expert opinion does not clearly support the idea of near-term AGI.
As of 2023, expert opinion on AGI was... well, first of all, really confusing. The AI Impacts survey found that experts believed there is a 50% chance that by 2047 "unaided machines can accomplish every task better and more cheaply than human workers," and also a 50% chance that by 2116 "machines could be built to carry out the task better and more cheaply than human workers." I don't know why these predictions are 69 years apart.
Regardless, 2047 is sufficiently far away that it might as well be 2057 or 2067 or 2117. This is just people generating a number from a gut feeling. We don't know how to build AGI, and we have no idea how long it will take to figure that out. No amount of thinking of numbers or saying numbers can escape this fundamental truth.
We actually won't have to wait long to see that some of the most attention-catching near-term AI predictions are false. Dario Amodei, the CEO of Anthropic (a company that is said to be "literally creating God"), has predicted that at some point between June 2025 and September 2025, 90% of all code will be written by AI rather than humans. In late 2025 and early 2026, when it's clear Dario was wrong about this (when, not if), maybe some people will start to be more skeptical of attention-grabbing expert predictions. But maybe not.
There are already strong signs that AGI discourse is irrational and absurd. On April 16, 2025, Tyler Cowen claimed that OpenAI's o3 model is AGI and asked, "is April 16th AGI day?". In a follow-up post on April 17, seemingly in response to criticism, he said, "I don’t mind if you don’t want to call it AGI", but seemed to affirm that he still thinks o3 is AGI.
On one hand, I hope that in 5 years the people who promoted the idea of AGI by 2030 will lose a lot of credibility and maybe do some soul-searching to figure out how they could have been so wrong. On the other hand, there is nothing preventing people from being irrational indefinitely.
I think part of the sociological problem is that people are just way too polite about how crazy this all is and how awful the intellectual practices of effective altruists have been on this topic. (Sorry!) So, I'm being blunt about this to try to change that a little.