I'm a computational physicist, and I generally donate to global health. I am skeptical of AI x-risk and of big-R Rationalism, and I intend to explain why in great detail.
As I said, I don't think your statement was wrong, but I want to give people a more accurate picture of how AI is currently affecting scientific progress: it's very useful, but only in niches that align nicely with the strengths of neural networks. I do not think similar AI would produce similarly impressive results in what my team is doing, because we already have more ideas than we have the time and resources to execute on.
I can't really assess how much speedup we could get from a superintelligence, because superintelligences don't exist yet and may never exist. I do think that 3xing research output with AI in science is an easier proposition than building a digital super-Einstein, so I expect to see the former before the latter.
I found this article well written, although of course I don't agree that AGI by 2030 is likely. I am roughly in agreement with this post by an AI expert responding to the other (less good) short-timeline article going around.
I thought that instead of critiquing the parts I'm not an expert in, I might take a look at the part of this post that intersects with my field, where you mention materials science discovery, and pour just a little bit of cold water on it.
> A recent study found that an AI tool made top materials science researchers 80% faster at finding novel materials, and I expect many more results like this once scientists have adapted AI to solve specific problems, for instance by training on genetic or cosmological data.
So, an important thing to note is that this was not an LLM (neither was AlphaFold), but a specially designed deep learning model for generating candidate material structures. I covered a bit about these models in my last article, and this is a nice bit of evidence for their usefulness. The possibility space for new materials is enormous and humans are not that good at generating new candidates: the paper showed that this tool boosted productivity by making that process significantly easier. I don't like how the paper described this as "idea generation": it evokes the idea that the AI is making its own Newtonian flashes of scientific insight, when actually it's just mass-generating candidate materials that an experienced professional can sift through.
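To make that workflow concrete, here is a minimal sketch of what this kind of generate-and-screen pipeline looks like. The function names, elements, and thresholds are made up for illustration; this is my own toy example, not code from the paper or the actual tool:

```python
# Illustrative sketch only: stand-ins for a trained generative model and a
# surrogate screen, so the shape of the workflow is clear. Nothing here is
# from the actual paper or tool.
import random

ELEMENTS = ["Li", "Fe", "Co", "Ni", "Mn", "O", "S", "P"]

def generate_candidates(n: int) -> list[dict]:
    """Stand-in for the generative model: emit n candidate compositions."""
    return [
        {
            "formula": "-".join(random.sample(ELEMENTS, 3)),
            "predicted_stability": random.random(),  # placeholder surrogate score
        }
        for _ in range(n)
    ]

def automated_screen(candidates: list[dict], threshold: float = 0.99) -> list[dict]:
    """Cheap automated filter (e.g. a stability cutoff) applied before any human looks."""
    return [c for c in candidates if c["predicted_stability"] > threshold]

# The model can propose candidates far faster than a person could write them down;
# the researcher's time goes into evaluating the shortlist, not inventing the list.
shortlist = automated_screen(generate_candidates(100_000))
print(f"{len(shortlist)} of 100,000 generated candidates passed the screen for expert review")
```

The human is still the one deciding which survivors are genuinely promising; the model just makes the haystack much cheaper to produce.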
I think your quoted statement is technically true, but it's worth mentioning that the 80% faster figure was only for people previously in the top decile of performance (i.e. the best researchers); for people who were not performing well, there was no evidence of a real difference. In practice the effect of the tool on overall progress was smaller than that headline number: it was plausibly credited with increasing the number of new patents at the firm by roughly 40%, and the number of actual prototypes by about 20%. You can also see that productivity is not continuing to increase: they got their boost from the improved generation pipeline, and now the bottleneck is somewhere else.
To be clear, this is still great, and a clear deep learning success story, but it's not really in line with colonizing Mars by 2035 or whatever the ASI people are saying now.
In general, I'm not a fan of the paper itself, and it really could have benefited from some input from an actual materials scientist.
I feel like this should be caveated with a "long timelines have gotten short... among the people the author knows in tech circles".
I mean, just two months ago someone asked a room full of cutting-edge computational physicists whether their jobs could be replaced by an AI soon, and the response was audible laughter and a reply of "not in our lifetimes".
On one side you could say that this discrepancy exists because the computational physicists aren't as familiar with state-of-the-art genAI, but on the flip side, you could point out that tech circles aren't familiar with state-of-the-art physics, and are seriously underestimating the scale of the task ahead of them.
I'd be worried about getting sucked into semantics here. I think it's reasonable to say that it passes the original Turing test, as described by Turing in 1950:
> I believe that in about fifty years’ time it will be possible to programme computers, with a storage capacity of about 10⁹, to make them play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning. … I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.
I think given the restrictions of an "average interrogator" and "five minutes of questioning", this prediction has been achieved, albeit a quarter of a century later than he predicted. This obviously doesn't prove that the AI can think or substitute for complex business tasks (it can't), but it does have implications for things like AI spambots.
The method in the case of quantum physics was to meet extraordinary claims with extraordinary evidence. Einstein did not resist the findings of quantum mechanics, only their interpretations, holding out hope that he could make a hidden-variable theory work. Quantum mechanics became accepted because its proponents were able to back up their theories with experimental data that could be explained in no other way.
Like a good scientist, I'm willing to follow logic and evidence wherever they lead. But when I actually look at the "logic" being used to justify doomerist conclusions, it always seems incredibly weak (and I have looked, extensively). I think people are rejecting your arguments not because you are a rogue outsider, but because they don't think your arguments are very good.
I feel like the counterpoint here is that R&D is incredibly hard. In regular development, you have established methods for doing things, established benchmarks for when things are going well, and a long period of testing to discover errors, flaws, and mistakes through trial and error.
In R&D, you're trying to do things that nobody has ever done before, while simultaneously establishing the methods, benchmarks, and ways of catching errors for that new approach, which carries a ton of potential pitfalls. And because nobody has ever done it before, the AI is always operating much further outside its training distribution than it would in regular work.
I did read your scenario. I'm guessing you didn't read my articles? I'm closely tracking the use of AI in materials science, and the technical barriers to things like nanotechnology.
"AI" is not a magic word that makes technical advancements appear out of nowhere. There are fundamental physical limits to what you can realistically model with finite computer resources, and the technical hurdles to drexlerian nanotech are absurd in their difficulty. To make experimental advances in something like nanotech, you need extensive experimentation. The AI does not have nanotech to build those labs, and it takes more than a year for humans to build it.
I usually try to avoid the word "impossible" when talking about speculative scenarios... but with a one-year time limit, the scenario you have written is impossible.
I work in computational materials science and have spent a lot of time digging into Drexlerian nanotech. The idea that Drexler-style nanomachines could be invented in 2026 is straight-up absurd. Progress towards nanomachines has been stalled for decades. This is not a "20 years from now" type of project; absent transformative AI speedups, the tech could be a century away, or even outright impossible. And the effect of AI on materials science is far from transformative at present; that is not going to change in one year.
You are not doing your cause a service by proposing scenarios that are essentially impossible.
I am having trouble understanding why AI safety people are even trying to convince the general public that timelines are short.
If you manage to convince an investor that timelines are very short without simultaneously convincing them to care a lot about x-risk, I feel like their immediate response will be to rush to invest briefcases full of cash into the AI race, thus helping make timelines shorter and more dangerous.
Also, if you make a bold prediction about short timelines and turn out to be wrong, won't people stop taking you seriously the next time around?