Superforecaster, former philosophy PhD, Giving What We Can member since 2012. Currently trying to get into AI governance.
Less clearly, sure. I'm mostly warning against the complacent assumption that liberals are safe from error just because you can use liberal ideas to criticize bad things liberals have done; I'm not defending communism. Certainly lots of communists have, for example, attacked Stalinism in communist terms.
I don't really understand why liberalism is getting the prefix "classical" here though. The distinction between "classical" and other forms of liberalism, like social liberalism, is mostly about levels of government support for the poor through the welfare state, and about just how strong a presumption we should have in favour of market solutions over government ones, with agreement on secularism, individual human rights, free speech, pluralism, a non-zero-sum conception of markets and trade, etc. I also think that insofar as "liberals" have an unusually good record, this doesn't distinguish "liberals" in the narrow sense from other pro-democratic traditions that accept pluralism: European social democracy on the left, and European Christian democracy and Anglosphere mainstream conservatism 1965-2015 on the right. If anything, classical liberals might have a worse record than many of these groups, because I think classical liberal ideas were used in the 19th century by the British Empire to justify not doing anything about major famines. Of course there is a broad sense of "liberal" in which all these people are "liberals" too, and they may well have been influenced by classical liberalism. But they aren't necessarily on the same side as classical liberals in typical policy debates.
I think there is something to this, but the US didn't just "prop up" Suharto in the sense of having normal relations of trade and mutual favours even though he did bad things. (That may indeed be the right attitude to take toward many bad governments, including an attitude many leftists might demand the US take toward bad left-wing governments, yes.) They helped install him, a process which was incredibly bloody and violent, even apart from the long-term effects of his rule: https://en.wikipedia.org/wiki/Indonesian_mass_killings_of_1965%E2%80%9366
Remember also that the same people are not necessarily making all of these arguments. Relatively few radical leftists saying the first two things are also making a huge moral deal about the US failing to help Ukraine, I think, even if they are strongly against the Russian invasion. It's mostly liberals who are saying the third one.
Communism is a "reason-based" ideology, at least originally, in that it sees itself as secular, scientific, and dispassionate, based on hard economics rather than tradition or God. Yes, Marxists tend to be keener on invoking sociological explanations for people's beliefs than liberals are, but even Marxists usually believe social science is possible, and even liberals admit people's beliefs are distorted by bias all the time, so the difference is one of emphasis rather than fundamental commitment, I think.
This isn't a defence of communism particularly. The mere fact that people claim that something is the output of reason and science doesn't mean it actually is. That goes for liberalism too.
"Classical liberalism provides the intellectual resources to condemn the Jakarta killings."
Communism probably also provides intellectual resources that would enable you to condemn most of the many very bad things communists have done, but that doesn't mean that those outcomes aren't relevant to assessing how good an idea communism is in practice.
Not that you said otherwise, and I am a liberal, not a communist. But I do think liberals can sometimes be a bit too quick to conclude that all crimes of liberal regimes have nothing distinctive to do with liberalism, while presuming that communist and fascist and theocratic crimes are inherent products of communism/fascism/theocracy. (I have less than zero time for fascism or theocracy, to be clear.)
The report has many authors, some of whom may be much less concerned or may think the whole thing is silly. I never claimed that Bengio and Hinton's views were a consensus, and in any case, I was citing their views as evidence for taking the idea that AGI may arrive soon seriously, not their views on how risky AI is. I'm pretty sure I've seen them give relatively short timelines when speaking individually, but I guess I could be misremembering. For what it's worth, Yann LeCun seems to think 10 years is about right, and Gary Marcus seems to think a guess of 10-20 years is reasonable: https://helentoner.substack.com/p/long-timelines-to-advanced-ai-have
I don't know if or how much EA money should go to AI safety either. EAs are trying to find the single best thing, and it's very hard to know what that is, and many worthwhile things will fail that bar. Maybe David Thorstad is right, and small X-risk reductions have relatively low value because another X-risk will get us in the next few centuries anyway*. What I do think is that society as a whole spending some resources caring about the risk of AGI arriving in the next ten years is likely optimal, and that it's not more silly to do so than to do many other obviously good things. I don't actually give to AI safety myself, and I only work on AI-related stuff (forecasting etc.; I'm not a techy person) because it's what people are prepared to pay more for, and people being prepared to pay me to work on near-termist causes is less common, though it does happen. I myself give to animal welfare, not AI safety.
If you really believe that everyone putting money into OpenAI etc. will only see returns if they achieve AGI, that seems to me to be a point in favour of "there is a non-negligible risk of AGI in the next 10 years". I don't believe that, but if I did, that alone would significantly raise the chance I give to AGI within the next 10 years. But yes, they have some incentive to lie here, or to lie to themselves, obviously. Nonetheless, I don't think that means their opinion should get zero weight. For it to actually have been some amazing strategy for them to talk up the chances of AGI *because it attracted cash*, you'd have to believe they can fool outsiders with serious money on the line, and that this will be profitable for them in the long term, rather than crashing and burning when AGI does not arrive. I don't think that is wildly unlikely or anything; indeed, I think it is somewhat plausible, though my guess is Anthropic in particular believe their own hype. But it does require a fairly high amount of foolishness on the part of other quite serious actors. I'm much more sure of "raising large amounts of money for stuff that obviously won't work is relatively hard" than I am of any argument about how far we are from AGI that looks at the direct evidence, since the latter sort of arguments are very hard to evaluate. I'd feel very differently here if we were arguing about a 50% chance of AGI in ten years, or even a 10% chance. It's common for people to invest in things that probably won't work but have a high pay-off if they do. But what you're saying is that Richard is wrong for thinking there is a non-negligible risk, because the chance is significantly under 1%. I doubt there are many takers for a "1 in 1000" chance of a big pay-off.
It is of course not THAT unlikely that they are fooling the serious money: serious investors make mistakes, and even the stock market does. Nonetheless, being able to attract serious investment that is genuinely only investing because they think you'll achieve X, whilst simultaneously being under huge media attention and scrutiny, is a credible signal that you'll eventually achieve X.
I don't think the argument I've just given is all that definitive, because they have other incentives to hype, like attracting top researchers (whom I think it is probably easier to fool, because even if they are fooled about AGI, working at a big lab was probably good for them anyway; quite different from funders who are fooled, who just lose money). So it's possible that the people pouring serious money in don't take any of the AGI stuff seriously. Nonetheless, I trust "serious organisations with technical prowess seem to be trying to do this" as a signal to take something minimally seriously, even if they have some incentive to lie.
Similarly, if you really think Microsoft and Google have taken decisions that will crash their stock if AGI doesn't arrive, I think a similar argument applies: are you really sure you're better than Microsoft and Google at evaluating whether there is a non-negligible chance that a tech will be achieved by the tech industry? Eventually, if AGI is not arriving from the huge training runs being planned in the near future, people will notice, and Microsoft and Google don't want to lose money 5 years from now either. Again, it's not THAT implausible that they are mistaken; mistakes happen. But you aren't arguing that there probably won't be AGI in ten years (a claim I actually strongly agree with!) but rather that Richard was way off in saying that it's a tail risk we should take seriously given how important it would be.
Slower progress on one thing than another does not mean no progress on the slower thing.Â
"despite those benchmarks not really being related to AGI in any way." This is your judgment, but clearly it is not the judgment of some of the world's leading scientific experts in the area. (Though there may well be other experts who agree with you.)
*Actually Thorstad's opinion is more complicated than that: he says that this is true conditional on X-risk currently being non-negligible, but he doesn't himself endorse the view that it is currently non-negligible, as far as I can tell.
Yeah, I am inclined to agree (for what my opinion is worth, which on this topic is probably not much) that there will be many things AIs can't do even once they have a METR 80% time-horizon of, say, 2 days. But I am less sure of that than I am of the meta-level point about this being an important crux.
Anthropic aren't objecting to killbots as a matter of principle, though; they are just saying the tech isn't reliable yet. The stand on surveillance seems principled, and I absolutely admire Amodei for risking his business to do the right thing, but let's avoid deceiving ourselves about what his stance actually is.