
David Mathers🔸


Bio

Superforecaster, former philosophy PhD, Giving What We Can member since 2012. Currently trying to get into AI governance. 


Comments

"i don't believe very small animals feel pain, and if they do my best guess would be it would be thousands to millions orders of magnitude less pain than larger animals."

I'll repeat what regular readers of the forum are bored of me saying about this. As a philosophy of consciousness PhD, I barely ever heard the idea that small animals are conscious but that their experiences are way less intense. At most, it might be a consequence of integrated information theory, but not one I ever saw discussed, and most people in the field don't endorse that theory anyway. I cannot think of any other theory which implies this, or any philosophy of mind reason to think it is so. It seems very suspiciously like something EAs say to avoid commitments to prioritizing tiny animals that seem a bit mad. Even if we take seriously the feeling that those commitments are a bit mad, there are any number of reasons that could be true apart from "small conscious brains have proportionally less intense experiences than large conscious brains." The whole idea also smacks to me of the notion that pain is literally a substance, like water or sand, that the brain somehow "makes" using neurons as an ingredient, in the way that combining two chemicals might make a third via a reaction, where how much of the product you get out depends on how much you put in. On mind-body dualist views this picture might make some kind of surface sense, though it gets a bit complicated once you start thinking about the possibility of conscious aliens without neurons. But on more popular physicalist views of consciousness, this picture is just wrong: conscious pain is not stuff that the brain makes.

Nor does it particularly seem like common sense to me. A dog has a somewhat smaller brain than a human, but I don't think most people believe that their dog CAN feel pain but feels somewhat less pain than it appears to, because its brain is a bit smaller than a person's. Of course, it could be that intensity is the same once you hit a certain brain size no matter how much you then scale up, but starts to drop off proportionately below a certain level of smallness, but that seems pretty ad hoc.

I think when people say it is rapidly decreasing they may often mean that the % of the world's population living in extreme poverty is declining over time, rather than that the total number of people living in extreme poverty is going down?

Yes, please do not downvote Yarrow's post just because its style is a bit abrasive and it goes against EA consensus. She has changed my mind quite a lot, as the person who kicked off the dispute, and Connacher, who worked on the survey, is clearly taking her criticisms seriously.

Yeah, the error here was mine, sorry. I didn't actually work on the survey, and I missed that it was actually estimating the % of the panel agreeing we are in a scenario, not the chance that that scenario will win a plurality of the panel. This is my fault, not Connacher's. I was not one of the survey designers, so please do not assume from this that the people at the FRI who designed the survey didn't understand their own questions or anything like that.

For what it's worth, I think this is decent evidence that the question is too confusing to be useful, given that I mischaracterized it even though I was one of the forecasters. So I largely, although not entirely, withdraw the claim that you should update on the survey results. (That is, I think it still constitutes suggestive evidence that you are way out of line with experts, and superforecasters, but no longer super-strong evidence.)

I also somewhat withdraw the claim that we should take even well-designed expert surveys as strong evidence of the actual distribution of opinions. I had forgotten the magnitude of the framing effect that titotal found for the human extinction questions. That really does somewhat call the reliability of even a decently designed survey into question. That said, I don't really see a better way to get at what experts think than surveys here, and I doubt they have zero value. But people should probably test multiple framings more. Nonetheless, "there could be a big framing effect because it asks for a %", i.e. the titotal criticism, could apply to literally any survey, and I'm a bit skeptical of "surveys are a zero-value method of getting at expert opinion".

So I think I concede that you were right not to be massively moved by the survey, and I was wrong to say you should be. That said, maybe I'm wrong, but I seem to recall that you frequently imply that EA opinion on the plausibility of AGI by 2032 is way out of step with what "real experts" think. If your actual opinion is that no one has ever done a well-designed survey, then you should probably stop saying that. Or cite a survey you think is well designed that actually shows other people are more out of step with expert opinion than you are, or say that EAs are out of step with expert opinion on your best guess, but that you can't really claim with any confidence that you are any more in line with it. My personal guess is that your probabilities are in fact several orders of magnitude away from the "real" median of experts and superforecasters, if we could somehow control for framing effects, but I admit I can't prove this.

But I will say that, if taken at face value, the survey still shows a big gap between what experts think and your "under 1 in 100,000 chance of AGI by 2032". (That is, you didn't complain when I attributed that probability to you in the earlier thread, and I don't see any other way to interpret "more likely that JFK is secretly still alive", given you insisted you meant it literally.) Obviously, if someone thinks the most likely outcome is that in 2030 we will be in a situation where approx. 1 in 4 people on the panel think we are already in the rapid scenario, they probably don't think the chance of AGI by 2032 is under 1 in 100,000, since they are basically predicting that we're going to be near the upper end of the moderate scenario, which makes it hard to give a chance of AGI by two years after 2030 that low. (I suppose they could just have a low opinion of the panel, and think some of the members will be total idiots, but I consider that unlikely.) I'd also say that if forecasters made the mistake I did in interpreting the question, then again, they are clearly out of step with the probability you give. I'm also still prepared to defend the survey against some of your other criticisms.

I haven't done the sums myself, but do we know for sure that they can't make money without being all that useful, so long as a lot of people interact with them every day?

Is Facebook "useful"? Not THAT much. Do people pay for it? No, it's free. Instagram is even less useful than Facebook, which at least used to actually be good for organizing parties and pub nights. Does Meta make money? Yes. Does equally useless TikTok make money? I presume so, yes. I think tech companies are pretty expert at monetizing things that have no user fee and aren't that helpful at work. There's already a massive user base for ChatGPT etc. Maybe they can monetize it even without it being THAT useful. Or maybe the sums just don't work out for that, I'm not sure. But clearly the market thinks they will make money in expectation. That's a boring reason for rejecting "it's a bubble" claims, and bubbles do happen, but beating the market in pricing shares genuinely is quite difficult, I suspect.

Of course, there could also be a bubble even if SOME AI companies make a lot of money. That's what happened with the dot-com bubble.
 

Ok, there's a lot here, and I'm not sure I can respond to all of it, but I will respond to some of it. 

-I think you should be moved just by my telling you about the survey. Unless you are super confident either that I am lying/mistaken about it, or that the FRI was totally incompetent in assembling an expert panel, the mere fact that I'm telling you the median expert credence in the rapid scenario in the survey is 23% ought to make you think there is at least a pretty decent chance that you are giving it several orders of magnitude less credence than the median expert/superforecaster. You should already be updating on there being a decent chance that is true, even if you don't know it for sure. Maybe you already believed there was a decent chance you were that far out of step with expert opinion, but I think that just means you were already probably doing the wrong thing in assigning ultra-low credence. I say "probably" because the epistemology of disagreement IS very complicated, and maybe sometimes it's OK to stick to your guns in the face of expert consensus.
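To put "several orders of magnitude" in rough numbers, here is a back-of-the-envelope sketch, taking the "under 1 in 100,000" figure from the earlier thread as the comparison point:

\[
\log_{10}\!\left(\frac{0.23}{1/100{,}000}\right) = \log_{10}(23{,}000) \approx 4.4
\]

i.e. a gap of roughly four to five orders of magnitude between the survey median and that figure.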

-"Physical impossibility". Well, it's not literally true that you can't scale any further at all. That's why they are building all those data centers for eyewatering sums of money. Of course, they will hit limits eventually and perhaps soon-probably monetary before physical.  But you admit yourself that no one has actually calculated how much compute is needed to reach AGI. And indeed, that is very hard to do. Actually Epoch, who are far from believers in the rapid scenario as far as I can tell think quite a lot of recent progress has come from algorithmic improvements, not scaling: https://blog.redwoodresearch.org/p/whats-going-on-with-ai-progress-and  Text search for "Algorithmic improvement" or "Epoch reports that we see". So progress could continue to some degree even if we did hit limits on scaling. As far as I can tell, most of the people who do believe in the rapid scenario actually expect scaling of training compute to at least slow down a lot relatively soon, even though the expect big increases in the near future. Of course, none of this proves that we can reach AGI with current techniques just by scaling, and I am pretty dubious of that for any realistic amount of scaling. But I don't think you should be talking like the opposite has been proven. We don't know how much compute is needed for AGI with the techniques of today or the techniques available by 2029, so we don't know whether the needed amount of compute would breach physical or financial or any other limits. 

-LLM "narrowness" and the 2018 baseline: Well, I was probably a bit inexact about the baseline here. I guess what I meant was something like this. Before 2018ish, as a non-technical person, I never really heard anything about exciting AI stuff, even though I paid attention to EA a lot, and people in EA already cared a lot about AI safety and saw it as a top cause area. Since then, there has been loads of attention, literal founding fathers of the field like Hinton say there is something big going on, I find LLMs useful for work, there have been relatively hard-to-fake achievements like doing decently well on the Math Olympiad, and college students can now use AI to cheat on their essays, a task that absolutely would have been considered to involve "real intelligence" before ChatGPT.

More generally, I remember a time, as someone who learnt a bit of cognitive science while studying philosophy, when the problem with AI was essentially being presented as "we just can't hardcode all our knowledge in, and on the other hand, it's not clear neural nets can really learn natural languages". Basically, AI was seen as something that struggled with anything that involved holistic judgment based on pattern-matching and heuristics, rather than hard-coded rules. That problem now seems somewhat solved: we now seem to be able to get AIs to learn how to use natural language correctly, or play games like Go that can't be brute-forced by exact calculation but rely on pattern recognition and "intuition". These AIs might not be general, but the techniques for getting them to learn these things might be a big part of how you build an AI that actually is, since they seem to be applicable to a large variety of kinds of data: image recognition, natural language, code, Go and many other games, information about proteins. The techniques for learning seem more general than the systems themselves.

That seems like relatively impressive progress for a short time to me as a layperson. I don't particularly think it should move anyone else that much, but it explains why it is not completely obvious to me why we could not reach AGI by 2030 at current rates of progress. And again, I will emphasize, I think this is very unlikely. Probably my median is that real AGI is 25 years away. I just don't think it is 1 in a million "very unlikely".

I want to emphasize, though, that I don't really think anything under the third dash here should change your mind. That's more just an explanation of where I am coming from, and I don't think it should persuade anyone of anything really. But I definitely do think the stuff about expert opinion should make you tone down your extremely extreme confidence, even if just a bit.

I'd also say that I think you are not really helping your own cause here by expressing such an incredibly super-high level of certainty, and making some sweeping claims that you can't really back up, like that we know right now that physical limits have a strong bearing on whether AGI will arrive soon. I usually upvote the stuff you post here about AGI, because I genuinely think you raise good, tough questions for the many people around here with short timelines. (Plenty of those people probably have thought-through answers to those questions, but plenty probably don't and are just following what they see as EA consensus.) But I think you also have a tendency to overconfidence that makes it easier for people to just ignore what you say. This comes out in you doing annoying things you don't really need to do, like moving quickly in some posts from "scaling won't reach AGI" to "the AI boom is a bubble that will unravel" without much supporting argument, when obviously AI models could make vast revenues without being full AGI. It gives the impression of someone who is reasoning in a somewhat motivated manner, even as they also have thought about the topic a lot and have real insights.

That's pretty incomprehensible to me even as a considerable skeptic of the rapid scenario. Firstly, you have experts giving a 23% chance and it's not moving you up even to, say, over 1 in 100,000, although the JFK scenario is probably a hell of a lot less likely than that: even if his assassination was faked, despite there literally being a huge crowd who saw his head get blown off in public, he would still have to be 108 to be alive today. Secondly, in 2018, AI could, to a first approximation, do basically nothing outside of highly specialized uses like chess computers, which did not use current ML techniques. Meanwhile, this year, I, a philosophy PhD, asked Claude about an idea that I had seriously thought about turning into a paper one day back when I was still in philosophy, and it came up with a very clever objection that I had not thought of myself. I am fairly, even if not 100%, sure that this objection is not in the literature anywhere. Given that we've gone from nothing to "high-quality philosophical arguments at times" in like 7 years, and there are some moderately decent reasons for thinking models good at AI research tasks could set off a positive feedback loop, and far more money and effort is being thrown at AI than ever before, it seems hard to me to think it is 99,999 in 100,000 certain that we won't get AGI by 2030, even though the distance to cross is still very large, and current success on benchmarks is somewhat misleading.

There is an ambiguity about "capabilities" versus deployment here, to be fair. Your "that will not happen" seems somewhat more reasonable to me if we are requiring that the AIs are actually deployed and doing all this stuff, versus merely that models capable of doing this stuff have been created. I think it was the latter we were forecasting, but I'm not 100% certain.

https://leap.forecastingresearch.org/  The stuff is all here somewhere, though it's a bit difficult to find all the pieces quickly and easily. 

For what it's worth, I think the chance of the rapid scenario is considerably less than 23%, but a lot more than under 0.1%. I can't remember the number I gave when I did the survey as a superforecaster, but maybe 2-3%? But I do think the chances get rather higher by 2040, and it's good we are preparing now.

 ". I think an outlandish scenario like we find out JFK is actually still alive is more likely than that"

If you really mean this literally, I think it is extremely obviously false, in a way that I don't think a mere 0.1% would be.
