Academic philosopher, co-editor of utilitarianism.net, writes goodthoughts.blog
10% Pledge #54 with GivingWhatWeCan.org
The responses to my comment have provided a real object lesson to me in how a rough throwaway remark (in this case: my attempt to very briefly indicate what my other post was about) can badly distract readers from one's actual point! Perhaps I would have done better to leave out entirely any positive attempt to describe the content of my other post here, and merely offer the negative claim that it wasn't about asserting specific probabilities.
My brief characterization was not especially well optimized for conveying the complex dialectic in the other post. Nor was it asserting that my conclusion was logically unassailable. I keep saying that if anyone wants to engage with my old post, I'd prefer that they do so in the comments to that post, ensuring that they engage with the real post rather than the inadequate summary I gave here. My ultra-brief summary is not an adequate substitute, and was never intended to be engaged with as such.
On the substantive point: Of course, ideally one would like to be able to "model the entire space of possibilities". But as finite creatures, we need heuristics. If you think my other post was offering a bad heuristic for approximating EV, I'm happy to discuss that more over there.
On (what I take to be) the key substantive claim of the post:
I think that nontrivial probability assignments to strong and antecedently implausible claims should be supported by extensive argument rather than manufactured probabilities.
There seems room for people to disagree on priors about which claims are "strong and antecedently implausible". For example, I think Carl Shulman offers a reasonably plausible case for existential stability if we survive the next few centuries. By contrast, I find a lot of David's apparent assumptions about which propositions warrant negligible credence to be extremely strong and antecedently implausible. As I wrote in x-risk agnosticism:
David Thorstad seems to assume that interstellar colonization could not possibly happen within the next two millennia. This strikes me as a massive failure to properly account for model uncertainty. I can't imagine being so confident about our technological limitations even a few centuries from now, let alone millennia. He also holds the suggestion that superintelligent AI might radically improve safety to be "gag-inducingly counterintuitive", which again just seems a failure of imagination. You don't have to find it the most likely possibility in order to appreciate the possibility as worth including in your range of models.
I think it's important to recognize that reasonable people can disagree about what they find antecedently plausible or implausible, and to what extent. (Also: some events, like your home burning down in a fire, may be "implausible" in the sense that you don't regard them as outright likely to happen, while still regarding them as sufficiently probable as to be worth insuring against.)
Such disagreements may be hard to resolve. One can't simply assume that one's own priors are objectively justified by default, whereas one's interlocutor's are necessarily unjustified until "supported by extensive argument". That's just stacking the deck.
I think a healthier dialectical approach involves stepping back to more neutral ground, and recognizing that if you want to persuade someone who disagrees with you, you will need to offer them some argument to change their mind. Of course, it's fine to just report one's difference in view. But insisting, "You must agree with my priors unless you can provide extensive argument to support a different view, otherwise I'll accuse you of bad epistemics!" is not really a reasonable dialectical stance.
If the suggestion is instead that one shouldn't attempt to assign probabilities at all, then I think this runs into the problems I explore in Good Judgment with Numbers and (especially) Refusing to Quantify is Refusing to Think: refusing to quantify effectively implies giving zero weight. But we can often be in a position to know that a non-zero (and indeed non-trivially positive) estimate is better than zero, even if we can't be highly confident of precisely what the ideal estimate would be.
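To illustrate (with purely made-up numbers, not figures from any of the posts under discussion), here is the sort of toy calculation I have in mind: leaving a possibility unquantified means it contributes nothing to the expected-value comparison, exactly as if its probability were zero.

```python
# Purely hypothetical illustration: refusing to assign any probability to a
# hard-to-estimate possibility means it contributes nothing to the expected-value
# comparison, exactly as if its probability were zero.

payoff_if_possibility_obtains = 1_000_000  # made-up stakes if the neglected possibility is real
value_of_safe_alternative = 100            # made-up value of the option we'd otherwise prefer

ev_if_unquantified = 0 * payoff_if_possibility_obtains          # implicit zero weight: contributes 0
ev_with_rough_estimate = 0.001 * payoff_if_possibility_obtains  # an imprecise 0.1% credence: contributes 1000

# With the implicit zero, the possibility can never affect our choice;
# with any non-trivially positive estimate, it can.
print(ev_if_unquantified < value_of_safe_alternative < ev_with_rough_estimate)  # True
```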
Hi David, I'm afraid you might have gotten caught up in a tangent here! The main point of my comment was that your post criticizes me on the basis of a misrepresentation. You claim that my "primary argumentative move is to assign nontrivial probabilities without substantial new evidence," but actually that's false. That's just not what my blog post was about.
In retrospect, I think my attempt to briefly summarize what my post was about was too breezy, and misled many into thinking that its point was trivial. But it really isn't. (In fact, I'd say that my core point there about taking higher-order uncertainty into account is far more substantial and widely neglected than the "naming game" fallacy that you discuss in the present post!) I mention in another comment how it applied to Schwitzgebel's "negligibility argument" against longtermism, for example, where he very explicitly relies on a single constant probability model in order to make his case. Failing to adequately take model uncertainty into account is a subtle and easily-overlooked mistake!
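To make the worry about relying on a single model concrete, here's a rough sketch. The numbers are entirely hypothetical (they are not Schwitzgebel's, nor from my post); the point is only the structure of the calculation, i.e., how averaging over models can come apart from reasoning within a single "best" model:

```python
# Hypothetical numbers throughout; only the structure of the calculation matters.

risk_per_century = 0.10        # assumed constant per-century extinction risk in the "best" model
perilous_centuries = 5         # assumed length of the "time of perils" in the rival model
stable_centuries = 10_000_000  # assumed further duration if we reach existential stability

# Model A (constant risk forever): expected centuries survived = (1 - r) / r.
ev_model_a = (1 - risk_per_century) / risk_per_century  # = 9 centuries

# Model B (time of perils, then near-zero risk): survive the perils, then a long stable future.
p_survive_perils = (1 - risk_per_century) ** perilous_centuries  # ~0.59
ev_model_b = p_survive_perils * stable_centuries                 # ~5.9 million centuries (ignoring the perilous few)

# "Best model" reasoning: pick A as the most probable model and report ~9 centuries.
# Model-uncertainty reasoning: even a small credence in B dominates the mixture.
credence_in_b = 0.01
ev_mixture = (1 - credence_in_b) * ev_model_a + credence_in_b * ev_model_b
print(round(ev_model_a, 1), round(ev_mixture))  # 9.0 vs ~59,000 centuries
```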
A lot of your comment here seems to misunderstand my criticism of your earlier paper. I'm not objecting that you failed to share your personal probabilities. I'm objecting that your paper gives the impression that longtermism is undermined so long as the time of perils hypothesis is judged to be likely false. But actually the key question is whether its probability is negligible. Your paper fails to make clear what the key question to assess is, and the point of my 'Rule High Stakes In' post is to explain why it's really the question of negligibility that matters.
To keep discussions clean and clear, I'd prefer to continue discussion of my other post over on that post rather than here. Again, my objection to this post is simply that it misrepresented me.
It's not a psychological question. I wrote a blog post offering a philosophical critique of some published academic papers that, it seemed to me, involved an interesting and important error of reasoning. Anyone who thinks my critique goes awry is welcome to comment on it there. But whether my philosophical critique is ultimately correct or not, I don't think that the attempt is aptly described as "personal insult", "ridiculous on [its] face", or "corrosive to productive, charitable discussion". It's literally just doing philosophy.
I'd like it if people read my linked post before passing judgment on it.
The meta-dispute here isn't the most important thing in the world, but for clarity's sake, I think it's worth distinguishing the following questions:
Q1. Does Thorstad's published work reason as though the time of perils hypothesis being (merely) probably false, or the case for it being "inconclusive", suffices to undermine longtermism?
Q2. Is the primary argumentative move of my linked post to assign nontrivial probabilities without substantial new evidence?
My linked post suggests that the answer to Q1 is "Yes". I find it weird that others in the comments here are taking stands on this textual dispute a priori, rather than by engaging with the specifics of the text in question, the quotes I respond to, etc.
My primary complaint in this comment thread has simply been that the answer to Q2 is "No" (if you read my post, you'll see that it's instead warning against what I'm now calling the "best model fallacy", and explaining how I think various other writings, including Thorstad's, seem to go awry as a result of not attending to this subtle point about model uncertainty). The point of my post is not to try to assert or argue for any particular probability assignment. Hence Thorstad's current blog post misrepresents mine.
***
There's a more substantial issue in the background:
Q3. What is the most reasonable prior probability estimate to assign to the time of perils hypothesis? In case of disagreement, does one party bear a special "burden of proof" to convince the other, who should otherwise be regarded as better justified by default?
I have some general opinions about the probability being non-negligible (I think Carl Shulman makes a good case here), but it's not something I'm trying to argue about with those who regard it as negligible. I don't feel like I have anything distinctive to contribute on that question at this time, and prefer to focus my arguments on more tractable points (like the point I was making about the best model fallacy). I independently think Thorstad is wrong about how the burden of proof applies, but that's an argument for another day.
So I agree that there is some "talking past" happening here. Specifically, Thorstad seems to have read my post as addressing a different question (and advancing a different argument) than what it actually does, and made unwarranted epistemic charges on that basis. If anyone thinks my 'Rule High Stakes In' post similarly misrepresents Thorstad (2022), they're welcome to make the case in the comments to that post.
As I see it, I responded entirely reasonably to the actual text of what you wrote. (Maybe what you wrote gave a misleading impression of what you meant or intended; again, I made no claims about the latter.)
Is there a way to mute comment threads? Pursuing this disagreement further seems unlikely to do anyone any good. For what it's worth, I wish you well, and I'm sorry that I wasn't able to provide you with the agreement that you're after.
Honestly, I still think my comment was a good one! I responded to what struck me as the most cruxy claim in your post, explaining why I found it puzzling and confused-seeming. I then offered what I regard as an important corrective to a bad style of thinking that your post might encourage, whatever your intentions. (I made no claims about your intentions.) You're free to view things differently, but I disagree that there is anything "discourteous" about any of this.
There's "understanding" in the weak sense of having the info tokened in a belief-box somewhere, and then there's understanding in the sense of never falling for tempting-but-fallacious inferences like those I discuss in my post.
Have you read the paper I was responding to? I really don't think it's at all "obvious" that all "highly trained moral philosophers" have internalized the point I make in my blog post (that was the whole point of my writing it!), and I offered textual support. For example, Thorstad wrote: "the time of perils hypothesis is probably false. I conclude that existential risk pessimism may tell against the overwhelming importance of existential risk mitigation." This is a strange thing to write if he recognized that merely being "probably false" doesn't suffice to threaten the longtermist argument!
(Edited to add: the obvious reading is that he's committing precisely the sort of "best model fallacy" that I critique in my post: assessing which empirical model we should regard as true, and then determining expected value on the basis of that one model. Even very senior philosophers, like Eric Schwitzgebel, have made the same mistake.)
Going back to the OP's claims about what is or isn't "a good way to argue," I think it's important to pay attention to the actual text of what someone wrote. That's what my blog post did, and it's annoying to be subject to criticism (and now downvoting) from people who aren't willing to extend the same basic courtesy to me.
This sort of "many gods"-style response is precisely what I was referring to with my parenthetical: "unless one inverts the high stakes in a way that cancels out the other high-stakes possibility."
I don't think that dystopian "time of carols" scenarios are remotely as credible as the time of perils hypothesis. If someone disagrees, then certainly resolving that substantive disagreement would be important for making dialectical progress on the question of whether x-risk mitigation is worthwhile or not.
What makes both arguments instances of the nontrivial probability gambit is that they do not provide significant new evidence for the challenged claims. Their primary argumentative move is to assign nontrivial probabilities without substantial new evidence.
I don't think this is a good way to argue. I think that nontrivial probability assignments to strong and antecedently implausible claims should be supported by extensive argument rather than manufactured probabilities.
I'd encourage Thorstad to read my post more carefully and pay attention to what I am arguing there. I was making an in-principle point about how expected value works, highlighting a logical fallacy in Thorstad's published work on this topic. (Nothing in the paper I responded to seemed to acknowledge that a 1% chance of the time of perils would suffice to support longtermism. He wrote about the hypothesis being "inconclusive" as if that sufficed to rule it out, and I think it's important to recognize that this is bad reasoning on his part.)
Saying that my "primary argumentative move is to assign nontrivial probabilities without substantial new evidence" is poor reading comprehension on Thorstad's part. Actually, my primary argumentative move was explaining how expected value works. The numbers are illustrative, and suffice for anyone who happens to share my priors (or something close enough). Obviously, I'm not in that post trying to persuade someone who instead thinks the correct probability to assign is negligible. Thorstad is just radically misreading what my post is arguing.
(What makes this especially strange is that, iirc, the published paper of Thorstad's to which I was replying did not itself argue that the correct probability to assign to the ToP hypothesis is negligible, but just that the case for the hypothesis is "inconclusive". So it sounds like he's now accusing me of poor epistemics because I failed to respond to a different paper than the one he actually wrote? Geez.)
Distinguish pro tanto vs all-things-considered (or "net") high stakes. The statement is literally true of pro tanto high stakes: the 1% chance of extremely high stakes is by itself, as far as it goes, an expected high stake. But it's possible that this high stake might be outweighed or cancelled out by other sufficiently high stakes among the remaining probability space (hence the subsequent parenthetical about "unless one inverts the high stakes in a way that cancels out...").
The general lesson of my post is that saying "there's a 99% chance there's nothing to see here" has surprisingly little influence on the overall expected value. You can't show the expected stakes are low by showing that it's extremely likely that the actual stakes are low. You have to focus on the higher-stakes portions of probability space, even if small (but non-negligible).
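To put rough, entirely hypothetical numbers on both points (the 99% branch contributing little, and the "inverted high stakes" caveat):

```python
# Made-up numbers: the structure, not the figures, is the point.

p_high_stakes = 0.01    # assumed non-negligible credence in the high-stakes scenario
v_high_stakes = 10**10  # assumed value of mitigation if the stakes really are that high
v_low_stakes = 1        # assumed modest value in the "nothing to see here" case

# "There's a 99% chance there's nothing to see here" barely moves the expected value:
ev = p_high_stakes * v_high_stakes + (1 - p_high_stakes) * v_low_stakes
print(ev)  # ~100,000,000: dominated by the 1% branch

# The pro tanto caveat: the high stakes can be cancelled if an offsetting (inverted)
# high-stakes possibility lurks in the remaining probability space.
p_inverted = 0.01
ev_net = (p_high_stakes * v_high_stakes
          + p_inverted * (-v_high_stakes)
          + (1 - p_high_stakes - p_inverted) * v_low_stakes)
print(ev_net)  # ~1: the two high-stakes branches cancel, leaving only the low-stakes term
```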