[EDIT: Thanks for the questions everyone! Just noting that I'm mostly done answering questions, and there were a few that came in Tuesday night or later that I probably won't get to.]
Hi everyone! I’m Ajeya, and I’ll be doing an Ask Me Anything here. I’ll plan to start answering questions Monday Feb 1 at 10 AM Pacific. I will be blocking off much of Monday and Tuesday for question-answering, and may continue to answer a few more questions through the week if there are ones left, though I might not get to everything.
About me: I’m a Senior Research Analyst at Open Philanthropy, where I focus on cause prioritization and AI. 80,000 Hours released a podcast episode with me last week discussing some of my work, and last September I put out a draft report on AI timelines which is discussed in the podcast. Currently, I’m trying to think about AI threat models and how much x-risk reduction we could expect the “last long-termist dollar” to buy. I joined Open Phil in the summer of 2016, and before that I was a student at UC Berkeley, where I studied computer science, co-ran the Effective Altruists of Berkeley student group, and taught a student-run course on EA.
I’m most excited about answering questions related to AI timelines, AI risk more broadly, and cause prioritization, but feel free to ask me anything!
[The following question might just be confused, might not be important, and will likely be poorly phrased/explained.]
In your recent 80k appearance, you and Rob both say that the way the self-sampling assumption (SSA) leads to the doomsday argument seems sort of "suspicious". You then say that, on the other hand, the way the self-indication assumption (SIA) causes an opposing update also seems suspicious.
But I think all of your illustrations of how updates based on the SIA can seem suspicious involved infinities. And we already know that loads of things involving infinities can seem counterintuitive or suspicious. So it seems to me like this isn't much reason to feel that SIA in particular can cause suspicious updates. In other words, it seems like maybe the "active ingredient" causing the suspiciousness in the examples you give is infinity, not SIA. Whereas the way the SSA leads to the doomsday argument doesn't have to involve infinity, so there it seems like SSA is itself suspicious.
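(To spell out the contrast I have in mind, here's a toy version of the SSA-style doomsday update using only finite numbers — the specific figures below are made up purely for illustration, not taken from the episode:)

```python
# Toy doomsday-argument calculation under SSA (all numbers made up for illustration).
# Two finite hypotheses about the total number of humans who will ever live:
N_small = 2e11   # "doom soon": ~200 billion humans ever
N_large = 2e14   # "doom late": ~200 trillion humans ever
prior_small, prior_large = 0.5, 0.5

my_rank = 1e11   # suppose I'm roughly the 100-billionth human ever born

# SSA: treat yourself as a uniform random draw from the observers who actually exist,
# so P(my birth rank = r | N people ever) = 1/N whenever r <= N.
lik_small = (1 / N_small) if my_rank <= N_small else 0.0
lik_large = (1 / N_large) if my_rank <= N_large else 0.0

post_small = prior_small * lik_small / (prior_small * lik_small + prior_large * lik_large)
print(f"P(doom soon | my early birth rank) = {post_small:.4f}")   # ~0.999
```

The update toward "doom soon" here is large even though everything is finite, which is why the doomsday argument doesn't seem to need infinity to feel suspicious.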
Does that sound correct to you? Do you think that makes SIA effectively less suspicious than SSA, and thereby pushes further against the doomsday argument?
(I obviously don't think we should necessarily dismiss things just because they feel "suspicious". But it could make sense to update a bit away from them for that reason, and, to the extent that that's true, a difference in the suspiciousness of SSA vs SIA could matter.)
(Btw, although I'm still not sure I understand SSA and SIA properly, your explanation during the 80k interview caused me to feel like I probably at least understood the gist, for the first time, so thanks for that!)
Thanks, I'm glad you found that explanation helpful!
I think I broadly agree with you that SIA is somewhat less "suspicious" than SSA, with the small caveat that I think most of the weirdness can be preserved with a finite-but-sufficiently-giant world rather than a literally infinite world.
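As a rough sketch of what I mean (the numbers here are entirely made up for illustration): under SIA you weight each hypothesis by how many observers it contains, so even a tiny prior on a finite-but-enormous world gets blown up to near-certainty — no infinities required.

```python
# Toy illustration (numbers made up): SIA weights each hypothesis by how many
# observers it contains, so a finite but astronomically large world can swamp
# the prior almost as dramatically as an infinite one would.
N_modest = 1e10    # hypothesis A: a world with ~10 billion observers
N_giant  = 1e30    # hypothesis B: a finite but enormous world
prior_modest, prior_giant = 1 - 1e-6, 1e-6   # we start out almost certain of A

# SIA: P(I exist | hypothesis) is proportional to the number of observers in it.
w_modest = prior_modest * N_modest
w_giant  = prior_giant * N_giant

post_giant = w_giant / (w_giant + w_modest)
print(f"P(giant world | I exist) = {post_giant}")   # ~1 - 1e-14, i.e. essentially certain
```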