What a belief implies about what someone does depends on many other things, like their other beliefs and their options in the world. If, e.g., there are more opportunities to work on x-risk reduction than s-risk reduction, then it might be true that optimistic longtermists are less likely than pessimistic longtermists to form families (because they're more focused on work).
Having clarified that, do you really not find optimistic longtermism more evolutionarily adaptive than pessimistic longtermism?
As my answer made clear, the point I really want to emphasise is that this feels like an absurd exercise — there's no reason to believe that longtermist beliefs are heritable or selected for in our ancestral environment.
Yes, I do think this: "Not optimistic longtermism is at least just as evolutionarily debunkable as optimistic longtermism."
That's what I think our prior should be, and generally we shouldn't accept evolutionary debunking arguments for moral beliefs unless there are actual findings in evolutionary psychology suggesting that evolutionary pressure is the best explanation for them. It's indeed trivially easy to come up with some story for why any given belief is subject to evolutionary debunking, but these stories are so easy to come up with that, unless further substantiated, they provide essentially no meaningful evidence that the debunking is warranted.
E.g., I think the claim that pessimistic longtermism is evolutionarily selected for, because it would cause people to care more about their own families and kin than about far-off generations, is at least as plausible as your claim about optimistic longtermism. Or we might think agnostic longtermism is selected for, because we're cognitive misers, and thinking about the long-term future is too cognitively costly and not decision-relevant enough to be selected for. In fact, I think none of these claims is very plausible at all, because I don't think evolution is likely to be selecting for beliefs at this level of detail.
My argument about neutrality toward creating lives also counts against your claim: if there really were evolutionary pressure toward pro-natalist, optimistic longtermism, I would predict that intuitions of neutrality about creating future lives would not be so prevalent. But they are prevalent, which is another reason I don't find your claim plausible.
I mean... it's quite easy. There were people who, for some reason, were optimistic regarding the long-term future of humanity and they had more children than others (and maybe a stronger survival drive), all else equal. The claim that there exists such a selection effect seems trivially true.
I agree that you can construct hypothetical scenarios in which a given trait is selected for (though even then you have to postulate that it's heritable, which you didn't specify here). But your claim is not trivially true, and it does not establish that optimism regarding the long-term future of humanity has in fact been selected for in human evolutionary history. Other beliefs are more plausibly susceptible to evolutionary debunking, such as the idea that we have special obligations to our family members, since these are likely connected to kinship ties that have been widely studied across many species.
So I think a key crux between us is on the question: what does it take for a belief to be vulnerable to evolutionary debunking? My view is that it should actually be established in the field of evolutionary psychology that the belief is best explained as the direct[1] product of our evolutionary history. (Even then, as I think you agree, that doesn't falsify the belief, but it gives us reason to be suspicious of it.)
I asked ChatGPT how evolutionary psychologists typically try to show that a psychological trait was selected for. Here was its answer:
Evolutionary psychologists aim to show that a psychological trait is a product of selection by demonstrating that it likely solved adaptive problems in our ancestral environment. They look for traits that are universal across cultures, appear reliably during development, and show efficiency and specificity in addressing evolutionary challenges. Evidence from comparative studies with other species, heritability data, and cost-benefit analyses related to reproductive success also support such claims. Altogether, these approaches help build a case that the trait was shaped by natural or sexual selection rather than by learning or cultural influence alone.
I think you might say that you don't have to show that a belief is best explained by evolutionary pressure, just that there's some selection for it. In fact, I don't think you've shown even that (because, e.g., you'd have to show that the belief is heritable). But I also think that's not nearly enough, because "some evolutionary pressure toward belief X" is a claim we can likely make about any belief at all. (E.g., pessimism about the future can be very valuable, because it can make you aware of potential dangers that optimists would miss.)
Also, in response to this:
On person-affecting beliefs: The vast majority of people holding these are not longtermists to begin with. What we should be wondering is "to the extent that we have intuitions about what is best for the long-term (and care about this), where do these intuitions come from?". Non-longtermist beliefs are irrelevant, here. Hopefully, this also addresses your last bullet point.
I'm not sure why you think non-longtermist beliefs are irrelevant. Your claim is that optimistic longtermist beliefs are vulnerable to evolutionary debunking. But that would only be true if they were plausibly a product of evolutionary pressures, and those pressures operate on human populations as a whole; otherwise the beliefs aren't a product of our evolutionary history. So evidence about what humans generally are prone to believe seems highly relevant. The fact that many people, perhaps most, are pre-theoretically disposed toward views that push away from optimistic longtermism and pro-natalism casts further doubt on the claim that the intuitions pushing people toward optimistic longtermism and pro-natalism have been selected for.
I used "direct" here because, in some sense, all of our beliefs are the product of our evolutionary history.
In short, I don't think it's plausible that optimistic longtermism is vulnerable to evolutionary debunking, for the reasons given above.
I think if you were to turn this into an academic paper, I'd be interested to see if you could defend the claim that pro-natalist beliefs have been selected for in human evolutionary history.
Hi Rebecca,
Thanks for the question!
We did consider this as an option, and it's possible there are some versions of this we could do in the future, but it's not part of our next steps at the moment. The basic reason is that this new strategic approach is a continuation of the direction 80k has been going in for many years, so there isn't a segment of 80k with a separate focus to spin off into a new entity.
Thanks for the additional context! I think I understand your views better now and I appreciate your feedback.
Just speaking for myself here, I think I can identify some key cruxes between us. I'll take them one by one:
I think the impact of most actions here is basically chaotic.
I disagree with this. I think it's better if people have a better understanding of the key issues raised by the emergence of AGI. We don't have all the answers, but we've thought about these issues a lot and have ideas about what kinds of problems are most pressing to address and what some potential solutions are. Communicating these ideas more broadly and to people who may be able to help is just better in expectation than failing to do so (all else equal), even though, as with any problem, you can't be sure you're making things better, and there's some chance you make things worse.
I also think "make the world better in meaningful ways in our usual cause areas before AGI is here" probably helps in many worlds, due to things like AI maybe trying to copy our values, or AI could be controlled by the UN or whatever and it's good to get as much moral progress in there as possible beforehand, or just updates on the amount of morally aligned training data being used.
I don't think I agree with this. I think the value of work in areas like global health or helping animals lies largely in the direct impact of these actions, rather than in any effect they have on how the arrival of AGI goes. Even if, in an overwhelming success, we cut malaria deaths in half next year, I don't think that would meaningfully increase the likelihood that AGI is aligned or that the training data reflects a better morality. Directly working to create beneficial AI is much more likely to have those effects. Of course, the case for saving lives from malaria is still strong, because people's lives matter and are worth saving.
I think that more serious consideration of the Existential Risk Persuasion Tournament leads one to conclude that wildly transformational outcomes just aren't that likely in the short/medium term.
Recall that the XPT is from 2022, so a lot has happened since then. Even so, here's what Ezra Karger noted about the experts' and forecasters' expectations when we interviewed him on the 80k podcast:
One of the pieces of this work that I found most interesting is that even though domain experts and superforecasters disagreed strongly, I would argue, about AI-caused risks, they both believed that AI progress would continue very quickly.
So we did ask superforecasters and domain experts when we would have an advanced AI system, according to a definition that relied on a long list of capabilities. And the domain experts gave a year of 2046, and the superforecasters gave a year of 2060.
My understanding is that XPT was using the definition of AGI used in the Metaculus question cited in Niel's original post (though see his comment for some caveats about the definition). In March 2022, that forecast was around 2056-2058; it's now at 2030. The Metaculus question also has over 1500 forecasters, whereas XPT had around 30 superforecasters, I believe. So overall I wouldn't consider XPT to be strong evidence against short timelines.
I think there is some general "outside view" reason to be sceptical of short timelines. But I think there are good reasons to believe that kind of perspective would miss big changes like this, and there is enough reason to think short timelines are plausible to warrant taking action on that basis.
Again, thanks for engaging with all this!
One reason we use phrases like “making AGI go well”, rather than some alternatives, is that 80k is concerned about risks like lock-in of really harmful values, in addition to human disempowerment and extinction risk — so I sympathise with your worries here.
Figuring out how to avoid these kinds of risks is really important, and recognising that they might arise soon is definitely within the scope of our new strategy. We have written about ways the future can look very bad even if humans have control of AI, for example here, here, and here.
I think it’s plausible to worry that not enough is being done about these kinds of concerns — that depends a lot on how plausible they are and how tractable the solutions are, which I don’t have very settled views on.
You might also think that there’s nothing tractable to do about these risks, so it’s better to focus on interventions that pay off in the short-term. But my view at least is that it is worth putting more effort into figuring out what the solutions here might be.
Hey Rocky —
Thanks for sharing these concerns. These are really hard decisions we face, and I think you’re pointing to some really tricky trade-offs.
We’ve definitely grappled with the question of whether it would make sense to spin up a separate website that focused more on AI. It’s possible that could still be a direction we take at some point.
But the key decision we’re facing is what to do with our existing resources — our staff time, the website we’ve built up, our other programmes and connections. And we’ve been struggling with the fact that the website doesn’t really fully reflect the urgency we believe is warranted around rapidly advancing AI. Whether we launch another site or not, we want to honestly communicate about how we’re thinking about the top problem in the world and how it will affect people’s careers. To do that, we need to make a lot of updates in the direction this post is discussing.
That said, I’ve always really valued the fact that 80k can be useful to people who don’t agree with all our views. If you’re sceptical about AI having a big impact in the next few decades, our content on pandemics, nuclear weapons, factory farming — or our general career advice — can still be really useful. I think that will remain true even with our strategy shift.
I also think this is a really important point:
If transformative AI is just five years away, then we need people who have spent their careers reducing nuclear risks to be doing their most effective work right now—even if they’re not fully bought into AGI timelines. We need biosecurity experts building robust systems to mitigate accidental or deliberate pandemics—whether or not they view that work as directly linked to AI.
I think we’re mostly in agreement here — work on nuclear risks and biorisks remains really important, and last year we made efforts to make sure our bio and nuclear content was more up to date. We recently made an update about mirror bio risks, because they seem especially pressing.
As the post above says: “When deciding what to work on, we’re asking ourselves ‘How much does this work help make AI go better?’, rather than ‘How AI-related is it?’” So to the extent that other work has a key role to play in the risks that surround a world with rapidly advancing AI, it’s clearly in scope of the new strategy.
But I think it probably is helpful for people doing work in areas like nuclear safety and bio to recognise the way short AI timelines could affect their work. So if 80k can communicate that to our audience more clearly, and help people figure out what that means they should do for their careers, it could be really valuable.
And if we are truly on the brink of catastrophe, we still need people focused on minimizing human and nonhuman suffering in the time we have left.
I do think we should be absolutely clear that we agree with this — it’s incredibly valuable that work to minimise existing suffering continues. I support that happening and am incredibly thankful to those who do it. This strategy doesn’t change that at all. It just means 80k thinks our next marginal efforts are best focused on the risks arising from AI.
On the broader issue of what this means for the rest of the EA ecosystem, I think the risks you describe are real and are important to weigh. One reason we wanted to communicate this strategy publicly is so others could assess it for themselves and better coordinate on their paths forward. And as Conor said, we really wish we didn’t have to live in a world where these issues seem as urgent as they do.
But I think I see the costs of the shift as less stark. We still plan to have our career guide up as a central piece of content; it has been a valuable resource to many people, and it explains our views on AI while also guiding people through thinking about cause prioritisation for themselves. And as the post notes, we plan to publish and promote a version of the career guide with a professional publisher in the near future. At the same time, 80k has made it clear for many years that we prioritise risks from AI as the world’s most pressing problem. So I don’t think this is as clear a break from the past as it might seem to you.
At the highest level, though, we do face a decision about whether to focus more on AI and the plausibly short timelines to AGI, or to spend time on a wider range of problem areas and take less of a stance on timelines. Focusing more carries the risk that we won’t reach our traditional audience as well, which might even reduce our impact on AI; but declining to focus more carries the risk of missing out on other audiences we previously haven’t reached, failing to faithfully communicate our views about the world, and missing big opportunities to do positive work on what we think is the most pressing problem we face.
As the post notes, while we are committed to making the strategic shift, we’re open to changing our minds if we get important updates about our work. We’ll assess how we’re performing on the new strategy, whether there are any unexpected downsides, and whether developments in the world are matching our expectations. And we definitely continue to be open to feedback from you and others who have a different perspective on the effects 80k is having in the world, and we welcome input about what we can do better.
Thanks for sharing this fun paper!
I think I disagree with several key parts of the argument.
I think this makes a pretty important error in reasoning. Grant that philosophers in general are among the most skeptical people on the planet. Then you select a 6% segment of them. The generalization that these are still among the most skeptical people on the planet is erroneous. This 6% could have (e.g.) average levels of skepticism, with the rest of the group bringing up the group's average level of skepticism.
This is among the passages commonly interpreted as Jesus discussing hell. However, note that it doesn't actually show Jesus discussing hell as we've been taught to think of it. First, he's clearly speaking in metaphor — he's not talking about literal sheep and goats. Second, it's not clear what the "eternal punishment" he's referring to is. Some people interpret this as more of a "final" punishment, e.g. death, rather than eternal suffering. And indeed, if Jesus were referring to hell as traditionally conceived, I'd expect him to be clearer about this.
Many scholars on the topic have written extensively about this. My understanding is that there's little solid basis for getting the traditionally understood concept of hell out of the core ancient sources. And if it were true, and Jesus really were communicating with divine knowledge about something as important as hell, I'd expect there to be no ambiguity about it. (Since the Quran comes after and is influenced by Christian sources, I don't think we should read it as a separate source of evidence.)
I think this is a very strong reason to doubt the plausibility of hell. And there are many other such reasons:
The weight of these considerations drives the plausibility of hell extremely low, much lower in my view than the probability of x-risk from nuclear weapons, pandemics, AI, or even natural sources like asteroids (which, unlike hell, we know exist and have previously impacted the lives of species).
I think this does make the odds of a religious catastrophe pascalian, and worth rejecting on that basis.
Even if the risk weren't pascalian, I think there's another problem, which concerns this part of the argument:
The problem here is that if you advocate for the wrong religion, you might increase the chance that people go to hell, because some religions hold that believing in another religion will send you to hell. So actions taken on this basis have to grapple with the possibilities of both infinite bliss and infinite suffering, and we often might have just as much reason to think we're increasing one as decreasing the other. And since there's no reliable method for coming to consensus on these kinds of religious questions, we should think a problem like "reduce the probability people will go to hell" — even if the risk level weren't pascalian — is entirely intractable.