Evolutionary psychology professor, author of 'The Mating Mind', 'Spent', 'Mate', & 'Virtue Signaling'. B.A. Columbia; Ph.D. Stanford. My research has focused on human cognition, machine learning, mate choice, intelligence, genetics, emotions, mental health, and moral virtues. Interested in longtermism, X risk, longevity, pronatalism, population ethics, AGI, China, crypto.
Looking to collaborate on (1) empirical psychology research related to EA issues, especially attitudes towards longtermism, X risks, GCRs, and sentience, (2) insights for AI alignment & AI safety from evolutionary psychology, evolutionary game theory, and evolutionary reinforcement learning, and (3) mate choice, relationships, families, pronatalism, and population ethics as cause areas.
I have 30+ years of experience in behavioral sciences research, and have mentored 10+ PhD students and dozens of undergrad research assistants. I'm also experienced with popular science outreach, book publishing, public speaking, social media, market research, and consulting.
Well, the main asymmetry here is that the Left-leaning 'mainstream' press doesn't understand or report the Right's concerns about Leftist authoritarianism, but it generates and amplifies the Left's concerns about 'far Right authoritarianism'.
So, any EAs who follow 'mainstream' journalism (e.g. CNN, MSNBC, NY Times, WaPo) will tend to repeat their talking points, their analyses, and their biases.
Most reasonable observers, IMHO, understand that the US 'mainstream' press has become very left-leaning and highly biased over the last few decades, especially since 2015, and that it functions largely as a propaganda wing of the Democratic Party. (Consider, for example, the 'mainstream' media's systematic denial of Biden's dementia over the last several years, until the symptoms became too painfully obvious for everyone to ignore. Such journalists would never have run cover for Trump if he'd been developing dementia; they would have been demanding his resignation years ago.)
In any case, the partisan polarization on such issues is, perhaps, precisely why EAs should be very careful not to wade into these debates unless they have a very good reason for doing so, a lot of political knowledge and wisdom, an ability to understand both sides, and a recognition that these political differences are probably neither neglected nor tractable.
If we really want to make a difference in politics, I think we should be nudging the relevant decision-makers, policy wonks, staffers, and pundits into developing a better understanding of the global catastrophic risks that we face from nuclear war, bioweapons, and AI.
Yelnats - thanks for this long, well-researched, and thoughtful piece.
I agree that political polarization, destabilization, and potential civil war in the US (and elsewhere) are worthy of more serious consideration within EA, since they amplify many potential catastrophic risks and extinction risks.
However, I would urge you to try much harder to develop a less partisan analysis of these issues. This essay comes across (to me, as a libertarian centrist with some traditionalist tendencies) as a very elaborate rationalization for 'Stop Trump at all costs!', based on the commonly repeated claim that 'Trump is an existential threat to democracy'. A lot of the rhetoric and examples basically repeat highly partisan Democratic Party talking points, which have been promoted ad nauseam by CNN, MSNBC, Washington Post, NY Times, etc., and many of which have been debunked upon further investigation.
EAs tend to lean Left. We know this from EA surveys. Rich EAs (such as SBF) have donated very large sums of money to Democratic candidates. That makes it very important for us to become more aware of our own political biases when we address issues such as polarization.
In my opinion, both current US political parties are showing some highly authoritarian tendencies. You mentioned some authoritarian tendencies from the Republican side, but you seem to have overlooked many authoritarian trends on the Democratic/Leftist side.
Many on the Left think of 'authoritarianism' as a purely Right-wing phenomenon, following Frankfurt School Leftists such as Adorno et al., who published 'The Authoritarian Personality' (1950). However, more recent work in political psychology shows that there are plenty of Leftist authoritarians. History also offers plenty of examples of authoritarian socialists, such as Lenin, Stalin, Mao, Pol Pot, and Castro, who were responsible for tens of millions of deaths.
Moreover, the standard 2-D graph of political orientation, which includes not only a Left-vs-Right dimension but also an Authoritarian-vs-Libertarian dimension, reminds us that the Right does not have a monopoly on authoritarianism.
So, I would urge you to continue this work, but to re-examine your own political biases, and perhaps to collaborate with researchers who hold more diverse political views, such as Centrists, Libertarians, Conservatives, Neo-Reactionaries, Nationalists, Populists, etc.
I expect this comment to be downvoted into oblivion by EAs who reflexively think 'Trump bad, Progressives good'.
But I beseech you all: consider the possibility that the Democrats are just as much of a threat to American democracy and liberty as the Republicans have ever been.
Raemon -- I strongly agree, and I don't think EAs should be overthinking this as much as we seem to be in the comments here. Some ethical issues are, actually, fairly simple.
OpenAI, DeepMind, Meta, and even Anthropic are pushing recklessly ahead with AGI capabilities development. We all understand the extinction risks and global catastrophic risks that this imposes on humanity. These companies are not aligned with EA values of preserving human life, civilization, and sentient well-being.
Therefore, instead of 80k Hours advertising jobs at such companies, which effectively gives them an EA seal of moral approval, we should be morally stigmatizing them, denouncing them, and discouraging people from working for them.
If we adopt a 'sophisticated', 'balanced', mealy-mouthed approach where we kinda sorta approve of them recruiting EAs, but only in particular kinds of safety roles, in hope of influencing their management from the inside, we are likely to (1) fail to influence management, and (2) undermine our ability to use a moral stigmatization strategy to slow or pause AGI development.
In my opinion, if EAs banded together to advocate an immediate pause on any further AGI development, and adopted a public-relations strategy of morally stigmatizing any work in the AI industry, we would be much more likely to reduce AI extinction risk than if we spend our time playing 4-D chess, trying to figure out how to influence AI companies from the inside.
Some industries are simply evil and reckless, and it's good for us to say so.
Let's be honest with ourselves. The strategy we've followed for a decade, of trying to influence AI companies from the inside to slow capabilities development and promote AI alignment work, has failed. The strategy of promoting government regulation to slow reckless AI development is showing some signs of success, but is probably too slow to actually inhibit AI capabilities development. This leaves the informal public-relations strategy of stigmatizing the industry, to reduce its funding, restrict its access to talent, and make it morally embarrassing rather than cool to work in AI.
But EAs can only pursue the moral stigmatization strategy to slow AGI development if we are crystal clear that working on AGI development is a moral evil that we cannot endorse.
Michael -- I agree with your assessment here, both that the CEARCH report is very helpful and informative, and that their estimated likelihood of nuclear war (only 10% per century) is much lower than seems reasonable, and much lower than other expert estimates that I've seen.
Just as a lot can happen in a century of AI development, a lot can happen over the next century that could increase the likelihood of nuclear war.
sammyboiz - I strongly agree. Thanks for writing this.
There seems to be no realistic prospect of solving AGI alignment or superalignment before the AI companies develop AGI or ASI. And they don't care. There are no realistic circumstances under which OpenAI, or DeepMind, or Meta, would say 'Oh no, capabilities research is far outpacing alignment; we need to hire 10x more alignment researchers, put all the capabilities researchers on paid leave, and pause AGI research until we fix this'. It will not happen.
Alternative strategies include formal governance work. But they also include grassroots activism, and informal moral stigmatization of AI research. I think of PauseAI as doing more of the last two, rather than just focusing on 'governance' per se.
As I've often argued, if EAs seriously think that AGI is an extinction risk, and that the AI companies seeking AGI cannot be trusted to slow down or pause until they solve the alignment and control problems, then our only realistic option is to use social, cultural, moral, financial, and government pressure to stop them. Now.
Alex - thanks for the helpful summary of this exciting new book.
It looks like a useful required textbook for my 'Psychology of Effective Altruism' course (syllabus here), next time I teach it!