Evolutionary psychology professor, author of 'The Mating Mind', 'Spent', 'Mate', & 'Virtue Signaling'. B.A. Columbia; Ph.D. Stanford. My research has focused on human cognition, machine learning, mate choice, intelligence, genetics, emotions, mental health, and moral virtues. Interested in long-termism, X risk, longevity, pronatalism, population ethics, AGI, China, crypto.
Looking to collaborate on (1) empirical psychology research related to EA issues, especially attitudes towards long-termism, X risks and GCRs, and sentience, (2) insights for AI alignment & AI safety from evolutionary psychology, evolutionary game theory, and evolutionary reinforcement learning, and (3) mate choice, relationships, families, pronatalism, and population ethics as cause areas.
I have 30+ years of experience in behavioral sciences research, and have mentored 10+ PhD students and dozens of undergrad research assistants. I'm also experienced with popular science outreach, book publishing, public speaking, social media, market research, and consulting.
JackM - these alleged 'tremendous' benefits are all hypothetical and speculative.
Whereas the likely X risks from ASI have been examined in detail by thousands of serious people, and polls show that most people, both inside and outside the AI industry, are deeply concerned about them.
This is why I think it's deeply unethical for 80k Hours to post jobs to work on ASI within AI companies.
Conor -- yes, I understand that you're making judgment calls about what's likely to be net harmful versus helpful.
But your judgment calls seem to assume -- implicitly or explicitly -- that ASI alignment and control are possible, eventually, at least in principle.
Why do you assume that it's possible, at all, to achieve reliable long-term alignment of ASI agents? I see no serious reason to think that it is possible. And I've never seen a single serious thinker make a principled argument that long-term ASI alignment with human values is, in fact, possible.
And if ASI alignment isn't possible, then all AI 'safety research' at AI companies aiming to build ASI is, in fact, just safety-washing. And it all increases X risk by giving a false sense of security, and encouraging capabilities development.
So, IMHO, 80k Hours should re-assess what it's doing by posting these ads for jobs inside AI companies -- which are arguably the most dangerous organizations in human history.
This is a good video; thanks for sharing.
But I have to ask: why is 80k Hours still including job listings for AGI development companies that are imposing extinction risks on humanity?
I see dozens of jobs on the 80k Hours job board for positions at OpenAI, Anthropic, xAI, etc. -- and not just in AI safety roles, but in capabilities development, lobbying, propaganda, etc. And even the 'AI safety jobs' seem to be there for safety-washing/PR purposes, with no real influence on slowing down AI capabilities development.
If 80k Hours wants to take a principled stand against reckless AGI development, then please don't advertise jobs that entice EAs with $300,000+ salaries to push AGI development forward.
Good post. Thank you.
But, I fear that you're overlooking a couple of crucial issues:
First, ageism. Lots of young people are simply biased against older people -- assuming that we're closed-minded, incapable of learning, ornery, hard to collaborate with, etc. I've encountered this often in EA.
Second, political bias. In my experience, 'signaling value-alignment' in EA organizations and AI safety groups isn't just a matter of showing familiarity with EA and AI concepts, people, strategies, etc. It's also a matter of signaling left-leaning political values, atheism, globalism, etc -- values which have no intrinsic or logical connection to EA or AI safety, but which are simply the water in which younger Millennials and Gen Z swim.
I trust my kids and grandkids to solve their own problems in the future.
I don't trust our generation to make sure our kids and grandkids survive.
Avoiding extinction is the urgent priority; all else can wait. (And, life is already getting better at a rapid rate for the vast majority of the world's people. We don't face any urgent or likely extinction risks other than technologies of our own making.)
I generally support the idea of 80k Hours putting more emphasis on AI risk as a central issue facing our species.
However, I think it's catastrophically naive to frame the issue as 'helping the transition to AGI go well'. This presupposes that there is a plausible path for (1) AGI alignment to be solved, for (2) global AGI safety treaties to be achieved and enforced in time, and for (3) our kids to survive and flourish in a post-AGI world.
I've seen no principled arguments to believe that any of these three things can be achieved. At all. And certainly not in the time frame we seem to have available.
So the key question is -- if there is actually NO credible path for 'helping the transition to AGI go well', should 80k Hours be pursuing a strategy that amounts to a whole lot of cope and rearranging deck chairs on the Titanic, and that gives a false sense of comfort and security to AI devs, EA people, politicians, and the general public?
I think 80k Hours has done a lot of harm in the past by encouraging smart young EAs to join AI companies to try to improve their safety cultures from within. As far as I've seen, that strategy has been a huge failure for AI safety, and a huge win for immoral AI companies following a deeply cynical strategy of safety-washing their capabilities development. OpenAI, DeepMind, Anthropic, and xAI have all made noise about AI risks... and they've all hired EAs... and they've carried on, at top speed, racing towards AGI.
Perhaps there was some hope, 10 years ago, that installing a cadre of X-risk-savvy EAs in the heart of the AI industry might overcome its reckless incentives to pursue capabilities over safety. I see no such hope any more. Capabilities work has accelerated far faster than safety work.
If 80k Hours is going to take AI risks seriously, its leadership team needs to face the possibility that there is simply no safe way to develop AGI -- at least not for the next few centuries, until we have a much clearer understanding of how to solve AI alignment, including the very thorny game-theoretic complications of coordinating between billions of people and potentially trillions of AGIs.
And, if there is no safe way to develop AGI, let's stop pretending that there is one. Pretending is dangerous. Pretending gives misleading signals to young researchers, and regulators, and ordinary citizens.
If the only plausible way to survive the push towards AGI is to entirely shut down the push towards AGI, that's what 80k Hours needs to advocate. Not nudging more young talent into serving as ethical window-dressing and safety-washers for OpenAI and Anthropic.
Strongly endorsing Greg Colbourn's reply here.
When ordinary folks think seriously about AGI risks, they don't need any consequentialism, or utilitarianism, or EA thinking, or the Sequences, or long-termism, or anything fancy like that.
They simply come to understand that AGI could kill all of their kids, and everyone they ever loved, and could ruin everything they and their ancestors ever tried to achieve.
Alex - thanks for the helpful summary of this exciting new book.
It looks like a useful required textbook for my 'Psychology of Effective Altruism' course (syllabus here), next time I teach it!
Well, the main asymmetry here is that the Left-leaning 'mainstream' press doesn't understand or report the Right's concerns about Leftist authoritarianism, but it generates and amplifies the Left's concerns about 'far Right authoritarianism'.
So, any EAs who follow 'mainstream' journalism (e.g. CNN, MSNBC, NY Times, WaPo) will tend to repeat their talking points, their analyses, and their biases.
Most reasonable observers, IMHO, understand that the US 'mainstream' press has become very left-leaning and highly biased over the last few decades, especially since 2015, and that it now functions largely as a propaganda wing of the Democratic Party. (Consider, for example, the 'mainstream' media's systematic denial of Biden's dementia for the last several years, until the symptoms became too painfully obvious for anyone to ignore. Such journalists would never have run cover for Trump if he'd been developing dementia; they would have been demanding his resignation years ago.)
In any case, the partisan polarization on such issues is, perhaps, precisely why EAs should be very careful not to wade into these debates unless they have a very good reason for doing so, a lot of political knowledge and wisdom, an ability to understand both sides, and a recognition that these political differences are probably neither neglected nor tractable.
If we really want to make a difference in politics, I think we should be nudging the relevant decision-makers, policy wonks, staffers, and pundits into developing a better understanding of the global catastrophic risks that we face from nuclear war, bioweapons, and AI.
Jason -- your reply cuts to the heart of the matter.
Is it ethical to try to do good by taking a job within an evil and reckless industry? To 'steer it' in a better direction? To nudge it towards minimally-bad outcomes? To soften the extinction risk?
I think not. I think the AI industry is evil and reckless, and that EAs would do best to denounce it clearly and to warn talented young people not to work inside it.