Bio

Evolutionary psychology professor, author of 'The Mating Mind', 'Spent', 'Mate', & 'Virtue Signaling'. B.A. Columbia; Ph.D. Stanford. My research has focused on human cognition, machine learning, mate choice, intelligence, genetics, emotions, mental health, and moral virtues. Interested in long-termism, X risk, longevity, pronatalism, population ethics, AGI, China, crypto.

How others can help me

Looking to collaborate on (1) empirical psychology research related to EA issues, especially attitudes towards long-termism, X risks and GCRs, and sentience, (2) insights for AI alignment & AI safety from evolutionary psychology, evolutionary game theory, and evolutionary reinforcement learning, (3) mate choice, relationships, families, pronatalism, and population ethics as cause areas.

How I can help others

I have 30+ years of experience in behavioral sciences research and have mentored 10+ PhD students and dozens of undergrad research assistants. I'm also experienced with popular science outreach, book publishing, public speaking, social media, market research, and consulting.

Comments

My new interview (48 mins) on AI risks for Bannon's War Room: https://rumble.com/v6z707g-full-battleground-91925.html

This was my attempt to try out a few new arguments, metaphors, and talking points to raise awareness about AI risks among MAGA conservatives. I'd appreciate any feedback, especially from EAs who lean to the Right politically, about which points were most or least compelling.

PS the full video of my 15-minute talk was just posted today on the NatCon YouTube channel; here's the link

David -- I considered myself an atheist for several decades (partly in alignment with my work in evolutionary psychology), and would identify now as an agnostic (insofar as the Simulation Hypothesis has some slight chance of being true, and insofar as 'Simulation-Coders' aren't functionally any different from 'Gods', from our point of view).

And I'm not opposed to various kinds of reproductive tech, regenerative medicine research, polygenic screening, etc.

However, IMHO, too many atheists in the EA/Rationalist/AI Safety subculture have been too hostile or dismissive of religion to be effective in sharing the AI risk message with religious people (as I alluded to in this post). 

And I think way too much overlap has developed between transhumanism and the e/acc cult that dismisses AI risk entirely, and/or that embraces human extinction and replacement by machine intelligences. Insofar as 'transhumanism' has morphed into contempt for humanity-as-it-is, and into a yearning for hypothetical-posthumanity-as-it-could-be, I think it's very dangerous.

Modest, gradual, genetic selection or modification of humans to make them a little healthier or smarter, generation by generation? That's fine with me. 

Radical replacement of humanity by ASIs in order to colonize the galaxy and the lightcone faster? Not fine with me.

Arepo - thanks for your comment.

To be strictly accurate, perhaps I should have said 'the more you know about AI risks and AI safety, the higher your p(doom)'. I do think that's an empirically defensible claim. Especially insofar as most of the billions of people who know nothing about AI risks have a p(doom) of zero.

And I might have added that thousands of AI devs employed by AI companies to build AGI/ASI have very strong incentives not to learn too much about AI risks and AI safety of the sort that EAs have talked about for years, because such knowledge would cause massive cognitive dissonance, ethical self-doubt, and regret (as in the case of Geoffrey Hinton), and/or would handicap their careers and threaten their salaries and equity stakes.

Remmelt - thanks for posting this. 

Senator Josh Hawley is a big deal, with a lot of influence. I think building alliances with people like him could help slow down reckless AGI development. He may not be as tuned into AI X-risk as your typical EA is, but he is, at least, resisting the power of the pro-AI lobbyists.

Thanks for sharing this. 

IMHO, if EAs really want effective AI regulation & treaties, and a reduction in ASI extinction risk, we need to engage more with conservatives, including those currently in power in Washington. And we need to do so using the language and values that appeal to conservatives.

Joel -- have you actually read the Bruce Gilley book? 

If you haven't, maybe give it a try before dismissing it as something that's 'extremely useful to avoid associating ourselves with'.

To me, EA involves a moral obligation to seek the truth about contentious political topics, especially those that concern the origins and functioning of successful institutions -- which is what the whole colonialism debate is centrally about. And not ignoring these topics just to stay inside the Overton window.

I think EA should be careful not to take 'colonialism studies' too seriously -- i.e. the view that colonialism was almost entirely bad, and that decolonialism was almost entirely good -- especially in sub-Saharan Africa. That's the view that seems to be spilling over here into the assumption that 'colonialism was bad, neocolonialism is bad; so if EA is neocolonialist, then EA is bad'.

For a counter-argument against this 'colonialism studies' dogma, see the recent book 'The Case for Colonialism' (2023) by Bruce Gilley. IMHO, he makes a pretty compelling case that colonialism was, in most cases, one of the best things that ever happened to indigenous cultures (e.g. in spreading the rule of law, developing infrastructure, improving education, promoting economic development, decreasing tribal warfare and rape, promoting women's rights, etc.), and decolonialism was one of the worst things (e.g. in backsliding into counter-productive Marxist revolutionary zeal and/or corrupt kleptocracies).

If Gilley's general point is correct, then EAs should not feel ashamed about some of our global health and global poverty-reduction projects sounding a bit 'neocolonialist'.

Jason -- your reply cuts to the heart of the matter.

Is it ethical to try to do good by taking a job within an evil and reckless industry? To 'steer it' in a better direction? To nudge it towards minimally-bad outcomes? To soften the extinction risk?

I think not. I think the AI industry is evil and reckless, and EAs would do best to denounce it clearly by warning talented young people not to work inside it.

JackM - these alleged 'tremendous' benefits are all hypothetical and speculative.

Whereas the likely X risks from ASI have been examined in detail by thousands of serious people, and polls show that most people, both inside and outside the AI industry, are deeply concerned by them.

This is why I think it's deeply unethical for 80,000 Hours to post jobs for work on ASI within AI companies.
