I will be online to answer questions on Friday 17 December, from morning through afternoon US Eastern time. Ask me anything!
About me:
- I am co-founder and Executive Director of the Global Catastrophic Risk Institute.
- I am also an editor at the journals Science and Engineering Ethics and AI & Society, and an honorary research affiliate at CSER.
- I've been involved in global catastrophic risk since around 2008 and co-founded GCRI in 2011, so I have seen the field grow and evolve over the years.
- My work focuses on bridging the divide between theoretical ideals about global catastrophic risk, the long-term future, outer space, etc., and the practical realities of how to make a positive difference on these issues. This includes research to develop and evaluate viable options for reducing global catastrophic risk, outreach to important actors (policymakers, industry, etc.), and activities to support the overall field of global catastrophic risk.
- The topics I cover are a bit eclectic. I have worked across a range of global catastrophic risks, especially artificial intelligence, asteroids, climate change, and nuclear weapons. I also work with a variety of research disciplines and non-academic professions. A lot of my work involves piecing together these various perspectives, communities, etc. This includes working at the interface between EA communities and other communities relevant to global catastrophic risk.
- I do a lot of advising for people interested in getting more involved in global catastrophic risk. Most of this is through the GCRI Advising and Collaboration Program. The program is not currently open; it will open again in 2022.
Some other items of note:
- Common points of advice for students and early-career professionals interested in global catastrophic risk, a write-up of running themes from the advising I do (originally posted here).
- Summary of 2021-2022 GCRI Accomplishments, Plans, and Fundraising, our recent annual report on the current state of affairs at GCRI.
- Subscribe to the GCRI newsletter or follow the GCRI website to stay informed about our work, next year’s Advising and Collaboration Program, etc.
- My personal website is here.
I’m happy to field a wide range of questions, such as:
- Advice on how to get involved in global catastrophic risk, pursue a career in it, etc. Also specific questions on decisions you face: what subjects to study, what jobs to take, etc.
- Topics I wish more people were working on. There are many, so please provide some specifics of the sorts of topics you’re looking at. Otherwise I will probably say something about nanotechnology.
- The details of the global catastrophic risks and the opportunities to address them, and why I generally favor an integrated, cross-risk approach.
- What’s going on at GCRI: our ongoing activities, plans, funding, etc.
- The intersection of animal welfare and global catastrophic risk/long-term future, and why GCRI is working on nonhumans and AI ethics (see recent publications 1, 2, 3, 4).
- The world of academic publishing, which I’ve gotten a behind-the-scenes view of as a journal editor.
One type of question I will not answer is advice on where to donate money. GCRI does take donations, and I think it is an excellent organization to donate to. We do a lot of great work on a small budget. However, I will not engage in judgments about which other organizations may be better or worse.
I glanced at the GCRI research you linked. I think AI is a big deal in expectation, but I'm prima facie skeptical about the value of "AI ethics." My baseline imagination is that we get capabilities first, then figure out what to do with AI. I'm substantially more optimistic about our ability to make good decisions after we have strong AI, and I think the moral importance of the time after we get strong AI dominates the time before (in expectation). Of course, GCRI isn't the only institution to do AI ethics work, so I might be missing something — what's the basic case for doing AI ethics now? (Feel free to refer me to something already written rather than writing a reply yourself; there may be good existing writeups.)
Thanks for the question. This is a good thing to think critically about. With respect to strong AI, the short answer is that it's important to develop these sorts of ideas in advance: if we wait until we already have the technology, it could be too late. There are some scenarios in which waiting is more viable, such as the idea of a long reflection, but these are only a portion of the total scenario space, and even then, the outcomes could depend on the initial setup. Additionally, ethics can matter for near-term/weak AI, including in ways that affect global catastrophic risk, such as in the context of environmental or military affairs.