I will be online to answer questions from morning through afternoon US Eastern time on Friday 17 December. Ask me anything!
About me:
- I am co-founder and Executive Director of the Global Catastrophic Risk Institute.
- I am also an editor at the journals Science and Engineering Ethics and AI & Society, and an honorary research affiliate at CSER.
- I've been involved in global catastrophic risk since around 2008 and co-founded GCRI in 2011, so I have seen the field grow and evolve over the years.
- My work focuses on bridging the divide between theoretical ideals about global catastrophic risk, the long-term future, outer space, etc. and the practical realities of how to make a positive difference on these issues. This includes research to develop and evaluate viable options for reducing global catastrophic risk, outreach to important actors (policymakers, industry, etc.), and activities to support the overall field of global catastrophic risk.
- The topics I cover are a bit eclectic. I have worked across a range of global catastrophic risks, especially artificial intelligence, asteroids, climate change, and nuclear weapons. I also work with a variety of research disciplines and non-academic professions. A lot of my work involves piecing together these various perspectives, communities, etc. This includes working at the interface between EA communities and other communities relevant to global catastrophic risk.
- I do a lot of advising for people interested in getting more involved in global catastrophic risk. Most of this is through the GCRI Advising and Collaboration Program. The program is not currently open; it will open again in 2022.
Some other items of note:
- Common points of advice for students and early-career professionals interested in global catastrophic risk, a write-up of running themes from the advising I do (originally posted here).
- Summary of 2021-2022 GCRI Accomplishments, Plans, and Fundraising, our recent annual report on the current state of affairs at GCRI.
- Subscribe to the GCRI newsletter or follow the GCRI website to stay informed about our work, next year’s Advising and Collaboration Program, etc.
- My personal website here.
I’m happy to field a wide range of questions, such as:
- Advice on how to get involved in global catastrophic risk, pursue a career in it, etc. Also specific questions on decisions you face: what subjects to study, what jobs to take, etc.
- Topics I wish more people were working on. There are many, so please provide some specifics of the sorts of topics you’re looking at. Otherwise I will probably say something about nanotechnology.
- The details of the global catastrophic risks and the opportunities to address them, and why I generally favor an integrated, cross-risk approach.
- What’s going on at GCRI: our ongoing activities, plans, funding, etc.
- The intersection of animal welfare and global catastrophic risk/long-term future, and why GCRI is working on nonhumans and AI ethics (see recent publications 1, 2, 3, 4).
- The world of academic publishing, which I’ve gotten a behind-the-scenes view of as a journal editor.
One type of question I will not answer is advice on where to donate money. GCRI does take donations, and I think GCRI is an excellent organization to donate to. We do a lot of great work on a small budget. However, I will not engage in judgments about which other organizations may be better or worse.
Thanks for the question. To summarize, I don't have a clear ranking of the risks, and I don't think it makes sense to rank them in terms of tractability. There are some tractable opportunities across a variety of risks, but how tractable they are can vary a lot depending on one's background and other factors.
First, tractability of a risk can vary significantly from person to person or from opportunity to opportunity. There was a separate question on which risks a few select individuals could have the largest impact on; my answer to that is relevant here.
Second, this is a good topic for noting the interconnections between risks. There is a sense in which AI, nuclear weapons, asteroid impacts, extreme climate change, and biosecurity are not distinct from each other. For example, nuclear power helps with climate change but can increase nuclear weapons risks, as in the international debate over the nuclear program of Iran. Nuclear explosives have been proposed to address asteroid risk, but this could also affect nuclear weapons risks; see the discussion in my paper Risk-risk tradeoff analysis of nuclear explosives for asteroid deflection. Pandemics can affect climate change; see e.g. Impact of COVID-19 on greenhouse gases emissions: A critical review. Improving international relations and improving the resilience of civilization help across a range of risks. All of this makes it even more difficult to compare the tractability of the various risks.
Third, I see tractability and neglectedness as being closely related. When a risk gets a lot of attention, a lot of the most tractable opportunities have already been taken or will be taken anyway.
With those caveats in mind, some answers:
Climate change is distinctive in the wide range of opportunities to reduce the risk. On one hand, this makes it difficult for dedicated effort to significantly reduce the overall risk, because so many separate efforts are needed. On the other hand, it does create some relatively easy opportunities to reduce the risk. For example, when you're walking out of a room, you might as well turn the lights off. The risk reduction is not massive, but the unit of work here is trivially small. More significant examples include living somewhere where you don't need to drive everywhere and eating more of a vegan diet; both are also worth doing for a variety of other reasons. That said, the most significant examples involve changes to policy, industry, etc. that are unfortunately generally difficult to implement.
Nuclear weapons opportunities vary a lot in terms of tractability. There is a sense in which reducing nuclear weapons risk is easy: just don't launch the nuclear weapons! There is a different sense in which reducing the risk is very difficult: at its core, the risk derives from adversarial relations between certain major countries, and reducing the risk may depend on improving these relations, which is difficult. In between, there are a lot of opportunities to influence nuclear weapons policy. These are mostly very high-skill activities that benefit from advanced training in both international security and global catastrophic risk. For people who are able to train in these fields, I think the opportunities are quite good. Otherwise, there still are opportunities, but they are perhaps more limited.
Asteroid risk is an interesting case because the extreme portion of the risk may actually be more tractable. Large asteroids cause more extreme collisions, and their size also makes them easier for astronomy research to detect. Indeed, a high percentage of the largest asteroids are believed to have already been detected, and none of those detected are on a collision course with Earth. Much of the residual global catastrophic risk may involve more complex scenarios, such as smaller asteroids triggering inadvertent nuclear war; see my papers on this scenario here and here. My impression is that there may be some compelling opportunities to reduce the risk from these scenarios.
For AI, at the moment I think there are some excellent opportunities related to near-term AI governance. The deep learning revolution has put AI high on the agenda for public policy. High-level initiatives to establish AI policy are underway right now, and there are good opportunities to influence these policies. Once these policies are set, they may remain largely intact for a long time, so it's important to take advantage of these opportunities while they still exist. Additionally, I think there is low-hanging fruit in other domains. One example is corporate governance, which has gotten relatively little attention, especially from people with an orientation toward long-term catastrophic risks; see my recent post on long-term AI corporate governance with Jonas Schuett of the Legal Priorities Project. Another example is AI ethics, which has gotten surprisingly little attention; see my work with Andrea Owe of GCRI here, here, here, and here. There may also be good opportunities in AI safety design techniques, though I am less qualified to comment on this.
I am less active on biosecurity at the moment, so I am less qualified to comment. Also, COVID-19 significantly changes the landscape of opportunities there. So I don't have a clear answer on this.