Nguyen, Chi (2020) My understanding of Paul Christiano’s iterated amplification AI safety research agenda, Effective Altruism Forum, August 15.
Wiblin, Robert & Keiran Harris (2018) Dr Paul Christiano on how OpenAI is developing real solutions to the "AI alignment problem", and his vision of how humanity will progressively hand over decision-making to AI systems, 80,000 Hours, October 2.
Rice, Issa (2018a) List of discussions between Paul Christiano and Wei Dai, Cause Prioritization Wiki.
Rice, Issa (2018b) List of discussions between Eliezer Yudkowsky and Paul Christiano, Cause Prioritization Wiki.
Ngo, Richard (2020) EA reading list: Paul Christiano, Effective Altruism Forum, August 4.
Paul Christiano is an American AI safety researcher. Christiano runs the Alignment Research Center and is a research associate at the Future of Humanity Institute, a board member at Ought, and a technical advisor for Open Philanthropy. Previously, he ran the language model alignment team at OpenAI.
Paul Christiano. Official website.
Paul Christiano. Effective Altruism Forum account.