Hi! I'm Cullen. I've been a Research Scientist in the Policy team at OpenAI since August. I am also a Research Affiliate at the Centre for the Governance of AI at the Future of Humanity Institute, where I interned in the summer of 2018.
I graduated from Harvard Law School cum laude in May 2019. There, I led the Harvard Law School and Harvard University Graduate Schools Effective Altruism groups. Prior to that, I was an undergraduate at the University of Michigan, where I majored in Philosophy and Ecology & Evolutionary Biology. I'm a member of Giving What We Can, One For The World, and Founders Pledge.
Some things I've been thinking a lot about include:
- How to make sure AGI benefits everyone
- Law and AI development
- Law's relevance for AI policy
- Whether law school makes sense for EAs
- Social justice in relation to effective altruism
I'll be answering questions periodically this weekend! All answers come in my personal capacity, of course. As an enthusiastic member of the EA community, I'm excited to do this! :D
[Update: as the weekend ends, I will be slower to reply, but I will still try to respond to all new comments for a while!]
In a comment from October 2019, Ben Pace stated that there is currently no actionable policy advice the AI safety community could give to the President of the United States. I'm wondering to what extent you agree with this.
If the US President or an influential member of Congress were willing to talk one-on-one with you for a couple of hours on the issue of AI safety policy, what advice would you give them?
Hm, I haven't thought about this particular issue a lot. I am more focused on research and industry advocacy right now than on government work.
I suppose one nice thing would be to have an explicit area of antitrust leniency carved out for cooperation on AI safety.