Hi! I'm Cullen. I've been a Research Scientist on the Policy team at OpenAI since August. I'm also a Research Affiliate at the Centre for the Governance of AI at the Future of Humanity Institute, where I interned in the summer of 2018.
I graduated from Harvard Law School cum laude in May 2019. There, I led the Harvard Law School and Harvard University Graduate Schools Effective Altruism groups. Prior to that, I was an undergraduate at the University of Michigan, where I majored in Philosophy and Ecology & Evolutionary Biology. I'm a member of Giving What We Can, One For The World, and Founder's Pledge.
Some things I've been thinking a lot about include:
- How to make sure AGI benefits everyone
- Law and AI development
- Law's relevance for AI policy
- Whether law school makes sense for EAs
- Social justice in relation to effective altruism
I'll be answering questions periodically this weekend! All answers come in my personal capacity, of course. As an enthusiastic member of the EA community, I'm excited to do this! :D
[Update: as the weekend ends, I will be slower replying but will still try to reply to all new comments for a while!]
Which actors do you think one should try to influence to make sure that a potential transition to a world with AGI goes well (e.g. so that it leads to widely shared benefits)? For instance, do you think one should primarily focus on influencing private companies or governments? I'd be interested in learning more about the arguments for whatever conclusions you have. Thanks!
The boring answer is that there's a variety of relationships that need to be managed well in order for AGI deployment to go optimally. Comparative advantage and opportunity are probably good indicators of where the most fruitful work lies for any given individual. That said, I think working with industry can be pretty highly leveraged, since industry is more nimble and, in my opinion, easier to persuade than government.