Hi! I'm Cullen. I've been a Research Scientist on the Policy team at OpenAI since August. I am also a Research Affiliate at the Centre for the Governance of AI at the Future of Humanity Institute, where I interned in the summer of 2018.
I graduated from Harvard Law School cum laude in May 2019. There, I led the Harvard Law School and Harvard University Graduate Schools Effective Altruism groups. Prior to that, I was an undergraduate at the University of Michigan, where I majored in Philosophy and Ecology & Evolutionary Biology. I'm a member of Giving What We Can, One For The World, and Founders Pledge.
Some things I've been thinking a lot about include:
- How to make sure AGI benefits everyone
- Law and AI development
- Law's relevance for AI policy
- Whether law school makes sense for EAs
- Social justice in relation to effective altruism
I'll be answering questions periodically this weekend! All answers come in my personal capacity, of course. As an enthusiastic member of the EA community, I'm excited to do this! :D
[Update: As the weekend ends, I will be slower to reply, but I will still try to reply to all new comments for a while!]
I've been thinking a lot about this recently too. Unfortunately, I didn't see this AMA until now, but hopefully it's not too late to chime in. My biggest worry about SJ in relation to EA is that the political correctness / cancel culture / censorship that seems endemic in SJ (i.e., there are certain beliefs you have to signal complete certainty in, or face accusations of various "isms" or "phobias", or worse, get demoted/fired/deplatformed) will come to affect EA as well.
I can see at least two ways this could happen to EA:
From your answers so far, it seems like you're not particularly worried about this. If you have good reasons not to worry about this, please share them so I can move on to other problems myself.
(I think SJ is already actively doing harm because it pursues actions/policies based on these politically correct beliefs, many of which are likely wrong but can't be argued about. But I'm more worried about EA potentially doing this in the future because EAs tend to pursue more consequential actions/policies that will be much more disastrous (in terms of benefits foregone if nothing else) if they are wrong.)
An example of institutions being taken over by cancel culture and driving out their founders: …