Passionate about policy and AI.
Feel free to reach out about anything, or just to say hi!
I have strong attention to detail and enjoy creating solid experiences for people. I'm especially interested in working in AI safety or governance, tech policy, security, or international relations. My skillset and experience include operations, events, facilitation, HR, community management, and research. I am looking for roles that allow me to work as part of a team, in London or remotely in the same time zone.
I've been involved in Effective Altruism since 2015, including founding a society and working in movement building. Later, I quit my job as a digital consultant to work on AI. I completed a data science bootcamp and the AIM Research Training Program, did some AI governance research assistance, and facilitated BlueDot Impact's AI Governance course. Most recently, I have been Operations and Community Manager at Pivotal.
This was a great experience and I learnt a lot:
We really wanted to complete the project within a tight timeframe. I'm actually posting this two weeks after we finished because it was the first chance I had.
Some reflections:
I think that the amount of time we set aside was too short for us, and we could still have made worthwhile improvements with more time to reflect, such as:
(I'll come back and reply to this comment with more of my own reflections if I think of more and get more time in the next day or two) (edit: formatting)
Can applicants update their application after submitting?
This was an extremely useful feature of Lightspeed Grants, because the strength of my application significantly improved every couple of weeks.
If it’s not a built-in feature, can applicants link to a google doc?
Thank you for answering our questions!
How does Animal Welfare/Global Health affect AI Safety? Very brief considerations.
I think someone might build super strong AI in the next few years, and this could affect most of the value of the future. If true, I think it implies that the majority of any value from an intervention or cause area comes from how it affects whether AI goes well, even if that effect is very slight and indirect. Relatedly, I think whether AI goes well depends on whether states will be able to coordinate.
How do Animal Welfare interventions affect whether AI goes well?
– I think moral circle expansion is relevant.
– Helping reach climate targets seems relevant as practice for international coordination.
– But I think that animal welfare interventions place a cost on society, such as by raising food prices and increasing pressure on governments in high-income countries.
How do Global Health interventions affect whether AI goes well?
– I think they reduce pressure on governments in LMICs and make their societies safer, giving those governments slightly more room to come to peaceful international agreements.
– But they may also enable more people to contribute to AI, whether that be AI capabilities development, chip manufacturing, or AI safety/governance.
Overall, I slightly lean towards global health being better. Perhaps RP's tools shed light on this (I haven't checked!).