Executive Director of the Swift Centre for Applied Forecasting (led projects with U.K. Gov., Google DeepMind, and on AI security and capability risks).
Co-founder of ‘Looking for Growth’ - a political movement for growth in the U.K.
CTO of Praxis - an AI-led assessment platform for schools
Former Head of Policy at ControlAI (co-authored ‘A Narrow Path’)
Former Director of Impactful Government Careers
Former Head of Development Policy at HM Treasury
Former Head of Strategy at the Centre for Data Ethics and Innovation
Former Senior Policy Advisor at HM Treasury, leading on the economic and financial response to the war in Ukraine, and the modelling and allocation of the UK's 'Official Development Assistance' budget.
MSc in Cognitive and Decision Sciences from UCL; my dissertation was an experimental study on using Bayesian reasoning to improve predictive reasoning and forecasting among U.K. public policy officials and analysts.
I am looking for individuals and groups that are interested in improving institutional decision making, whether that's within the typical high-power institutions such as governments/civil services, multilateral bodies, large multinational corporations, or smaller EA organisations that are delivering high-impact work.
I have a broad range of experience, but can probably be of best help on the topics of:
Very cool. I’m a forever optimist when it comes to the potential of AI tools to improve decision making and how people reason about the world or interact with the world.
There is such a huge risk with any such tools of incentive misalignment (i.e. quality of reasoning and error reduction often isn’t well rewarded in most professional contexts).
For these to work, I strongly believe the integration method is absolutely critical. A standalone platform or app, or anything that needs to be proactively engaged with, is going to struggle, I fear.
Something that works with organisations and groups to build better incentives would be high impact, I feel.
Mental health support for those working on AI risks and policy?
During the numerous projects I work on relating to AI risks, policies, and future threats/scenarios, I speak to a lot of people who are being exposed to issues of a catastrophic and existential nature for the first time (or grappling with them in detail for the first time). This, combined with the likelihood that things will get worse before they get better, makes me frequently wonder: are we doing enough around mental health support?
Things that I don’t know exist but feel they should. Some may sound OTT, but I expect you could fund all of these for c.$300k, which, relative to the amount being spent in the sector as a whole, is tiny in exchange for the resilience of the talent we’re building.
Much of the community’s focus is rightly on technical alignment and governance. However, there seems to be a significant blind spot regarding societal adaptation, specifically, how we raise and educate the next generation.
Our current education model is predicated on a learn skills to provide economic value loop. When transformative AI disrupts this model, we risk creating a generation that is not only economically displaced but fundamentally disenfranchised and without a clear sense of purpose. Historically, large populations of disenfranchised young people have been a primary driver of societal collapse and political volatility.
If the transition to a post-AGI world is chaotic due to human unrest, our ability to manage technical safety drops significantly. Is anyone seriously funding or working on how education/raising children needs to change to fit with an AGI era? It seems like ensuring the next generation is psychologically and philosophically prepared for a world of transformative AI is a necessary prerequisite for a stable transition.
I'm interested in chatting with any civil servants, ideally in the UK, who are keen on improving decision making in their teams/area - potentially through forecasting techniques and similar methods. If you'd be interested in chatting, please DM me!
Thank you for responding Catherine! It’s very much appreciated.
This should therefore be easily transferable into feedback to the grantee.
I think this is where we disagree - this written information often isn’t in a good shape to be shared with applicants and would need significant work before sharing.
I think this is my fundamental concern. Reasoning transparency and systematic processes to record grant maker’s judgments and show how they are updating their position should be intrinsic to how they are evaluating the applications. Otherwise they can’t have much confidence in the quality of their decisions or hope to learn from what judgment errors they make when determining which grants to fund (as they have no clear way to track back why they made a grant and whether or not that was a predictor for its success/failure).
I wasn’t aware of the assumption you describe up front, and would agree with you that anyone making such an assumption is being naive - not least because humans on average (and even superforecasters, under many conditions) are objectively inaccurate at forecasting, even if relatively good given we don’t have anything better yet.
I think the more interesting and important question, when it comes to AI forecasting and claims that it is “good”, is to look at the reasoning process undertaken to get there. How are they forming reference classes? How are they integrating specific information? How are they updating their posterior to form an accurate inference and likelihood of the event occurring? Right now, they can sort of form reference classes, but from my experience they don’t do well at all at integration, updating, and making a probabilistic judgment. In fairness, humans often don’t either - but we do it more consistently than current AI.
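The pipeline described above (form a reference class, integrate case-specific evidence, update to a posterior) can be sketched numerically. This is a minimal illustration only - the base rate and likelihood ratio below are made-up numbers, not real forecasts:

```python
# Minimal sketch of base rate -> evidence integration -> posterior,
# using the odds form of Bayes' rule. All figures are illustrative.

def bayes_update(prior, likelihood_ratio):
    """Update a prior probability with one piece of evidence.

    prior: P(event) from the reference class / base rate.
    likelihood_ratio: P(evidence | event) / P(evidence | no event).
    """
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# 1. Reference class: say 10% of comparable events occurred historically.
prior = 0.10
# 2. Integration: suppose the specific evidence is judged 3x more likely
#    in worlds where the event happens than in worlds where it doesn't.
posterior = bayes_update(prior, 3.0)
print(round(posterior, 3))  # 0.25
```

The point of the odds form is that each new piece of evidence just multiplies the odds by its likelihood ratio, which makes the "updating" step the forecaster (human or AI) is being asked to do explicit and auditable.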
For your post, this suggests to me that AI could be used to help with base rate/reference class creation, and maybe loosely support integration.
Thanks!