
JamesN

Executive Director @ Swift Centre for Applied Forecasting
195 karma · Joined · Working (6-15 years) · London, UK
www.swiftcentre.org

Bio

Participation
3

Executive Director of the Swift Centre for Applied Forecasting (led projects with the U.K. Government and Google DeepMind, and on AI security and capability risks).

Co-founder of ‘Looking for Growth’ - a political movement for growth in the U.K. 

CTO of Praxis - an AI-led assessment platform for schools

Former Head of Policy at ControlAI (co-authored ‘A Narrow Path’)

Former Director of Impactful Government Careers

Former Head of Development Policy at HM Treasury

Former Head of Strategy at the Centre for Data Ethics and Innovation

Former Senior Policy Advisor at HM Treasury, leading on the economic and financial response to the war in Ukraine, and the modelling and allocation of the UK's 'Official Development Assistance' budget.

MSc in Cognitive and Decision Sciences from UCL; my dissertation was an experimental study on using Bayesian reasoning to improve predictive reasoning and forecasting among U.K. public policy officials and analysts.

How others can help me

I am looking for individuals and groups interested in improving institutional decision making, whether within typical high-power institutions such as governments/civil services, multilateral bodies, and large multinational corporations, or within smaller EA organisations that are delivering high-impact work.

How I can help others

I have a broad range of experience, but can probably be of best help on the topics of:

  • AI policy and strategy
  • Scenario analysis, foresight, and forecasting
  • Decision making under uncertainty and government policy making, especially in the U.K. and international institutions
  • Career development and changes

Posts
2


Comments
39

Mental health support for those working on AI risks and policy?

During the numerous projects I work on relating to AI risks, policies, and future threats/scenarios, I speak to a lot of people who are being exposed to issues of a catastrophic and existential nature for the first time (or grappling with them in detail for the first time). This, combined with the likelihood that things will get worse before they get better, makes me frequently wonder: are we doing enough around mental health support?

Below are some things that I don’t know to exist but feel should. Some may sound OTT, but I expect you could fund all of these for c.$300k, which, relative to the amount being spent in the sector as a whole, is tiny in exchange for the resilience of the talent we’re building.

  • Structured proactive therapy or mental health resilience sessions tied into Fellowship programs.
  • Regular in-built mental health support within organisations dealing with AI risks etc., particularly around helping people to anti-catastrophise. There are several threat models that are only catastrophically bad if many, many things go wrong and happen counter to regular incentives, yet they are very worrying to people (especially new entrants to the field). Support to help prioritise the risks would help both outcomes and mental health.
  • Free-to-use mental health support services for those in the field.

I'll be in D.C. on the 12th February and morning of the 13th, before heading to EAG. If people are around and want to meet, feel free to drop me a DM!

I think that’s quite a broad remit. What’s the focus of improving the decisions? Better problem identification/specification? Better data analysis and evidence base? Better predictive accuracy? Better efficiency/adaptiveness/robustness?

Much of the community’s focus is rightly on technical alignment and governance. However, there seems to be a significant blind spot regarding societal adaptation, specifically, how we raise and educate the next generation.

Our current education model is predicated on a ‘learn skills to provide economic value’ loop. When transformative AI disrupts this model, we risk creating a generation that is not only economically displaced but fundamentally disenfranchised and without a clear sense of purpose. Historically, large populations of disenfranchised young people have been a primary driver of societal collapse and political volatility.

If the transition to a post-AGI world is chaotic due to human unrest, our ability to manage technical safety drops significantly. Is anyone seriously funding or working on how education/raising children needs to change to fit with an AGI era? It seems like ensuring the next generation is psychologically and philosophically prepared for a world of transformative AI is a necessary prerequisite for a stable transition.

I'm interested in chatting to any civil servants, ideally in the UK, who are keen on improving decision making in their teams/area - potentially through forecasting techniques and similar methods. If you'd be interested in chatting, please DM me!

Is this just a combination of anchoring with confirmation bias? Or have I misunderstood?

Thank you for responding Catherine! It’s very much appreciated.

“This should therefore be easily transferable into feedback to the grantee.”

I think this is where we disagree - this written information often isn’t in a good shape to be shared with applicants and would need significant work before sharing.

I think this is my fundamental concern. Reasoning transparency and systematic processes to record grantmakers’ judgments and show how they are updating their positions should be intrinsic to how they evaluate applications. Otherwise they can’t have much confidence in the quality of their decisions, or hope to learn from the judgment errors they make when determining which grants to fund (as they have no clear way to trace back why they made a grant and whether or not that reasoning predicted its success/failure).

I am not aware of many people making the assumption you describe up front, and would agree with you that anyone making such an assumption is being naive. Not least because humans on average (and even superforecasters under many conditions) are objectively inaccurate at forecasting - even if relatively good given we don’t have anything better yet.

I think the more interesting and important question, when it comes to AI forecasting and claims that it is “good”, is to look at the reasoning process undertaken to get there. How are they forming reference classes, how are they integrating specific information, and how are they updating their posterior to form an accurate inference and likelihood of the event occurring? Right now, they can sort of do the first of these (forming reference classes), but from my experience they don’t do well at all at integration, updating, and making a probabilistic judgment. In fairness, humans often don’t either, but we do it more consistently than current AI.

For your post, this suggests to me that AI could be used to help with base rate/reference class creation, and maybe loosely support integration.
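To make the chain I’m describing concrete (reference-class base rate → integration of specific evidence → posterior), here is a minimal sketch in Python. The function and numbers are purely illustrative assumptions of mine, not how any current forecaster, human or AI, actually computes a judgment.

```python
# Illustrative only: base rate -> evidence integration -> posterior probability.

def update_with_evidence(base_rate: float, likelihood_ratios: list[float]) -> float:
    """Update a reference-class base rate with pieces of case-specific evidence,
    each expressed as a likelihood ratio P(evidence | event) / P(evidence | no event)."""
    odds = base_rate / (1 - base_rate)   # convert probability to odds
    for lr in likelihood_ratios:
        odds *= lr                       # Bayes' rule in odds form (treats evidence as roughly independent)
    return odds / (1 + odds)             # convert odds back to probability

# Hypothetical example: a 10% base rate, one supportive piece of evidence (LR = 3)
# and one weakly contrary piece (LR = 0.8) gives a posterior of roughly 21%.
print(round(update_with_evidence(0.10, [3.0, 0.8]), 3))
```

The integration and updating steps in the middle are the parts I find current AI does least consistently.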

Thank you for posting this, it is definitely nice to get a funder’s perspective on this!

From the other side (as someone who has applied for grants and received little to no feedback on them), and having been involved in very large-scale grantmaking through my governmental role, I fear your point (1) below is likely to be the greatest influence on grantmakers not providing feedback. Unfortunately, I find this (and did when I was a grantmaker and was prevented from providing, or unable to provide, feedback) is often a cover for a lack of transparent and good reasoning practice in the grant decision process.

The vast majority of EAs are aware of reasoning transparency and good Bayesian reasoning practices. I'd hope, as I assume many members of the EA community do, that EA grantmakers have a defined method to record grantmakers' judgments and what is updating their view of a grant's potential impact and likelihood of success. Not least because this would allow them to identify errors and any systematic biases that grantmakers may have, and thus improve as necessary. This should therefore be easily transferable into feedback to the grantee.

The fact this isn't done raises questions for me. Are there such systematic processes? If not, how do grantmakers have confidence in their decision making a priori? If there are such processes to record reasoning, why can't they be summarised and provided for feedback?
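To illustrate the sort of record I have in mind, here is a rough sketch (the structure, field names, and figures are hypothetical, not something I know any EA funder to actually use):

```python
from dataclasses import dataclass, field

@dataclass
class JudgmentUpdate:
    """One piece of evidence and how it moved the evaluator's estimate."""
    evidence: str
    p_success_before: float   # estimated probability of success before this evidence
    p_success_after: float    # estimate after updating on it
    rationale: str

@dataclass
class GrantDecisionRecord:
    """Minimal audit trail for a single grant decision."""
    applicant: str
    prior_p_success: float                          # starting prior / base rate
    updates: list[JudgmentUpdate] = field(default_factory=list)
    decision: str = "undecided"                     # e.g. "fund", "reject", "near miss"

    def feedback_summary(self) -> str:
        """Summarise the recorded reasoning in a form shareable with the applicant."""
        lines = [f"Decision: {self.decision} (prior estimate {self.prior_p_success:.0%})"]
        for u in self.updates:
            direction = "raised" if u.p_success_after > u.p_success_before else "lowered"
            lines.append(f"- {u.evidence}: {direction} our estimate to {u.p_success_after:.0%} ({u.rationale})")
        return "\n".join(lines)
```

Something even this simple would both let grantmakers check which pieces of evidence actually predicted success, and make the feedback step close to free.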

The post you linked by Linch, and the concern he raises that being transparent about the reasons for not making a grant may risk applicants over-updating on the feedback, seems unfounded/unevidenced. I also question how relevant it is given they weren't funded anyway - why would you be concerned they'd over-update? If you don't tell them they were a near miss and what changes might change your mind, the risk is instead that they either update randomly or the project is just completely canned - which feels worse for edge cases.
 

“There might have been a lot of factors that led to a decision.”

“Sometimes there are multiple small reasons to be hesitant about a person doing a particular project – none of which are ‘deal breakers’. There is often so much uncertainty and decision-makers realistically have so little information such that the only option is to rely on small bits of information to update their view on the person/project. Each of these factors on their own might just be a small grain of sand on a scale, or be felt with low confidence, but together they might build up to tip the scale.”

I tend to agree with you, though would rather people were more on the “close early” side of the coin than the “hold out” side, simply because the sunk cost fallacy and confirmation bias in your own idea are incredibly strong, and I see no compelling reason to believe current funders in the EA space help counteract these (beyond maybe being more aware of them than the average funder).

In an ideal system the funders should be driving most of these decisions by requiring clear milestones and evaluation processes for those they fund. If funders did this, they would be able to identify predictive signals of success and help avoid early or late closures (e.g. “we see that, on average, policy advocacy groups that have been successful have met fewer/more comparable milestones, and so recommend continuing/stopping funding”). This would still allow the organisation to pitch for why it is outside of the average, but the funder should be in the best position to know what is signalling success and what isn't.

Unfortunately I don’t see such a system, and I fear the incentives aren’t aligned in the EA ecosystem to create it. The organisations getting funded enjoy the looser, less funder-involved setup. And funders reduce their reputational risk by not properly evaluating what is working and why, and can continue funding projects they are personally interested in but which have questionable causal impact chains. (Noting that I think EA global health and development has much less of this issue, mainly because funders anchor on GiveWell assessments, which to a large degree deliver the mechanism I outline above.)
