Bio

I have particular expertise in:
- Developing and implementing policy
- Improving decision making within organisations, mainly by improving the reasoning processes (e.g. predictive reasoning and forecasting) that underpin how people make and communicate judgments.
- AI Safety policy

This has been achieved through being:

1) Director of Daymark Decision Insights, a company which provides high-impact organisations with consultative services and tailor-made workshops on improving decision making and reasoning processes (https://www.daymark-di.com/). More recently, I’ve provided specific consultative services on policy development and advocacy to a large-scale AI safety organisation.

2) Director of Impactful Government Careers - an organisation focused on helping individuals find, secure, and excel in high-impact civil service roles.

3) I spent 5 years working at the heart of the UK Government, 4 of those at HM Treasury, in roles including:

- Head of Development Policy, HM Treasury
- Head of Strategy, Centre for Data Ethics and Innovation
- Senior Policy Advisor, Strategy and Spending for Official Development Assistance, HM Treasury

These roles have involved: advising UK Ministers on policy, spending, and strategy issues relating to international development; assessing the value for money of proposed high-value development projects; and developing the 2021 CDEI Strategy and leading the associated organisational change.

4) I completed an MSc in Cognitive and Decision Sciences at UCL, where I focused my research on probabilistic reasoning and improving individual and group decision-making processes. My final research project was an experimental study into whether a short course (2 hours) on Bayesian reasoning could improve individuals' single-shot accuracy when forecasting geopolitical events.

How others can help me

I am looking for individuals and groups that are interested in improving institutional decision making, whether that's within the typical high-power institutions such as governments/civil services, multilateral bodies, large multinational corporations, or smaller EA organisations that are delivering high-impact work.

How I can help others

I have a broad range of experience, but can probably be of most help on the following topics:

- Improving institutional decision making, particularly through embedding decision science methods to deliver more accurate, efficient, and inclusive reasoning processes.
- Reasoning under uncertainty, particularly how to improve predictive reasoning (both causal inference, i.e. 'will x action lead to y outcome', and forecasting, i.e. 'will x event happen').
- Working in the UK Civil Service, particularly central Government, and opportunities for maximising impact in such roles.
- Getting things done in Government - how to best utilise your role, skills, and those of your teams/stakeholders to support the decision-making process and deliver high-impact results.
- Changing career (as someone who has made two large career changes).

On the side, a colleague and I run a small project helping to improve predictive reasoning, which can be found here: https://www.daymark-di.com/. If you are interested in finding out more, feel free to drop me a message.

Comments

I think that’s quite a broad remit. What’s the focus of improving the decisions? Better problem identification/specification? Better data analysis and evidence base? Better predictive accuracy? Better efficiency, adaptiveness, or robustness?

Much of the community’s focus is rightly on technical alignment and governance. However, there seems to be a significant blind spot regarding societal adaptation - specifically, how we raise and educate the next generation.

Our current education model is predicated on a 'learn skills to provide economic value' loop. When transformative AI disrupts this model, we risk creating a generation that is not only economically displaced but fundamentally disenfranchised and without a clear sense of purpose. Historically, large populations of disenfranchised young people have been a primary driver of societal collapse and political volatility.

If the transition to a post-AGI world is chaotic due to human unrest, our ability to manage technical safety drops significantly. Is anyone seriously funding or working on how education/raising children needs to change to fit with an AGI era? It seems like ensuring the next generation is psychologically and philosophically prepared for a world of transformative AI is a necessary prerequisite for a stable transition.

I'm interested in chatting with any civil servants, ideally in the UK, who are keen on improving decision making in their teams/area - potentially through forecasting techniques and similar methods. If you'd be interested in chatting, please DM me!

Is this just a combination of anchoring and confirmation bias? Or have I misunderstood?

Thank you for responding, Catherine! It’s very much appreciated.

"This should therefore be easily transferable into feedback to the grantee."

I think this is where we disagree - this written information often isn’t in a good shape to be shared with applicants and would need significant work before sharing.

I think this is my fundamental concern. Reasoning transparency and systematic processes to record grantmakers’ judgments and show how they are updating their position should be intrinsic to how they evaluate applications. Otherwise they can’t have much confidence in the quality of their decisions, or hope to learn from the judgment errors they make when determining which grants to fund (as they have no clear way to trace back why they made a grant and whether that was a predictor of its success/failure).

I am not sure the assumption you make up front is that widely held, and would agree with you that anyone making such an assumption is being naive. Not least because humans on average (and even superforecasters under many conditions) are objectively inaccurate at forecasting - even if relatively good given we don’t have anything better yet.

I think the more interesting and important question, when it comes to AI forecasting and claims that it is “good”, is to look at the reasoning process it undertakes. How is it forming reference classes, how is it integrating specific information, and how is it updating its posterior to form an accurate inference and likelihood of the event occurring? Right now, AI can sort of do the first of these, but from my experience it doesn’t do well at all at integration, updating, and making a probabilistic judgment. In fairness, humans often don’t either. But we do it more consistently than current AI.
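
As a rough sketch of the single update I have in mind (the numbers are purely illustrative, not taken from any real forecast):

```latex
% Illustrative Bayesian update from a reference-class base rate (made-up numbers).
% Prior from the reference class:                        P(E) = 0.2
% Likelihood of the new evidence D if the event occurs:  P(D | E) = 0.6
% Likelihood of D if the event does not occur:           P(D | not E) = 0.3
\begin{align*}
P(E \mid D) &= \frac{P(D \mid E)\,P(E)}{P(D \mid E)\,P(E) + P(D \mid \neg E)\,P(\neg E)} \\
            &= \frac{0.6 \times 0.2}{0.6 \times 0.2 + 0.3 \times 0.8} \approx 0.33
\end{align*}
```

The middle step - judging how diagnostic the new information actually is (the two likelihoods) - is where I find both humans and current AI are least consistent.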

For your post, this suggests to me that AI could be used to help with base rate/reference class creation, and maybe loosely support integration.

Thank you for posting this, it is definitely nice to get a funder's perspective on this!

From the other side (someone who has applied for grants and received little to no feedback on them), and having been involved in very large-scale grantmaking through my governmental role, I fear your point (1) below is likely to be the greatest influence on grantmakers not providing feedback. Unfortunately, I find this (and did when I was a grantmaker and was prevented from, or unable to, provide feedback) is often a cover for a lack of transparent and good reasoning practice in the grant decision process.

The vast majority of EAs are aware of reasoning transparency and good Bayesian reasoning practices. I'd hope, as I assume many members of the EA community do, that EA grantmakers have a defined method to record grantmakers' judgments and what is updating their view of a grant's potential impact and likelihood of success. Not least because this would allow them to identify errors and any systematic biases that grantmakers may have, and thus improve as necessary. This should therefore be easily transferable into feedback to the grantee.

The fact this isn't done raises questions for me. Are there such systematic processes? If not, how do grantmakers have confidence in their decision making a priori? If there are such processes to record reasoning, why can't they be summarised and provided as feedback?

The concern raised in the post you linked by Linch - that being transparent about the reasons for not making a grant risks applicants over-updating on the feedback - seems unfounded/unevidenced. I also question how relevant it is, given they weren't funded anyway; why would you be concerned they'd over-update? If you don't tell them they were a near miss and what changes might change your mind, the risk is instead that they update randomly or the project is canned entirely - which feels worse for edge cases.

"There might have been a lot of factors that led to a decision."

"Sometimes there are multiple small reasons to be hesitant about a person doing a particular project – none of which are “deal breakers”. There is often so much uncertainty and decision-makers realistically have so little information such that the only option is to rely on small bits of information to update their view on the person/project. Each of these factors on their own might just be a small grain of sand on a scale, or be felt with low confidence, but together they might build up to tip the scale."

I tend to agree with you, though would rather people erred more on the “close early” side than the “hold out” side. Simply because the sunk cost fallacy and confirmation bias in your own idea are incredibly strong, and I see no compelling reason to think current funders in the EA space help counteract these (beyond maybe being more aware of them than the average funder).

In an ideal system the funders should be driving most of these decisions by requiring clear milestones and evaluation processes for those they fund. If funders did this they would be able to identify predictive signals of success and help avoid premature or overdue closures (e.g. “we see that, on average, policy advocacy groups that have been successful have met fewer/more comparable milestones, and recommend stopping/continuing funding”). This can still allow the organisation to pitch for why they are outside of the average, but the funder should be in the best position to know what is signalling success and what isn’t.

Unfortunately I don’t see such a system, and I fear the incentives aren’t aligned in the EA ecosystem to create it. The organisations getting funded enjoy the looser, less funder-involved setup. And funders reduce their reputational risk by not properly evaluating what is working and why, and can continue funding projects they are personally interested in but which have questionable causal impact chains. (Noting that I think EA GHD has much less of this issue, mainly because funders anchor on GiveWell assessments, which to a large degree deliver the mechanism I outline above.)

The science underpinning this study is unfortunately incredibly limited. For instance, there isn’t even basic significance testing provided. Furthermore, the use of historic events to check forecasting accuracy, and the limited reporting of proper measures to prevent the model utilising knowledge not available to the human forecasters (with only a brief mention of the researchers providing pre-approved search links), are also very poor.

I’m all for AI tools improving decision making and have undertaken several side-projects myself on this. But studies like this should be highlighted for their lack of scientific standards, and thus we should be sceptical of how much we use them to update our judgments of how good AI currently is at forecasting (which to me is quite low, given it struggles to reason causally).

What are some of the best (relatively) small podcasts on AI and/or policy that people would recommend? I know of all the big ones, but I’m keen to see if there are any nascent ones worth sharing with others.
