Bio


I have particular expertise in:
- Developing and implementing policy
- Improving decision making within organisations, mainly focused on improving the reasoning processes (e.g. prediction and forecasting) that underpin how people make and communicate judgments.
- AI Safety policy

This has been achieved through being:

1) Director of Daymark Decision Insights, a company which provides high-impact organisations with consultative services and tailor-made workshops on improving decision making and reasoning processes (https://www.daymark-di.com/). More recently I've provided specific consultative services on policy development and advocacy to a large-scale AI safety organisation.

2) Director of Impactful Government Careers - an organisation focused on helping individuals find, secure, and excel in high-impact civil service roles.

3) I spent 5 years working in the heart of the UK Government, 4 of those at HM Treasury. My roles included:

- Head of Development Policy, HM Treasury
- Head of Strategy, Centre for Data Ethics and Innovation
- Senior Policy Advisor, Strategy and Spending for Official Development Assistance, HM Treasury

These roles involved: advising UK Ministers on policy, spending, and strategy issues relating to international development; assessing the value for money of proposed high-value development projects; and developing the 2021 CDEI Strategy and leading the associated organisational change.

4) I completed an MSc in Cognitive and Decision Sciences at UCL, where I focused my research on probabilistic reasoning and improving individual and group decision-making processes. My final research project was an experimental study into whether a short (2-hour) course on Bayesian reasoning could improve individuals' single-shot accuracy when forecasting geopolitical events.

How others can help me

I am looking for individuals and groups that are interested in improving institutional decision making, whether that's within the typical high-power institutions such as governments/civil services, multilateral bodies, large multinational corporations, or smaller EA organisations that are delivering high-impact work.

How I can help others

I have a broad range of experience, but can probably be of best help on the topics of:

  • Improving institutional decision making, particularly through embedding decision science methods to deliver more accurate, efficient, and inclusive reasoning processes.
  • Reasoning under uncertainty, particularly how to improve predictive reasoning (both causal inference, i.e. 'will x action lead to y outcome', and forecasting, i.e. 'will x event happen').
  • Working in the UK Civil Service, particularly central Government and opportunities for maximising impact in such roles.
  • Getting things done in Government - how to best utilise your role, skills, and those of your teams/stakeholders, to support the decision-making process and deliver high-impact results.
  • Changing career (as someone who has made two large career changes)

On the side, a colleague and I run a small project helping to improve predictive reasoning, which can be found here: https://www.daymark-di.com/. If you are interested in finding out more, feel free to drop me a message.


Comments

I'm considering my future donation options, either directly to charities or through a fund. I know EA Funds is still somewhat cash-constrained, but I'm also a little concerned about the natural variance in grant quality.

I'd be interested in why others have or have not chosen to donate to EA Funds, and if they have, whether they would do so again in the future.

I appreciate that some people may prefer to answer this by DM, so please do feel free to drop me a message there if posting here feels uncomfortable.

There seem to be quite a few people keen to take up the bet against extinction in 2027... are there many who would be willing to take the opposite side, on equal terms (i.e. bet + inflation) rather than the 200% loss that Greg is on the hook for?

Do people also mind where their bet goes? In this case I see the money went to PauseAI; would people be willing to make the bet if the money went to the other person to spend however they want? I could see someone who believed p(doom) by 2027 was 90%+ just wanting more money to spend on a holiday before the end, if they doubt any intervention will succeed. This is obviously hypothetical for interest's sake, as a real trade would need some sort of guarantee the money would be repaid, etc.

My prior is that the opportunity cost of what the money could otherwise be spent on is excessively high, and there are much better uses it could go towards.

Obviously the more novel and actionable the information is, the lower that trade-off becomes. However, I expect that most of the time the information they could provide would be overvalued due to personal intrigue, as opposed to it being anything that meaningfully moves the dial. Equally, if the information they held was especially ground-breaking, I'd hope the person under the NDA would sacrifice personal wealth to expose it, and they could then be supported retrospectively with any legal costs etc. A reactive system, as opposed to a proactive one, would also help prevent weird incentives being created where people try to hold out on blowing the whistle until they have funds confirmed.

I'm a strong believer that AI can be of massive assistance in this area, especially in areas such as improving forecasting ability where the science is fairly well understood/evidenced (i.e. improving the reasoning processes that underpin forecasting and prediction).

My point of caution would be that exploration here, if not done with sufficient scientific rigour, can result in superficially useful tools that add to AI misuse risks and/or worse decision making. For more info, see this research paper: https://arxiv.org/abs/2402.01743

The bottleneck on funding is the biggest issue in unlocking the potential in this space imo, and in improving decision making more broadly.

I find the gap between the importance placed on improving reasoning and decision making by cause prioritisation researchers such as 80k and by the EA community as a whole, and the appetite from funders to invest, is quite huge. A colleague and I have struggled for even relatively small amounts of funding, even with existing users/clients providing a base from which funding would allow us to scale.

That's not a complaint - funders are free to determine what they want to fund. But it seems to be a consistent challenge faced by people who want to improve IIDM.

It strengthens my view that such endeavours, though incredibly important, should focus on earning a profit, as the ability to scale as a non-profit will be limited.

As my toddler continues to grow, my wife and I are reaching the point that all parents do (if they haven't already) where we have to decide what to do about his education. Obviously education and life aren't fixed, so there is always a necessary amount of flexibility to any decision.

My issue (UK focus): Broadly there are three stages in the UK - 1) nursery, 2) primary school, 3) secondary/high school. With the world becoming increasingly uncertain and complex, I have very little confidence that any current institution (in stages 2 and 3) is set up and run well enough to prepare a child for the future, beyond basic reading and writing skills. As a small example, there is a shocking lack of investment in both resources and capability on key topics such as computer science, maths, engineering, rationality, and political science. However, given this is a quick post, I won't go into depth on why I feel there is a structural problem with the education system that would be almost impractical (or at least too politically costly) to resolve. To add some potential credence, these aren't just wandering thoughts; they have been informed by the experiences of multiple family members (including my wife) and friends who work or have worked in primary or secondary schools.

My question: What are others' thoughts on this (both parents and non-parents)? Do you have similar concerns, or not? If you do, what actions do you intend to take to mitigate them?

Nice post, and I like the short snappy sections for quick reading - thanks!

My main reflection is whether this is actually neglected. Exercise is a huge industry, and I feel the psychological benefits, as well as the physical, are quite often utilised in marketing and advertising (though I may be biased given I go to the gym quite regularly). Perhaps the specific link to depression is leveraged less, but I feel it's there by proxy in a lot of cases.

I guess a fairer question would be: what would need to exist, that doesn't currently, for this to be assessed as not a neglected issue?

This is an important reflection, and one I've found myself querying when seeing various programs claim they are hyper-effective. Incredibly well-performing interventions are rare, but we might expect a higher number of them to be showcased on this forum, given there is already a selection bias in the membership/readership here.

However, I do feel the community naturally creates an incentive to inflate (consciously or not) the CEA of interventions - after all, if you aren't working on something which can compete with AMF, then why take money away from that? The way around this is to live in the ambiguity of your intervention and argue that, under certain assumptions, your program could be better.

As you effectively note, the problem is that 'could' (a priori) judgments are riddled with reasoning risks and errors, which is why I feel the community could do more to support and also challenge reasoning methods (cognitive and computational). For example, lots of posts mention key uncertainties people have about their interventions, but they often don't state the second-order probabilities of them (not even GiveWell does this consistently), nor how much that uncertainty fundamentally underpins the intervention. That would be a relatively simple fix, which could become a community norm.
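To make the second-order probabilities point concrete, here is a minimal sketch (all numbers, names, and the model are hypothetical, purely for illustration): instead of a point estimate for a key parameter, you state a distribution over it and report how often the intervention's case survives that stated uncertainty.

```python
import random

# Hypothetical illustration: a cost-effectiveness case that hinges on one
# uncertain effect-size parameter. Rather than a single point estimate, we
# state a (second-order) belief distribution over it and propagate that
# uncertainty to see how often the intervention still beats a benchmark.

COST_PER_PERSON = 50                # hypothetical programme cost per person (£)
BENCHMARK_VALUE_PER_POUND = 0.004   # hypothetical benchmark (e.g. a top charity)

def sample_effect():
    # Belief about the effect size: most likely ~0.3, plausibly 0.1-0.5
    # (a triangular distribution chosen purely for illustration).
    return random.triangular(0.1, 0.5, 0.3)

def value_per_pound(effect):
    # Hypothetical model: benefit per pound is proportional to effect size.
    return effect / COST_PER_PERSON

samples = [value_per_pound(sample_effect()) for _ in range(100_000)]
p_beats_benchmark = sum(v > BENCHMARK_VALUE_PER_POUND for v in samples) / len(samples)

print(f"Mean value per £: {sum(samples) / len(samples):.5f}")
print(f"P(intervention beats benchmark): {p_beats_benchmark:.0%}")
```

The useful output isn't so much the mean as the final line: a reader can see immediately how much the "this beats the benchmark" claim depends on the author's own stated uncertainty.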

I agree - though not so much on the 'everything gets funded anyway' point.

I think there is also a wider meta question, which is what the best use of EA's marginal time/energy/money is. My (highly unjustified) judgement would be that people donating to such causes aren't motivated by effectiveness, or at least are motivated much more by emotion. So changing their donations through an argument about effectiveness may be quite hard to achieve.

I'm also not sure of the scale of difference between the worst and best charities for such causes (i.e. is the best cancer charity 100x better than the worst?). It'd be great to know, but assuming not, this would also reduce the benefit of any success.

A more effective solution to achieve the same goal by proxy would seem to be influencing the existing major funds or initiatives to focus more on the marginal impact of every £ they receive.

Thanks for posting this!

I want to highlight up front that I am a big supporter of any work that aims to improve institutional decision making. I believe it's a highly impactful area with unparalleled potential, given that the decision power (in terms of both spending and benefit potential) of large institutions is immense. I personally feel there's a big moral and EA argument for supporting solutions that could practically deliver benefits (even small returns, given the scale).

In terms of cleaner questions upfront which get to the heart of my uncertainties:

  1. How much will the combined elements improve the quality of decisions that are made? Tied to this - which elements could be cut if needed for time without undermining the benefits you'd expect?
  2. Are there examples of previous decisions that have been run through this process, to show what different outcomes would have been generated?
  3. Given the time investment needed to implement this process, why is it advantageous over existing solutions that have been shown to provide substantial improvements in decision making quality (under experimentation) but often face complaints over needing significant time and expertise investment (e.g. training on and aggregating Bayesian models)? 

Further reflections if interested/useful
Having read your paper, I have some concerns over how the solution can be implemented at a beneficial scale. I raise this particularly because a number of the problems you've mentioned in the White Paper (e.g. unstructured/limited consultations with experts or limited analysis of the problem space) are driven more by time constraints than by the lack of a clear framework for how to do it. This is an important consideration, as planning for catastrophic risks is only half of the problem - we can't consistently (or at all) predict black swan events, and thus decision making at speed in crises is incredibly important (if not more so), as Covid showed us.

Given decision science research, I query the heavy reliance on expert judgment as a key node for improving predictive accuracy, as there's a healthy body of evidence suggesting that quality of reasoning, as opposed to domain expertise, is a better predictor of such accuracy. Your White Paper actually seems to account for this by proxy when it highlights specific reasoning methods to drive improved accuracy (e.g. the IDEA framework).

In addition, I'm less sure how beneficial the democratic/deliberation process with citizens is for the risks you are targeting. The examples you note (such as abortion and LGBTQ+ issues) are primarily social issues which lend themselves well to citizens' assemblies given they are moral in nature. On the other hand, planning policy is quite heavily democratised in the UK and has arguably led to very bad outcomes, given that wider economic or societal benefits from construction are less tangible than personal concerns about changes to the local area. These externalities aren't always accurately priced into people's incentives, and thus their judgements aren't necessarily what's best for society. Do you see a similar issue for catastrophic risks, and how will you mitigate it if so?
