CF

Covi Franklin

Humanitarian Professional @ United Nations
35 karmaJoined Working (0-5 years)

Bio

I've worked with the United Nations World Food Programme as a programme and policy officer for 5 years, most recently during the Sudan humanitarian response. I am increasingly interested in developing my thinking around the intersection of AI and disaster preparedness: understanding the growing and intersecting risks posed by integrating AI into national systems, and exploring approaches to mitigate those risks through effective AI governance.

How others can help me

Interested in exploring opportunities in AI research.

How I can help others

Humanitarian operations/life in the UN

Comments
4

Despite the slightly terrifying security implications of the breakdown in unity between America and the rest of the NATO alliance, I think it also offers a really promising opportunity re: shifting global AI development and governance towards a safer path in some scenarios.

Right now the US and China have adopted a 'race' dynamic...needless to say this is hugely dangerous, and it really raises the risk of irresponsible practices and critical errors from both AI superpowers as we enter the critical phase towards AGI. The rupture between the UK/EU and the US over Greenland/tariffs has led to an immediate warming with China (PM Starmer just left Beijing, and now there's visa-free travel to China for UK citizens and talk of strategic partnerships). Prior to this point there was little reason for China to heed any warnings from middle powers over AI safety - they were on 'the other side' of the race/struggle for global influence. That strategic picture has shifted dramatically.

With warming relations with China, there's the possibility for rigorous UK/EU advocacy combined with effective AI policy that prioritises caution and preparedness over a pure-play race to the finish. So far the Track 1 talks between China and the US have yielded limited results - trust remains low and neither side wants to show weakness. If these macro-strategic changes in UK/EU relations with China offer a route towards influencing China's perspective on AI risk, maybe that could yield some positive results? Adherence to a multilateral AI safety regime? (Perhaps a bit too optimistic?)...but if this offers an opportunity for China to shift even somewhat towards the cautionary side, it could open up room for more effective cooperative action globally, including with the US, and move us somewhat off the full-steam-ahead path we're currently on.

Am I wrong to feel any sort of optimism about how this could impact the AI governance space? Considering writing a more fleshed-out piece on the topic.

Hi Lizka - really enjoyed the article. AI development often seems to be discussed largely through the paradigm of 'do we speed up to achieve radical transformation, or do we slow down to reduce risk'. I hadn't thought in depth before about the notion of aiming to speed up certain components while slowing others to better manage the transition.

One thought as you go into your more detailed analyses. While you view an epistemic-first transformation as preferable, you nod to the risk of it being leveraged to mislead rather than enlighten the world. Another risk which sprang to mind is that an epistemic-first transformation becomes an immensely powerful tool for those with first access (and it feels fairly reasonable to imagine that any major transformation would likely be accessed by government before being implemented in broader society). If a government with authoritarian tendencies had first access to epistemic architecture which enabled superior strategic insight, a substantial risk pathway would be that it could be used to inform permanent power capture/authoritarian lock-in (even without being leveraged to misinform citizens/the public). The technology and its impact could in theory never become public.

Thanks for your take - I always appreciate slightly less doom and gloom perspectives.

On your point that there's no imminent unemployment crisis, and that the impacts we are seeing may be due to other factors. Firstly, I think it's inevitable that the direct causes of disruption to the labour market are going to be multifaceted given the current trajectory of global markets (de-coupling, de-globalisation etc.), whatever happens moving forward. In the UK specifically, part of the issue is that the minimum wage has been increased, making employers less inclined to hire grads (and yes, it's grad openings which have halved, but we're simultaneously seeing 18-25 unemployment rising...not yet anywhere close to '08 levels, but gradually increasing). But these and other factors aren't independent of AI; rather, they accelerate its impacts (i.e. right now AI probably can't replace all grad jobs, but CEOs are more willing to experiment, explore, and begin premature replacement because of rising grad costs and higher tax rates...and now that those jobs are gone, they're off the market. The more broad market shocks there are, the more businesses will look to AI to cut costs).

Secondly, I definitely take your point that AI may not become orders of magnitude more capable in the next five years, and that 'AI is coming for our jobs' could be overblown. I suppose my thinking is - it might. Even if there is, say, a 30% chance that in the next 5 years even 30-40% of white-collar jobs get replaced...that feels like a massive shock to high-income countries' way of life, and a major shift in the social contract for the generation of kids who have got into massive debt for university courses only to find substantially reduced market opportunities. That requires substantial state action.

And that's the more conservative end of things; if there's any reasonable chance that within even the next 10-15 years AI becomes capable of replacing a higher percentage of the total job market, surely we need some effective way of ensuring that people with no job opportunities have some degree of resources. Even if you're correct that it's fairly unlikely - to avoid major social instability in the event of such a scenario, it feels prudent for governments to be doing serious planning for the possibility (just as they would for pandemics or war-gaming, no matter how unlikely the scenario).

And finally - you may well be right that UBI is not accepted as a good idea in political circles. I'm not wedded to that particular approach, and have heard ideas about negative income tax floating around. But in a scenario where there simply isn't sufficient employment to sustain a functioning labour market and the allocation of basic resources to all citizens, I'd like to think that governments are putting serious thinking and planning into how we ensure society continues to function and what the social contract might become - not waiting until that reality comes to pass to plan an appropriate response.

Are there any signs of governments beginning to do serious planning for the need for Universal Basic Income (UBI) or negative income tax? It feels like there's a real lack of urgency/rigour in policy engagement within government circles. The concept has obviously had its high-level advocates à la Altman, but it still feels incredibly distant as any form of reality.

Meanwhile the impact is being seen in job markets right now - in the UK, graduate job openings have plummeted in the last 12 months. People I know with elite academic backgrounds are having a hard enough time finding jobs - let alone the vast majority of people who went to average universities. This is happening today, before there's any consensus on the arrival of AGI or widely recognised mass displacement in mid-career job markets. The impact is happening now, but preparation for major policy intervention in current fiscal scenarios seems really far off. If governments do view the risk of major employment market disruption as a realistic possibility (which I believe in many cases they do), are they planning for interventions behind the scenes? Or do they view the problem as too big to address until it arrives, favouring rapid response over careful planning in the way the COVID emergency fiscal interventions emerged?

Would be really interested to hear of any good examples of serious thinking/preparation for how some form of UBI could be planned (logistically and fiscally) on a near-term 5-year horizon.