I direct the AI: Futures and Responsibility (AI:FAR) Programme (https://www.ai-far.org/) at the University of Cambridge, which works on AI strategy, safety and governance. I also work on global catastrophic risks with the Centre for the Study of Existential Risk and on AI strategy/policy with the Centre for the Future of Intelligence.
I agree especially with the first of these (the third is outside my area of expertise). There's a lot of work in 'implementation-focused policy advocacy and communication' that feels chronically underserved and unsupported.
E.g. standards are correctly mentioned. Participation in standards development is an unsexy, frequently tedious and time-consuming process. The big companies have people whose day job is basically sitting in standards bodies like ISO making sure the eventual standards reflect company priorities. AI safety/governance has a few people fitting it in alongside a million other priorities.
E.g. the EU is still struggling to staff the AI Office, one of the most important safety implementation offices out there. E.g. I serve on an OECD AI working group and an international working group with WIC, have served on GPAI and PAI, and regularly advise the UN and national governments. You can make a huge difference on these things, especially if you have the time to develop proposals/recommendations and follow up between meetings. But I see the same faces from academia & civil society at all of these - all of us exhausted, trying to fit in as much as we can alongside research, management, fundraising, teaching + student mentorship + essays (for the academics).
Some of this is that it takes time as an independent to reach the level of seniority/recognition needed to be invited to these working groups. But my impression from being on funding review panels is that many of the people who are well-placed to do so still have an uphill battle to get funding for themselves or the support they need. It helps if you've done something flashy and obviously impactful (e.g. AI-2027), but there's a ton of behind-the-scenes work (which I think is much harder for funders to assess, and sometimes harder for philanthropists to get excited about) that an ecosystem needs if it is to have a chance of actually steering this properly and acting as the necessary counterweight to commercial pressures. Time and regular participation at national government level (a bunch of them!) plus US, EU, UN, OECD, G7/G20/G77, Africa, Belt&Road/ANSO, GPAI, ITU, ISO, things like Singapore SCAI & much more. Great opportunities for funders (including small funders).
Useful data and analysis, thanks, though I'd note that from a TAI/AI risk-focused perspective I would expect the non-safety figures to overcount for some of these orgs. E.g. CFI (where I work) is in there at 25 FTE, but that covers a very broad range of AI governance/ethics/humanities topics, of which only a subset (maybe a quarter?) would be specifically relevant to TAI governance (specifically a big chunk of the Kinds of Intelligence programme, which mainly does technical evaluation/benchmarking work but advises policy on the basis of it, plus the AI:FAR group). I would expect similar for some of the other 'broader' groups, e.g. Ada.
Also, in both categories I don't follow the rationale for including GDM but not the other frontier companies with safety/governance teams, e.g. Anthropic, OpenAI, xAI (admittedly more minimal). I can see a rationale for including all or none of them.
The text of the plan is here:
http://hk.ocmfa.gov.cn/eng/xjpzxzywshd/202507/t20250729_11679232.htm
It features a section on AI safety:
"Advancing the governance of AI safety. We need to conduct timely risk assessment of AI and propose targeted prevention and response measures to establish a widely recognized safety governance framework. We need to explore categorized and tiered management approaches, build a risk testing and evaluation system for AI, and promote the sharing of information as well as the development of emergency response of AI safety risks and threats. We need to improve data security and personal information protection standards, and strengthen the management of data security in processes such as the collection of training data and model generation. We need to increase investment in technological research and development, implement secure development standards, and enhance the interpretability, transparency, and safety of AI. We need to explore traceability management systems for AI services to prevent the misuse and abuse of AI technologies. We need to advocate for the establishment of open platforms to share best practices and promote international cooperation on AI safety governance worldwide."
I will say though that I really enjoyed this - and it definitely imparts the, ah, appropriate degree of scepticism I might want potential applicants to have ;)
I've a similar concern to Geoffrey's.
When I clicked through from the video last week, there was a prominent link to careers, then jobs. At the time, 3 of the top 5 were at AGI companies (Anthropic, OpenAI, GDM). I eventually found the 'should you work at AGI labs?' link, but it was much less obvious. This is the funnel that tens of thousands of people will be following (assuming 1% of people watching the video consider a change of career).
80K has for a long time pushed safety & governance roles at AGI companies as a top career path. While some of the safety work may have been very safety-dominant, a lot of it has in practice helped companies ship product, and advanced capabilities in doing so (think RLHF etc. - see https://arxiv.org/pdf/2312.08039 for more discussion). This is inevitably more likely in a commercial setting than in e.g. academia.
Policy and governance roles have done some good, but have in practice also contributed to misplaced trust in companies and greater support for e.g. self-governance than might otherwise have been the case. Anecdotally, I became more trusting of OpenAI after working with their researchers on Towards Trustworthy AI (https://arxiv.org/pdf/2004.07213), in light of them individually signing onto (and in some cases proposing) mechanisms such as whistleblower protections and other forms of independent oversight. At the same time, unbeknownst to them, OpenAI leadership was building clauses into their contracts to strip them of their equity if they criticised the company on leaving. I expect that the act by safety-focused academics like myself of coauthoring the report with OpenAI policy people will also have had the effect of increasing the perceived trustworthiness of OpenAI.
By now, almost everyone concerned about safety seems to have left OpenAI, often citing concerns over the ethics and the safety- and responsibility-committedness of leadership. This includes everyone on Towards Trustworthy AI, and I expect many of the people funneled there by 80K. (I get the impression from speaking to some of them that they feel they were being used as 'useful idiots'.) One of the people who left over concerns was Daniel Kokotajlo himself - who indeed had to give up 85% of his family's net worth (temporarily, I believe) in order to be able to criticise OpenAI.
Another consequence of this funnel is that it has contributed to the atrophy of the academic AI safety and governance pipeline, and to a loss of interest among funders in supporting this part of the space ('isn't the most exciting work happening inside the companies anyway?'). The most ethically-motivated people, who might otherwise have taken the hit of academic salaries and precarity, had a green light to go to the companies. This has weakened the independent critical role, and the government advisory role, that academia could have played in frontier AI governance.
There's a lot more worth reflecting on than is captured in the 'should you work at AI labs/companies' article. While I've focused on OpenAI to be concrete here, the underlying issues apply to some degree across frontier AI companies.
And there is a lot more that a properly reflective 80K could be doing here. E.g.
Heck, you could even have a little survey to answer before accessing these high-risk roles, like when you're investing in a high-risk asset.
(this is all just off the top of my head, I'm sure there are better suggestions).
It's long been part of 80K's strategy to put people in high-consequence positions in the hope that they can do good and exert influence around them. It is a high-risk strategy with pretty big potential downsides. There have now been multiple instances in which this plan has been shown not to survive contact with the kind of highly agentic, skilled-at-wielding-power individuals who end up in CEO-and-similar positions (I can think of a couple of Sams, for instance). If 80K is going to be pointing a lot of (in expectation) young and inexperienced people in these directions, it might benefit from being a little more reflective about how it does it.
I don't think it's impossible to do good from within companies, but I do expect you need to be skilful, sophisticated, and somewhat experienced. These are AGI companies. Their goal is to build AGI sooner than their competitors build AGI. Their leadership are extremely focused on this. Whether the role is in governance or safety, it's reasonable to expect that ultimately you as an employee will be expected to help them do that (and certainly not to hinder them).
Thanks Rían, I appreciate it. And to be fair, this is from my perspective as much a me thing as it is an Oli thing. Like, I don't think the global optimal solution is an EA forum that's a cuddly little safe space for me. But we all have to make the tradeoffs that make most sense for us individually, and this kind of thing is costly for me.
One other observation that might explain some of the different perceptions on 'blame' here.
I don't think Oxford's bureaucracy/administration is good, and I think it did behave very badly at points*. But overall, I don't think Oxford's bureaucracy/behaviour was a long way outside what you would expect for the reference class of thousand-year-old institutions with >10,000 employees. And Nick knew that was what it was, chose to be situated there, and did benefit (particularly in the early days) from the reputation boost. I think there is some reasonable expectation that, having made that choice, he would put some effort into either figuring out how to operate effectively within its constraints, or take it somewhere else.
(*It did at points have the feeling of the grinding inevitability of a failing marriage, where beyond a certain point everything one side did was perceived in the worst light and with maximal irritation by the other, in both directions, which I think contributed to bad behaviour.)
It wasn't carefully chosen. It was the term used by the commenter I was replying to. I was a little frustrated, because it was another example of a truth-seeking enquiry by Milena getting pushed down the track of only considering answers in which all the agency/wrongness is on the university's side (including some pretty unpleasant options relating to people I'd worked with ('parasitic egregore', 'siphon money')).
>Did Oxford think it was a reputation risk? Were the other philosophers jealous of the attention and funding FHI got? Was a beaurocratic parasitic egregore putting up roadblocks to siphon off money to itself? Garden variety incompetence?
So I just copied and pasted the most relevant phrase, but flipped it. A bit blunter and more smart-arse than I would normally be (as you've presumably seen from my writing, I normally caveat to a probably-tedious degree), but I was finding it hard to challenge the simplistic FHI-good-uni-bad narrative. It was one line; I didn't think much about it.
I remain of the view that the claim is true/a reasonable interpretation, but de novo / in a different context I would have phrased it differently.
Seems worth someone tracking who the major shareholders are and how many voting rights they hold - e.g. I'd bet the house that Jaan Tallinn would be against this, so it'd be good to know whether there are enough shareholders who would support him to ward against possibilities like this.