
The pipeline for (x-risk-focused) AI strategy/governance/forecasting careers has never been strong, especially for new researchers. But it feels particularly weak recently (e.g. no summer research programs this year from Rethink Priorities, SERI SRF, or AI Impacts, at least as of now, and fewer job openings than ever). (Also no governance course from AGI Safety Fundamentals in a while and no governance-focused programs elsewhere.)[1] We're presumably missing out on a lot of talent.

I'm not sure what the solution is, or even what the problem is-- I think it's somewhat about funding and somewhat about mentorship and mostly about [orgs not prioritizing boosting early-career folks and not supporting them for various idiosyncratic reasons] + [the community being insufficiently coordinated to realize that it's dropping the ball and it's nobody's job to notice and nobody has great solutions anyway].

If you have information or takes, I'd be excited to learn. If you've been looking for early-career support (an educational program, way to test fit, way to gain experience, summer program, first job in AI strategy/governance/forecasting, etc.), I'd be really excited to hear your perspective (feel free to PM).


(In AI alignment, I think SERI MATS has improved the early-career pipeline dramatically-- kudos to them. Maybe I should ask them why they haven't expanded to AI strategy or if they have takes on that pipeline. For now, maybe they're evidence that someone prioritizing pipeline-improving is necessary for it to happen...)

  1. ^

    Added on May 24: the comments naturally focused on these examples, but I wasn't asserting that summer research programs or courses are the most important bottlenecks-- they just were salient to me recently.


5 Answers

To help with the talent pipeline, GovAI currently runs twice-a-year three-month fellowships. We've also started offering one-year Research Scholar positions. We're also now experimenting with a new policy program. Supporting the AI governance talent pipeline is one of our key priorities as an organization.

That being said, we're very very far from filling the community's needs in this regard. We're currently getting far more strong applications than we have open slots. (I believe our acceptance rate for the Summer Fellowship is something like 5% and will probably keep getting lower. We now need to reject people who actually seem really promising.) We'd like to scale our programs up more, but even then there will still be an enormous unmet need. I would definitely welcome more programs in this space!

I would also strongly recommend having a version of the fellowship that aligns with US university schedules, unlike the current Summer Fellowship!

I was very glad to see the research scholar pathway open up, it seems exactly right for someone like me (advanced early career, is that a stable segment?).

I’m also glad to hear of the interest, although it’s too bad that the acceptance rate is lower than ideal. Then again, to many folks coming from academic grant funding ecosystems, 5% is fairly typical, at least for major funding in my fields.

I totally agree there's a gap here. At BlueDot Impact (/ AGI Safety Fundamentals), we're currently working on understanding the pipeline for ourselves.

We'll be launching another governance course in the next week, and in the longer term we will publish more info on governance careers on our website, as and when we establish the information for ourselves.

In the meantime, there's great advice on this account, mostly targeted at people in the US, but there might be some transferrable lessons:

https://forum.effectivealtruism.org/users/us-policy-careers

May I just add that, as someone who self-studied my way through the public reading list recently, I’d rate many of the resources there very highly.

I also have the impression that there's a gap and would be interested in whether funders are not prioritizing it too much, or whether there's a lack of (sufficiently strong) proposals.

Another AI governance program which just started its second round is Training For Good's EU Tech Policy fellowship, where I think the reading and discussion group part has significant overlap with the AGISF program. (Beyond that, it includes policy trainings in Brussels, plus, for some fellows, a 4-6 month placement at an EU think tank.)

This is a timely post. It feels like funding is a critical obstacle for many organisations. 

One idea: Given the recent calls by many tech industry leaders for rapid work on AI governance, is there an opportunity to request direct funding from them for independent work in this area?

To be very specific: Has someone contacted OpenAI and said: "Hey, we read with great interest your recent article about the need for governance of superintelligence. We have some very specific work (list specific items)  in that area which we believe can contribute to making this happen. But we're massively understaffed and underfunded. With $1m from you, we could put 10 researchers working on these questions for 1 year. Would you be willing to fund this work?"

What's in it for them? Two things:

  1. If they are sincere (as I believe they are), then they will want this work to happen, and some groups in the EA sphere are probably better placed to make it happen than they themselves are.
  2. We can offer independence (any results will be from the EA group, not from OpenAI and not influenced or edited by OpenAI) but at the same time we can openly credit them with funding this work, which would be good PR and a show of good faith on their part. 

Forgive me if this is something that everyone is already doing all the time! I'm still quite new to EA! 

Given the (accusations of) conflicts of interest in OpenAI’s calls for regulation of AI, I would be quite averse to relying on OpenAI for AI governance funding.

Comments

Also no governance course from AGI Safety Fundamentals in a while

My independent impression here, having facilitated in this course and in other virtual programs, is that the curriculum provides ~90% of the value of the AGISF Governance course.[1] Therefore, I'd encourage those looking to skill up to simply get started working through the curriculum independently, rather than wait for the next round of the course.[2]

  1. ^

    Caveat: The discussion-and-support aspects of the course may have evolved since early 2022, when I facilitated, in ways that'd change my ~90% estimate.

  2. ^

    This “get started independently” conclusion follows, in my view, even with a much weaker premise: that the curriculum provides ~50% of the course's value, say. And I'd be very surprised if many AGISF alumni believe that less than half of the course's value comes from the curriculum.

(Sure. I was mostly just trying to complain but I appreciate you being more constructive. The relevant complaint in response is that AGISF hasn't improved/updated their curriculum much + nobody's made and shared a better one.)

Rethink Priorities unfortunately wasn't able to offer our own Fellowship this year due to capacity reasons and financial constraints (especially post-FTX), but we would be excited to potentially run one next year.

Instead this year we put a lot of our ops capacity behind the Existential Risks Alliance via our Special Projects program and helped them run a large fellowship this year. I hope that was helpful to the community.

I think I'm also just generally more excited to create more permanent jobs in AI governance and strategy work than to run fellowships (though of course we can and will do both), as I think a bigger bottleneck right now is what people do after they finish a fellowship, and making sure we have enough permanent opportunities for them. "What do I do after SERI MATS?" is a problem too.

This seems like a useful topic to raise. Here's a pretty quickly written & unsatisfactory little comment: 

  • I agree that there's room to expand and improve the pipeline to valuable work in AI strategy/governance/policy. 
  • I spend a decent amount of time on that (e.g. via co-leading RP AI Governance & Strategy team, some grantmaking with EA Infrastructure Fund, advising some talent pipeline projects, and giving lots of career advice).
  • If a reader thinks they could benefit from me pointing you to some links or people to talk to, or via us having a chat (e.g. if you're running a talent pipeline project or strongly considering doing so), feel free to DM me. 
    • (But heads up that I'm pretty busy so may reply slowly or just with links or suggested people to talk to, depending on how much value I could provide via a chat but not via those other quicker options.)

One specific thing I'll mention in case it's relevant to some people looking at this post: The AI Governance & Strategy team at Rethink Priorities (which I co-lead) is hiring for a Compute Governance Researcher or Research Assistant. The first application stage takes 1hr, and the deadline is June 11. @readers: Please consider applying and/or sharing the role! 

We're hoping to open additional roles sometime around September. One way to be sure you'll be alerted if and when we do is to fill in our registration of interest form.

FYI I prefer "AI governance" over "AI strategy" because I think the latter pushes people towards trying to just sit down and think through arbitrarily abstract questions, which is very hard (especially for junior people). Better to zoom in more, as I discuss in this post.


Nice post, and I appreciate you noticing something that bugged you and posting about it in a pretty constructive manner.

In AI alignment, I think SERI MATS has improved the early-career pipeline dramatically-- kudos to them. Maybe I should ask them why they haven't expanded to AI strategy or if they have takes on that pipeline.

I know that around the start of this year, the SERI SRF (not MATS) leadership was thinking seriously about launching a MATS-styled program for strategy/governance. I'm not sure if the idea is still alive, though.

Also, CBAI ran a pilot AI strategy research fellowship this past winter, which I participated in and found worthwhile. At the time they were, I think, planning on running a bigger version of the fellowship in the summer, though it appears that's no longer happening.

no summer research programs this year from [...] SERI SRF

On the other hand, ERA, formerly known as CERI, and CHERI are running fellowships this summer, and I expect they'll both have several AI governance fellows. (Though I do also expect, from what I know of these programs, that their AI governance focus will be more on applied governance than on strategy/theoretical governance. I don't have a strong stance on whether this is overall positive or negative, but it does mean there's less of an AI strategy pipeline.)

around the start of this year, the SERI SRF (not MATS) leadership was thinking seriously about launching a MATS-styled program for strategy/governance

I'm on the SERI (not MATS) organizing team. One person from SERI (henceforth meaning SERI as distinct from MATS, since the two have largely split) was thinking about this in collaboration with some of the MATS leadership. The idea is currently not alive, but afaict it didn't strongly die (i.e. I don't think people decided against it and cancelled things; rather, it failed to happen due to other priorities).

I think something like this is good to make happen though, and if others want to help make it happen, let me know and I'll loop you in with the people who were discussing it.

Speaking on behalf of MATS, we offered support to the following AI governance/strategy mentors in Summer 2023: Alex Gray, Daniel Kokotajlo, Jack Clark, Jesse Clifton, Lennart Heim, Richard Ngo, and Yonadav Shavit. Of these people, only Daniel and Jesse decided to be included in our program. After reviewing the applicant pool, Jesse took on three scholars and Daniel took on zero.

It's correct that CBAI does not have plans to run a research fellowship this summer (though we might do one again in the winter). But we are tentatively planning to run a short workshop this summer that I think will at least slightly ease this bottleneck by connecting people worried about AI safety to the US AI risks policy community in DC. Stay tuned (and email me at trevor [at] cbai [dot] ai if you'd want to be notified when we open applications).

(And I heard MATS almost had a couple strategy/governance mentors. Will ask them.)

(Again, thanks for being constructive, and in the spirit of giving credit, yay to GovAI, ERA, and CHERI for their summer programs. [This is yay for them trying; I have no knowledge of the programs and whether they're good.])

(I now realize my above comments probably don't show this, but I do agree with you that the AI strategy(+governance) pipeline is looking particularly weak at present, and that the situation is pretty undignified given that building this pipeline is perhaps one of the most important things we—the EA movement/community—could be doing.)

This situation was somewhat predictable and avoidable, in my view. I’ve lamented the early-career problem in the past but did not get many ideas for how to solve it. My impression has been that many mid-career people in relevant organizations put really high premiums on “mentorship,” to the point that they are dismissive of proposals that don’t provide such mentorship. 

There are merits to emphasizing mentorship, but the fact is that mentorship capacity is a major bottleneck, and this emphasis does little good for people who are struggling to get good internships. The result for me personally was at least ~4 internships that were not very relevant to AI governance, were not paid, and did not provide substantial career benefits (e.g., mentorship).

In summary, people should not let the perfect be the enemy of the good: I would have gladly taken an internship working on AI governance topics, even if I had almost no mentorship (and even if I had little or no compensation). I also think there are ways of substituting this with peer feedback/engagement.

I have multiple ideas for AI governance projects that are not so mentorship-dependent, including one pilot idea that, if it worked, could scale to >15 interns and entry-level researchers with <1 FTE experienced researcher in oversight. But I recognize that the ideas may not all be great (or at least their merits are not very legible). Unfortunately, we don’t seem to have a great ecosystem for sharing and discussing project ideas, at least if you aren’t well connected with people to provide feedback through your job or through HAIST/MAIA or other university groups.


Ultimately, I might recommend that someone aggregate a list of past programs and potential proposals, evaluate the importance of various goals and characteristics (e.g., mentorship, skill development, topic education, networks, credentials/CVs), and identify the key constraints/bottlenecks (e.g., funding vs. mentorship).
