I'm a generalist and open sourcerer who does a bit of everything, but perhaps nothing particularly well. I'm also the Co-Director of Kairos, an AI safety fieldbuilding org.
I was previously the AI Safety Group Support Lead at CEA and a Software Engineer in the Worldview Investigations Team at Rethink Priorities.
(Answering personally, not necessarily endorsed by the other authors)
I think there's a lot of nuance here. To be clear, I don't think that people without AI safety context are never a good fit for soft ops roles, and indeed I've seen a few orgs hire them successfully, but the nature of those roles tends to be different.
Maybe the biggest consideration here is the level of ownership the role requires. If you're a program manager at a small org, you're making highly strategic judgment calls on a weekly basis, and people without strong mental models of AI safety tend to make the wrong ones. I think that at present the majority of demand for soft ops roles in the ecosystem looks like this: roles requiring frequent judgment calls that benefit greatly from field-specific context and a solid internalization of our priorities. There are important exceptions, especially at bigger orgs, where soft ops roles can be more specialized and therefore require less context and less deep mission-alignment (for example, I suspect many soft ops roles at Coefficient Giving are like this).
I also think AI safety makes this particularly hard, because newcomers tend to start with very poor mental models of our priorities, unlike other fields where the key goals and the kinds of prioritized interventions are much easier to grasp. In my experience, it can take months for people to build the context needed to do core work, and often that never happens, which makes investments like this very expensive and risky for organizations.
Not sure who needs to hear this, but Hank Green has published two very good videos about AI safety this week: an interview with Nate Soares and a SciShow explainer on AI safety and superintelligence.
Incidentally, he appears to have also come up with the ITN framework from first principles (h/t @Mjreard).
Hopefully this is auspicious for things to come?
Just FYI, the public policy literature already has a concept for what we call warning shots: focusing events. I frequently suggest that people read Focusing Events, Mobilization, and Agenda Setting by Birkland, the classic paper on the topic.
I've been very concerned that EA orgs, particularly the bigger ones, would be too slow to orient and react to changes in the urgency of AI risk, so I'm very happy that 80k is making this shift in focus.
Any change of this size means a lot of work restructuring teams, their priorities, and what staff are working on, but I think this move ultimately plays to 80k's strengths. Props.
There's a lot in this post that I strongly relate to. I also recently left CEA, although after having worked there for a much shorter time: only six months. To give some perspective on how much I agree with Lizka, I'll quote from the farewell letter I wrote to the team:
While I will admit that it took some getting used to, I'm still surprised at how fast I started feeling part of the CEA team and, moreover, how much I came to admire its culture. If you had told me back then that this is what CEA was like, I don't think I would have bought it. I mean, sure, you can put a lot of nice-sounding principles on your website, but that doesn't mean you actually embody them. It turns out that it is possible to embody them, and it was then my turn.
I even remember Jessica trying to convince me during my work trial that CEA was friendly and even silly sometimes. To me, CEA was just the scary place where all the important people worked. I now know what she meant. (...) It's now gone from a scary place to my favorite team of people. It's become much more special to me than I ever suspected.
So I want to second Lizka's thoughts: I feel very honored to have worked with them.
Hi Karen,
Thanks for engaging with our post. To be clear, do you think there's a problem with how well we publicly signpost the roles we need? My impression is that the majority of vacant generalist roles are indeed posted and circulated through job boards and LinkedIn; it's just that most applicants don't meet the criteria orgs are looking for.