I fully agree.
My understanding of most structured interview formats is that follow-up or clarifying questions are still expected and encouraged. There's limited value in robotically sticking to a list of identical questions and missing the opportunity to gather additional information with a follow-up.
I find the best interviews feel like a structured conversation. There's interaction between the panel and the candidate, because that's how we actually interact when we meet. Efforts are made to help the candidate feel relaxed and comfortable, and to value the experiences they share with the panel. I cover the same questions in the same order with each candidate, but we might spend longer on some questions than others, depending on the candidate's background and their strengths and weaknesses.
> If you're a program manager in a small org, you're basically making highly strategic judgment calls on a weekly basis, and when people don't have strong mental models around AI safety, they tend to make the wrong ones. I think at present the majority of demand for soft ops roles in the ecosystem looks like this: they're roles that require one to make frequent judgment calls that benefit highly from field-specific context and a solid internalization of our priorities.
Firstly, I agree that in smaller organisations people do a bit of everything, and having some technical background is helpful.
The way you describe the programme manager role sounds to me like a research manager position. Those positions are the most likely to need a technical background. However, there could equally be programme manager roles which don't require technical expertise.
For example, if I'm managing the development of a training course, I hire a subject matter expert to develop the content. I'm managing timelines, staffing, budgets, and quality - but I don't need subject matter expertise to do this management role.
Even if I have some subject matter expertise, this expertise might not be fully transferable - I can have deep knowledge about using Excel, but if I'm overseeing training course development on how to design a more efficient electric car battery, my Excel subject matter expertise might be mostly irrelevant.
Now you could hire a subject matter expert to be the programme manager overseeing the development of these training courses. But they might not have the background or skills for the role, because they are a subject matter expert, not a programme management expert.
More broadly - I don't want my research experts following up on visas, drafting press releases, doing graphic design for an event invitation, trying to work out which politician would give them a sympathetic hearing, or calculating how much cash flow we need for the next month. There are specialists who already have this expertise, and it seems more efficient to work with those specialists where it makes sense, instead of over-weighting prior AI safety technical expertise.
> There are important exceptions to this, especially at bigger orgs, where soft ops roles can be more specialized and therefore there's less of a need for a high degree of context and deep mission-alignment (for example, I suspect many soft ops roles at Coefficient Giving are like this).
I agree: larger organisations have a greater capacity to carry specialised non-technical expertise. But I think this specialist expertise could also be made available to smaller organisations, perhaps on a fractional basis, if those organisations saw it as useful - and perhaps if there were a better way of matching expertise to them.
I also wonder if there is a chicken-and-egg dynamic here: organisations that are willing to accept specialist support (perhaps starting with finance, HR, and operations?) may be more likely to be stable and successful, and to grow into larger organisations. Conversely, if you have a sole founder and rely on technical researchers for everything, that structure can place limits on the growth and sustainability of the organisation.
Hi Karen,
I made an account here so I could post this comment.
I wanted to thank you for raising these issues. Like you, I'm a senior professional. I'm part of Bluedot and CEA cohorts which also include other senior professionals (20-30 years' experience). These professionals (and I) have already had impactful careers, and are looking to pivot into new impactful roles where our skills and experience can be applied.
With my personal background, it would be much easier (and probably better financially) for me to pivot into corporate AI Governance rather than AI safety. I'd prefer not to do this since it seems less impactful.
I also find the distinction in these discussions between technical/researcher roles and 'generalist' (i.e. non-technical) roles to be an interesting framing. It seems to me that there are other ways of assessing mission alignment, and there is a risk of over-weighting technical expertise, as HannahGB questions below.
To give a concrete (but invented) example:
Scenario 1: Let's imagine an AI safety organisation doing wonderful research. The researchers take their results and present them at a technical conference. Attendees are very impressed, and this inspires other researchers to continue developing the research pathway.
Scenario 2: Let's imagine the same research at a different AI safety organisation. The researcher briefs the executive leadership of the organisation on their results. The organisation works with their policy lead to tease out the implications of the results. The policy and communications leads work together to define an advocacy position and proposals, supported by a strong briefing deck with talking points that are understandable to a semi-technical government audience. A lawyer helps prepare zero-draft text for inclusion in regulation. The policy lead connects the organisation with regulators from their network, to present the research and advocate for improved governance. The comms lead arranges publicity and advocacy for the discussion through their comms networks. And the researcher still gets to present the research at a technical conference.
It seems to me that there's a lot of Scenario 1 work happening in the field, plus some separate organisations working on AI governance and oversight from a legislative and policy perspective. My question is whether there is also an opportunity for more of the Scenario 2 approach?
Adding to this - people with mental illnesses in developing countries are often stigmatised and shunned by their families, and at worst imprisoned. They are imprisoned due to (1) public order offences (i.e. being disruptive in public) and (2) a lack of other facilities to accommodate them long term (i.e. hospital facilities or mental health programmes).
There is a lot that could be done relatively cheaply if this were taken up as a priority.