yams

Small but expanding (like everything in the space) is my understanding; there are also a lot of non-RAND government and government-adjacent groups devoted to AI safety and nat sec.

I didn't mean to imply that the org had retooled to become entirely AI-focused or something; sorry if that's how it read!

I think your claim is directionally correct, all else equal; I just don't think the effect is big enough in context, with high enough confidence, to change the top-line calculation you're responding to (that 4-5x) at the resolution it was offered (whole numbers).

The naive assumption that scholars can be ranked linearly by ability and admitted one by one until the budget runs out is flawed. If it were true, we could probably say that the marginal MATS scholar at selection is worth maybe <80 percent of the central scholar (the threshold at which I would have written 3-4x above rather than 4-5x). But it's not true.
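To make that threshold concrete, here's a minimal sketch of the arithmetic, assuming (purely for illustration; this is the model I'm rejecting) that the marginal scholar's relative value scales the headline multiplier directly:

```python
# Minimal sketch: how a marginal scholar's value relative to the central
# scholar would scale the headline cost multiplier, under the (rejected)
# linear-ranking assumption. Numbers are from this thread, not MATS data.

nominal_multiplier = 4.5  # the "~4-5x" figure discussed here

def effective_multiplier(relative_value: float) -> float:
    """Discount the multiplier by the marginal scholar's relative value
    (1.0 = marginal scholar as good as the central scholar)."""
    return nominal_multiplier * relative_value

print(effective_multiplier(1.0))  # 4.5 -> reads as "4-5x"
print(effective_multiplier(0.8))  # 3.6 -> would have read as "3-4x"
```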

Mentors pick scholars based on their own criteria (MATS ~doesn't mess with this, although we do offer support in the process). Criteria vary significantly between mentors. It's not the case, for instance, that all of the mentors put together their ordered lists of accepted and waitlisted scholars and end up competing for the same top picks. This happens some, but quite rarely relative to the size of the cohort. If what you've assumed actually had a strong effect, we'd expect every mentor to have the same (or at least very similar) top picks. They simply don't.

MATS 6 is both bigger and (based on feedback from mentors) more skill-dense than any previous MATS cohort, because it turns out all else does not hold equal as you scale, and you can't treat a talent pipeline like a pressure calculation.

Ah, really just meant it as a data point and not an argument! I think if I were reading this I'd want to know the above (maybe that's just because I already knew it?).

But to carry on the thread: it's not clear to me, from what we know about the questions in the survey, whether 'creating' meant 'courting, retraining' or 'the sum of all development that made them a good candidate in the first place, plus courting and retraining.' I'd hope it's the former, since the latter feels much harder to reason about commutatively. Maybe this ambiguity is part of the 'roughness' brought up in the OP.

I'm also not sure that 'the marginal graduate is worse than the median graduate' is strongly true. Logically it seems inevitable, but it's very hard to know ex ante how good a scholar's work will be, and I don't think it's exactly right to say there's a bar that gets lowered when the cohort increases in size. We've been surprised repeatedly (in both directions) by the contributions of scholars even after we feel we've gotten a bead on their abilities (reviewed their research plans, etc.).

Often the marginal scholar allows us to support a mentor we otherwise wouldn't have supported, and that mentor may have a very different set of selection criteria from other mentors.

In case it's useful to anyone: that $100k number is ~4-5x the actual cost of increasing the size of a MATS cohort by one.
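For concreteness, the implied back-of-the-envelope (the $100k figure and the 4-5x range are the only inputs; the result is just division, not a number I'm separately confirming):

```python
# Implied marginal cost of adding one scholar, from the figures above.
quoted_number = 100_000
for multiplier in (4, 5):
    print(f"{multiplier}x -> implied cost ${quoted_number / multiplier:,.0f}")
# 4x -> implied cost $25,000
# 5x -> implied cost $20,000
```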

Edit: added more fleshed-out thoughts and some questions...

Edited again to replace those questions with answers, since the doc is available...

Reasoning about how exceptional that exceptional technical researcher is is super hard for me, because even very sharp people in the space have highly varied impact (maybe 4+ OOM between the bottom person I'd describe with the language you used and the top person I'd describe in the same language, e.g. Christiano).
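As a toy illustration of why that spread matters (the lognormal shape and the parameters are my assumptions, chosen only to match the ~4 OOM spread above):

```python
import random

# Toy model (my assumption, not survey data): researcher impact is
# lognormal, with sigma chosen so the 1st-to-99th percentile spread is
# roughly 4 OOM (exp(2 * 2.33 * 2.0) ~= 1e4).
random.seed(0)
samples = sorted(random.lognormvariate(0.0, 2.0) for _ in range(100_000))

mean = sum(samples) / len(samples)
median = samples[len(samples) // 2]
top_1pct_share = sum(samples[-1_000:]) / sum(samples)

print(f"mean / median: {mean / median:.1f}")                   # ~7x
print(f"top 1% share of total impact: {top_1pct_share:.0%}")   # large
```

With tails like that, the expected value of 'an exceptional researcher' is dominated by a handful of outliers, which is exactly what makes the category hard to price.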

Would have been interested to see a more apples-to-apples comparison with technical researchers on the policy side. Most technical researchers have at least some research and/or work experience (usually ~5 years of the two combined). One of the policy categories is massively underqualified in comparison, and the other is massively overqualified. I'd guess this is downstream of where the community has set the bar for policy people, but I'd take "has worked long enough to actually Know How Government Works, but has no special connections or string-pulling power" at like >10:1 against the kind of gov researcher listed (although I'd also take that kind of gov researcher at less than half the median exchange rate above).

Surprised that a UN AI think tank (a literal first, afaik, and likely a necessary precursor to international coordination or to avoiding an arms race) was rated so low, whereas a US think tank (when many US think tanks, including the most important one, have already pivoted to spending a lot of time thinking about AI) was rated so highly.

Without commenting too much on a specific org (anonymity commitments, sorry!), I think we’re in agreement here and that the information you provided doesn’t conflict with the findings of the report (although, since your comment is focused on a single org in a way that the report is simply not licensed to be, your comment is somewhat higher resolution).

One manager creates bandwidth for 5-10 additional Iterator hires, so the two just aren’t weighted the same wrt something like ‘how many of each should we have in a MATS cohort?’ In a sense, a manager is responsible for ~half the output of their team, or is “worth” 2.5-5 employees (if, counterfactually, you wouldn’t have been able to hire those folks at all). This is, of course, conditional on being able to get those employees once you hire the manager. Many orgs also hire managers from within, especially if they have a large number of folks in associate positions who’ve been with the org >1 year and have the requisite soft skills to manage effectively.
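A tiny sketch of that arithmetic (the 5-10 team size and the ~half-of-output attribution are the figures above; 'worth' here is just the counterfactual share calculation, not a real valuation model):

```python
# Counterfactual 'worth' of one manager, per the reasoning above:
# the manager enables a team of 5-10 Iterators and is credited with
# ~half of that team's output.
CREDIT_SHARE = 0.5

for team_size in (5, 10):
    worth = team_size * CREDIT_SHARE
    print(f"team of {team_size}: manager 'worth' ~{worth} employees")
# team of 5: manager 'worth' ~2.5 employees
# team of 10: manager 'worth' ~5.0 employees
```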

If you told me “We need x new safety teams from scratch at an existing org”, I would probably want to produce (1-2)x Amplifiers (to be managers) and (5-10)x Iterators. Keeping in mind the above note about internal hires, this pushes the need for Amplifiers relative to Iterators (in terms of the absolute number of heads who can do the role) down somewhat.
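Parameterizing that rule of thumb (the per-team ranges are from above; the internal-promotion fraction is my own illustrative knob, not a figure from the post):

```python
# Headcount sketch for x new safety teams, using the ranges above.
def staffing(x: int, internal_promotion_rate: float = 0.3):
    """internal_promotion_rate is an illustrative assumption: the share of
    Amplifier seats filled by promoting existing associates/Iterators,
    which reduces the number of *new* Amplifier heads needed."""
    amplifiers = (1 * x, 2 * x)
    iterators = (5 * x, 10 * x)
    new_amplifier_hires = tuple(a * (1 - internal_promotion_rate) for a in amplifiers)
    return new_amplifier_hires, iterators

amps, its = staffing(3)
print(f"3 new teams: ~{amps[0]:.0f}-{amps[1]:.0f} new Amplifier hires, "
      f"{its[0]}-{its[1]} Iterators")
# 3 new teams: ~2-4 new Amplifier hires, 15-30 Iterators
```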

Fwiw, I think that research engineer is a pretty Iterator-specced role, although with different technical requirements from, e.g., “Member of Technical Staff” and “Research Scientist”, and that pursuing an experimental agenda that requires building a lot of your own tools (with an existing software development background) is probably great prep for that position. My guess is that MATS scholars focused on evals, demos, scalable oversight, or control could make strong research engineers down the line, and that things like CodeSignal tests would help catch strong Research Engineers in the wild.

...we’re looking for someone with experience in a research or engineering environment, who is excited about and experienced with people and project management, and who is enthusiastic about our research agenda and mission.

I’d also predict that, if management becomes a massive bottleneck to Anthropic scaling, they would restructure somewhat to make the prerequisites for these roles a little less demanding (as DeepMind has with its People Managers, as distinct from Research Leads, and as several growing technical orgs have, as mentioned in the post).

This is a good point, and something that I definitely had in mind when putting this post together. There are a few thoughts, though, that would temper my phrasing of a similar claim: 

Many interviewees said things like "I want 50 more iterators, 10 amplifiers to manage them, and 1-2 connectors." Interviewees were also working on diverse research agendas, meaning that each of these agendas could probably absorb >100 iterators if not for managerial bottlenecks and, to a lesser extent, funding constraints. This is even more true if those iterators have sufficient research taste (experience) to design their own follow-up experiments.

This points toward abundant low-hanging fruit and a massive experimental backlog field-wide. For this reason and others, I'd probably bump up the 100 number in your hypothetical by a few OOM, which, given the field's growth (fast in an absolute sense, but slow relative to our actual needs and funds), probably means the need for iterators holds even in long timelines, particularly if read as "for at least a few months, please prioritize making more iterators and amplifiers" and not "for all time, no more connectors are needed."
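To gesture at the scale, here's a deliberately conservative sketch (the per-agenda figure and the amplifier ratio come from the interview quotes above; the number of agendas is my own illustrative assumption):

```python
# Rough field-wide absorption estimate from the figures above.
n_agendas = 20               # illustrative assumption, not a surveyed number
iterators_per_agenda = 100   # ">100 iterators" per agenda (a lower bound)
iterators_per_amplifier = 5  # from "50 more iterators, 10 amplifiers"

iterators = n_agendas * iterators_per_agenda
amplifiers = iterators // iterators_per_amplifier
print(f"~{iterators} iterators, ~{amplifiers} amplifiers")
# ~2000 iterators, ~400 amplifiers
```

Even this lower-bound reading lands well above the 100 in the hypothetical, before accounting for iterators with enough taste to design their own follow-ups.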

If we just keep tasting the soup, and figuring out what it needs as we go, we'll get better results than if any one-time appraisal or cultural mood becomes dogma.

There's a line I hear paraphrased a lot by the ex-physicists around here, from Paul Dirac, about physics in the early days of quantum mechanics: it was a time when "second-rate physicists could do first-rate work." The AI safety situation seems similar: the rate of growth, the large number of folks who've made meaningful contributions, the immaturity of the paradigm, and the proliferation of divergent conceptual models all point to a landscape in which a lot of dry scientific churning needs doing.

I definitely agree that marginal 'more-of-the-same' talent has diminishing returns. But I also think diverse teams have a multiplicative effect, and my intention in the post is to advocate for a diversified talent portfolio (as in the numbered takeaways section, which is in one sense a list of priorities, but in another sense a list of considerations I would personally refuse to trade off against if I were the Dictator of AI Safety Field-building). That is, you get more from 5 iterators, one amplifier, and one connector working together on mech interp than you do from 30 iterators doing the same. But I wasn't thinking about building the mech interp talent pool from scratch in a frictionless vacuum; I was looking at the current mech interp talent pool, trying to see how far it is, right now, from its ideal composition, and then filling those gaps (where job openings, especially at small safety orgs, and the preferences of grantmakers are a decent proxy for the gaps).
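A toy model of that claim, with parameters I chose purely to illustrate the direction (diminishing within-role returns, multiplicative cross-role bonuses), not calibrated to anything:

```python
import math

# Toy model (illustrative assumptions only): same-type talent has
# diminishing returns (sqrt), while amplifiers and connectors each
# multiply the team's output.
def team_output(iterators: int, amplifiers: int, connectors: int) -> float:
    base = math.sqrt(iterators)
    return base * (1 + 0.75 * amplifiers) * (1 + 0.75 * connectors)

print(f"5 iterators + 1 amplifier + 1 connector: {team_output(5, 1, 1):.1f}")
print(f"30 iterators alone: {team_output(30, 0, 0):.1f}")
# 5 iterators + 1 amplifier + 1 connector: 6.8
# 30 iterators alone: 5.5
```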

Sorry to go so hard in this response! I've just been living inside this topic for 4-5 months, and a lot of this type of background was cut from the initial post for concision and legibility (neither of which is particularly native to me). I'd hoped the comment section might be a good place for me to provide more context and tempering, so thanks so much for engaging!