I asked readers of my blog with experience in AI alignment (and especially AI grantmaking) to fill out a survey about how they valued different goods. I got 61 responses. I disqualified 11 for various reasons, mostly for failing the comprehension check question at the beginning, and kept 50.

Because I didn't have a good way to represent the value of "a" dollar for people who might have very different amounts of money managed, I instead asked people to value things in terms of a base unit - a program like MATS graduating one extra technical alignment researcher (at the center, not the margin). So for example, someone might say that "creating" a new AI journalist was worth "creating" two new technical alignment researchers, or vice versa.
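To make the aggregation concrete, here is a minimal sketch (with made-up numbers rather than the real responses) of how answers expressed in researcher-equivalents get combined into a headline median:

```python
# Minimal sketch: aggregate valuations expressed in researcher-equivalents.
# The numbers here are illustrative, not the actual survey data.
import statistics

# Hypothetical responses: value of "creating" one new AI journalist,
# in units of one extra MATS-style technical alignment researcher.
journalist_in_researchers = [0.5, 1.0, 1.2, 2.0, 3.0]

print(statistics.median(journalist_in_researchers))  # -> 1.2
```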

One of the goods that I asked people to value was $1 million going to a smart, value-aligned grantmaker. This provided a sort of researcher-money equivalence, which turned out to be $125,000 per researcher on median (equivalently, the median respondent valued the $1 million at eight researchers). I rounded to $100,000 and put this in an experimental second set of columns, but the median comes from a wide range of estimates and there are some reasons not to trust it.

The results are below. You can see the exact questions and assumptions that respondents were asked to make here. Many people commented that there were ambiguities, additional assumptions needed, or that they were very unsure, so I don't recommend using this as anything other than a very rough starting point.

I tried separating responses by policy vs. technical experience, or weighting them by respondents' level of experience/respect/my personal trust in them, but neither of these changed the answers enough to be interesting.
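If you want to redo the weighting check on the raw data, a weighted median along the following lines is one way to do it (not necessarily exactly what I did; the values and weights below are hypothetical):

```python
# Sketch of a weighted median: sort values, accumulate weights, and take
# the value where the running total first reaches half the total weight.
# Values and weights are hypothetical stand-ins for the survey data.
import numpy as np

values = np.array([0.5, 1.0, 1.2, 2.0, 3.0])  # researcher-equivalents
weights = np.array([1, 3, 2, 1, 1])           # e.g. respondent experience

order = np.argsort(values)
cum = np.cumsum(weights[order])
weighted_median = values[order][np.searchsorted(cum, cum[-1] / 2)]
print(weighted_median)  # -> 1.0
```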

You can find the raw data (minus names and potentially identifying comments) here.

Comments (28)
tlevin

Thanks for running this survey. I find these results extremely implausibly bearish on public policy -- I do not think we should be even close to indifferent between improving by 5% the AI policy of the country that can make binding rules on all of the leading labs plus many key hardware inputs, has a $6 trillion budget, and has the most powerful military on earth, versus having $8.1 million more for a good grantmaker, or 32.5 "good video explainers," or 13 technical AI academics. I'm biased, of course, but IMO the surveyed population is massively overrating the importance of the alignment community relative to the US government.

I mostly agree with this. The counterargument I can come up with is that the best AI think tanks right now are asking for grants in the range of $2-5 million and seem to be pretty influential, so it's possible that a grantmaker who got $8 million could improve policy by 5%, in which case it's correct to equate those two.

I'm not sure how that fits with the relative technical/policy questions.

How are the best AI think tanks "pretty influential"?

I think "5%" is just very badly defined. If I go with the definition most intuitive to me, then 32.5 good video explainers would probably improve the AI-x-risk-relevant competence of the US government by more than 5% (that competence is currently very close to 0, and 5% of a very small number is easy to achieve).

But like, any level of clarification would probably wildly swing whatever estimates I give you. Disagreement on this question seems like it will inevitably just lead to arguing over definitions.

"Improve US AI policy 5 percentage points" was defined as

Instead of buying think tanks, this option lets you improve AI policy directly. The distribution of possible US AI policies will go from being centered on the 50th-percentile-good outcome to being centered on the 55th-percentile-good outcome, as per your personal definition of good outcomes. The variance will stay the same.

(This is still poorly defined.)

Hmm, yeah, that is better-defined. I don't have a huge amount of variance within those percentiles, so I think I would probably take the 32.5 video explainers, but I really haven't thought much about it.

Some interesting implications about respondents' median beliefs:

  • It takes 93 typical policy people, or 4 and 1/3 extraordinary policy people, to improve US policy by 5 percentage points
  • Mass protests improve US policy by 0.06%
  • US policy matters 3.25x as much as individual BigCos' internal policies
  • An academic researcher is worth 5x as much as a typical technical researcher
  • An extraordinary technical researcher is worth 1 and 2/3 times as much as an entire new org

(A couple of these sound super wrong to me but I won't say which ones)
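For instance, the first bullet pins down an implied exchange rate between the two tiers of policy people just by chaining the medians (a quick sketch, using only the numbers above):

```python
# Both tiers are medians against the same good ("improve US policy 5pp"),
# so their ratio gives an implied typical-to-extraordinary exchange rate.
typical_per_5pp = 93          # typical policy people per 5pp improvement
extraordinary_per_5pp = 13/3  # extraordinary policy people per 5pp

print(typical_per_5pp / extraordinary_per_5pp)  # ~21.5 typical per extraordinary
```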

Actually, something I am confused about is whether the AI academics are valued per person*year, as the technical researchers in various fields are.

Interesting that a typical AI journalist is valued more than a typical AI technical researcher (1.2x) and a typical AI policy person (1.7x).

How many of each are there? I wonder if journalists are uncommon enough that their marginal utility hasn't started sloping down too much yet.

I see a couple of the questions have a lot of missing data concentrated at the start of the dataset (e.g. "Fund Edith for one year", "Improve big company safety orientation 5 percentage points"). Is there a particular reason for that, e.g. that the questions were added to the survey partway through, after some respondents had already taken it? (This influences how we should interpret the missing data.)

Yes, I added them partway through after thinking about the question set more.

Thanks, makes sense!

In that case, dropping those two items, the responses seem pretty coherent, in that you can see a fairly clear pattern of support for ~ policy and think tanks, ~ outreach, and ~ technical work cohering together.[1] I think this is reassuring about the meaningfulness of people's responses (while not, of course, suggesting that they got the substantive values right).

  1. ^

    The exact results vary depending on the nuances of the analysis, of course, so I wouldn't read too much into the specifics of the results above without digging into it more yourself, though we found broadly the same pattern across a number of different analyses.
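For concreteness, one analysis of this flavor is just a correlation matrix over log exchange rates; a minimal sketch, with simulated data standing in for the real responses:

```python
# Sketch: look for blocks of items whose (log) exchange rates move together
# across respondents. Simulated data stands in for the real survey table.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
items = ["us_think_tank", "policy_person", "video_explainer", "tech_researcher"]
df = pd.DataFrame(rng.lognormal(size=(50, len(items))), columns=items)

# Positively correlated blocks would correspond to the policy, outreach,
# and technical "camps" described above.
print(np.log(df).corr().round(2))
```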

In case it's useful to anyone: that 100k number is ~4-5x the actual cost of increasing the size of a MATS cohort by 1 (i.e., the marginal cost is roughly $20-25k).

edit for more fleshed-out thoughts and some questions...

and now edited again to replace those questions with answers, since the doc is available...

Reasoning about how exceptional that exceptional technical researcher is turns out to be super hard for me, because even very sharp people in the space have highly varied impact (maybe 4+ OOM between the bottom person I'd describe with the language you used and the top person I'd describe in the same language, e.g. Christiano).

Would have been interested to see a more apples-to-apples comparison with technical researchers on the policy side. Most technical researchers have at least some research and/or work experience (usually ~5 years of the two combined). One of the policy categories is massively underqualified in comparison, and the other is massively overqualified. I'd guess this is downstream of where the community has set the bar for policy people, but I'd take "has worked long enough to actually Know How Government Works, but has no special connections or string-pulling power" at like >10:1 against the kind of gov researcher listed (although I'd also take that kind of gov researcher at less than half the median exchange rate above).

Surprised a UN AI think tank (a literal first, afaik, and likely a necessary precursor for international coordination or avoiding an arms race) was rated so low, whereas a US think tank was rated so highly (when many US think tanks, including the most important one, have already pivoted to spending a lot of time thinking about AI).

Of course, the marginal graduate is worse than the median graduate, and in order for someone to end up participating in MATS, many more things need to happen than for MATS to accept them (most MATS students have spent dozens to hundreds of hours reading existing content, or have already extensively engaged with existing community institutions, all of which someone has to pay for).

As such, this at least does not straightforwardly imply that people think MATS should get more funding (I do think MATS probably should get more funding, but I care about the local validity of this argument here).

Ah, really just meant it as a data point and not an argument! I think if I were reading this I'd want to know the above (maybe that's just because I already knew it?).

But to carry on the thread: It's not clear to me from what we know about the questions in the survey if 'creating' meant 'courting, retraining', or 'sum of all development that made them a good candidate in the first place, plus courting, retraining.' I'd hope it's the former, since the latter feels much harder to reason about commutatively. Maybe this ambiguity is part of the 'roughness' brought up in the OP.

I'm also not sure if 'the marginal graduate is worse than the median graduate' is strongly true. Logically it seems inevitable, but also it's very hard to know ex ante how good a scholar's work will be, and I don't think it's exactly right to say there's a bar that gets lowered when the cohort increases in size. We've been surprised repeatedly (in both directions) by the contributions of scholars even after we feel we've gotten a bead on their abilities (reviewed their research plans, etc).

Often the marginal scholar allows us to support a mentor we otherwise wouldn't have supported, who may have a very different set of selection criteria than other mentors.

If the marginal scholar is better than the median scholar, why would you not just decline the worst scholars and admit the better ones? Clearly the marginal scholar would usually be the worst scholar? Are you saying that if you had half the money, the average quality of the cohort would go down instead of up, and that you would be unable to prioritize admitting only the more competent people?

I think your claim is directionally correct all-else-equal; I just don't think the effect is big enough in context, with high enough confidence, that it changes the top-line calculation you're responding to (that 4-5x) at the resolution it was offered (whole numbers).

The naive assumption that scholars can be arranged linearly according to their abilities and admitted one-by-one in accordance with the budget is flawed. If it were true, we could probably say that the marginal MATS scholar at selection was worth maybe <80 percent of the central scholar (the threshold at which I would have written 3-4x above rather than 4-5x). But it's not true.

Mentors pick scholars based on their own criteria (MATS ~doesn't mess with this, although we do offer support in the process). Criteria vary significantly between mentors. It's not the case, for instance, that all of the mentors put together their ordered list of accepted and waitlisted scholars and end up competing for the same top picks. This happens some, but quite rarely relative to the size of the cohort. If what you've assumed actually had a strong effect, we'd expect every mentor to have the same (or even very similar) top picks. They simply don't.

MATS 6 is both bigger and (based on feedback from mentors) more skill-dense than any previous MATS cohort, because it turns out all else does not hold equal as you scale, and you can't treat a talent pipeline like a pressure calculation.

You might believe that there are network effects, or that the "best" people are only willing to come along if there's a sufficiently large intellectual scene. (Not saying either is likely, just illustrating that the implied underlying model is not a tautology).

I predict the opposite effect: average intellectual scene quality is a much bigger draw than total number of people (MATS is already large). I expect a larger program is actively detrimental for drawing top people.

I'm thinking less of total number of people and more of the probability of having specific collaborators who work in your exact area or are otherwise useful to have around.

Ah, fair. Yes, I agree that's a plausible factor, especially for more niche areas.

Yeah, I think those are not implausible, but very unlikely.

My ill-informed impression of the RAND situation was that there's a new group inside RAND thinking about AI, and it's small in personnel and resources compared to RAND at large. Is that not so?

Small but expanding (like everything in the space) is my understanding; there are also a lot of non-RAND government and government-adjacent groups devoted to AI safety and nat sec.

I didn't mean to imply that the org had retooled to become entirely AI-focused or something; sorry if that's how it read!

The Google Form link seems not to work.

I would be particularly interested to know if 'technical AI academic' meant just professors, or included post-docs/PhDs.

Also, are we to assume that any question not annotated "1 person*year" meant causing an entirely new career's worth of work (up to doom/TAI) to exist?

Sorry, I've fixed the Google Form.

Interesting exercise, thanks! The link to view the questions doesn't work though. It says:

The form AI Grantmaking Priorities Survey is no longer accepting responses.
Try contacting the owner of the form if you think that this is a mistake.
