Luise

Researching US Securitization of AI Development
547 karma · Pursuing an undergraduate degree · Working (0-5 years) · London, UK
admonymous.co/luisew

Bio

Currently researching how involved the US government may get in the development of AGI, and by what method. I try to learn from history and generalize from past cases of US government involvement in developing general-purpose technologies. (As a participant in the Pivotal Research Fellowship.)

Previously, I researched whether cost-benefit analysis used by US regulators might stop/discourage frontier AI regulations. (Supervised by John Halstead, GovAI.)

I also sometimes worry about the big-picture epistemics of EA à la "Is EA just an ideology like any other?".

In the past, I've done operations and recruiting at GovAI, CEA, and the SERI ML Alignment Theory Scholars program. My degree is in Computer Science.

Comments

I don't think it is clear what the "crucial step" in AGI development will look like—will it be a breakthrough in foundational science, or massive scaling, or combining existing technologies in a new way? It's also unclear how the different stages of the reference technologies would map onto stages for AGI. I think it is reasonable to use reference cases that have a mix of different stages/'cutoff points' that seem to make sense for the respective innovation.

Ideally, one would find a more principled way to control for the different stages/"crucial steps" of the different technologies. Maybe one could quantify government control at each of these stages for each technology and assign weights to the stages depending on how important each might be for AGI. But I had limited time, and I think my approach is a decent approximation.
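(For illustration only, here is a minimal sketch of what such a stage-weighted control score could look like. The stage names, weights, and scores are hypothetical and not taken from the actual dataset or from Anderson-Samways's labels.)

```python
# Hypothetical sketch: a stage-weighted USG-control score for one technology.
# All stage names, weights, and scores below are invented for illustration;
# they are not taken from the dataset discussed above.

STAGE_WEIGHTS = {                 # guessed importance of each stage for AGI
    "foundational_science": 0.2,
    "first_deployment": 0.3,
    "deployment_at_scale": 0.5,
}

def weighted_usg_control(stage_scores: dict[str, float]) -> float:
    """Combine per-stage control scores (0 = none, 1 = full) into one number."""
    total_weight = sum(STAGE_WEIGHTS.values())
    return sum(
        STAGE_WEIGHTS[stage] * score for stage, score in stage_scores.items()
    ) / total_weight

# Example with made-up numbers:
print(weighted_usg_control({
    "foundational_science": 0.1,   # e.g. private inventors
    "first_deployment": 0.6,       # e.g. heavy government procurement
    "deployment_at_scale": 0.4,
}))  # -> roughly 0.4
```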

Thank you, these are good points!

On the notion of "USG control":

I agree that the labeling of USG control is imperfect and only an approximation. I think it's a reasonable approximation though.

Almost all of the USG control labels I used were taken from Anderson-Samways's research. He gives explanations for each of his labels; e.g., for the airplane he considers the relevant inventors to be the Wright brothers, who weren't government-affiliated at the time. It's probably best to refer to his research if you want to verify how much to trust the labels.

You may have detailed contentions with each of these labels but you might still expect that, on average, they give a reasonable approximation of USG control. This is how I see the data.

On the list of innovations feeling arbitrary:

I share this concern but, again, I feel the list of innovations is still reasonably meaningful. As I said in the piece:

Choices regarding which stage of development and deployment to identify as “the invention” of the technology aren’t consistent. The most important scientific breakthroughs are often made some time before the first full deployment of a technology which in turn is often done before crucial hurdles to deployment at scale are overcome. This matters for the data insofar as the labeling of the invention year and the extent of USG control aren’t applied consistently to the same stage of development and deployment. This should not be detrimental, considering it’s not clear what the crucial stage of development and deployment for AGI will be either.[6] Nevertheless, it makes the data less precise and more of an approximation.

(I was trying to get at something similar to your concern about "specific versus broad" innovations. "Early-stage development versus mass-scale deployment" is often pretty congruent with "specific scientific breakthrough" versus "broad set of related breakthroughs and their deployment".)

The main reason many other important innovations are not on the list is time constraints.

I've got moths in my flat right now and this post made me take solving this more seriously. Thank you!

I found the framing of "Is this community better-informed relative to what disagreers expect?" new and useful, thank you!

To point out the obvious: Your proposed policy of updating away from EA beliefs if they come in large part from priors is less applicable for many EAs who want to condition on "EA tenets". For example, longtermism depends on being quite impartial regarding when a person lives, but many EAs would think it's fine that we were "unusual from the get-go" regarding this prior. (This is of course not very epistemically modest of them.)

Here are some more not-well-fleshed-out, maybe-obvious, maybe-wrong concerns with your policy:

  • It's kind of hard to determine whether EA beliefs are weird because we were weird from the get-go or because we did some novel piece of research/thinking. For example, was Toby Ord concerned about x-risks in 2009 because he had unusual priors or because he had thought about novel considerations that are obscure to outsiders? People would probably introduce their own biases while making this judgment. I think you could even try to make an argument like this about polyamory.
  • People probably generally think a community is better-informed than expected the more time they spend engaging with it; at least, that's what I see empirically. So for people who've engaged a lot with EA, your policy of updating towards EA beliefs if EA seems better-informed than expected probably leads to deferring asymmetrically more to EA than to other communities, since they will have engaged less with those. (Ofc you could try to consciously correct for that.)
  • I overall often have the concern with EA beliefs that "maybe most big ideas are wrong", just like most big ideas have been wrong throughout history. In this frame, our little inside pet theories and EA research provide almost no Bayesian information (because they are likely to be wrong) and it makes sense to closely stick to whatever seems most "common sense" or "established". But I'm not well-calibrated on how true "most big ideas are wrong" is. (This point is entirely compatible with what you said in the post but it changes the magnitude of updates you'd make.)
     

Side-note: I found this post super hard to parse and would've appreciated it a lot if it were more clearly written!

My impression is that others have thought so much less about AI x-risk than EAs and rationalists, and for generally bad reasons, that EAs/rats are the "largest and smartest" expert group basically "by default", unfortunately with all the biases that come with that. I could be misunderstanding the situation, though.

Thanks a lot, I think it's really valuable to have your experience written up!


Thanks Max!

It sounds like a plausible theory that you lost motivation because you pushed yourself too hard. I'd also pay attention to "dumber" reasons, like maybe having had more motivation from supervisors, your social environment, or more achievable goals in the past.

Similar to my call to take a vacation, maybe it's worth it for you to do only motivating work (like a side project) for 1.5 weeks and see if the tiredness disappears.

All of this with the caveat that you understand your situation a lot better than I do ofc!

Yes! From reading about burnout, it can seem like it only happens to people who hate their job, work in bad environments, etc. But it can totally happen to people who love their job!

Thanks, and big agree; I want to see many more different experiences of energy problems written up!

The causes of people's energy problems are so many and varied! It would be great to have many different experiences written up, including stress- and anxiety-induced problems.

Thanks for the feedback re: the appendix; I'll see if others say the same :)
