Announcing the Institute for Law & AI's 2024 Summer Research Fellowship in Law & AI — apply before EOD Anywhere on Earth, February 16! 

LawAI (formerly the Legal Priorities Project) is looking for talented law students and postdocs who wish to use their careers to address risks from transformative artificial intelligence to join an 8-12 week fellowship focused on exploring pressing questions at the intersection of law and AI governance.

Fellows will work with their supervisor to pick a research question and will spend the majority of their time conducting legal research on that topic. They may also assist other LawAI team members with projects and work on their career plans with the support of the LawAI team and other AI governance professionals in our network. Fellows will join the team sometime between June and October, in a fully remote capacity. We're offering fellows a stipend of $10,000.

The following are some examples of topics and questions we'd be particularly keen for fellows to research (though we are open to suggestions of other topics from candidates that focus on mitigating risks from transformative AI):

  • Liability - How will existing liability regimes apply to AI-generated or -enabled harms? What unique challenges exist, and how can legislatures and courts respond?
  • Existing authority - What powers do US agencies currently have to regulate transformative AI? What constraints or obstacles exist to exercising those powers? How might the major questions doctrine or other administrative law principles affect the exercise of these authorities?
  • First Amendment - How will the First Amendment affect leading AI governance proposals? Are certain approaches more or less robust to judicial challenge? Can legislatures and agencies proactively adjust their approaches to limit the risk of judicial challenge?
  • International institutions - How might one design a new international organization to promote safe, beneficial outcomes from the development of transformative artificial intelligence? What role and function should such an organization prioritize?
  • Comparative law - Which jurisdictions are most likely to influence the safe, beneficial development of AI? What opportunities are being under-explored relative to the importance of law in that jurisdiction? 
  • EU law - What existing EU laws influence the safe, beneficial development of AI? What role can the EU AI Act play, and how does it interact with other relevant provisions, such as the precautionary principle under Art. 191 TFEU in mitigating AI risk? 
  • Anticipatory regulation - What lessons can be learned from historic efforts to proactively regulate new technologies as they developed? Do certain practices or approaches seem more promising than others?
  • Adaptive regulation - What practices best enable agencies to quickly and accurately adjust their regulations to changes in the object of their regulation? What information gathering practices, decision procedures, updating protocols, and procedural rules help agencies keep pace with changes in technology and consumer and market behaviors?
  • Developing other specific AI-governance proposals - For example: How might a government require companies to maintain the ability to take down, patch, or shut down their models? How might a government regulate highly capable but low-compute models? How might governments or private industry develop an effective insurance market for AI?

If you're interested in applying, or know anyone who might be, you can find further details in our application information pack and apply here before EOD February 16. Feel free to reach out if you have any questions!

This seems like a really exciting fellowship, and I'll make sure to recommend it to some of the law students I interact with. Will a compendium of outputs be released? I know some orgs who would be interested in what these projects throw up.

Thanks so much, we appreciate it!

Currently, we don't have specific plans regarding our outputs, as we expect they could take a few different forms depending on the specific skills and preferences of fellows, as well as the target audience of the pieces — e.g. fellows may produce publishable research pieces, internal write-ups and scoping pieces, private pieces for specific actors, blog posts, etc. Publishable outputs will be available on our website after the fellowship ends, and we'll notify people via our newsletter when those are live — you can sign up on our website if you're interested in being notified!
