This is a linkpost for [https://www.meridiancambridge.org/language-models-course]
Meridian Cambridge, in partnership with Cambridge University's Centre for Data Driven Discovery (C2D3), has produced a 16-part lecture series entitled "Language Models and Intelligent Agentic Systems", and the recordings are now online!
The LMaIAS course provides an introduction to core ideas in AI safety. Throughout, we build up from the basics of language modelling and neural networks to discussions of the risks posed by advanced AI systems.
Course Structure
The course is divided into four parts:
Part 1: What is a Language Model?
To start the course, we give three lectures covering generative models and next token prediction, the transformer architecture, and scaling laws for large models.
Part 2: Crafting Agentic Systems
With the foundations in place, the next four lectures go into detail on LLM post-training, reinforcement learning, reward modelling, and agent architectures.
Part 3: Agentic Behaviour
Here we take four lectures to discuss optimisation and reasoning, reward hacking and goal misgeneralisation, out-of-context reasoning and situational awareness, and finally deceptive alignment and alignment faking.
Part 4: Frontiers
For the remainder of the lecture series, we give five lectures covering risks from advanced AI, AI evaluations, AI control and safety cases, and AI organisations and agendas, before concluding with a discussion of the future of language models.
You can find the first lecture [here], and the whole course is available [here].
The lectures were created and delivered by Edward James Young, Jason R. Brown, and Lennie Wells, in partnership with Cambridge University's C2D3.
The hope is that this material will help educate people new to the field and provide them with the background knowledge needed to contribute effectively to AI safety. Please share this with anybody you think might be interested!
- The Meridian Team