
I have spent around 100–200 hours on AI safety audiobooks, the AI Safety Fundamentals course, Rob Miles's YouTube channel, The Sequences, and various bits and pieces of other AI-focused YouTube channels and podcasts, as well as some time thinking through the basic case for x-risk.

When I look at heavier academic material or try to consume more technical content, I sometimes get pretty lost. I am wondering how I can best build up my understanding of the basic technical details and terminology of AI, plus a broad overview of AI safety, so that I don't get lost and it isn't too grueling or over my head.

For context, "fun" doesn't necessarily have to mean entertaining; I would find it fun to read a textbook that I can understand and that gradually builds my knowledge.

And "easy" doesn't have to mean super basic. It probably needs to start with the basics (I know math up to pre-calculus but am unusually good at learning new math, and I know relatively little about computer science), but from there I would like to gradually build up a relatively deep understanding of whatever mathematics and technical details I need to really understand the problems at hand.

But "easy and fun" could also mean really informative podcasts, fiction that provides genuinely useful insights, or a YouTube channel that explains the basics effectively.

I guess the basic idea is that I want to focus hard on AI, but I don't want to burn out; I want to ease into it in a way that keeps me excited and that I enjoy as much as possible. If anyone has come across content like this themselves, I would love to hear about it!

Thanks in advance!
