
I wrote this post quickly because it's better to share an imperfect post than to never write anything at all.  

 

In a recent post, I shared some thoughts about teaching and communication, drawing mostly on some classes on AI risk that I taught during my Intro to Philosophy course this fall.

I think it's easy to agree that teaching is, in principle, a way to have high impact by directly influencing people's intellectual lives and careers. But here I want to argue that there is something special about teaching philosophy: a philosophy classroom is one of the few places in the world where all sorts of arguments can be heard and discussed, and where everyone is invited to question their experiences and their very own thoughts.

What I did this fall 

For that reason, I thought it was worthwhile to have a whole section on Science, Technology, and Society with a focus on AI. In particular, we discussed some bits of Chalmers's Singularity paper, watched a good chunk of this video by Rob Miles, and read excerpts from Ord's The Precipice and Bostrom's Superintelligence. [Feel free to DM me for my syllabus.]

Most of the students made very thoughtful contributions toward understanding how a technological singularity could happen and what it would mean for humanity. I asked them to treat it as a thought experiment, since we weren't trying to produce actual forecasts or AI timelines.

Overall, I felt that a philosophy classroom is the best place to discuss difficult and controversial topics and to get everyone engaged in the conversation. Those who readily agreed that AI risk is a plausible scenario had something to argue about, and so did those who considered it unlikely or couldn't tell whether it made sense at all.

I keep saying that a philosophy classroom is great for such conversations because it really encourages arguing: people have to explain why they think what they think and find the right words to articulate their thoughts, which makes communication possible and effective.

For their final essay, they had to construct their ideal future and discuss the role of science, technology, and philosophy in it. Many of them seemed concerned about the role of future technologies and wanted people in their ideal future to keep philosophizing, so that technologies would benefit society at large (nobody wrote these exact words; I'm rephrasing the spirit of some essays centered on that idea).

Why I did this

Two reasons:

First, I thought one of the best philosophical exercises is learning to discuss controversial topics with real-world implications, like AI risk. I wanted to prioritize cultivating their ability to think in slow motion, to argue, to play with thought experiments and wild-seeming scenarios, to filter their intuitions, and to create their own world models without deferring. For that last reason, I made sure they'd never have to memorize things like "what Aristotle says in Metaphysics Book A" (which I sometimes did as an undergrad, for fun!) or appeal to "great thinkers". [As a side note, a couple of them noticed that I have a thing for David Hume and cited him pretty randomly, even when it didn't make sense to do so.]

Second, I felt a moral duty to talk about why we might be living in the most important century. The shorter my timelines get, the stronger I think this moral duty becomes. But even setting aside doomy arguments, I think it's good practice to honestly ask yourself what the most important problem in your field is, construed more or less broadly, and then focus on it, whether as a researcher or a teacher.

A thought for the future

While I can't say for certain that I'll teach something like this again, I'm convinced it was worth it, and I'll grab whatever opportunity I'm given not only to incorporate a section like this into future Intro to Philosophy syllabi but also to design (and hopefully teach!) a philosophically minded course on AI risk, safety, and the wild future ahead of us.

Comments

I'm glad you enjoyed teaching philosophy, and I don't want to deny that you had an impact on your students. However, I can't really agree with your optimistic view of the "philosophy classroom" environment.

I've spent five years studying philosophy at university, and there is indeed a great benefit to discussing things and disagreeing about them. But what I want to point out is that it only goes well as long as the topic *isn't* controversial. AI risk, I believe, actually falls into this non-controversial category. However, when the topic is personal to people and politically charged, I've observed that there is no more rational discussion and/or good faith, either in the philosophy classroom or in, let's say, "philosophical spaces" on the Internet. I can hold a different opinion than my colleagues on the nature of time and space and it's all good, but when it comes to discussing, e.g., abortion and there's disagreement, it's not so "fun" anymore. At least that's my experience.

I know what you mean, and I've definitely had this kind of experience (in particular last semester, which made me want to leave both my university and academia; that's how bad it was). What I wanted to emphasize while teaching is that it's valuable to question our own thoughts, emotions, and experiences in the philosophy classroom, and it's disappointing to see that most people are not willing to do that. But hey, at least I tried...
