
I'm prepping a new upper-level undergraduate/graduate seminar on 'AI and Psychology', which I'm aiming to start teaching in Jan 2025. I'd appreciate any suggestions that people might have for readings and videos that address the overlap of current AI research (both capabilities and safety) and psychology (e.g. cognitive science, moral psychology, public opinion). The course will have a heavy emphasis on the psychology, politics, and policy issues around AI safety, and will focus more on AGI and ASI than on narrow AI systems. Content that focuses on the challenges of aligning AI systems with diverse human values, goals, ideologies, and cultures would be especially valuable. Ideal readings/videos would be short, clear, relatively non-technical, recent, and aligned with an EA perspective. Thanks in advance! 


3 Answers

Perplexity was recommended to me for finding course materials.

You can search academic databases, as well as perform broad searches on the web or YouTube.

Provide context, as you would with ChatGPT. For your purposes, mention that you are building a course on artificial intelligence and psychology, and give details about it.

Thanks! Appreciate the suggestion.

This course sounds cool! Unfortunately, there doesn't seem to be much relevant material out there.

This is a stretch, but I think there's probably some cool computational modeling to be done with human value datasets (e.g., 70,000 responses to variations on the trolley problem). What kinds of universal human values can we uncover? https://www.pnas.org/doi/10.1073/pnas.1911517117 
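To make this concrete, here is a minimal sketch of the kind of modeling a class could attempt, assuming a hypothetical CSV of trolley-problem responses with columns `country`, `scenario_type`, and `sacrifice` (1 = endorsed sacrificing one to save five). The file name and columns are illustrative, not the actual PNAS data release:

```python
# Sketch: probing cross-cultural regularities in trolley-problem judgments.
# Assumes a hypothetical file "trolley_responses.csv" with columns:
# country, scenario_type, sacrifice (0/1).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("trolley_responses.csv")  # hypothetical file, not the real dataset

# Endorsement rates by scenario type and country: do candidate "universal"
# patterns (e.g., switch endorsed more than footbridge) hold across cultures?
rates = df.groupby(["country", "scenario_type"])["sacrifice"].mean().unstack()
print(rates.head())

# Logistic regression: does scenario type predict endorsement after
# controlling for country?
model = smf.logit("sacrifice ~ C(scenario_type) + C(country)", data=df).fit()
print(model.summary())
```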

For digestible content on technical AI safety, Robert Miles makes good videos. https://www.youtube.com/c/robertmilesai

Abby - good suggestions, thank you. I think I will assign some Robert Miles videos! And I'll think about the human value datasets.

A few quick ideas:
1. On the methods side, I find the potential use of LLMs/AI as research participants in psychology studies fascinating (not necessarily related to safety). This may sound ridiculous at first, but the studies themselves are genuinely interesting (see the first sketch after this list).
From my post on studying AI-nuclear integration with methods from psychology: 

[Using] LLMs as participants in a survey experiment, something that is seeing growing interest in the social sciences (see Manning, Zhu, & Horton, 2024; Argyle et al., 2023; Dillion et al., 2023; Grossmann et al., 2023).

2. You may be interested in, or get good ideas from, the Large Language Model Psychology research agenda (safety-focused). I haven't gone into it, so this is not an endorsement.

3. Then there are comparative analyses of human and LLM behavior. For example, the Human vs. Machine paper (Lamparth, 2024) compares human and LLM decision-making in a wargame. I'm doing something similar with a nuclear decision-making simulation, but it isn't in paper/preprint form yet (see the second sketch below).
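On point 1, here is a minimal sketch of the "LLM as survey participant" setup, assuming the OpenAI v1 Python SDK. The personas and the moral-judgment item are illustrative placeholders, not taken from any of the cited papers:

```python
# Sketch: administering a survey item to an LLM under different personas.
# Assumes the openai v1 SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

PERSONAS = [  # hypothetical demographic personas
    "a 35-year-old teacher from Ohio",
    "a 62-year-old retired engineer from Berlin",
]
ITEM = (
    "On a scale from 1 (completely unacceptable) to 7 (completely acceptable), "
    "how acceptable is it to lie to protect a friend's feelings? "
    "Answer with a single number."
)

for persona in PERSONAS:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model would do here
        messages=[
            {"role": "system",
             "content": f"You are {persona}. Answer survey questions in character."},
            {"role": "user", "content": ITEM},
        ],
        temperature=1.0,  # sampling variation stands in for between-subject noise
    )
    print(persona, "->", resp.choices[0].message.content.strip())
```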
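And on point 3, one simple way to compare human and LLM choices on a shared scenario is to test whether their response distributions differ. The counts and the three response options below are made up for illustration:

```python
# Sketch: comparing human vs. LLM choice distributions on the same scenario.
import numpy as np
from scipy.stats import chi2_contingency

options = ["de-escalate", "hold", "escalate"]
human_counts = np.array([42, 31, 7])  # hypothetical human-player choices
llm_counts = np.array([55, 20, 5])    # hypothetical LLM choices, same scenarios

# Chi-square test of independence: do humans and the LLM choose differently?
chi2, p, dof, _ = chi2_contingency(np.vstack([human_counts, llm_counts]))
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.3f}")

# Choice proportions side by side
h_prop = human_counts / human_counts.sum()
l_prop = llm_counts / llm_counts.sum()
for opt, hp, lp in zip(options, h_prop, l_prop):
    print(f"{opt:>12}: human {hp:.2f} vs LLM {lp:.2f}")
```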

Helpful suggestions, thank you! Will check them out.

2 Comments

This sounds very interesting and closely aligns with my long-term career goals. Will the seminar content be made available online for those looking to complete the course remotely, or is it purely in-person?
