
As 2022 comes to an end, I thought it'd be good to maintain a list of "questions that bother me" in thinking about AI safety and alignment. I don't claim I'm the first or only one to have thought about them. I'll keep updating this list.

(The title of this post alludes to the book "Things That Bother Me" by Galen Strawson)

First posted: 12/6/22

Last updated: 1/30/23

 

General Cognition

  •  What signs do I need to look for to tell whether a model's cognition has started to emerge, e.g., situational awareness?
  • Will a capacity for "doing science" be a sufficient condition for general intelligence?
  • How easy was it for humans to develop science (e.g., compared to evolving the capacities needed to take over the world)?

Deception 

  •  What kind of interpretability tools do we need to avoid deception? 
  • How do we develop these interpretability tools, and even if we do get them, what if they turn out to be like neuroscience is for understanding brains (i.e., not enough)?
  •  How can I tell whether a model has found another goal to optimize for during its training?
  •  What is it that makes a model switch to a goal different from the one set by the designer? How do you prevent it from doing so?

Agent Foundations 

  • Is the description/modeling of an agent ultimately a mathematical task?
  • From where do human agents derive their goals?
  • Is value fragile?

Theory of Machine Learning

  • What explains the success of deep neural networks?
  • Why did connectionism seem unlikely to succeed?

Epistemology of Alignment (I've written about this here)

  • How can we accelerate research?
  • Has philosophy ever really helped scientific research, e.g., with concept clarification?
  • What are some concrete takeaways from the history of science and technology that could be used as advice for alignment researchers and field-builders? 
  • How did the AI safety paradigm emerge?

Philosophy of Existential Risk 

  • What is the best way to explain the difference between forecasting extinction scenarios and narratives from chiliasm or eschatology?
  • What is the best way to think about serious risks in the future without reinforcing a sense of doom? 

Teaching and Communication

  • Younger people (e.g., my undergraduate students) seem more willing to entertain scenarios of catastrophes and extinction compared to older people (e.g., academics). I find that strange and I don't have a good explanation as to why that is the case. 
  • The idea of a technological singularity was not difficult to explain and discuss with my students. I think that's surprising given how powerful the weirdness heuristic is.
  • The idea of "agency" or "being an agent" was easy to conflate with "consciousness" in philosophical discussions. It's not clear to me why that was the case since I gave a very specific definition of agency. 
  • Most of my students thought that AI models will never be conscious; it was difficult for them to articulate specific arguments about this, but their intuition seemed to be that there's something uniquely human about consciousness/sentience. 
  • The "AIs will take our jobs in the future" seems to be a very common concern both among students and academics. 
  • About 80% of a ~25-person classroom thought that philosophy is the right thing to major in if you're interested in how minds work. The question I asked them was: "Should you major in philosophy or cognitive science if you want to study how minds work?"

Governance/Strategy

  • Should we try to slow down AI progress? What does this mean in concrete steps? 
  • How should we handle capabilities externalities?
  • How should concrete AI risk stories inform/affect AI governance and short-term/long-term future planning?
Comments (6)



+1 to sharing lists of questions.

 What signs do I need to look for to tell whether a model's cognition has started to emerge?

I don't know what 'cognition emerging' means. I suspect the concept is vague/confused.

What is the best way to explain the difference between forecasting extinction scenarios and narratives from chiliasm or eschatology?

Why would you want to explain the difference?

I've been asked this question! Or, to be specific, I've been asked something along these lines: human cultures have always speculated about the end of the world, so how is forecasting x-risk any different?

[anonymous]

Younger people (e.g., my undergraduate students) seem more willing to entertain scenarios of catastrophes and extinction compared to older people (e.g., academics). I find that strange and I don't have a good explanation as to why that is the case.

Some hypotheses to test:
- Younger people are more likely to hold and signal radical beliefs, and the possibility of extinction is seen as more radical and exciting compared to humanity muddling through like it has in the past
- Younger people are just beginning to grapple with their own mortality which freaks them out whereas older people are more likely to have made peace with it in some sense
- Older people have survived through many events (including often fairly traumatic ones) so are more likely to have a view of a world that "gets through things" as this aligns with their personal experience
- Older people have been around for a number of past catastrophic predictions that turned out to be wrong?

- Older people have survived through many events (including often fairly traumatic ones) so are more likely to have a view of a world that "gets through things" as this aligns with their personal experience
- Older people have been around for a number of past catastrophic predictions that turned out to be wrong?

Nuclear war has been in the news for more than 60 years, and a high priority has been placed on spending those >60 years influencing public opinion on nuclear war via extremely carefully worded statements by spokespeople, which in turn were ghostwritten by spin doctors and other psychological experts with a profoundly strong understanding of news media corporations. This is the main reason, and possibly the only reason, why neither of the two American political parties nor their presidential candidates has ever adopted disarmament as part of a nationwide party platform during any election in that time period.

They weren't successful at their goals 100% of the time (Soviet propaganda operations may have contributed), but their efforts (and the fact that nuclear war scared people but never happened once in 60+ years) strongly affected the life experiences and cultural development of older people while they were younger.

I would suggest that new paradigms are most likely to establish themselves among the young because they are still in the part of their life where they are figuring out their views.

You should make Manifold markets predicting what you'll think of these questions in a year or five years.
