Geoffrey Miller is a professor of evolutionary psychology and a long-time participant in the effective altruism (EA) movement. He has recently been emphasizing that the extreme variability in human social psychology is a neglected consideration for AI alignment, relative to the field's more technical side. He has also been raising the alarm about how this presents a thorny set of cultural, moral, and political issues that the EA and AI alignment communities are woefully unprepared to contend with.

In other words, for the problem of AI alignment:

  1. There is the more philosophical problem of how humans might retain control of general/transformative AI, and what that would even look like.
  2. There is the more STEM-oriented side tangling with the technologies and mathematics to accomplish that goal.
  3. There is a third, underrated element of AI alignment, rooted in human psychology: even assuming AGI could be aligned with a set of fundamentally human values, what are those (sets of) values, and who decides which set(s) of values transformative/general AI would be aligned with?

His commentary on this subject often touches on its importance for relations between China and western countries, especially the United States, and what that means for EA and AI alignment. He most recently reinforced all of this in an in-depth and well-received comment on a post by Leopold evaluating how the entire field of AI alignment is currently in a generally abysmal state. From Geoffrey's comment on how that all relates to China specifically:

We have, in my opinion, some pretty compelling reasons to think that it [the problem of AI alignment] is not solvable even in principle[...] given the deep game-theoretic conflicts between human individuals, groups, companies, and nation-states[emphasis added] (which cannot be waved away by invoking Coherent Extrapolated Volition, or 'dontkilleveryoneism', or any other notion that sweeps people's profoundly divergent interests under the carpet).
[...]
In other words, the assumption that 'alignment is solvable' might be a very dangerous X-risk amplifier, in its own right[...] It may be leading China to assume that some clever Americans are already handling all those thorny X-risk issues, such that China doesn't really need to duplicate those ongoing AI safety efforts, and will be able to just copy our alignment solutions once we get them.

This isn’t the first comment of Geoffrey’s along these lines that I’ve found interesting, so I checked for other views he has expressed on the matter.

I was surprised to find three pages’ worth of search results on the EA Forum for his thoughtful comments about the underrated relevance, to EA and AI alignment, of cultural and political divides between China and western countries. This includes over a dozen such comments in the last year alone. Here is a cross-section of the viewpoints from all that commentary that I’ve found most insightful.

On how crucial it is for AI alignment and EA to gain a better understanding of the culture and politics of China and other non-western countries:

Politics tends to be very nation-specific and culture-specific, whereas EA aspires to global relevance. Insofar as EAs tend to be from the US, UK, Germany, Australia, and few other 'Western liberal democracies', we might end up focusing too much on the kinds of political institutions and issues typical of these countries. This would lead to neglect of other countries with other political values and issues. But even worse, it might lead us to neglect geopolitically important nation-states such as China and Russia where our 'Western liberal democracy' models of politics just don't apply very well. This could lead us to neglect certain ideas and interventions that could help nudge those countries in directions that will be good for humanity long-term (e.g. minimizing global catastrophic risks from Russian nukes or Chinese AI).

On the risk of excessive pro-America/pro-western bias, and anti-China bias, in effective altruism and AI alignment (this is a long comment; I’m not excerpting any one part of it, as it’s comprehensive and worth reading in its entirety if you can spare the time)

On the AI arms race in terms of political tensions between China and the United States:

I also encounter this claim [that China could or will easily exploit any slowdown of AI capabilities research in the US] very often on social media. 'If the US doesn't rush ahead towards AGI, China will, & then we lose'. It's become one of the most common objections to slowing down AI research by US companies, and is repeated ad nauseum by anti-AI-safety accelerationists.[...] It’s not at all obvious that China would rush ahead with AI if the US slowed down.
[...]
If China was more expansionist, imperialistic, and aggressive, I'd be more concerned that they would push ahead with AI development for military applications. Yes, they want to retake Taiwan, and they will, sooner or later. But they're not showing the kind of generalized western-Pacific expansionist ambitions that Japan showed in the 1930s. As long as the US doesn't meddle too much in the 'internal affairs of China' (which they see as including Taiwan), there's little need for a military arms race involving AI. 

I worry that Americans tend to think and act as if we are the only people in the world who are capable of long-term thinking, X risk reduction, or appreciation of humanity's shared fate.

On the relevance to AI alignment of differences in academic freedom between China and the US:

I'm not a China expert, but I have some experience running classes and discussion forums in a Chinese university. In my experience, people in China feel considerably more freedom to express their views on a wide variety of issues than Westerners typically think they do. There is a short list of censored topics, centered around criticism of the CCP itself, Xi Jinping, Uyghurs, Tibet, and Taiwan. But I would bet that they have plenty of freedom to discuss AI X risks, alignment, and geopolitical issues around AI, as exemplified by the fact that Kai-Fu Lee, author of 'AI Superpowers' (2018), and based in Beijing, is a huge tech celebrity in China who speaks frequently on college campuses there - despite being a vocal critic of some [government] tech policies.

Conversely, there are plenty of topics in the West, especially in American academia, that are de facto censored (through cancel culture). For example, it was much less trouble to teach about evolutionary psychology, behavior genetics, intelligence research, and even sex research in a Chinese university than in an American university.

Comments

Evan - thanks for pulling together and summarizing some of my EA Forum material on AI alignment in relation to China.

I've been keenly interested in the rise of China for more than 20 years; I've read a fair amount about Chinese history, politics, and culture, I've visited China, and I've taught (online) for a Chinese university. But, as I've often said, I'm not a China expert. So I'd welcome comments, discussion, and observations from other EAs who might know more than I do about any of the relevant issues.

Thanks for collating these comments--that's useful to get that overview.

FWIW, some people at CSER have done good work on this broad topic, working with researchers at Chinese institutions -- e.g. https://link.springer.com/article/10.1007/s13347-020-00402-x
