
Is there any Mandarin equivalent of the AGI Safety Fundamentals course? Someone could translate the curriculum into Mandarin. Translation would matter less if most Chinese people spoke English, but that doesn't seem to be the case at all.

That's just one thought that motivated me to write this question. It would be extremely valuable to introduce Chinese students and professionals to AGI safety. Not only because China has a strong AI industry, but also because China has >1.4 billion people. Yet as far as I know, most AI alignment projects and organizations target English speakers. I've spent very little time researching AI alignment in China, and I could certainly be wrong.

If people want to do more research, I'd recommend the 2022 AI Index Report. Here is a (possibly misleading; again, I haven't looked into this carefully) graph from page 26:

From the 2022 AI Index Report.

4 Answers

Vael Gates's post "Resources I send to AI researchers about AI safety" offers this:

AI Safety in China

Both Human Compatible and the Alignment Newsletter have translations into Mandarin. There are also translations that are potentially less alignment-specific, like Life 3.0, The Precipice, Technical Countermeasures for Security Risks of Artificial General Intelligence, etc.

That's great. It seems that these days all the Alignment Newsletter translations go directly onto the English website.

Xiaohu Zhu
I have already set up the Alignment Newsletter translations to sync now.

Translation is a great idea.

It was one of the winners of the Future Fund’s Project Ideas Competition, and it's now listed on the project ideas page.

A challenge unique to Chinese-language content is ensuring that it doesn't get blocked by China's internet censorship.

Excellent, I'm happy to see that! However, I'm concerned that the proposal focuses entirely on translating general EA concepts.

Publications we might start with include effectivealtruism.org, the 80,000 Hours ‘key idea’ series, and Toby Ord’s The Precipice.

I think it is much higher priority (from the perspective of reducing AI x-risk) to translate AI alignment concepts, particularly the AGI Safety Fundamentals course material. It takes a lot of inferences to go from "I'm interested in doing good" to "I like EA ideas" to "I think AI alignment is important" to "I want to work on AI, where can I start?" And even if many Mandarin speakers reach that last point through a Mandarin translation of 80,000 Hours, they will currently find very few (if any?) structured opportunities to skill up for AI alignment.

Additionally, I don't think one needs to know about longtermism and QALYs and PlayPumps to recognize the importance of AI alignment work. Nor does one need to care about doing as much good as possible with their career. One only needs to grasp why AI might be extremely dangerous and why advanced capabilities might be coming soon.

One more point is that translating AI alignment resources may have lower risks than translating general EA content.

Thanks for sharing these. The Chinese Association for AGI appears to focus on advancing AI capabilities rather than AI safety. I used Google Translate to translate the lead paragraph of the website's current opening page:

Notice of the 7th China General Artificial Intelligence Annual Conference

The China General Artificial Intelligence Annual Conference has been successfully held for six consecutive sessions. It is an annual event for Chinese general artificial intelligence enthusiasts, involving computer science, philosophy, logic, education, psychology, so

…
1 Comment

Hi jskatt, great question! I’m a research analyst at Concordia and this is what I said in my Feb 2022 SERI talk re: AI alignment/safety-sympathetic resources/institutions in China:

“Over the past few years, Chinese researchers and policy stakeholders have demonstrated increasing interest in AI safety. 

For instance, last year two AI scientists from China’s AI Strategic Advisory Committee, which advises national policy on AI, wrote an article talking about the risks from AGI and potential countermeasures. The two scientists, Huang Tiejun and Gao Wen, along with their colleagues, present a summary of possible approaches to alignment based on Nick Bostrom’s book, Superintelligence, and cite other classic works in the Western AI alignment community like Concrete Problems in AI Safety and Life 3.0. The article acknowledged the relative lack of attention to AGI safety in China, and recommended “examining international discussions…of AGI policies, integrating cutting-edge legal and ethical findings, and exploring the elements of China’s AGI policymaking in a deeper and more timely manner.”

In the same year, Huang Tiejun and Zhang Hongjiang, the chairperson of one of China's top AI labs, the Beijing Academy of AI, endorsed the Chinese translation of Human Compatible, a book on AI alignment written by Professor Stuart Russell, who'll be speaking at this conference tomorrow. Zhang also participated in a dialogue with Stuart at one of China's most prestigious AI conferences, discussing the book and AGI safety.

But despite these cases of high-profile support for AGI safety, many Chinese AI safety researchers focus on areas like robustness and interpretability instead of more alignment-relevant topics like goal specification.”

Separately, the org I work at – Concordia – aims to promote the safe and responsible development of AI, with a particular focus on China (more). For example, we recently wrapped up the first-ever lecture series on AI alignment in China, which included speakers Rohin Shah, Max Tegmark, David Krueger, Paul Christiano, Brian Christian, and Jacob Steinhardt. To market this lecture series, we also translated a number of AI alignment works from English into Chinese.

I’d be happy to have a chat about this, just messaged you.
