
Hello everyone! The submissions have all been read, and it’s time to announce the winners of the recent AI Fables Writing Contest!

Depending on how you count, we had between 33 and 40 submissions over the course of about two months, which was a happy surprise. Beyond the raw count, we also got submissions from a wide range of authors: people new to writing fiction and those who do so regularly, writers new to AI and those very familiar with it, and every mix in between.

The writing retreat in September was also quite productive, with about 21 more short stories and scripts written by the participants, many of which will hopefully be publicly available at some point. We plan to create an anthology of selected stories from the retreat and, with permission, others we've been impressed by.

With all that said, onto the contest winners!


Prize Winners

$1,500 First Place: The King and the Golem by Richard Ngo

This story explores the notion of “trust,” whether in people, tools, or beliefs, and how fundamentally difficult it is to verify “trustworthiness” or feel justified in it. It also subtly highlights that, at the end of the day, there are consequences to not trusting anything at all.

$1,000 Second Place: The Oracle and the Agent by Alexander Wales

We really appreciated how this story showed how easy it is to defer to better-than-human decision making, and how, even when those decisions are individually reasonable and net-positive, small mistakes and inconsistencies in policy can lead to calamitous ends.

(This story is not yet publicly available, but we will link to it if that changes.)

$500 Third Place: The Tale of the Lion and the Boy + Mirror, Mirror by dr_s

These two roughly tied for third place, which made it convenient that they were written by the same person! The first is an eloquent analogy for the gap between intelligence capabilities and the illusion of transparency, reexamining traditional tales of humans raised by animals. The second is a fun twist on a classic through its exploration of interpretability errors. As a bonus, we particularly enjoyed the way both were new takes on old and recognizable fables.

Honorable Mentions

There were many more stories I'd like to mention here, either for coming close to winning or for presenting things in an interesting way. I've decided to pick just three of them:

A fun poem about the way various strategies can scale in exponentially different ways despite ineffectual first appearances. 

An illustrated, rhyming fable about Artificial Intelligence that demonstrates a number of the fundamental parts of AI, as well as the difficulties inherent to interpretability. 

  • This is What Kills Us by Jamie Wahls and Arthur Frost

A series of short, witty scripts about the many ways near-future AI might go from charming, useful tools to accidentally ending the world. These are not publicly available yet, though the authors have since reached out to Rational Animations about turning them into videos!


There are many more stories we enjoyed, from the amusing The Curious Incident Aboard the Calibrius by Ron Fein, to the creepy Lir by Arjun Singh, and we'd like to thank everyone who participated. We hope everyone continues to write and engage with complex, meaningful ideas in their fiction.

To everyone else, we hope you enjoyed reading, and we'd love to hear about any new stories you might write that fit these themes.
