Winners have been announced!
TL;DR: Writing Contest for AI Fables
- Deadline: Sept 1 (for retreat consideration) / Oct 1 (for prizes)
- Prizes: $1,500 / $1,000 / $500, plus consideration for a writing retreat
- Word length: under 6,000
- How: link a Google Doc in the replies
- Purpose: help shape the future by helping people understand the relevant issues.
Hey everyone, I’d like to announce a writing competition to follow up on this post about AI Fables!
Like Bard, I write fiction and think it has a lot of power, not just to expand our imaginations beyond what we believe is possible, but also to educate and inspire. For generations, the idea of what Artificial General Intelligence could or would look like has been shaped by fiction, for better and for worse, and that will likely continue even as what once seemed purely speculative becomes more and more real in our everyday lives.
But there’s still time for good fiction to help shape the future. On this particular topic, with the world changing so quickly, I want to help fill the empty spaces waiting for stories that can help people grapple with the relevant issues, and I’d like to encourage those stories to be as good as possible, meaning both engaging and well-informed.
To that end, I’m calling for submissions of short stories or story outlines that involve one or more of the “nuts and bolts” covered in the above post, as well as some of my own tweaks:
- Basics of AI
  - Neural networks are black boxes (though interpretability might help us see inside).
- AI "Psychology"
  - AI systems are alien in how they think. Even AGIs are unlikely to think like humans or to value things we'd take for granted that they would.
  - Orthogonality and instrumental convergence might provide insight into likely AI behaviour.
  - AGI systems might be agents, in some relatively natural sense. They might also simulate agents, even if they are not agents themselves.
- Potential dangers from AI
  - Outer misalignment is a potential danger, but in the context of neural networks so is inner misalignment (related: reward misspecification and goal misgeneralisation).
  - Deceptive alignment might lead to worries about a treacherous turn.
  - The possibility of recursive self-improvement might influence views about takeoff speed (which might in turn influence views about safety).
- Broader Context of Potential Risks
  - Different challenges might arise in the case of a singleton than in multipolar scenarios.
  - Arms races can lead to outcomes that no one wants.
  - AI rights could be a real thing, but the incorrect attribution of rights to non-sapient AI could itself pose a risk by restricting society’s ability to ensure safety.
- Psychology of Existential Risk
  - Characters whose perspectives and philosophies show what it's like to take x-risks seriously without being overwhelmed by existential dread.
  - Stories showing the social or cultural shifts that might be necessary to improve coordination and the will to face x-risks.
...or that are otherwise related in some way to unaligned AI or AGI risk, such that readers would come away better understanding some aspect of the potential worlds we might end up in. Black Mirror is a good example of the “modern Aesop’s Fables or Grimm Fairytales” style of commentary-through-storytelling, but I’m particularly interested in stories that don’t moralize at readers, and instead help people understand and emotionally process issues related to AI.
Though unrelated to AI, Truer Love's Kiss by Eliezer Yudkowsky and The Cambist and Lord Iron by Daniel Abraham are good examples of "modern fables" that I'd like to see more of. The setting doesn't matter, so long as it reasonably clearly teaches something related to the unique challenges or opportunities of creating safe artificial intelligence.
The top three stories will receive at least $1,500, $1,000, and $500 respectively (additional donations may still increase the prize pool), and I’m planning to distribute the prizes after judging is complete, sometime in October.
In addition, some authors may be invited to join a writing retreat in Oxford, UK, to help refine their stories and discuss publication options. We'd like not just to encourage more stories like these to exist, but also to help improve and spread them.
Stories must be submitted by September 1st to be considered for the retreat, and, practically speaking, the sooner they’re submitted, the better the odds of their being judged in time for it. Stories must be submitted before October 1st to be considered for the monetary prizes.
This contest is meant to be relatively short in duration, as our preference is for relatively short stories of ~6,000 words or fewer. Longer stories can be submitted, but anything past the first 6,000 words will not be considered in judging.
As a final note, we are not expecting completely polished, ready-to-publish stories; spelling and grammar mistakes will not be penalized! What we want is for the ideas to be conveyed well and in an engaging way. Proper editing, of both style and content, can come later. Just show us the seed of a good story, and maybe we can help it grow!
To submit your fable, please link to a Google Doc in a reply to this post. If you wish to remain anonymous, or to retain first-publication rights for your own attempts at finding a traditional publisher, you can send it to me in a DM instead.
You can also link to stories you believe fit the criteria in order to nominate someone else; if that’s the case, please indicate that the story is not your own.
Happy writing, and feel free to ask any clarifying questions either by reply or DMs!
Yeah, as a previous top-three winner of the EA Forum Creative Writing Contest (see my story here) and of the Future of Life Institute's AI Worldbuilding contest (here), I agree that the default outcome seems to be that even the winning stories don't get much circulation. The real impact would come from writing the one story that actually goes viral beyond the EA community. But that seems pretty hard to do; it might be better to pick something that has already gone viral (an existing story like one of the Yudkowsky essays, or perhaps a very popular tweet expanded into a story) and improve its presentation: polishing it, adding illustrations, or porting it to other mediums like video or audio.
That is why I am currently spending most of my EA effort helping out RationalAnimations, which sometimes writes original stuff but often adapts essays & topics that have preexisting traction within EA. (Suggestions welcome for things we might consider adapting!)
It could also be a cool mini-project for somebody to go through the archive of existing rationalist/EA stories and spruce them up with Midjourney-style AI artwork; you might even be able to create some passable, relatively low-effort YouTube videos just by pairing a dramatic reading of a story with panning Midjourney or stock imagery.
On the other hand, writing stories is fun, and a $3,000 prize pool is not too much to spend in the hopes of maybe generating the next viral EA story! I guess my concrete advice would be to put more emphasis on starting from a seed that has already shown some viral potential (a popular tweet making a point about AI safety, a fanfic-style spinoff of a well-known story tweaked to contain an AI-relevant lesson, and so on).