
TL;DR: Join us this weekend for the non-technical AI governance ideathon with Michael Aird as keynote speaker, happening both virtually and in person at 12 locations! We also invite you to join the interpretability hackathon with Neel Nanda on the 14th of April.

Below is an FAQ-style summary of what you can expect.

What is it?

The Alignment Jams are fun, weekend-long research events where participants of all skill levels join in teams of 1–5 to engage in direct AI safety work. You submit a PDF report on the participation page, with the opportunity to receive a review from great people like Emma Bluemke, Elizabeth Seger, Neel Nanda, Otto Barten, and others.

If you are not at any of the in-person jam sites, you can participate online through our Discord, where the keynote, award ceremony, and AI safety discussions happen!

The ideathon runs from the 24th to the 26th of March, and we have the honour of presenting Michael Aird, lead AI governance researcher at Rethink Priorities, as our keynote speaker. The interpretability hackathon runs from the 14th to the 16th of April, and we are collaborating with keynote speaker Neel Nanda for the third time to bring great starter resources to you. Get all the dates into your calendar.

Join this weekend's AI governance ideathon to write proposals for solutions to problem cases and think more about strategy in AI safety. We promise you'll be surprised by what you can achieve in just a weekend's work!

Read more about how to join, what you can expect, the schedule, and what previous participants have said about being part of the jams below.

Where can I join?

You can join the event both in person and online, but everyone needs to make an account and join the jam on the itch.io page.

See all in-person jam sites here. These include Hồ Chí Minh City, Copenhagen, Delft, Oxford, Cambridge, Madison, Aarhus, Paris, Toronto, Detroit (Ann Arbor), São Paulo, London, Sunnyvale, and Stanford.

Everyone should join the Discord to ask questions, see updates and announcements, find online team members, and more. Join here.

What are some examples of AI governance projects I could make?

The submissions will be based on the cases presented on the Alignment Jam website and focus on specific problems in the interaction between society and artificial intelligence.

We provide some great inspiration with the cases that have been developed in collaboration with Richard Ngo, Otto Barten, Centre for the Governance of AI and others:

  • Categorize the future risks from artificial intelligence in a way that is accessible to policymakers.
  • Write up a report on the considerations and actions OpenAI should take for a hypothetical release of a multimodal GPT-6 to be safe.
  • Imagine a policy proposal that, with full support from policymakers, would be successful in slowing or pausing progress towards AGI in a responsible and safe manner.
  • Come up with ways that AI might self-replicate dangerously and brainstorm solutions to these situations.
  • Whose values should AI follow, and how do we aggregate and understand highly varied preferences for systems that make large-scale decisions?
  • If we imagine that in 7 years the US ban on AI hardware exports to China has led to antagonistic AGI development race dynamics between the two nations, what will have led to this scenario? And how might we avoid risky scenarios from a governance perspective?
  • As AI takes over more and more tasks in the world, how will the technology fit into democratic processes, and which considerations will we have to take into account?

This will be our first non-technical hackathon (besides an in-person retreat in Berkeley) and we're excited to see which proposals you come up with!

Why should I join?

There are plenty of reasons to join! Here are just a few:

  • See how fun and interesting AI safety can be!
  • Get a new perspective on AI safety
  • Acquaint yourself with others interested in the same things as you
  • Get a chance to win $1,000 in the AI governance ideathon!
  • Get practical experience with AI safety research
  • Show AI safety labs and institutions what you can do, and increase your chances at some amazing jobs
  • Get a cool certificate that you can show your friends and family
  • Have a chance to work on that project you've considered starting for so long
  • Get proof of your skills so you can get that one grant to pursue AI safety research
  • And of course, many other reasons… Come along!

What if I don’t have any experience in AI safety?

Please join! This can be your first foray into AI and ML safety, and maybe you'll realize that it's not that hard. Even if it turns out not to be for you, it's a chance to engage with the topics on a deeper level.

There's a lot of pressure in AI safety to perform at a top level, and this seems to drive some people out of the field. We'd love for you to join with a mindset of fun exploration and get a positive experience out of the weekend.

What is the agenda for the weekend?

The schedule runs from 6 PM CET / 9 AM PST on Friday to 7 PM CET / 10 AM PST on Sunday. We start with an introductory talk and end with an awards ceremony. Subscribe to the public calendar here.

| CET / PST | Programme |
| --- | --- |
| Fri 6 PM / 9 AM | Introduction to the hackathon, what to expect, and a talk from Michael Aird or Neel Nanda. Afterwards, there's a chance to find new teammates. |
| Fri 7:30 PM / 10:30 AM | Jamming begins! |
| Mon 4 AM / 8 PM | Final submissions are due. Judging begins, and both the community and our great judges from ERO and GovAI join us in reviewing the proposals. |
| Wed 6 PM / 9 AM | The award ceremony: the winning teams present their projects and the prizes are awarded. |
| Afterwards! | We hope you will continue your work from the hackathons and share it on the forums or your personal blog! |
I’m busy, can I join for a short time?

As a matter of fact, we encourage you to join even if you only have a short while available during the weekend!

So yes, you can join without attending the beginning or end of the event, and you can submit research even if you've only spent a few hours on it. We still encourage you to come to the intro ceremony and join for the whole weekend, but everything will be recorded and shared so you can participate asynchronously as well.

Wow this sounds fun, can I also host an in-person event with my local AI safety group?

Definitely! It might be hard to make it in time for the AI governance ideathon, but we encourage you to join our team of in-person organizers around the world for the interpretability hackathon in April!

You can read more about what we require here and the possible benefits for your local AI safety group here. Sign up as a host using the button on this page.

What have previous participants said about this hackathon?

> I was not that interested in AI safety and didn't know that much about machine learning before, but I heard about this hackathon from a friend, and I don't regret participating! I've learned a ton, and it was a refreshing weekend for me.

> A great experience! A fun and welcoming event with some really useful resources for starting to do interpretability research. And a lot of interesting projects to explore at the end!

> It was great to hear directly from accomplished AI safety researchers and try investigating some of the questions they thought were high impact.

> I found the hackathon very cool; I think it significantly lowered my hesitance about participating in stuff like this in the future. A whole bunch of lessons learned, and Jaime and Pablo were very kind and helpful through the whole process.

> The hackathon was a really great way to try out research on AI interpretability and to get in touch with other people working on this. The input, resources, and feedback provided by the team organizers, and in particular by Neel Nanda, were super helpful and very motivating!

Where can I read more about this?

Again, sign up here by clicking “Join jam” and read more about the hackathons here.

Godspeed, research jammers!