
Edit - this group was an experiment which I consider to have been mostly unsuccessful, and the group is no longer very active. I think the two main reasons for this were: (1) the lack of a single focal point for people to congregate around (e.g. the TransformerLens library in the case of the Open Source Mech Interp Slack group, or the ARENA course material in the case of the ARENA Slack), and (2) top-down rather than bottom-up design of the Slack group and its features.


TL;DR

I'm creating a Slack group for people who are interested in working in AI safety at some point in the future, but who aren't working on it right now[1], and would like extra accountability and motivation while they pursue their goals. 

Join with this link!

Why am I creating this?

I just spent an awesome summer in Berkeley doing MLAB, surrounded by people who are really passionate about AI safety, and this definitely had a positive impact on my level of motivation. I think trying to recreate some (even much weaker) version of that would be really valuable. An accountability system is the most basic version of this, because making commitments to other people is a really nice way of motivating yourself to get shit done!

I've spoken to a few people from MLAB, and several seem to agree (at least five participants have mentioned to me that they'd like to join a group like this).

How will this work?

(Note - this might all be changed depending on how many people join, and their suggestions & preferences. Hopefully by the end of next week, the group will be larger and we will have made many improvements to this basic design!)

The core mechanism of the group will be everyone posting regular short updates (e.g. a Slack message, or filling out a Google Form)[2], perhaps once every two weeks, summarising what they've done over that period. For instance:

  • Books you've read, or courses you've taken, or progress in structured self-study like this
  • Blog posts you've written
  • Projects you've done, or are doing, and your progress on them
  • Companies or other opportunities you've applied for
  • Dank AI memes you've designed

There will also be optional extra commitment mechanisms like weekly Zoom calls. Also, if enough people join (e.g. more than six), we'll probably divide people into smaller groups for personal check-ins, since larger groups tend to leave individuals feeling less accountable. Progress reports will still be posted to the main Slack channel.

Who is this group for?

I expect it will be most useful for you if one or more of the following holds:

  • You aren't yet contributing to AI alignment directly, but you think you might at some point in the future.
  • You aren't necessarily surrounded by a group of people who are also working on AI safety.
  • You have a particular idea for ways you want to skill up / projects you want to do / places you want to apply for, but lack the motivation to do it.
  • You are interested in accountability systems, but don't have the ability to make regular time commitments.

None of these is strictly necessary, so feel free to join anyway if you think you'd benefit from this group.

Why am I making this group, when there are other pre-existing groups that might work for this purpose?

Three main reasons:

  1. I don't think size is necessarily an advantage in an accountability group. Larger groups can dilute accountability, whereas smaller groups can often hold each individual more accountable and encourage a sense of community.
  2. I think having an entire Slack dedicated to this mechanism, rather than just one channel of a larger Slack group, has lots of benefits. For instance, the AI Alignment Slack has a study buddies channel, but since accountability isn't that group's primary focus, I expect ASAP to have a comparative advantage in providing motivation and accountability.
  3. Many pre-existing AI safety related Slack / Discord groups have a much more specific topical focus (e.g. the Alignment Studies Slack group, which is mainly focused on the MIRI course list). I imagine most people would benefit from more flexibility, since everyone will probably be doing slightly different things.

What is the end goal?

I created and joined the Slack group at the start of this week, and so far at least five people have expressed preferences to join. So by simple laws of exponential progression, I expect we'll reach the population of earth in approximately 14 weeks, or just before the end of 2022. The resulting galaxy-brained AI safety community would almost certainly be able to solve the alignment problem right away.
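In the spirit of the joke, the arithmetic can be sketched in a few lines of Python. This is a tongue-in-cheek projection, assuming (hypothetically) that membership quintuples every week starting from the current five members, and using a round figure of 8 billion for Earth's population:

```python
# Tongue-in-cheek growth projection: hypothetical 5x weekly growth
# from 5 members until we exceed Earth's population (~8 billion).
EARTH_POPULATION = 8_000_000_000

members = 5
weeks = 0
while members < EARTH_POPULATION:
    members *= 5  # the group quintuples each week (a very optimistic assumption)
    weeks += 1

print(weeks)  # → 14
```

So "approximately 14 weeks" checks out, provided you accept the slightly heroic assumption of sustained 5x weekly growth.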

Just in case this doesn't succeed, some decent fallback goals would be:

  • Encouraging more people to stick to their targets, and providing positive reinforcement.
  • Motivating people to keep skilling up in AI safety, and helping them take steps towards making direct contributions to the field in the future if they aren't there yet.

Why did you call it the "AI Safety Accountability Programme"?

So I could title this post "Join ASAP" and no other reason.

Last words

Join ASAP, and come blast off from the land of amotivation into the stratosphere of becoming awesome 🙃🚀

  1. ^

    People who are already working in AI safety but would like to join an accountability group are of course also welcome, although I expect they'd get slightly less utility out of this group.

  2. ^

    These are just examples; people can feel free to update in whatever form works best for them.

Comments


  1. Joining
  2. I think an important bottleneck is "people can get early feedback about what they're doing", which involves the [hard in EA] sub-goal of "people share early drafts"
    1. This is different from the typical meaning people have for "accountability", which seems to be "I promise to do this thing" [even if the thing seems internally wrong] [even if this thing IS a totally wrong direction that could be easily solved with an outside observer]
    2. My priors are from people learning software development

For sharing updates on Slack I would recommend https://geekbot.com/

Ah thanks, that looks awesome! Will definitely suggest this in the group

Slack invite link is no longer active?  I'm definitely interested.

Yeah the link isn't active.

Thanks for commenting! Yep the link seems to have expired, this one should work (and the post is now updated).

Can I petition for you to move away from Slack, which is hostile to open communities due to its business model (e.g. hiding messages after 3 months unless you pay $8.75/user/month), towards Discord, which is welcoming to communities due to its business model? I'm spearheading an initiative[1] to move Slacks there after this recent decision.

  1. ^

I'll copy in my response from EA Groups Slack:

Yeah, I'm aware that there are arguments against Slack along these lines. From my perspective, Discord has two main annoyances: (1) worse formatting options in messages (e.g. no links in text) and, more importantly, (2) no reply threads, which can make channels really cluttered. I've mentioned in the Slack already that I'd be willing to cover costs if it comes to that.

I'd be open to changing my mind on these points though, if I found that Discord had those features or close alternatives

Hey, quick note to mention that Discord now does have its version of threads (it's been a few months or something). Not everyone is using them everywhere, and they're not quite as ubiquitous as Slack's, but I think they're good enough that this should no longer be a blocking point.
So it's probably a good time to at least try the switch, if not for this project then for the next you try :)

Yep thanks for mentioning this, it did come up in the discussion on the Slack group and definitely updated me towards Discord. The vote for whether we should use Slack or Discord did end up going in favour of Slack by a margin of 14 votes to 6, so we'll be sticking with Slack for now, but we might revisit the issue in the future if there's good reason to (e.g. the 90 day history thing proves a significant inconvenience).

I'm very interested in joining, thank you for making the group!

The link no longer seems to be active though

Thanks for commenting! Yep the link seems to have expired, this one should work (and the post is now updated).


Hi! Would love to join, though it seems the link is expired. Is there another link I could use? 

Sorry for the delay, yep sure! Here's the link: https://join.slack.com/t/join-asap/shared_invite/zt-1kkzoa53n-ImLZZpiM9L2uoV_bH7Oh2A

and I'll also update it in the post.

Just in case this has something to do with the link: I got an error when trying to join the group with my google account. (Might try with email later).

email worked

I'm in! But (again?), the link is not working

Thanks! Will fix now. I really should set a repeating reminder for myself lol

It looks like the new link has also expired. Can you post an update?
