Update Dec 4: Funds still needed for next month's stipends, plus salaries to run the 11th edition. Zvi listed AISC at the top of his recommendations for talent funnel orgs.
We are organising the 9th edition without funds. We have no personal runway left to do this again. We will not run the 10th edition without funding.
In a nutshell:
- Last month, we put out AI Safety Camp’s funding case.
A private donor then decided to donate €5K.
- Five more donors offered $7K on Manifund.
For that $7K not to be wiped out and returned, another $21K in funding is needed. At that level, we may be able to run a minimal version of AI Safety Camp next year, where we get research leads started in the first 2.5 months and leave the rest to them.
- The current edition is off to a productive start!
A total of 130 participants joined, spread over 26 projects. The projects are diverse – from agent foundations, to mechanistic interpretability, to copyright litigation.
- Our personal runways are running out.
If we do not get the funding, we have to move on. It’s hard to start a program again once organisers move on, so this likely means the end of AI Safety Camp.
- We commissioned Arb Research to do an impact assessment.
One preliminary result is that AISC creates one new AI safety researcher per roughly $12K-$30K of funding.
How you can support us:
- Spread the word. When we tell people AISC doesn't have any money, most people are surprised. If more people knew of our situation, we believe we would get the donations we need.
- Donate. Make a donation through Manifund to help us reach the $28K threshold.
Reach out to remmelt@aisafety.camp for other donation options.
I see your concern.
Remmelt and I have different beliefs about AI risk, which is why the last AISC was split into two streams. Each of us is allowed to independently accept projects into our own stream.
Remmelt believes that AGI alignment is impossible, i.e. there is no way to make AGI safe. Exactly why Remmelt believes this is complicated, and something I myself am still trying to understand; however, this is actually not very important for AISC.
The consequence of this for AISC is that Remmelt is only interested in projects that aim to stop AI progress.
I still think that alignment is probably technically possible, but I'm not sure. I also believe that even if alignment is possible, we need more time to solve it. Therefore, I see projects that aim to stop or slow down AI progress as good, as long as there are no overly large adverse side effects, and I'm happy to have Remmelt and the projects in his stream as part of AISC. Not to mention that Remmelt and I work really well together, despite our different beliefs.
If you check our website, you'll also notice that most of the projects are in my stream. I've been accepting any project as long as there is a reasonable plan, there is a theory of change under some reasonable and self-consistent assumptions, and the downside risk is not too large.
I've bounced around a lot in AI safety, trying out different ideas, and started more research projects than I've finished, which has given me a wide view of different perspectives. I've updated many times in many directions, which has left me with wide uncertainty about which perspective is correct. This is reflected in which projects I accept to AISC. I believe in a "let's try everything" approach.
At this point, someone might think: if AISC is not filtering projects on more than "seems worth a try", how does AISC make sure not to waste participants' time on bad projects?
Our participants are adults, and we treat them as such. We do our best to present what AISC is, and what to expect, and then let people decide for themselves if it seems like something worth their time.
We also require research leads to do the same, i.e. the project plan has to provide enough information for potential participants to judge whether it is something they want to join.
I believe there is a significant chance that the solution to alignment is something no one has thought of yet. I also believe that the only way to do intellectual exploration is to let people follow their own ideas, and avoid top-down curation.
The only thing I filter hard for in my stream is that the research lead actually needs to have a theory of change. They need to have actually thought about AI risk, and why their plan could make a difference. I have had this conversation with every research lead in my stream.
One person in the last AISC said they regretted joining, because they could have learned more by spending that time on other things. I take that feedback seriously. On the other hand, I regularly meet alumni who tell me how useful AISC was for them, which convinces me that AISC is very clearly net positive.
However, if we were not understaffed (due to being underfunded), we could do more to support the research leads in making better projects.