Update Dec 4: Funds still needed for next month's stipends, plus salaries to run the 11th edition. Zvi listed AISC at the top of his recommendations for talent funnel orgs.
We are organising the 9th edition without funds. We have no personal runway left to do this again. We will not run the 10th edition without funding.
In a nutshell:
- Last month, we put out AI Safety Camp’s funding case.
A private donor then decided to donate €5K.
- Five more donors offered $7K on Manifund.
For that $7K not to be returned to the donors, another $21K is needed to reach the $28K threshold. At that level, we may be able to run a minimal version of AI Safety Camp next year: we would get research leads started over the first 2.5 months, then leave the rest to them.
- The current edition is off to a productive start!
A total of 130 participants joined, spread over 26 projects. The projects are diverse, ranging from agent foundations to mechanistic interpretability to copyright litigation.
- Our personal runways are running out.
If we do not get the funding, we will have to move on. It's hard to restart a program once its organisers have left, so this likely means the end of AI Safety Camp.
- We commissioned Arb Research to do an impact assessment.
One preliminary result is that AISC creates one new AI safety researcher for roughly every $12K-$30K of funding.
How you can support us:
- Spread the word. When we tell people that AISC has no money, most are surprised. If more people knew of our situation, we believe we would get the donations we need.
- Donate. Make a donation through Manifund to help us reach the $28K threshold.
Reach out to remmelt@aisafety.camp for other donation options.
If you look at the projects, you will notice that each is carefully scoped.
The fourth project was borderline for me. I had a few calls with the research lead and decided it was okay to go ahead if they managed to recruit applicants with expertise in policy communication (which they did!).
I prefer carefully scoped projects in this area, including for the concern you raised.
Do you mean the posts early last year about fundamental controllability limits?
That's totally fair. I did not do a good job of taking people's perspectives into account when sharing new writings.
Part of my mistake was presuming that, since I'm in the same community, I could have more of an open conversation about it. I was hoping to put out a series of interesting posts before publishing more rigorous explainers of the arguments. Looking back, I should have spent far more time vetting and refining every (link)post. People's attention is limited, and you want to explain things well from their perspective right off the bat.
Later that year, I distilled the reasoning into a summary explanation. That got 47 upvotes on LW.