Update Dec 4: Funds still needed for next month's stipends, plus salaries to run the 11th edition. Zvi listed AISC at the top of his recommendations for talent funnel orgs.
We are organising the 9th edition without funds. We have no personal runway left to do this again. We will not run the 10th edition without funding.
In a nutshell:
- Last month, we put out AI Safety Camp’s funding case.
A private donor then decided to donate €5K.
- Five more donors offered $7K on Manifund.
For that $7K not to be returned to donors, another $21K in funding is needed. At that level, we may be able to run a minimal version of AI Safety Camp next year, in which we get research leads started during the first 2.5 months and leave the rest to them.
- The current edition is off to a productive start!
A total of 130 participants joined, spread over 26 projects. The projects are diverse – from agent foundations to mechanistic interpretability to copyright litigation.
- Our personal runways are running out.
If we do not get the funding, we have to move on. It’s hard to start a program again once organisers move on, so this likely means the end of AI Safety Camp.
- We commissioned Arb Research to do an impact assessment.
One preliminary result is that AISC creates one new AI safety researcher per roughly $12K–$30K of funding.
How you can support us:
- Spread the word. When we tell people AISC doesn't have any money, most people are surprised. If more people knew of our situation, we believe we would get the donations we need.
- Donate. Make a donation through Manifund to help us reach the $28K threshold.
Reach out to remmelt@aisafety.camp for other donation options.
Crossposted from LessWrong.
Maybe I'm being cynical, but I'd give >30% that funders have declined to fund AI Safety Camp in its current form for some good reason. Has anyone written the case against? From talking to various colleagues, I know that AISC used to be good, but I have no particular reason to believe in its current quality.
Semantically, you could have said the same thing in far less muckrakey language – 'Remmelt has posted widely criticised work', for example. Yes, that's less specific, but it's also more important – the idea that someone should be discredited because someone said a bad thing about something they wrote is disturbingly bad epistemics.
Etymologically, your definition of an ad hominem is wrong – it can also be about attacking their circumstances. Obviously circumstances can have evidential importance, but I think it's also poor epistemics to describe them withou…