I lead the DeepMind mechanistic interpretability team
I'm surprised to hear you say SFF and Lightspeed were trying to diversify the funding landscape, AND that it was bad that OpenPhil didn't fund them. My understanding was that there was already another donor (Jaan Tallinn) who wanted to make large donations, and you were trying to help him. To me, it seems natural for Jaan to fund these, and this is great because it results in a genuinely independent donor. OpenPhil funding them feels more like a regranting program, and I don't see how that genuinely diversifies the landscape in the long term (unless, e.g., OpenPhil funded a long-term endowment for such a program that it can't later take away). Was the ask for OpenPhil to fund the operations, or to add to the pool of money donated? Was the idea that, with more funding, these programs could be more successful and attract more mega-donors from outside the community?
One year of funding to support Newspeak House as an EA community hub
I was pretty surprised at this one. I live in London and am familiar with Newspeak House, and I didn't get the impression at all that they were trying to be an EA hub. They sometimes host events like EAG after-parties or ACX meetups, but that doesn't seem to be their main focus. And I see this grant was made almost a year ago. What are they supposed to have been doing, and am I missing something?
Should we keep making excuses for OpenAI, and Anthropic, and DeepMind, pursuing AGI at recklessly high speed, despite the fact that AI capabilities research is far out-pacing AI safety and alignment research?
I don't follow your jump at all from "OpenAI is wracked by scandals" to "other AGI labs bad" - Anthropic and GDM had nothing to do with Sam's behaviour, and Anthropic's co-founders actively chose to leave OpenAI. I know you already believed this position, but it feels like you're arguing that Sam's scandals should change other people's position here. I don't see how they provide much evidence either way about how the EA community should engage with Anthropic or DeepMind?
I definitely agree that this gives meaningful evidence on whether, e.g., 80K should still recommend working at OpenAI (or even working on alignment at OpenAI, though that's far less clear-cut IMO).
Thanks for clarifying! That sounds like a pretty unpleasant experience from a grantee perspective, I'm sorry that happened.