
Neel Nanda

4061 karma · neelnanda.io

Bio

I lead the DeepMind mechanistic interpretability team

Comments (296)

Thanks for clarifying! That sounds like a pretty unpleasant experience from a grantee perspective, I'm sorry that happened.

I'm surprised to hear you say SFF and Lightspeed were trying to diversify the funding landscape, AND that it was bad that OpenPhil didn't fund them. My understanding was that there was already another donor (Jaan Tallinn) who wanted to make large donations, and you were trying to help them. To me, it seems natural for Jaan to fund these, and that this is great because it results in a genuinely independent donor. OpenPhil funding them feels more like a regranting program, and I don't see how that genuinely diversifies the landscape in the long term (unless, e.g., OpenPhil funded a long-term endowment for such a program that they couldn't later take away). Was the ask for them to fund the operations, or to add to the pool of money donated? Was the idea that, with more funding, these programs could be more successful and attract more mega-donors from outside the community?

Thanks for the update! Are there any plans to release the list of sub-areas? I couldn't see it in this post or the blog post, and it seems quite valuable for other funders, small donors (like me!), and future grantees/org founders to know which areas might now be less well funded.

Yeah, I'm surprised they're not just giving people money. Conference trips likely matter to some people but not others (either because they're in a field with more conferences, or because their employer often pays for them, like mine).

One year of funding to support Newspeak House as an EA community hub

I was pretty surprised at this one. I live in London and am familiar with Newspeak, and I didn't get the impression at all that they were trying to be an EA hub. They sometimes host events like EAG after-parties or ACX meetups, but it doesn't seem like their main thing. And I see this grant was made almost a year ago. What are they supposed to have been doing, and am I missing something?

Cool! What kind of things are you learning from it?

Should we keep making excuses for OpenAI, and Anthropic, and DeepMind, pursuing AGI at recklessly high speed, despite the fact that AI capabilities research is far out-pacing AI safety and alignment research?

I don't at all follow your jump from "OpenAI is wracked by scandals" to "other AGI labs bad" - Anthropic and GDM had nothing to do with Sam's behaviour, and Anthropic's co-founders actively chose to leave OpenAI. I know you already believed this position, but it feels like you're arguing that Sam's scandals should change other people's position here. I don't see how they give much evidence either way about how the EA community should engage with Anthropic or DeepMind.

I definitely agree that this gives meaningful evidence on whether eg 80K should still recommend working at OpenAI (or even working on alignment at OpenAI, though that's far less clear cut IMO)

Very strong +1, this is nothing like the SBF situation, and there's no need for soul-searching of the form "how did the EA community let this happen", in my opinion.

Damage control, not defeat, IMO. It's not defeat until they release previous leavers from their unfair non-disparagement agreements, or otherwise make it right to them.

Strong +1 to this! Also, entertainingly, I know many of the people in the first episode, and they seemed significantly funnier there than they do in real life - clearly I'm not hanging out with you all in the right settings!
