
Neel Nanda

5884 karma · neelnanda.io

Bio

I lead the DeepMind mechanistic interpretability team

Comments (435)

Really glad to hear it! (And that writing several thousand words of very in-depth examples was useful!) I'd love to hear if it proves to be useful longer term.

Agreed with all of the above. I'll also add that a bunch of orgs do work that is basically useless, and it should not be assumed that, just because an org seems "part of the community", working there will be an effective way to do good - public callouts are costly, and community dynamics and knowledge can be hard to judge from the outside.

Thank you for the post. I was a bit surprised by the bulletin board one. What goes wrong with just keeping the forum exactly as it is now and saying you're not going to do any maintenance or moderation, but without trying to reposition it as a bulletin board? At the very least, I expect the momentum could keep it going for a while. Is the issue that you think you do a lot of active moderation work that sustains healthy discussion norms, which matters a lot for the current forum but would matter less for a bulletin board?

I think we just agree. Don't donate to politics unless you're going to be smart about it.

I work in AI. Most papers, whether in peer-reviewed venues or not, are awful. Some, in both categories, are good. Knowing whether a work is peer reviewed is weak evidence of quality, since so many good researchers think peer review is dumb and don't bother (especially in safety). E.g. I would generally consider "comes from a reputable industry lab" to be somewhat stronger evidence. Imo the reason "was it peer reviewed" is a useful signal in some fields is largely because the best researchers there try to get their work peer reviewed, so not being peer reviewed is strong evidence of incompetence. That's not the case in AI.

So, it's an issue, but in the same way that all citations are problematic if you can't check them yourself/trust the authors to do due diligence

Define "past a certain point"? What fraction of close races in EG the US meet this? Especially if you include eg primaries for either party with one candidate with much more sensible views than the other. Imo donations are best spent on specific interventions or specific close but neglected races, but these can be a big deal

I do not feel qualified to judge the effectiveness of an advocacy org from the outside - there's a lot of critical information, like whether they're offending people, whether they're having an impact, whether they're sucking up oxygen from other orgs in the space, whether their policy proposals are realistic, whether they're making good strategic decisions, etc, that I don't really have the information to evaluate. So it's hard to engage deeply with an org's case for itself, and I default to this kind of high-level prior. Like, the funders can also see this strong case and still aren't funding it, so I think my argument stands.

I'm sorry to hear that CAIP is in this situation, and this is not at all my area of expertise/I don't know much about CAIP specifically, so I do not feel qualified to judge this myself.

That said, I will note on the meta level that there is major adverse selection when funding an org in a bad situation that all other major funders have passed on funding, and I would be personally quite hesitant to fund CAIP here without thinking hard about it or getting more info.

Funders typically have more context and private info than me, and with prominent orgs like this there's typically a reason, but funders are strongly disincentivized from making the criticism public. In this case, one of the stated reasons CAIP quotes, that a funder "had heard from third parties that CAIP was not a valuable funding opportunity", can be a very good reason if the third party is trustworthy and well informed, and critics often prefer to stay anonymous. I would love to hear more about the exact context here, and why CAIP believes those funders are making a mistake that readers should ignore, to assuage fears of adverse selection.

I generally only recommend donating in a situation like this when you:

  • Are confident the opportunity is low downside (which seems false in the context of political advocacy)
  • Have a decent idea of why those funders declined, which you disagree with
  • Or think sufficiently little of all the mentioned funders (Open Philanthropy, Longview Philanthropy, Macroscopic Ventures, Long-Term Future Fund, Manifund, MIRI, Scott Alexander, and JueYan Zhang) that you don't update much
  • Feel you have enough context to make an informed judgement yourself, and grantmakers are not meaningfully better informed than you

I'm skeptical that the reason is really just that it's politically difficult for most funders to fund political advocacy. That makes it harder, but there are a fair number of risk-tolerant private donors, at least. If that were the real reason, I would expect funders to be back-channelling to other, less constrained funders that CAIP is a good opportunity, or possibly making it public that they did not have an important reason to decline/that they think the org does good work (as Eli Rose did for Lightcone). I would love for any of them to reply to my comment saying this is all paranoia! There are other advocacy orgs that are not in as dire a situation.

It seems like your goal with this post was to persuade EAs like me, and I was trying to explain why I didn't find much here persuasive. I generally only go and read linked resources if there's enough to make me curious, so a post that asserts something and links resources but doesn't summarise the ideas or arguments is not persuasive to me. I've tried to be fairly clear about which parts of what you're saying I think I understand well enough to confidently disagree with, and which parts I predict I would disagree with based on prior experience with other concepts and discourse from this ideological space but have not engaged with enough to be confident in - I consider this perfectly consistent with evidence-based judgement. Life is far too short to go and read a bunch of things about every idea that I'm not confident is wrong.

  1. I disagree that wealth accumulation causes damage
  2. I'm not super sure what you mean by comprehensive donor education, but I predict I would disagree with it
  3. I'm neither convinced that these orgs effect complex political change, nor that their political goals would be good for the world. For example, as I understand it, degrowth is a popular political view in such circles and I think this would be extremely bad
  4. I'm not familiar with the techniques outlined here, but would guess that the goals and worldview behind such tricky conversations differ a fair bit from mine
  5. This one seems vaguely plausible, but it's premised on radical feminism having techniques for getting donors to exert useful non-monetary influence, and on those techniques working for the goals I care about, neither of which is obvious to me