
Neel Nanda

5211 karma · Joined · neelnanda.io

Bio

I lead the DeepMind mechanistic interpretability team

Comments (394)

This seems like a great post to exist!

I would say yes, but frame it as "they can help you think about AGI focused options and if they might be a good fit - but obviously there are also other options worth considering!"

Huh, I think this way is a substantial improvement - if 80K had strong views about where their advice leads, far better to be honest about this and let people make informed decisions, than giving the mere appearance of openness

This seems a reasonable update, and I appreciate the decisiveness, and clear communication. I'm excited to see what comes of it!

This was a very helpful post, thanks! Do you know of any way for UK donors to give to the rapid response fund? If not, has GWWC considered trying to set that up? (Like I think you have with a bunch of other charities)

Great post! I highly recommend using LLM assistance (especially Claude) here, eg drafting emails, preparing a script for phone calls, etc. Personally I find this all super awkward, and LLMs are much better than me at wording things gracefully. (Though you want to edit it enough that it doesn't feel like LLM written slop)

I think you are conflating your specific cause prioritisation with the general question of how people who care about impact should think. If someone held your cause prioritisation, then they should clearly work at one of those top organisations, otherwise help with those issues, or earn the highest salary they can and donate it, i.e. earn to give. Working at other impact-focused organisations not focused on those top causes wouldn't make sense. I think that generally you should optimise for one thing rather than half-heartedly optimising for several.

However, many people do not share your cause prioritisation, which leads to quite different conclusions. I have no regrets about doing direct work myself.

I disagree. I think that if a government causes great harm by accident, or great harm intentionally, either is evidence that it will cause great harm by accident or intentionally (respectively) in future, and I just care about the great harm part.

This is quite different from the case I would make for donor lotteries. The argument I would make is just that figuring out what to do with my money takes a bunch of time and effort. If I had 10 times the amount of money, I could just scale up all of my donations by 10 times and the marginal utility would probably be about the same. So I would happily take a 10% chance of 10x-ing my money and a 90% chance of ending up with nothing, and otherwise follow the same strategy: in expectation the total good done is the same, but the effort invested costs only 10% as much, since I won't bother doing the research if I lose.
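The arithmetic behind this can be sketched in a few lines. The pot size, ticket price, and effort figures below are illustrative assumptions, not numbers from the comment:

```python
# Sketch of the expected-value arithmetic behind the donor-lottery argument.
# All numbers are illustrative assumptions.

ticket = 10_000          # my donation into the lottery
pot = 100_000            # total pot, e.g. ten participants like me
p_win = ticket / pot     # 10% chance to direct the whole pot

# Expected money directed is identical either way:
ev_no_lottery = ticket        # allocate my 10k myself
ev_lottery = p_win * pot      # 0.10 * 100k = 10k in expectation
assert ev_no_lottery == ev_lottery

# But research effort is only spent in the world where I win:
effort_hours = 40                                 # hours to research grants well
expected_effort_lottery = p_win * effort_hours    # 4 hours in expectation

print(ev_lottery, expected_effort_lottery)
```

The good done in expectation is unchanged, while the expected research cost falls in proportion to the win probability, which is the whole of the argument.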

Further, it now makes sense to invest far more effort, but that's just a fun bonus. I can still just give the money to EA Funds or whatever if that beats my personal judgement, but I can take a bit more time to look into this, maybe make some other grants if I prefer, etc. Likewise, being 100x or 1000x leveraged is helpful and justifies even more effort in the world where I win.

Notably, this argument works regardless of who else is participating in the lottery: if I just went to Vegas and bet a bunch of my money on roulette, that would get a similar effect. Donor lotteries are just a useful way of doing this where everyone gets the benefit of a small chance of massively increasing their money and a high chance of losing it all, and unlike roulette it costs nothing in expectation.

If people said all these things without the word marginal, would you be happy?
