Here's a list of my donations so far this year (put together as part of thinking through whether I and others should participate in an OpenAI equity donation round). They are roughly in chronological order (though it's possible I missed one or two). I include some thoughts on what I've learned and what I'm now doing differently at the bottom.

As a meta-level note, I originally posted this content to LessWrong as a shortform, but since donations are a more central topic on the EA Forum it seemed appropriate to make it a top-level post here. However, I do want to highlight that I no longer identify as an EA (this post hits many of the key points why) and have very mixed feelings about the EA attitude towards donating. I'm mainly posting this here because I hope that the reasoning and conclusions I've reached below are useful for EAs to see, not because I'm trying to inspire others to copy me. I don't want to encourage people to donate (even to the same places as I did) unless they already have a few million dollars in assets, since I want people who have influence over the EA community (and humanity in general) to be making decisions from an abundance mindset rather than a scarcity mindset. Maintaining that mindset is also a crucial priority on a personal level, and I'm only donating to an extent that's consistent with it.

  1. $100k to Lightcone
    1. This grant was largely motivated by my respect for Oliver Habryka's quality of thinking and personal judgment.
    2. This ended up being matched by the Survival and Flourishing Fund (though I didn't know it would be when I made it). Note that they'll continue matching donations to Lightcone until the end of March 2026.
  2. $50k to the Alignment of Complex Systems (ACS) research group
    1. This grant was largely motivated by my respect for Jan Kulveit's philosophical and technical thinking.
  3. $20k to Alexander Gietelink Oldenziel for support with running agent foundations conferences.
  4. ~$25k to Inference Magazine to host a public debate in London on the plausibility of the intelligence explosion.
  5. $100k to Apart Research, who run hackathons where people can engage with AI safety research in a hands-on way (technically made with my regranting funds from Manifund, though I treated it like a $100k boost to my own donation budget).
  6. $50k to Janus
    1. Janus could reasonably be described as the Jane Goodall of AI. They and their collaborators are doing the kind of creative thinking and experimentation that has a genuine chance of leading to new paradigms for understanding AI. See for instance this discussion of AI identities.
  7. $15k to Palladium
    1. They are doing good thinking about governance and politics on a surprisingly tight budget.
  8. $100k to Sahil to support work on live theory at groundless.ai
    1. I've found my conversations with Sahil extremely generative. He's one of the researchers I've talked to with the most ambitious and philosophically coherent "overall vision" for the future of AI. I still feel confused about how likely his current plans are to actualize that vision (and there are also some points where it's in tension with my own overall vision) but it definitely seems worth betting on.

Total so far: ~$460k (of which $360k was my own money, and $100k Manifund's money).

Note that my personal donations this year are >10x greater than any previous year; this is because I cashed out some of my OpenAI equity for the first time. So this is the first year that I've invested serious time and energy into donating. What have I learned?

My biggest shift is from thinking of myself as donating "on behalf of the AI safety community" to specifically donating to things that I personally am unusually excited about. I have only a very small proportion of the AI safety community's money; also, I have fairly idiosyncratic views that I've put a lot of time into developing. So I now want to donate in a way which "bets on" my research taste, since that's the best way to potentially get outsized returns. More concretely:

  • I'd classify the grants to Apart Research and the Inference Magazine debate as things that I "thought the community as a whole should fund". If I were making those decisions today, I'd fund Apart Research significantly less (maybe $50k?) and not fund the debate (also because I've updated away from public outreach as a valuable strategy).
  • I consider my donations to ACS, Janus and Sahil as leveraging my research taste: these are some of the people who I have the most productive research discussions with. I'm excited about others donating to them too.
  • My grants to Lightcone and Alexander Gietelink Oldenziel are somewhere in between those two categories. I'm still excited about them, though I'm now a bit more skeptical of conferences/workshops in general as a thing I want to support (there are so many conferences; are people actually getting value out of them, or mainly using them as a way to feel high-status?). However, this is less of a concern for agent foundations conferences, and it's also the sort of thing that I trust Oliver to track and account for.
  • My political views are unusual enough that I haven't yet figured out a great way to fund them. Palladium is in the right broad direction, but not focused enough on my particular interests for me to want to fund it at scale (and again is more of a "someone should fund it" type thing). Regardless, I'm uninterested in almost all of the AI governance interventions others in the community are funding (for reasons I gestured towards in this talk, though note that some of my arguments in it were a bit sloppy, e.g. as per the top comment).

Even more recently, I've decided that I can bet on my research taste most effectively by simply hiring research assistants to work for me. I'm uncertain how much this will cost me, but if it goes well it'll be most of my "donation" budget for the next year. I could potentially get funding for this, but at least to start off with, it feels valuable to not be beholden to any external funders.

More generally, I'd be excited if more people who are wealthy from working at AI labs used that money to make more leveraged bets on their own research (e.g. by working independently and hiring collaborators). This seems like a good way to produce the kinds of innovative research that are hard to incentivize under other institutional setups. I'm currently writing a post elaborating on this intuition.

Comments

"I want people who have influence over the EA community (and humanity in general) to be making decisions from an abundance mindset rather than a scarcity mindset"

In case you read the comments here: do you have a short form / a blog post on this (even from someone else) that you'd like to link to?

Richard’s ‘Coercion is an adaptation to scarcity’ post and follow-up comment talk about this (though ofc maybe there’s more to Richard’s view than what’s discussed there). Select quotes:

What if you think, like I do, that we live at the hinge of history, and our actions could have major effects on the far future—and in particular that there’s a significant possibility of existential risk from AGI? I agree that this puts us in more of a position of scarcity and danger than we otherwise would be (although I disagree with those who have very high credence in catastrophe). But the more complex the problems we face, the more counterproductive scarcity mindset is. In particular, AGI safety requires creative paradigm-shifting research, and large-scale coordination; those are both hard to achieve from a scarcity mindset. In other words, coercion at a psychological or community level has strongly diminishing marginal returns when dealing with scarcity at a civilizational level.

AI is a danger on a civilizational level; but the best way to deal with danger on a civilizational level is via cultivating abundance at the level of your own community, since that’s the only way it’ll be able to make a difference at that higher level.

"I don't want to encourage people to donate (even to the same places as I did) unless you already have a few million dollars in assets"

I do see advantages of the abundance mindset, but your threshold is extremely high: it excludes nearly everyone in developed countries, let alone the world. Plenty of people without millions of dollars of assets have an abundance mindset (including myself).

"I'd fund Apart Research significantly less (maybe $50k?) and not fund the debate (also because I've updated away from public outreach as a valuable strategy)."

What caused this update? Perhaps I just need to listen to the talk linked below it, but I'd be interested if you have any more pointed thoughts to share.

I used to not actually believe in heavy-tailed impact. On some level I thought that early rationalists (and to a lesser extent EAs) had "gotten lucky" in being way more right than academic consensus about AI progress. And I thought on some gut level that e.g. Thiel and Musk and so on kept getting lucky, because I didn't want to picture a world in which they were actually just skillful enough to keep succeeding (due to various psychological blockers).

Now, thanks to dealing with a bunch of those blockers, I have internalized to a much greater extent that you can actually be good not just lucky. This means that I'm no longer interested in strategies that involve recruiting a whole bunch of people and hoping something good comes out of it. Instead I am trying to target outreach precisely to the very best people, without compromising much.

Relatedly, I've updated that the very best thinkers in this space are still disproportionately the people who were around very early. The people you need to soften/moderate your message to reach (or who need social proof in order to get involved) are seldom going to be the ones who can think clearly about this stuff. And we are very bottlenecked on high-quality thinking.

(My past self needed a lot of social proof to get involved in AI safety in the first place, but I also "got lucky" in the sense of being exposed to enough world-class people that I was able to update my mental models a lot—e.g. watching the OpenAI board coup close up, various conversations with OpenAI cofounders, etc. This doesn't seem very replicable—though I'm trying to convey a bunch of the models I've gained on my blog, e.g. in this post.)
