I am an earlyish crypto investor who has accumulated enough to be a mid-sized grantmaker, and I intend to donate most of my money over the next 5-10 years to try and increase the chances that humanity has a wonderful future. My best guess is that this is mostly decided by whether we pass the test of AI alignment, so that’s my primary focus.
AI alignment has lots of money flowing into it, with some major organizations not running fundraisers, Zvi characterizing SFF as having “too much money”, OpenPhil expanding its grantmaking for the cause, FTX setting themselves up as another major grantmaker, and ACX reporting the LTFF’s position as:
what actually happened was that the Long Term Future Fund approached me and said “we will fund every single good AI-related proposal you get, just hand them to us, you don’t have to worry about it”
So the challenge is to find high-value funding opportunities in a crowded space.
One option would be to trust that the LTFF or whichever organization I pick will do something useful with the money, and I think this is a perfectly valid default choice. However, precisely because the major grantmakers are so well funded, I suspect I have a specific comparative advantage over them in allocating my funds: I have much more time per unit of money to assess, advise, and mentor my grantees. It helps that I have enough of an inside view of what kinds of things might be valuable that I have some hope of noticing gold when I strike it. Additionally, I can approach people who would not normally apply to a fund.
What is my grantmaking strategy?
First, I decided which parts of the cause to focus on. I’m most interested in supporting alignment infrastructure, because I feel relatively more qualified to judge the effectiveness of interventions that improve the funnel: it takes in people who don’t yet know about alignment at one end, moves them through increasing levels of involvement, and (when successful) produces people who make notable contributions at the other. I’m also excited about funding frugal people to study or do research which seems potentially promising to my inside view.
Next, I increased my surface area with places that might have good giving opportunities by involving myself in many parts of the movement: joining Rob Miles’s Discord, AI Safety Support’s Slack, in-person communities, EleutherAI, and the LW/EA investing Discord, where there are high concentrations of relevant people, and exploring my non-EA social networks for promising people. I also fund myself to spend most of my time helping out with projects, advising people, and learning about what it takes to build things.
Then, I put out feelers towards people who are either already doing valuable work unfunded or appear to have the potential and drive to do so if they were freed of financial constraints. This generally involves getting to know them well enough that I have a decent picture of their skills, motivation structure, and life circumstances. I put some thought into the kind of work I would be most excited to see them do, then discuss this with them and offer them a ~1 year grant (usually $14k-20k, so far) as a trial. I also keep an eye open for larger projects which I might be able to kickstart.
When an impact certificate market comes into being (some promising signs on the horizon!), I intend to sell the impact of funding the successful projects and use the proceeds to continue grantmaking for longer.
Alongside sharing my models of how to grantmake in this area and getting advice on it, the secondary purpose of this post is to pre-register my intent to sell impact, in order to strengthen the connection between future people buying my impact and my current decisions. I’ll likely make another post in two or three years with a menu of impact purchases for both donations and volunteer work I do, once it’s clearer which ones produced something of value.
I have donated about $40,000 in the past year, and committed around $200,000 over the next two years using this strategy. I welcome comments, questions, and advice on improving it.
Whee, thanks!
Yeah, that feels like a continuous kind of failure. Like, you can reduce the risk from 50% to 1% and then to 0.1% but you can’t get it down to 0%. I want to throw all the other solutions at the problem as well, apart from Attributed Impact, and hope that the aggregate of all of them will reduce the risk sufficiently that impact markets will be robustly better than the status quo. This case depends a lot on the attitudes, sophistication, and transparency of the retro funders, so it’ll be useful for the retro funders to be smart and value-aligned and to have a clear public image.
In a way this is similar to the above. Instead of some number of speculators having some credence that the outcome might be extremely good, we get the same effect if a small number of speculators have a sufficiently high credence that the outcome will be good.
This one is different. I think here the problem is that the issuers lied and had an incentive to lie. They could’ve also gone the Nikola route of promising something awesome, then quickly giving up on it but lying about it and keeping the money. What the issuers did is just something other than the actions that the impact certificates are about; the problem is just that the issuers are keeping that a secret. I don’t want to (and can’t) change Attributed Impact to prevent lying, though it is of course a big deal…
I feel like the first two don’t call for changes to Attributed Impact but for a particular simplicity and transparency on the part of the retro funders, right? Maybe they need to monitor the market, and if a project they consider bad attracts too much attention from speculators, they need to release a statement that they’re not excited about that type of project. Limiting particular retro funders to particular types of projects could also aid that transparency – e.g., a retro funder only for scientific papers or one only for vaccinating wild animals. They can then probably communicate what they are and aren’t interested in without having to write reams upon reams.
The third one is something where I see the responsibility as lying with auditors. In the short term, speculators should probably only give their money to issuers they somehow trust, e.g., because of their reputation or because they’re friends. In the long run, there should be auditors who check databases of all audited impact certificates to confirm that the impact is not being double-issued and who have some standards for how clearly and convincingly an impact certificate is justified. Later in the process they should also confirm any claims by the issuer that the impact has happened.
I’ll make some changes to my document to reflect these learnings, but the auditor part still feels completely raw in my mind. There’s just the idea of a directory that they maintain and of their different types of certification, but I’d like to figure out how much they’ll likely need to charge and how to prevent them from colluding with bad issuers.
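To make the double-issuance check concrete, here is a minimal sketch of the kind of registry an auditor might maintain. Everything in it is hypothetical (the `Certificate` fields, the `AuditRegistry` class, the exact-match key are my own illustrative names, not anything from the Attributed Impact document); a real directory would presumably need fuzzy matching on action descriptions, issuer signatures, and provenance checks rather than plain string keys.

```python
# Hypothetical sketch of an auditor's registry for catching double-issued impact.
# All names are illustrative only.
from dataclasses import dataclass


@dataclass(frozen=True)
class Certificate:
    issuer: str   # who claims the impact
    action: str   # description of the work the certificate covers
    share: float  # fraction of the action's impact being sold (0.0-1.0)


class AuditRegistry:
    """Tracks how much of each (issuer, action) pair's impact has been issued."""

    def __init__(self) -> None:
        self._issued: dict[tuple[str, str], float] = {}

    def register(self, cert: Certificate) -> bool:
        """Accept the certificate unless it would push total issuance above 100%."""
        key = (cert.issuer, cert.action)
        already = self._issued.get(key, 0.0)
        if already + cert.share > 1.0:
            return False  # flag for manual review: possible double-issuance
        self._issued[key] = already + cert.share
        return True


registry = AuditRegistry()
print(registry.register(Certificate("alice", "wrote an alignment explainer", 0.6)))  # True: first issuance
print(registry.register(Certificate("alice", "wrote an alignment explainer", 0.5)))  # False: would exceed 100%
```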
The “human judgment filter,” which I’ve been calling “curation” (unless there are differences?), is definitely going to be an important mechanism, but I think it’ll fall short in cases where unaligned people are good at marketing and can push their Safemoon-type charity even if no reputable impact certificate exchange will list it.