
(Crossposted from The Impact Purchase)

The first round of the 2015 Impact Purchase had eight submissions, including research, translation, party planning, mentoring, teaching and money to GiveDirectly. We expected the evaluations would have to be rough, and would like to emphasize that they really were rough: we had to consider lots of things very quickly to get through them in a reasonable time for the scale of the funding. Please forgive us for our inaccuracies, and don't read too much into our choices! This round, we are buying certificates of impact for:

What does this mean? If everything is working correctly, it suggests that for about $1,200 you can buy an investigation as good as Ben's. And if you can make an investigation as good as Ben's, it suggests you can get $1,200 for it. (Note that these prices should include more costs of the labor than are usually accounted for when paying for altruistic projects. Usually if someone pays me to write an EA blog post, say, I am willing to do it for less than what I consider the value of my time, because I also want the blog post to be written. These prices are designed to be the full price without this discounting.)

The submissions

Here are all of the submissions so far. Everything not bought in this round can still be bought in the next rounds:

  1. Teaching at SPARC in 2014 (50%), Ben Kuhn
  2. Post "Does Donation Matching Work?" (50%), Ben Kuhn
  3. Inducing the translation of many papers and posts by Bostrom, Yudkowsky and Hanson to Portuguese, as part of IERFH (40%), Diego
  4. A donation of $100 to GiveDirectly, Telofy
  5. Research comparing modafinil and caffeine as cognitive enhancers, including these blog posts (50%), Joao Fabiano
  6. A chapter of a doctoral thesis defending a spin-off version of Eliezer's complexity of value thesis (20%), Joao Fabiano
  7. Organization of Harry Potter and the Methods of Rationality wrap parties, including organization of the Berkeley party and central organization of other parties (50%), Oliver Habryka
  8. Mentoring promising effective altruists (50%), Oliver Habryka

The evaluations

Too hard to evaluate

We decided not to evaluate teaching at SPARC, inducing the translation of papers, or mentoring. Paul's involvement in SPARC made buying teaching there complicated, and it would already have been difficult to separate the teaching from others' work on SPARC. Inducing the translation of papers also seemed too hard to separate from actually translating the papers, without much more access to exactly what happened between the participants. The value of mentoring EAs seemed too hard to assess.

Purchased projects

We evaluated the other five projects; after a first pass, it looked as if we would buy the two that we did, so we then evaluated those two somewhat more thoroughly. Here are summaries of our evaluations of them.

Ben Kuhn's blog post on donation matching
  1. We estimate that EAs and other related groups move around $500k annually through donation matching. We are thinking of drives run by MIRI, CFAR, GiveDirectly, Charity Science, Good Ventures, among others.
  2. We think a full and clear understanding of donation matching would improve this by around $6k, through such drives being better optimized. We lowered this figure to account for the data being less relevant to some matching drives, and costs and inefficiencies in the information being spread.
  3. We think this work constitutes around 1/30th of a full and clear understanding of donation matching.
  4. We used a time horizon of three years, though in retrospect it probably should have been longer. This implicitly included some general concerns about the fraction of people who have seen it being smaller in the future, and information accruing from other sources and conditions changing, and so on.
  5. We get $6,000 × 3 years / 30 = $600 of stimulated EA donations.
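The steps above reduce to one line of arithmetic; here is a minimal sketch in Python (the variable names are ours, the figures come straight from the estimate above):

```python
# Back-of-envelope value of Ben's donation-matching post, per the evaluation.
improvement_per_year = 6_000        # annual gain from a full understanding of matching
fraction_of_understanding = 1 / 30  # share of that understanding this post provides
time_horizon_years = 3              # horizon used in the evaluation

value = improvement_per_year * time_horizon_years * fraction_of_understanding
print(f"${value:,.0f} of stimulated EA donations")  # $600
```

The $500k of annual matched donations from step 1 feeds into the $6,000 figure rather than entering the formula directly.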
Oliver Habryka's organization of HPMOR wrap parties
  1. We estimate that around 1300 people went to wrap parties (adjusted somewhat for how long they were there for). This was based on examining the list of events and their purported attendances, and a few quick checks for verification.
  2. We estimated Oliver's impact was 1/4 of the impact of the wrap parties. We estimated that the existence of central organization doubled the scale of the event, and we attributed half of that credit to the central organization and half of the credit to other local organizers and non-organizational inputs (which also had to scale up).
  3. We estimated that the attendance of an additional person was worth around $15 of stimulated EA donations. This was a guess based on a few different lines of reasoning. We estimated the value of the EA/LW community in stimulated donations, the value of annual growth, the fraction of that growth that comes from outreach (as opposed to improving the EA product, or natural social contact), and the fraction of outreach that came from the wrap parties. We also guessed what fraction of attendees were new, would become more involved in the EA/LW community as a result, and would end up doing more useful things on our values as a result of that. We sanity checked these numbers against the kind of value participants probably got from the celebration individually.
  4. Thus we have 1300 × $15 / 4 = $4,875 of stimulated EA donations, which we rounded up to $5,000.
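The same calculation written out as a short script (again, the variable names are our own labels for the figures in the evaluation):

```python
# Back-of-envelope value of the HPMOR wrap parties, per the evaluation.
attendees = 1300          # effective attendance, adjusted for time present
value_per_attendee = 15   # dollars of stimulated EA donations per attendee
olivers_share = 1 / 4     # share of the parties' impact attributed to Oliver

value = attendees * value_per_attendee * olivers_share
print(f"${value:,.0f}")  # $4,875 (rounded up to $5,000 in the evaluation)
```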

Note that while we evaluated both items in terms of dollars of stimulated EA donations, these numbers don't have much to do with real dollars in the auction—their only relevance is in deciding the ratio of value between different projects. So systematic errors one way or the other won't much matter.

Notes on our experience

Quick estimates

It was tough to evaluate things fast enough to be worth it given how little we were spending, while also being meaningfully accurate. To some extent this is just a problem with funding small, inhomogeneous projects. But we think it will get better in the future for a few reasons, if we or others do more of this kind of thing:

  1. Having a reference class of similar things that are already evaluated makes it much easier to evaluate a new project. You can tell how much to spend on a bottle of ketchup because you have many similar ketchup options which you have already judged to be basically worth buying, and so you mostly just have to judge whether it is worth an extra $0.10 for less sugar or more food dye or whatever. If you had never bought food before and had to figure out from first principles how much a bottle of ketchup would improve your long term goals, you would have more trouble. Similarly, if we had established going prices for different kinds of research blogging, it would be easier to evaluate Ben's post relative to nearby alternatives.
  2. We will cache many parts of the analysis that come up often (e.g. how much it is worth to attract a new person to the EA movement), and only make comparisons between similar activities.
  3. We will get better with practice.

Shared responsibility

We said we would not buy certificates for collaborative projects unless the subset of people applying had been explicitly allocated a share of responsibility for the project. Collaborative versus not turned out to be a fairly unclear distinction. No project was creating objects of ultimate value directly; so all of these projects are instrumental steps, to be combined with other people's instrumental steps, to make further, bigger instrumental steps. Is a donation to GiveDirectly its own project, or is it part of a collaboration with GiveDirectly and their other donors? Happily, we don't care. We just want to be able to evaluate the thing we are buying. So we were willing to purchase a donation to GiveDirectly from the donor, but not to purchase the output of a cash transfer from a GiveDirectly donor. In some cases it is hard to assess the value of one intermediate step in isolation, and then we will be less likely to purchase it (or will purchase it only at a discount).

Call for more proposals

The next deadline will be April 25. If you have any finished work you'd like to partially sell, please consider applying!





Comments

Great! It's excellent to see how this is progressing.

Are you going to try to stick to evaluating individual projects, or do you want people to try to take credit for their part in a collaborative project now?

You didn't explain in your post your rationale for not purchasing Joao Fabiano's work. For what reasons did you rule it out? Difficulty in evaluation?

We evaluated all of the projects other than the three I specifically mentioned not evaluating. Sorry for not writing up the other evaluations - we just didn't have time. We bought the ones that gave us the most impact per dollar, according to our evaluations (and based on the prices people wanted for their work). So we didn't purchase Joao's work this round because we calculated that it was somewhat less cost-effective than the things we did purchase, given the price. We may still purchase it in a later round.

Thanks for the response. That's great feedback to hear.
