I am an early-retired Harvard Ph.D. physicist, clean energy policy analyst, and charity entrepreneur. I have been organizing and experimenting with clean energy projects in Africa for 30 years. Currently, I am the lead organizer of a social venture that consists of a small US non-profit and a local Malawian for-profit partner.
To make my work sustainable, replicable, and scalable, I need to develop a system where our projects can sell "development credits," which are something like high-integrity carbon credits but easier to verify. And I need these to be priced at the GiveWell funding threshold of about $150 per DALY-equivalent of impact.
I am a good innovator with respect to the practicalities of very cost-efficient distribution of solar tech to low-income rural Africa.
In carbon credit markets, project implementers create carbon emission mitigation projects, and buyers (either environmental philanthropists or organizations that want, or are required by regulation, to mitigate their climate impact) buy verified impact credits, called carbon credits. A balance between supply and demand sets the "price" of buying impact, and buyers try to maximize impact per dollar by minimizing the price they pay per credit.
Even acknowledging that carbon credit markets have a whole host of problems, they still seem to be an interesting mechanism for "buying impact," and by being market-based, they create incentives to minimize barriers to entry for both buyers and sellers. And competition should encourage cost minimization, or equivalently, impact maximization.
So do you have an opinion on why EA has not yet succeeded in creating a version of an "Impact Credit" market for expanding and incentivizing impact-based philanthropy? I can imagine a few possibilities. Here are some that come to mind:
(1) It is just too hard to characterize impact accurately at the project level, so the focus is on charity-wide impact evaluation and quantification.
(2) Attribution of impact cannot be realistically done at the project level, it has to be done at the charity level.
(3) An open market will encourage cheating and the EA community does not have the resources to police the potential cheating and corruption.
(4) There are too many types of impact that EAs are interested in, and because EAs focus on neglected cause areas, you can't really create Impact Credits for a neglected area because that puts the "cart before the horse."
(5) Maybe it is a good idea and the EA community just hasn't gotten around to seriously or successfully trying it yet.
(6) Maybe if too much money goes into "Impact Markets," then by the laws of supply and demand the cost of impact will go up for donors, and impact cost-effectiveness will go down. Therefore EA donors get much more "bang for the buck" by being exclusive, raising requirements on donors, charities, and projects so that their more limited, exclusive set of projects can achieve higher cost-effectiveness than what might be possible at larger scale. Do we really want malaria bednets to cost $20,000 per life saved instead of $5,000? If the market price for impact is $20,000 per life saved when a focused program can deliver a life saved for $5,000, then bednet charities have an incentive to expand and dilute their cost-effectiveness by 4x by distributing bednets to people who do not really need them.
I am sure that there are many more possible answers.
Thank you for clarifying the voting system for me. So my comment most likely irritated some folks with lots of karma.
I certainly don't want to say things that irritate folks in the EA community. I was giving voice to what I might hear from some of my women friends, something like: "Yes, Helen Toner was an EA, but she was also a woman who was questioning what Altman was doing." According to this article, Altman tried to push out Helen Toner "because he thought a research paper she had co-written was critical of the company." But she was on a board whose responsibility was to make sure that AI serves humanity. So her job was, in some sense, to be critical of the company when it might be diverging from the mission of serving humanity. So when she tries to do her job, some founder-guy tries to push her out because the public discussion about the issue might be critical of something he implemented?
I think this information indicates that there is not only an EA/non-EA dimension to that precursor event; I think most women would also recognize a gender/power/authority dimension to it.
In spite of such considerations, I also agree with the idea that we should not focus on differences, conflict and divisions. And now I will more fully understand the karma cost of irritating someone who has much more karma than me on the forum.
Thank you for the feedback on my comment. It has been informative.
It is also the case that the two women board members are now off the board. So I would also like to hear what happened from a woman's perspective. Was it another case of powerful men not wanting to cede authority to women who are occupying positions of authority?
There could be many layers to what happened.
Thanks Emily:
I see some resonance between the behavioral science "Mistakes" that you think EAs might be making and differences that I find in my approach to EA work compared to what seems to be documented in the EA literature.
Specifically, I was recently reading more thoroughly the works of Peter Singer (specifically The Life You Can Save and The Most Good You Can Do), and while I appreciated the arguments being made, I did not feel as though they reflected or properly respected the real beliefs and motivations of the friends and family who donate to support my EA activities.
In this sense I also see a set of behavior science mistakes that the EA movement seems to be making from my particular individual perspective.
So in my 30 years of doing Africa-focused, quantitatively-oriented development projects, I have developed a different "Theory of Change" for how my personal EA activities can have impact.
This personal EA-like Theory of Change has three key elements: (1) Attracting non-EA donors for non-EA reasons to support EA Global Health and Welfare (GHW) causes, (2) Focusing on innovation to increase EA GHW impact leverage to 100:1, and (3) Cooperating with other EAs with the assumption that each member of the EA community has a different set of beliefs and an individual agenda. Cooperation serves aligned and community interests.
I would appreciate it if you might comment on whether some of my divergences from general EA practice might address some of the "behavioral science" issues that you have identified.
The first element of my Theory of Change is that for my EA causes to be successful, my projects have to be able to attract mostly non-EA donors. I recognize that my EA-type views are the views of a relatively tiny minority in our larger society. Therefore, I do not personally try to change people's moral philosophy which seems to be Peter Singer's approach. When I do make arguments for people to modify their moral philosophy, I find that people usually find this to be either threatening or offensive.
Element #1: While the vast majority of donors do not donate based on "maximum quantitative cost-effectiveness," they do respond to respectful arguments that a particular cause or charity that you are working on is more important and impactful than other causes and charities. When "maximum quantitative cost-effectiveness" is the reason that someone they know and respect dedicates their life effort and money to a cause, many people will be willing to join and support that person's commitment. So while only a few people may be motivated by EA philosophical arguments, many more people can support the movement if people that they like, know, and respect show a strong commitment to it.
This convinces people to support EA causes because they see that EAs are honest, dedicated, and committed people that they can trust. You do not have to convince people of EA philosophy to have people donate to EA cause/efforts. Most people who donate to EA causes could potentially have strong philosophical disagreements with the EA movement.
The second element of my Theory of Change is that EA projects need to have very large amounts of impact leverage, so it is important to constantly improve the impact leverage of EA projects. Statistics on charitable donations indicate that most people donate only a few percent of their income to charity, and may donate less than 1% of their income to international charitable causes.
Element #2: If people are going to donate less than 1% of income to international charitable causes, then in order to address the consequences of international economic inequality, EA Global Health and Welfare charities should strive for 100:1 impact leverage. That is, $1 of charitable donation should produce $100 of benefit for people in need. In that way, it may be possible to create an egalitarian world over the long term in spite of the fact that people may be willing to give only 1% of their income on average to international charitable causes.
In my little efforts, I think I have gotten to 20:1 impact leverage. I hope I can demonstrate something closer to 50:1 impact leverage in a year or two.
The third element of my EA Theory of Change is that I assume that every EA has a different personal agenda that is set by their personal history and circumstances. It is my role to modify that agenda only if someone is open to change.
Element #3: Everyone in the EA movement has a different personal agenda and different needs and goals. Therefore my goal in interacting with other EAs is to help them realize their full potential as an EA community participant on their terms. Now, since I have my own personal views and agenda, I will generally help the agenda of others when it is low cost to my work or when it also makes a contribution to my personal EA agenda (i.e., encouraging EA GHW projects to have 100:1 impact leverage). But if I can keep my EA agenda general enough, then there should be lots of alignment between my EA agenda and the agendas and interests of other EAs, and I can be part of a substantial circle of mutually supportive associates.
Now this Theory of Change or Theory of Impact is to some extent assuming fairly minimal behavior change. It assumes that most people support EA causes for their own reasons. And it also assumes that people will not change their charitable donation behavior very much. It puts most of the onus of change on a fairly small EA community that achieves the technical accomplishment of attaining 100:1 impact leverage.
Does this approach avoid the mistakes that you mention, while at the same time requiring only minimal behavior change?
Just curious. I hope this response to your presentation of EA behavior science "Mistakes" is useful to you.
We have made a post for the project: Solar pumps in Malawi: Creating ~$20 of income per $1 of donation for ~$1/capita/year beneficiaries where:
"Every $100 of marginal donation will add one brushless DC pump and 100m of irrigation hose to a container shipment of solar equipment that we will purchase in February to be delivered to low-income rural women's groups and farmers in rural Malawi starting in June 2024. "
Beneficiaries pay for the solar panels to power the pumps, and we estimate that each pumping system will generate roughly between $2000 and $5000 of new income for $1/capita/day beneficiaries in total over the next 3 to 10 years.
Thanks Jamie: our method is most useful when one has a relatively small sample of field data. In that case it is easy to calculate the averages of the bottom third, middle third, and top third of values and this is good enough because the data sample is not sufficient to specify the distribution with any greater precision.
Our method can also be calculated in any spreadsheet extremely easily and quickly without using any plug-ins or tools.
But agreed, if someone has the time, data and capacity, your method is better.
I don't know if this is useful to you, but we have a somewhat easier way of solving the same problem using what we call a simplified Monte Carlo estimation technique. This is described in the following post:
https://forum.effectivealtruism.org/posts/icxnuEHTXrPapHBQg/a-simplified-cost-effectiveness-estimation-methodology-for
It is not quite as accurate as your method because it approximates each probability distribution by a three-value distribution, but it works for uncorrelated CEA inputs and can be done in any spreadsheet without needing any extra tools or web services. You just calculate the result for all combinations of inputs, and then use standard spreadsheet tools like cumulative probability plots or histogram calculators to get the probability distribution of results:
"In our CE estimation with uncertain inputs, we implement a highly simplified Monte Carlo method that we call a simplified Monte Carlo or "poor man's" Monte Carlo calculation.
In our simplified Monte Carlo calculation, we initially estimate ranges for all or most of the input parameters, and represent these ranges by low, median, and high values. Given a probability distribution of what values a parameter may take, the low value represents the average of the lowest 1/3 of probable values, the median value represents the average of the middle 1/3 of probable values, and the high value represents the average of the highest 1/3 of probable values. This approximates a probability distribution of possible input parameter values by three discrete values of equal probability.
Once all of the input parameters are represented by three values of equal probability, the CE result is calculated for all combinations of input parameters. If each of the input parameters is independent and uncorrelated, then the set of CE values that results from all combinations of inputs all have equal probability. A histogram of the full set of CE results is then constructed to illustrate the full range of possible CE values and their respective approximate probabilities."
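The quoted procedure can be sketched in a few lines of code. All variable names and input values below are purely hypothetical illustrations (not figures from our projects): each uncertain input gets three equally probable values, the cost-effectiveness ratio is computed for all 3^3 combinations, and the resulting equally weighted set approximates the output distribution.

```python
from itertools import product
from statistics import mean

# Each uncertain input is represented by three equally probable values:
# the averages of the bottom, middle, and top thirds of its assumed
# distribution. These specific numbers are made up for illustration.
benefit_per_unit = [1500.0, 3000.0, 4500.0]   # USD of benefit per unit delivered
units_per_dollar = [0.008, 0.010, 0.012]      # units delivered per donated USD
fraction_effective = [0.6, 0.8, 0.95]         # share of units actually used

# Enumerate all 3^3 = 27 equally probable combinations of inputs
# (this assumes the inputs are independent and uncorrelated).
ce_values = sorted(
    b * u * f
    for b, u, f in product(benefit_per_unit, units_per_dollar, fraction_effective)
)

# Every outcome has probability 1/27, so summary statistics and an
# approximate cumulative distribution follow directly from the sorted list.
print(f"combinations: {len(ce_values)}")
print(f"mean CE:   {mean(ce_values):.2f} (benefit per $ donated)")
print(f"median CE: {ce_values[len(ce_values) // 2]:.2f}")
print(f"~10th to ~90th percentile: {ce_values[2]:.2f} to {ce_values[-3]:.2f}")
```

In a spreadsheet, the equivalent is a 27-row table of all input combinations with one formula column for the CE result, which can then be fed to a histogram or cumulative probability chart.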
I completely agree that GiveDirectly could explain this a hell of a lot better. I suspect that their team has a diversity of points of view and this prevents them from committing to a more concrete explanation like what I presented. I explained the capital accumulation in terms of what I see when I visit actual households: someone starts with mud walls and a thatch roof, then they take their surplus and build a bigger house with concrete floors, brick walls and metal roof, move into that and then they can buy nicer furniture because dirt isn't falling down from the thatch roof all of the time. They have obviously invested in higher productivity housing services which increases their consumption income. But this evidence is anecdotal.
I think GiveDirectly tried to anecdotally explain how cash transfers get invested with their shop example in the video. But again, I totally agree that their explanation has holes that make it hard for people who don't already agree with them to have the explanation that they need to understand why cash transfer requirements are likely to decrease substantially over time.
No answer to my question yet. I will try to ask a much shorter question that might be easier to answer.
Why has the EA movement focused on creating EA charities, rather than creating a charity impact credit/financing mechanism that even non-EA charities can use at the project level for high-impact projects?
Might creating a mechanism for non-EA charities to participate in creating EA-type impacts expand the funds and financing that are going toward EA levels of cost-effectiveness?