
This is the user guide for the Causal Networks Model, created by CEA summer research fellows Alex Barry and Denise Melchin. Owen Cotton-Barratt provided the original idea, which was further developed by Max Dalton. Both, along with Stefan Schubert, provided comments and feedback throughout the process. 

This is the beginning of a multipart series of posts explaining what the model is, how it works and some of the findings it leads to. The structure is as follows (links will be added as new posts go up):

  1. Introduction & user guide (this post)
  2. Technical guide (optional reading, a description of the technical details of how the model works)
  3. Findings (a writeup of all our findings)
  4. Climate catastrophe (one particularly major finding)

The structure of this post is as follows:

  1. Summary
  2. Introduction
  3. The qualitative model
  4. The quantitative model
  5. Limitations of the quantitative model
  6. How to use the quantitative model
  7. Conclusion

 

1. Summary

The indirect effects of actions seem to be very important to assessing them, but even for common EA activities like donations to global poverty charities, very little is generally known about them. To try to get a handle on this, we created a quantitative model which attempts to capture the indirect effects of donating to common EA causes. The results are very speculative and should mostly be taken as a call for further research, but we have used the model to get some tentative results, which will be explored in Part III.

Our model depends to a high degree on extremely uncertain information (such as the probability of existential risks) about which there is likely to be significant disagreement. Therefore, we created a user tool for our model which allows the user to easily alter the most contentious values, to see how the outcome is affected.

2. Introduction

When trying to do the most good, the indirect effects of one's actions are important, and there have been many debates in the EA community about indirect effects (Do human interventions have more positive indirect effects than animal interventions? Do the indirect effects of donations to GiveWell charities dwarf their direct effects?). However, our current knowledge of the indirect effects of common EA actions and how to handle them is still very limited.

For example, consider the impact of GiveDirectly (which allows money to be transferred to some of the world’s poorest people) on population levels. While donating via GiveDirectly doesn’t affect population levels directly, it is likely to do so indirectly by increasing GDP in the least developed countries, which tends to lead to fewer births.

In an effort to better understand the significance of indirect effects, we have created a quantitative model to calculate the rough orders of magnitude of the likely indirect effects of funding common EA causes. We also look at how the results are affected by different starting assumptions. 

Our model is very simplified due to time constraints, and it does not account for a number of effects which are likely to be important. However, we still think its results are useful for pointing to areas for further research.

3. The qualitative model

We began by creating a flowchart (‘the qualitative model’) showing the path of cause-and-effect between different parameters. Each node represents a parameter, and nodes are connected to one another via arrows if they affect each other.

For another example, the node ‘AMF Funding by the EA Community’ is connected to the node ‘QALYs saved’ to show that changing AMF funding will change the total number of QALYs saved by the EA community.

The whole qualitative version of the model, which consists of all the effects we have considered and forms the basis of the quantitative model, is available here. We advise you to take a look at it for a better understanding of the model and the following sections.

The nodes in green (e.g. charities recommended by GiveWell) are the ones we, the EA Community, can directly affect, such as through funding. The transparent nodes act as ‘buckets’ (e.g. ‘Global Poverty funding’) for those more specific funding pursuits. The grey nodes are the outputs people typically directly care about (e.g. ‘QALYs saved’). Finally, the red nodes are intermediate nodes (e.g. ‘GDP per capita in least developed countries’) which come between the funding we can affect and the outputs we directly care about.
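
To make this structure concrete, here is a minimal sketch of how a small slice of the network could be represented as a directed graph in code. The node names, types and connections below are simplified illustrations based loosely on the description above, not a faithful copy of the full flowchart.

```python
# Illustrative only: a tiny slice of the qualitative model encoded as a
# directed graph. Node names and connections are simplified examples,
# not a faithful copy of the full flowchart.

node_types = {
    "AMF funding by the EA community": "funding",        # green: directly fundable by the EA community
    "Global poverty funding": "bucket",                   # transparent: groups the specific funding nodes
    "GDP per capita in least developed countries": "intermediate",  # red: between funding and outputs
    "QALYs saved": "output",                              # grey: what people directly care about
}

# Directed edges run from cause to effect.
edges = [
    ("AMF funding by the EA community", "Global poverty funding"),
    ("Global poverty funding", "GDP per capita in least developed countries"),
    ("AMF funding by the EA community", "QALYs saved"),
]
```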

As you can see, our model is focussed on global poverty (through the traditional GiveWell-recommended charities), animal suffering (resulting from the consumption of farmed land animals) and global catastrophic and existential risks before 2050. It tries to measure the effects of interventions in terms of QALYs, reported (human) well-being, human population levels, and land animal welfare and population.

4. The quantitative model

We then developed a quantitative model based on the qualitative model. Using this we can begin to answer questions like: How many more QALYs will be saved by increasing AMF funding by one million dollars? We built on the qualitative model by estimating how much a change to one node will affect the downstream nodes. 

In the above example of the impact of AMF Funding on QALYs, we could use numbers estimated by GiveWell. In other cases, estimating impact was much less straightforward. In certain cases we realised that the numbers given were likely to be contentious, and this led us to develop the customisable version of the quantitative model (see section 6 below). 

We tried to quantify the effect of each node on all the nodes it is connected to. We did this in two different ways: using a differential and using an elasticity. By differential, we mean the effect of increasing the parameter of a node by one unit. By elasticity, we mean the effect of increasing the parameter of a node by 1%. In our model we therefore decided whether each node would be ‘differential’ or ‘elastic’.

Going back to our previous example, for AMF funding we used a differential. This means that we considered how increasing the funding of AMF by $1 million would impact the number of QALYs saved. We could instead have modelled the impact via an elasticity: what happens if we increase AMF funding by 1%?
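
To illustrate the distinction, here is a minimal sketch (in Python, with entirely hypothetical numbers) of how a single differential edge and a single elasticity edge would propagate a change. The real model does this in a spreadsheet, and the 300-QALYs-per-$1m figure below is a placeholder, not a value from the model.

```python
# A minimal sketch (not the actual spreadsheet logic) of how a change
# propagates along a single edge, depending on whether that edge is
# modelled as a differential or as an elasticity. All numbers are hypothetical.

def propagate_differential(delta_units, effect_per_unit):
    """Differential edge: effect of increasing the source node by one unit."""
    return delta_units * effect_per_unit

def propagate_elasticity(pct_change, baseline_output, elasticity):
    """Elasticity edge: effect of increasing the source node by 1%."""
    return baseline_output * elasticity * (pct_change / 100)

# Example: an extra $1m of AMF funding, assuming (hypothetically)
# 300 QALYs saved per $1m donated.
print(propagate_differential(1, 300))        # -> 300 extra QALYs

# The same change framed as an elasticity: a 1% increase in funding,
# assuming a hypothetical baseline of 30,000 QALYs and an elasticity of 1.
print(propagate_elasticity(1, 30_000, 1.0))  # -> 300 extra QALYs
```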

  • A more detailed explanation of this process can be found in the technical guide, which will be Part II of this series.
  • A list of the differentials and elasticities used, along with the reasoning behind them, can be found in this chart, which answers questions like ‘How much does increasing GDP in the least developed countries impact population levels?’ and ‘How much does raising global egg consumption increase farmed chicken population levels?’
  • A list of the static inputs used, along with the reasoning behind them, can be found in this chart, which answers questions like ‘What do we think are the least developed countries?’ and ‘How much money has ACE moved in the past year?’

We have written up our own findings in Part III of the series and considered how much they differ given different reasonable inputs for the contentious variables.

5. Limitations of the quantitative model

Many of the results of our model depend to a large extent on particular variables with values about which we have very little information (two particularly uncertain sets of values are those related to existential risk and to the chance of creating cost-effective cultured (‘clean’) meat). Because of this, and because of the general limitations of the model, any findings should be taken as invitations to further research, rather than as concrete pronouncements of effectiveness. (In the past we have found a fair number of mistakes involving numbers being wrong by a few orders of magnitude!)

The model does not explicitly model time passing. Instead it takes as inputs increases of funding for different EA cause areas in 2017, calculates various intermediary effects (including simply modelled feedback loops) and then outputs effects in 2050. We chose 2050 as the end point because of the difficulty of extrapolating estimates much further into the future. The model is therefore not very useful for considering most long-term future effects, although it does output the probability of global catastrophic risks and existential risks occurring before 2050.

The range of ethical theories considered is also constrained: the outputs are only sufficient to make crude short-term total utilitarian calculations (explained in section 6), to estimate QALYs saved as an approximation of value under some forms of person-affecting views, and to report existential and global catastrophic risk within the time frame considered (2017–2050).

There are also more general arguments to be made against taking cost-effectiveness estimates too literally, as laid out in this classic GiveWell post, which you might want to keep in mind.

6. How to use the quantitative model

As previously stated, along with the model we have created a user tool that lets you increase various areas of EA funding and customise the contentious variables. You can then see what difference it would make for outputs like population levels or QALYs saved.

To use the tool, make a copy of it (click ‘File’ in the upper left corner and then ‘Make a copy’). Make sure you are on the ‘User interface’ tab at the bottom of the screen.

The Important Variables section (in yellow) consists of inputs that (i) have a disproportionate effect on the model’s output depending on the value chosen and (ii) people are likely to have strongly divergent opinions about. They are currently filled in with default values given by us, although these do not necessarily represent our opinion. (You can find the reasoning for our default ‘important variables’ here.)

The Changes In Funding section (in green) contains the model’s inputs, allowing you to compare the effects of funding different causes. Note that the final model is linear with respect to these inputs, so a funding increase of $10 million will not produce any interesting effects not already seen with an increase of $1 million. In the real world these elasticity and differential relationships have diminishing returns: for instance, increasing funding by 100% will often not change outputs by 100 times as much as increasing funding by 1% does. Keep this in mind if you use large numbers as inputs.
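
As a hypothetical illustration of this caveat, the sketch below shows that a linear model by construction returns exactly ten times the effect for ten times the funding; the marginal rate used is made up and is not one of the model's defaults.

```python
# Hypothetical illustration of the linearity caveat: in the model, outputs
# scale in direct proportion to the funding inputs, so a $10m change is
# exactly ten times the $1m result. Real-world returns would usually diminish.

QALYS_PER_MILLION = 300  # made-up marginal rate, not a model default

def linear_model(extra_funding_millions):
    return extra_funding_millions * QALYS_PER_MILLION

print(linear_model(1))   # -> 300
print(linear_model(10))  # -> 3000, i.e. exactly 10x, by construction
```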

The Output section (in blue) contains the raw outputs of the model (i.e. the grey nodes in the qualitative model) which people are likely to care about most.

The Morality Inputs section (in pink) lets you define a custom moral weighting, for example the point at which you think a human life is not worth living according to the Cantril ladder (a tool for evaluating life satisfaction on a scale from 0 to 10). These weightings affect the Morality Outputs.

The Morality Outputs section (in purple) then shows the effects of changes in funding on human and animal welfare levels. The Morality Outputs starting with ‘Total []’ are calculated by multiplying average welfare (normalised to a -1 to 1 scale) by population, then multiplying by a sentience modifier (if applicable) and summing over the different species included (if looking at non-human animals). A value of 1 is meant to represent a counterfactual year of human life at 10/10 satisfaction.

Total human wellbeing is represented by the following formula (again taking the difference between the new and the counterfactual funding levels):

(actual global average wellbeing measured on the Cantril scale − the minimum step on the Cantril scale for a life to be worth living) / 5 × population level.

So if you think some humans have lives not worth living, you can set the number accordingly and have their lives traded off against human lives which are worth living. There’s plenty of data on the Cantril scale from Gallup which you are advised to look over before using the tool. 
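
As a worked illustration of this formula, the sketch below plugs in hypothetical numbers; the average wellbeing, the worth-living threshold and the population figure are placeholders, not the model's defaults.

```python
# A worked illustration of the total human wellbeing formula above.
# All numbers are hypothetical placeholders, not the model's defaults.

def total_human_wellbeing(avg_cantril, worth_living_threshold, population):
    """Normalise average Cantril-ladder wellbeing against the user's
    'life worth living' threshold and multiply by population."""
    return (avg_cantril - worth_living_threshold) / 5 * population

# E.g. a global average of 5.2 on the Cantril ladder, a threshold of 2.5
# below which a life is judged not worth living, and 7.5 billion people:
print(total_human_wellbeing(5.2, 2.5, 7.5e9))  # -> ~4.05e9

# The model reports the difference between this quantity under the new
# funding level and under the counterfactual funding level.
```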

The different animal weightings are ‘Brain’ (which values animals by the number of neurons they have relative to a human), ‘Brian weights’ (which are based on the weights given by Brian Tomasik) and a custom weighting which you can specify in the ‘Morality Inputs’ section.

For non-human animal lives the normalised average welfare works a bit differently. We’ve assumed a 0-10 scale, with 5 being neutral. You can change where on this scale the quality of life for different animals stands in the ‘Important Variables’ section.
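
Putting the last two paragraphs together, here is a hedged sketch of how the animal side of the aggregation could look; the species, welfare levels, populations and sentience weights are placeholders, not the model's values.

```python
# A hedged sketch of the animal welfare aggregation described above.
# Species, welfare levels, populations and sentience weights below are
# hypothetical placeholders, not the model's values.

species = {
    # name: (average welfare on a 0-10 scale with 5 = neutral, population, sentience weight)
    "farmed chickens": (3.0, 2.3e10, 0.1),
    "farmed pigs": (4.0, 1.0e9, 0.3),
}

def total_animal_welfare(species):
    total = 0.0
    for welfare_0_10, population, sentience_weight in species.values():
        normalised = (welfare_0_10 - 5) / 5  # map 0-10 (neutral at 5) onto -1..1
        total += normalised * population * sentience_weight
    return total

print(total_animal_welfare(species))  # negative here, i.e. net suffering in this example
```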

You can use the ‘Reset values’ button to change the numbers back to the default values.

You might disagree with some connections or inputs which aren’t customisable in the user tool. If so, you can change or remove connections and nodes using our Manual Input sheet (which is somewhat less user-friendly). A detailed explanation of how to use this sheet can be found in our technical guide, which is Part II of the series.

(You can also see most of the above explanation in the ‘Guide’ tab in the Google Doc.)

7. Conclusion

Have fun, but stay safe. Don’t interpret the model’s results literally. Take a look at our reasoning for our static inputs, elasticities and differentials, and let us know if you catch any errors.

Our next post is the technical guide to the model which constitutes Part II of the series. You can find it here.

Feel free to ask questions in the comment section or email us (denisemelchin@gmail.com or alexbarry40@gmail.com).

Comments

This is an interesting project! I am wondering how valuable you have found it, and whether there are any plans for further development. I can imagine that it would be valuable to

  • Increase complexity to increase robustness of the model, but then find some balance between robustness and user-friendliness, perhaps by allowing users to view the model on different 'levels' of complexity.
  • Use some form of crowd-sourcing to get much more reliable estimates, ideally weighted by expertise or forecasting ability.
  • Incorporate some insights from the moral uncertainty literature, so that low-probability possibilities of something being very bad (e.g. wild animal suffering, or insect suffering) are given appropriate weight.

However, I have no idea how feasible this is, and imagine it would require substantial resources (lots of time, money, and capable researchers). Do you already have thoughts on this?

P.S. The link is missing for part IV

Thank you for your comment. I agree our model is only a very basic version and it would be interesting to see it developed further. (Though there are currently no further plans for development that I know of.)

This model was created in about 14 weeks of full-time-equivalent work. I expect a project like you're proposing to take much longer.

I am excited about this! I have some technical questions, but I'll save them until I've read part II.
