(As I started drafting this, I realized there is a website with a whole series of posts that seem to be related, so I want to mention it here as well: https://reflectivealtruism.com/category/my-papers/mistakes-in-moral-mathematics/. I debated with myself whether I should even write this post, but figured it may be net positive to overcome my fear of posting and express this view for at least some people in the community to reflect on.)
Ever since I became familiar with the concept of Effective Altruism, I have learnt that there are many different sub-groups and sub-beliefs. It probably started with effective donation strategies for charities generally, then ranking cause areas (where I started to have concerns about the lack of a full picture, among other things, which I will probably elaborate on in a later post), then animal suffering, then a focus on X risks such as AI takeover.
Cause prioritization introduction/context/reminder
In cause prioritization, I learnt about four dimensions (source: Intro to EA handbook, though it is interesting that some people in EA do not seem to know about this):
- Scale
- Solvability
- Neglectedness
- Personal fit
For the first three, after some mathematical formulas, it comes down to marginal value per person (or, I would think, better per unit of energy or time), or per dollar.
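To make this concrete, here is a minimal sketch in Python of one way the three quantitative factors might be combined into a single "marginal value per dollar" figure. The function and every number in it are my own illustrative assumptions, not the handbook's exact formula.

```python
# Illustrative sketch only: one common way to combine the three factors.
# All numbers are hypothetical.

def marginal_value_per_dollar(scale, solvability, dollars_already_committed):
    """
    scale: total good done if the problem were fully solved (e.g. lives saved)
    solvability: fraction of the problem solved by doubling current resources
    dollars_already_committed: current funding, a proxy for (lack of) neglectedness
    """
    # Larger scale and solvability raise the marginal value; more existing
    # funding (i.e. less neglectedness) lowers it.
    return scale * solvability / dollars_already_committed

cause_a = marginal_value_per_dollar(1_000_000, 0.4, 50_000_000)
cause_b = marginal_value_per_dollar(100_000, 0.6, 1_000_000)
print(cause_a, cause_b)  # 0.008 vs. 0.06 "units of good" per extra dollar
```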
Even though “Personal fit” is listed, from my observations and online sentiment, the practice of cause prioritization has evolved into a process of convincing others, so that everyone is expected to follow whichever cause wins the argument.
We can also notice that there is no explicit mention of urgency.
Long-term risks combined with cause prioritization
When X risks are combined with cause prioritization, I hear this a lot: X risks are going to affect all of us, therefore they are the most important thing to work on, and everything else is a distraction.
I find this to be a rationalization/justification of a subjective preference. To be clear, I do not think AI safety is unworthy of investment, especially as AI agents gain greater physical control while lacking a natural/biological understanding of the constraints on their goals. The same applies to other realistic X risks. But, in my opinion, no single one of them should ever be the only thing to invest in. Firstly, there are problems with accurately quantifying the lives each cause may save. Secondly, even if we follow that philosophy, X risks may not affect all humans.
Here is a relatively simple case to show why:
|          | 2025                  | 2028                  | 2040                 |
|----------|-----------------------|-----------------------|----------------------|
| person A |                       | 50% dying from risk 2 | 2% dying from risk 3 |
| person B | 20% dying from risk 1 |                       | 2% dying from risk 3 |
| person C | 30% dying from risk 1 | 50% dying from risk 2 | 2% dying from risk 3 |
| person D |                       |                       | 2% dying from risk 3 |
I assigned probabilities to be more rigorous, but these are hypothetical scenarios (and the specific numbers do matter). It is not hard to see that, without considering urgency in cause prioritization, simple logic concludes: "Risk 1 may affect 2 people, risk 2 may affect 2 people, and risk 3 will affect all." However, by 2025, two people may have already died if we don't treat risk 1. By 2028, three people may have already died. Only person D is actually affected more by the long-term risk 3 than by anything else.
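For anyone who prefers to see the arithmetic, here is a small Python sketch of the table above. The probabilities are the hypothetical ones from the table; the only point is to contrast the "risk 3 affects everyone" head-count with a view that walks through the years in order.

```python
# Hypothetical probabilities from the table above.
# Risk 1 hits in 2025, risk 2 in 2028, risk 3 in 2040.
people = {
    "A": {"risk1": 0.0, "risk2": 0.5, "risk3": 0.02},
    "B": {"risk1": 0.2, "risk2": 0.0, "risk3": 0.02},
    "C": {"risk1": 0.3, "risk2": 0.5, "risk3": 0.02},
    "D": {"risk1": 0.0, "risk2": 0.0, "risk3": 0.02},
}

# Naive head-count (no urgency): how many people does each risk touch at all?
for risk in ("risk1", "risk2", "risk3"):
    count = sum(1 for p in people.values() if p[risk] > 0)
    print(f"{risk} may affect {count} people")

# Timing-aware view: how many people are even expected to still be alive
# in 2040 to face risk 3, if risks 1 and 2 are left untreated?
expected_alive_2040 = sum((1 - p["risk1"]) * (1 - p["risk2"]) for p in people.values())
print(f"expected people left to face risk 3: {expected_alive_2040:.2f} of 4")
```

Under these made-up numbers, only about 2.65 of the 4 people are even expected to reach 2040, which is exactly the gap the naive head-count hides.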
What are some real-life examples of risk 1 and risk 2? Mental health, physical health, violent crime, human trafficking, etc. What are some correlated groups that may not be affected by risks 1 and 2? Those with higher socio-economic status, younger people, and those in power. These may also be the same people who have influence over governments and funding decisions about where to invest more. This worries me the most: risks 1 and 2 may never get classified as "high priority".
Purposes/points of the post
- I am 70% in favor of adding urgency or diversity to the framework for cause prioritization.
- I am 90% in favor of promoting the concept of a donation portfolio at the aggregated level, rather than convincing everyone to work on a single area.
- A call to reflect on the over-simplification of reducing "humans" from a group of high-dimensional individuals to a single concept/dot. This also relates to other views in this community around utilitarianism, but I will not go into detail in this post, to stay focused.
- This is a more complicated point. Maybe we need to recognize that sometimes we are just using numbers to conveniently rationalize our subjective choices,[1] especially when these calculations or numbers support something that would affect us the most. Sometimes we are limited by the environments we are exposed to, and we really don't understand the risks. It might still be fine to do this, as altruism in humans is limited by nature, as long as 1. we are aware of our own limitations, reflect on them, and keep them in mind (which I believe will usually translate into some actions), and 2. we do not try to convince everyone to work on the same single thing while claiming all other risks are distractions.
(Wow, this is longer than I expected.)
- ^
In fact, in consulting (strategic, management, economic, all sorts), it is a known practice for the decision maker to have a belief (or the team to have a goal), and for the analytics/economics consultants to back out formulas and assumptions that match that belief or support that goal. Some of these formulas or assumptions are good enough to even be tested in court. I would not say everyone does this, but we are likely sometimes prone to it, either explicitly or implicitly.
