I'm fairly new to EA and greatly enjoying the 80,000 Hours podcast episodes on global problems. I have been pondering the EA philosophy of using resources to do the most good and therefore having the greatest impact numerically.
So I'm wondering - taken to its logical conclusion, is this not effectively a well-intentioned version of survival of the fittest? What if your cause or issue is niche, or the people affected are few in number? How is their validity built into the model? How does EA value diversity of issues?
To give a concrete example: less than 1% of the population worldwide has type 1 diabetes. Maybe your money would be better spent on type 2 diabetes, which affects around 8% of the world's population. Does this mean those with type 1 are unimportant or unworthy of funding?
Within EA, would the solution be to look for the most impactful way to 'solve' type 1 (be that through advocacy for affordable insulin and supplies, or via a cure), or would you simply focus on the larger population (type 2) and fund that for greater impact?
The lack of scope for diversity of smaller causes in the model troubles me, but I'm here to learn and very interested to hear views!
I think this is a good question. To me, EA is pretty ruthless in how it assesses effectiveness, and that leads to many causes feeling left out (especially when those causes are close to you personally).
Taken to an extreme, if all charitable acts and giving were done through an EA lens, it would feel pretty brutal to any cause not included in its scope. Though from an EA perspective, this would be a *more effective* charitable sector and would ultimately reduce suffering and increase overall wellbeing.
But the simple reality is that EA is small relative to the universe of charitable acts. I think having some portion of charitable giving approached with an EA lens is a good thing, and I suspect the actual percentage is significantly lower than the optimal one.