I'm fairly new to EA and greatly enjoying the 80,000 Hours podcasts on 10 global problems. I have been pondering the EA philosophy of using resources to do the most good, and therefore having the greatest impact numerically.
So I'm wondering: taken to its logical conclusion, is this not effectively a well-intentioned version of survival of the fittest? What if your cause is niche, or the people affected are few in number? How is their validity built into the model? How does EA value diversity of issues?
To give a concrete example: less than 1% of the population worldwide has type 1 diabetes. Maybe your money would be better spent on type 2 diabetes, which affects 8% of the world's population. Does this mean those with type 1 are unimportant or unworthy of funding?
Within EA, would the solution be to look for the most impactful way to 'solve' type 1 (be that through advocacy for affordable insulin and supplies, or via a cure), or would you simply focus on the larger population (type 2) and fund that for greater impact?
The lack of scope for diversity of smaller causes in the model troubles me, but I'm here to learn and very interested to hear views!
Assuming it was equally tractable to make progress on both types of diabetes, and both currently received the same amount of funding, EAs would tend to say you should favour type 2 research. But if enough people started to fund type 2, the cost-effectiveness of marginal dollars there would likely fall (because the low-hanging fruit had already been picked), so work on type 1 would become more attractive on the margin. So, assuming equal tractability, EA would likely recommend that most, but probably not all, dollars be focused on the more numerous condition, and each individual should donate so as to help move the world towards this optimal equilibrium.
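The equilibrium argument above can be sketched in a few lines of code. This is a toy model with made-up numbers, not real cost-effectiveness estimates: assume each cause has diminishing marginal returns, and that the larger cause starts out roughly 8x as impactful per dollar (mirroring the prevalence ratio in the example), then allocate each dollar greedily to whichever cause currently does the most good:

```python
def marginal_impact(base, funded):
    # Toy diminishing-returns curve: each extra dollar to a cause
    # is worth less as its cumulative funding grows.
    return base / (1 + funded)

def allocate(budget, causes):
    # Give each dollar to whichever cause currently has the
    # highest marginal impact per dollar.
    funding = {name: 0 for name in causes}
    for _ in range(budget):
        best = max(causes, key=lambda n: marginal_impact(causes[n], funding[n]))
        funding[best] += 1
    return funding

# Illustrative starting impacts only: "type2" begins ~8x as
# cost-effective per dollar as "type1".
result = allocate(100, {"type1": 1.0, "type2": 8.0})
print(result)
```

In this toy model the larger cause absorbs most of the budget, but once its easy wins are funded, the marginal dollar starts flowing to the smaller cause too, so neither cause ends up at zero.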
I also wouldn't consider people with type 1 to be in any way morally inferior to those with type 2. There are just fewer of them, and all the extra people in the larger group matter.