I worry that some EAs treat an intervention as having high expected value (high EV) simply because it tackles a major EA cause area, rather than using the major EA cause areas as a tool for identifying high-EV interventions. This strikes me as “thinking in the wrong direction”, and it seems wrong because we should expect there to be many, many potential interventions in global health and development, existential risk reduction and animal welfare improvement that have low expected value (low EV).
As a result of this error, I think some EAs overvalue some interventions in major EA cause areas, and undervalue some interventions that are not in major EA cause areas.
Because of one of the problems with the ITN framework (that we seem to switch between comparing problems and comparing interventions as we move between importance, tractability and neglectedness), I think it may be more helpful and more accurate to view the major EA cause areas as areas where high-EV interventions should be easier to find, and other cause areas as areas where high-EV interventions should be harder to find.
Thinking in these terms would mean being more open to interventions that aren't in major EA cause areas.
The main examples where I think EAs may underestimate the EV of an intervention because it doesn't involve a major EA cause area are those where a particular form of activism / social movement / organisation could potentially be made more efficient or effective at low cost. There are probably many such examples, some with much greater EV and some with much smaller EV, but two examples I'd offer are:
a) starting a campaign for the USA to recognise Palestine (https://forum.effectivealtruism.org/posts/qHhLrcDyhGQoPgsDg/should-someone-start-a-grassroots-campaign-for-usa-to)
b) identifying areas and ethnic groups internationally at greatest risk of genocide / ethnic violence and trying to redirect funding for western anti-racism movements towards these areas
From discussion in the comments: one general point I'd like to make is that if a proposed intervention is "improving the efficiency of work on cause X", a large amount of resources already being poured into cause X should actually increase the EV of the proposed intervention (assuming, of course, that the work on cause X is positive in expectation).
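To illustrate the point, here is a minimal back-of-the-envelope sketch (all numbers and the function below are made up for illustration, not actual figures for any cause): the value of an efficiency-improving intervention scales with the spending it improves, so the same intervention looks far better when aimed at a heavily funded cause.

```python
# Hypothetical BOTEC: EV of an intervention that makes existing spending on a
# cause slightly more effective, net of the intervention's own cost.
def efficiency_intervention_ev(annual_spending, efficiency_gain, value_per_dollar, cost):
    extra_value = annual_spending * efficiency_gain * value_per_dollar
    return extra_value - cost * value_per_dollar  # cost expressed in the same value units

# Same hypothetical intervention ($50k, +1% effectiveness) applied to a large vs. a small cause:
large_cause = efficiency_intervention_ev(100_000_000, 0.01, 1.0, 50_000)  # $100M/yr cause
small_cause = efficiency_intervention_ev(1_000_000, 0.01, 1.0, 50_000)    # $1M/yr cause
print(large_cause, small_cause)  # ~950,000 vs. ~-40,000 value units
```

Under these made-up numbers, the identical intervention is strongly positive for the large cause and negative for the small one, which is why existing resource flows raise rather than lower the EV of efficiency-type interventions.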
Props for writing the post you were thinking about!
Overwhelmingly, the things you think of as "EA cause areas" translate to "areas where people have used common EA principles to evaluate opportunities". And the things you think of as "not in major EA cause areas" are overwhelmingly "areas where people have not tried very hard to evaluate opportunities".
Many of the "haven't tried hard" areas are justifiably ignored, because there are major factors implying there probably aren't great opportunities (very few people are affected, very little harm is done, or little progress has been made despite enormous investment from reasonable people, etc.)
But many other areas are ignored because there just... aren't very many people in EA. Maybe 150 people whose job description is something like "full-time researcher", plus another few dozen people doing research internships or summer programs? Compare this to the scale of open questions within well-established areas, and you'll see that we are already overwhelmed. (Plus, many of these researchers aren't very flexible; if you work for Animal Charity Evaluators, Palestine isn't going to be within your purview.)
Fortunately, there's a lot of funding available for people to do impact-focused research, at least in areas with some plausible connection to long-term impact (not sure what's out there for e.g. "new approaches in global development"). It just takes time and skill to put together a good application and develop the basic case for something being promising enough to spend $10k-50k investigating.
I'll follow in your footsteps and say that I want to write a full post about this (the argument that "EA doesn't prioritize X highly enough") sometime in the next few months.