As a community, EA sometimes talks about finding "Cause X" (example 1, example 2).
The search for "Cause X" featured prominently in the billing for last year's EA Global (a).
I understand "Cause X" to mean "new cause area that is competitive with the existing EA cause areas in terms of impact-per-dollar."
This afternoon, I realized I don't really know how many people in EA are actively pursuing the "search for cause X." (I thought of a couple people, who I'll note in comments to this thread. But my map feels very incomplete.)
The concerns you raise in your linked post are actually the same concerns a lot of the people I have in mind have cited for why they don't currently prioritize AI alignment, existential risk reduction, or the long-term future. Most EAs I've talked to who don't share those priorities say they'd be open to shifting their priorities in that direction in the future, but they currently have unresolved issues with the level of uncertainty and speculation in these fields. Notably, EA is now putting more and more effort into the sources of those unresolved concerns with existential risk reduction, such as our demonstrated ability to predict the long-term future, though that work is only beginning.