I think of longtermism as a type of Effective Altruism (EA). I’ve seen some people talking about longtermism as (almost) an alternative to EA, so this is a quick statement of my position.
EA says to allocate the total community budget to interventions with the highest marginal expected value. In other words, allocate your next dollar to the best intervention, where 'best' is evaluated conditional on current funding levels. This is important, because with diminishing marginal returns, an intervention's marginal expected value falls as it is funded. So the best intervention could change as funding is allocated.
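To make that allocation rule concrete, here's a minimal sketch in Python. Everything in it is invented for illustration: the intervention names, the budget, and the diminishing-returns curve are not real cost-effectiveness figures. The loop simply gives each successive chunk of funding to whichever intervention currently has the highest marginal expected value.

```python
# Greedy allocation by marginal expected value (illustrative sketch).
# The interventions, numbers, and value curve below are made up.

def marginal_value(base, scale, funded_so_far):
    """Expected value of the next dollar, shrinking as funding accumulates."""
    return base / (1.0 + funded_so_far / scale)

# (base marginal value, scale of diminishing returns) -- invented figures
interventions = {
    "bednets": (10.0, 100.0),
    "biosecurity": (8.0, 400.0),
    "ai_safety": (6.0, 1000.0),
}

funded = {name: 0.0 for name in interventions}
budget, step = 2000.0, 10.0  # total community budget, allocated in $10 chunks

while budget > 0:
    # 'best' is evaluated conditional on current funding levels,
    # so the top-ranked intervention can change as money is allocated.
    best = max(
        interventions,
        key=lambda name: marginal_value(*interventions[name], funded[name]),
    )
    funded[best] += step
    budget -= step

print(funded)
```

The loop itself is the "recalculate and reassess" step: nothing is allocated on the basis of an intervention's average value, only the value of the next dollar given what has already been funded.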
Longtermism says to calculate expected value while treating lives as morally equal no matter when they occur. Longtermists do not discount the lives of future generations. In general, calculating the expected value of an action over the entire potential future is quite difficult, because we run into the cluelessness problem, where we just don't know what effects an action will have far into the future. But there is a subset of actions where long-term effects are predictable: actions affecting lock-in events like extinction or misaligned AGI spreading throughout the universe. (Cluelessness seems like an open problem: what should we do about actions with unpredictable long-term effects?)
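As a toy illustration of what a pure time discount does to the numbers, and what dropping it means, here is a short sketch (the 3% rate and the 10,000-year horizon are arbitrary choices, not anyone's actual estimates):

```python
# Pure time discounting: a life saved t years from now is weighted
# by (1 - delta) ** t. The 3% rate and 10,000-year horizon are arbitrary.

def life_weight(years_from_now, delta):
    return (1.0 - delta) ** years_from_now

for delta in (0.03, 0.0):
    now = life_weight(0, delta)
    far = life_weight(10_000, delta)
    print(f"delta={delta}: life today = {now}, life in 10,000 years = {far:.3g}")
```

With a 3% rate the far-future life counts for essentially nothing; with delta = 0, the longtermist position as stated above, both lives count equally.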
Longtermist EA, then, says to allocate the community budget according to marginal expected value, without discounting future generations. Given humanity's neglect of existential risks, the interventions with the highest marginal expected value may be those aimed at reducing such risks. And even with diminishing returns, these could still be the best interventions after large amounts of funding are allocated. But longtermist EAs are not committed only to interventions aimed at improving the far future. If a neartermist intervention turned out to have the highest marginal expected value, they would fund that, and then recalculate marginal expected value and reassess for the next round of funding allocation.
I'm not sure who is saying longtermism is an alternative to EA, but that seems a bit nonsensical to me: longtermism is essentially the view that we should focus on positively influencing the longterm future in order to do the most good. It's therefore quite clearly a school of thought within EA.
Also, I have a minor(ish) bone to pick with your claim that "Longtermism says to calculate expected value while treating lives as morally equal no matter when they occur. Longtermists do not discount the lives of future generations." Will MacAskill defines longtermism as follows:
There's nothing in this definition about expected value or discounting. I'll plug a post I wrote which explains how, as others have suggested, one can reach a longtermist conclusion using a decision theory other than maximising expected value, just as one may still reach a longtermist conclusion even if one discounts future lives.