Among long-termist EAs, I think there's a lot of healthy disagreement about the value loading (what utilitarianism.net calls "theories of welfare") within utilitarianism. I.e., should we aim to maximize positive sentient experiences, should we aim to minimize negative sentient experiences, or should we focus on the complexity of value and assume that the value loading may be very complicated and/or include things like justice, honor, nature, etc.?
My impression is that the Oxford crowd (Will MacAskill and the FHI people, for example) is most gung-ho about the total view and the simplicity needed to say "pleasure good, suffering bad." It helps that past thinkers with this normative position have a solid track record.
I think Brian Tomasik has a lot of followers in continental Europe, and a reasonable fraction of them are in the negative(-leaning) crowd. Their pitch is something like "in most normal non-convoluted circumstances, no amount of pleasure or other positive moral goods can justify a single instance of truly extreme suffering."
My vague understanding is that Bay Area rationalist EAs (especially people in the MIRI camp) generally believe strongly in the complexity of value. A simple version of their pitch might be something like "if you could push a pleasure button to wirehead yourself forever, would you do it? If not, why are you so confident that it's the right course for humanity?"
Of the three views, I get the impression that the "Oxford view" gets presented the most, for various reasons, including that its proponents are the best at PR, especially in English-speaking countries.
In general, a lot of EAs in all three camps believe something like "morality is hard, man, and we should try to avoid locking in any definitive normative results until after the singularity." This may also entail a period of time (maybe thousands of years) on Earth to think through things, possibly with the help of AGI or other technologies, before we commit to spreading throughout the stars.
I broadly agree with this stance, though I suspect the reflection is mostly going to be used by our better and wiser selves to settle details/nuances within total (mostly hedonic) utilitarianism rather than to discover (or select) some majorly different normative theory.
I looked into worms a bunch for the WASH post I recently made. Miguel and Kremer's study has a currently unpublished 15-year follow-up which, according to GiveWell, has similar results to the 10-year follow-up. Other than that, the evidence of the last couple of years (including a new meta-analysis from Taylor-Robinson et al. in September 2019) has continued to point towards there being almost no effect of deworming on weight, height, cognition, school performance, or mortality. This hasn't really caused anyone to update, because it's the same picture as in 2016/17. My WASH piece had almost no response, which might suggest that people just aren't too bothered by worms any more, though it could equally be something unrelated, like style.
I think there's a reasonable case to be made that discussion and interest around worms is dropping, though, as people for whom the "low probability of a big success" reasoning is convincing seem likely either to be long-termists or to have updated towards growth-based interventions.