Among long-termist EAs, I think there's a lot of healthy disagreement about the value-loading (what utilitarianism.net calls "theories of welfare") within utilitarianism. I.e., should we aim to maximize positive sentient experiences, should we aim to minimize negative sentient experiences, or should we focus on complexity of value and assume that the value-loading may be very complicated and/or include things like justice, honor, nature, etc.?
My impression is that the Oxford crowd (like Will MacAskill and the FHI people) is the most gung ho about the total view and the simplicity needed to say "pleasure good, suffering bad." It helps that past thinkers with this normative position have a solid track record.
I think Brian Tomasik has a lot of followers in continental Europe, and a reasonable fraction of them are in the negative(-leaning) crowd. Their pitch is something like "in most normal non-convoluted circumstances, no amount of pleasure or other positive moral goods can justify a single instance of truly extreme suffering."
My vague understanding is that Bay Area rationalist EAs (especially people in the MIRI camp) generally believe strongly in the complexity of value. A simple version of their pitch might be something like "if you could push a pleasure button to wirehead yourself forever, would you do it? If not, why are you so confident that it's the right course for humanity?"
Of the three views, I get the impression that the "Oxford view" gets presented the most for various reasons, including that its proponents are the best at PR, especially in English-speaking countries.
In general, a lot of EAs in all three camps believe something like "morality is hard, man, and we should try to avoid locking in any definitive normative results until after the singularity." This may also entail a period of time (maybe thousands of years) on Earth to think through things, possibly with the help of AGI or other technologies, before we commit to spreading throughout the stars.
I broadly agree with this stance, though I suspect the reflection is mostly going to be used by our better and wiser selves to settle details/nuances within total (mostly hedonic) utilitarianism rather than to discover (or select) some majorly different normative theory.
I've had a few arguments about the 'worm wars': whether the bet on deworming kids, which was uncertain from the start, is undermined by the new evidence.
My interlocutor is very concerned about model error in cost-benefit analysis and about avoiding side effects (and 'double effect' in particular), and not just for the usual PR or future-credibility reasons.
I looked into worms a bunch for the WASH post I recently made. Miguel and Kremer's study has a currently unpublished 15-year follow-up which, according to GiveWell, has similar results to the 10-year follow-up. Other than that, the evidence of the last couple of years (including a new meta-analysis in September 2019 from Taylor-Robinson et al.) has continued to point towards there being almost no effects of deworming on weight, height, cognition, school performance, or mortality. This hasn't really caused anyone to update, because this is the same picture as in 20