Effective altruism is a complicated idea. When an idea is complicated, people often don't grasp the full idea; instead they form a low-resolution version of it from snippets of conversation, impressions picked up from other people, and vague recollections of media articles.
What is the current low-resolution version of effective altruism? Is it positive? What would a better low-resolution image be?
Unfortunately, I think the importance of EA actually goes up as you focus on better and better things. My best guess is that the distribution of impact is lognormal, which means that going from, say, the 90th-percentile best thing to the 99th could easily be a bigger jump than going from, say, the 50th percentile to the 80th.
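The percentile claim can be checked with a small sketch. Under a lognormal impact distribution, the multiplicative jump from the 90th to the 99th percentile exceeds the jump from the 50th to the 80th for any spread, because the corresponding normal quantile gap is wider (about 1.04 vs. 0.84 standard deviations). The `sigma` value below is an assumed, purely illustrative parameter, not a figure from the text.

```python
import math
from statistics import NormalDist

# Assumed spread of a lognormal(mu=0, sigma) impact distribution;
# sigma = 2.0 is illustrative only.
sigma = 2.0
nd = NormalDist()

def percentile_impact(p: float) -> float:
    """Impact at the p-th percentile of the lognormal distribution."""
    return math.exp(sigma * nd.inv_cdf(p))

# Multiplicative jumps in impact between percentiles.
jump_90_to_99 = percentile_impact(0.99) / percentile_impact(0.90)
jump_50_to_80 = percentile_impact(0.80) / percentile_impact(0.50)

print(f"90th -> 99th percentile: {jump_90_to_99:.1f}x")
print(f"50th -> 80th percentile: {jump_50_to_80:.1f}x")
```

With these assumptions the 90th-to-99th jump is roughly an 8x improvement versus roughly 5x for 50th-to-80th, and the ordering holds for any positive `sigma`.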
You're right that at some point diminishing returns to further research must kick in, and you should act rather than keep researching. But I think that point comes well after "don't do something obviously bad", and closer to "after you've thought really carefully about what the very top priority might be, including potentially unconventional and weird-seeming issues".