If you start decomposing minds into their computational components, you find differences of many orders of magnitude in the numbers of similar components. E.g. both a honeybee and a human may have visual experience, but the latter will have on the order of 10,000 times as many photoreceptors, with even larger disparities in the number of neurons and computations devoted to subsequent processing. If each edge detection or color discrimination (or piece of higher-level processing) additively contributes some visual experience, then the totals differ immensely.
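To make the additive picture concrete, here is a toy calculation (my own illustration, not a claim from any particular theory): let N be the number of relevant components and ε the fixed experiential contribution each one makes, which is exactly the contested assumption.

```latex
E_{\text{bee}} = N_{\text{bee}}\,\epsilon, \qquad
E_{\text{human}} = N_{\text{human}}\,\epsilon \approx 10^{4}\, N_{\text{bee}}\,\epsilon \approx 10^{4}\, E_{\text{bee}}
```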
Likewise for the reinforcement-learning consequences of pain or pleasure rewards: larger brains have orders of magnitude more neurons, synapses, associations, and dispositions to be updated in response to reward. Many thousands of subnetworks could be carved out of such a brain, each with complexity or particular capabilities greater than those of a honeybee.
On the other side, trivially tiny computer programs that we can write today could serve as minimal instantiations of available theories of consciousness, with enormous quantitative differences between those minimal examples and typical animal cases. See also this discussion. A global workspace may broadcast to thousands of processes, or to billions.
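To illustrate how small such a minimal instantiation can be, here is a toy sketch (my own, not from the original discussion; the specialist names and salience scores are made up) of a global-workspace-style loop in which processes compete for access and the winning content is broadcast to all of them. The same structure works whether there are three specialists or billions.

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Specialist:
    """A toy specialist process that competes for access to the workspace."""
    name: str

    def propose(self, stimulus: str) -> tuple[float, str]:
        # Report a (salience, content) pair; the salience rule here is arbitrary.
        salience = float(len(stimulus)) if self.name in stimulus else 0.1
        return salience, f"{self.name} detected '{stimulus}'"

    def receive(self, broadcast: str) -> None:
        # Every specialist sees whatever wins access to the workspace.
        print(f"{self.name} received broadcast: {broadcast}")

def global_workspace_step(specialists: list[Specialist], stimulus: str) -> str:
    # Competition: the most salient proposal gains access to the workspace...
    _salience, content = max(s.propose(stimulus) for s in specialists)
    # ...and its content is broadcast back to all specialists.
    for s in specialists:
        s.receive(content)
    return content

if __name__ == "__main__":
    pool = [Specialist("edge"), Specialist("color"), Specialist("motion")]
    global_workspace_step(pool, "color patch")
```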
We can also consider minds much larger than humans, e.g. imagine a network of humans linked by neural interfaces, exchanging memories, sensory input, and directions for action. As we increased the bandwidth of these connections and the degree of behavioral integration, eventually we might have a system that could be considered a single organism, but with vastly greater numbers of perceptions, actions, and cognitive processes than a single human. If we started with 1 billion humans who gradually joined their minds together in such a network, should we say that near the end of the process their total amount of experience or moral weight is reduced to that of 1-10 humans? I'd guess the collective mind would be at least on the same order of consciousness and impartial moral weight as the separated minds, and so there could be giant minds with vastly greater-than-human quantities of experience.
The usual discussions of this topic seem to assume that connecting and integrating many mental processes almost certainly destroys almost all of their consciousness and value, which seems questionable both as a view in itself and for the extreme weight placed on it. With a fair amount of credence on the view that the value is not almost all destroyed, the expected value of big minds is enormously greater than that of small minds.
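As a rough expected-value sketch (with made-up numbers purely for illustration): let p be the credence that integration preserves the constituent minds' value rather than collapsing it to roughly one mind's worth, and let N be the number of human-scale minds in the merged network described above. Even a modest p then makes the expected value of the big mind vastly larger than that of a single human:

```latex
\mathbb{E}[V_{\text{big}}] = p\, N\, V_{\text{human}} + (1-p)\, V_{\text{human}},
\qquad \text{e.g. } p = 0.1,\ N = 10^{9} \ \Rightarrow\ \mathbb{E}[V_{\text{big}}] \approx 10^{8}\, V_{\text{human}}
```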
You might be interested in Rethink Priorities' recent reports comparing capacity for welfare and moral status across species (part 1 here, part 2 here). Some people (myself included) think capacity for welfare, which is, roughly, how good or bad an animal's life can go, differs significantly across species. The extent and degree of such differences depend on the correct theory of welfare. Even if a purely hedonic theory is correct, it's plausible that differences in affective complexity and cognitive sophistication affect the phenomenal intensity of experience, and that some neurological differences affect the subjective experience of time (i.e., the phenomenal duration of experience).
However, it’s unclear which way these differences cut. Advanced social, emotional, and intellectual complexity may open up new dimensions of pleasure and suffering that widen the intensity range of experience (e.g., combining physical with emotional intimacy plausibly opens up the possibility of greater overall pleasure than mere physical intimacy). On the other hand, these same faculties may suppress the intensity range of experience (e.g., without the ability to conceptualize, rationalize, or place a time limit on the experience, even modest pain may induce rather extreme suffering).
Comparing the intrinsic moral worth of different animals (including humans) is extraordinarily difficult, and there is tremendous uncertainty, both normative and empirical. Given this uncertainty, it seems that, all else equal, it would be better if near-termist EA funding didn’t skew quite so heavily towards humans, and better if the funding that is directed at nonhuman animals didn’t skew quite so heavily towards terrestrial vertebrates.