If you start decomposing minds into their computational components, you find differences of many orders of magnitude in the numbers of similar components. For example, both a honeybee and a human may have visual experience, but the latter will have on the order of 10,000 times as many photoreceptors, with even larger disparities in the number of neurons and computations devoted to subsequent processing. If each edge detection or color discrimination (or each step of higher-level processing) contributes some visual experience additively, then you have immense differences in the total contributions.
Likewise for the reinforcement-learning consequences of pain or pleasure rewards: larger brains have orders of magnitude more neurons, synapses, associations, and dispositions to be updated in response to reward. Many thousands of subnetworks could be carved out of such a brain, each with complexity or particular capabilities greater than those of a whole honeybee.
On the other side, trivially tiny computer programs that we can write today could serve as minimal instantiations of available theories of consciousness, with large quantitative differences between these minimal examples and typical examples. See also this discussion. A global workspace may broadcast to thousands of processes, or to billions.
We can also consider minds much larger than a human's. Imagine, for example, a network of humans linked by neural interfaces, exchanging memories, sensory input, and directions to action. As the bandwidth of these connections and the degree of behavioral integration increased, we might eventually have a system that one could consider a single organism, but with vastly greater numbers of perceptions, actions, and cognitive processes than any single human has. If we started with 1 billion humans who gradually joined their minds together in such a network, should we say that near the end of the process their total amount of experience or moral weight is reduced to that of 1-10 humans? I'd guess the collective mind would be at least on the same order of consciousness and impartial moral weight as the separated minds, and so there could be giant minds with vastly greater-than-human quantities of experience.
The usual discussions of this topic seem to assume that connecting and integrating many mental processes almost certainly destroys almost all of their consciousness and value. That seems questionable, both as a view in itself and for the extreme weight put on it. With even a fair amount of credence on the view that the value is not almost all destroyed, the expected value of big minds is enormously greater than that of small minds.
Insofar as I value conscious experiences purely by virtue of their valence (i.e. their positivity or negativity), I value animals not much less than humans (discounted to the extent I suspect that they're "less conscious" or "less capable of feeling highly positive states", which I'm still quite uncertain about).
Insofar as I value preference fulfilment in general, I value humans significantly more than animals (because human preferences are stronger and more complex than animals') but not overwhelmingly so, because animals have strong and reasonably consistent preferences too.
Insofar as I value specific types of conscious experiences and preference fulfilment, such as "reciprocated romantic love" or "achieving one's overarching life goals", then I value humans far more than animals (and would probably value posthumans significantly more than humans).
I don't think there are knock-down arguments in favour of any of these approaches, and so I usually try to balance all of these considerations. Broadly speaking, I do this by prioritising hedonic components when I think about preventing disvalue, and by prioritising the other components when I think about creating value.
Something like that. Maybe the key idea here is my ranking of possible lives:
In other words, if I imagine myself suffering enough hedonically I don't really care...