If you start decomposing minds into their computational components, you find differences of many orders of magnitude in the numbers of similar components. For example, both a honeybee and a human may have visual experience, but the latter will have on the order of 10,000 times as many photoreceptors, with even larger disparities in the number of neurons and computations for subsequent processing. If each edge detection or color discrimination (or each piece of higher-level processing) additively contributes some visual experience, then you get immense differences in the total contributions.
Likewise for the reinforcement-learning consequences of pain or pleasure rewards: larger brains will have orders of magnitude more neurons, synapses, associations, and dispositions to be updated in response to reward. Many thousands of subnetworks could be carved out of such a brain, each with complexity or particular capabilities greater than those of the honeybee.
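To make the additive assumption concrete, here is a minimal sketch with illustrative numbers only: the 10,000x photoreceptor ratio from above, and an arbitrary per-unit contribution. Nothing in it depends on the specific values.

```python
# Toy sketch of the additive-contribution assumption, with illustrative
# numbers only (the ~10,000x ratio from above; the per-unit value is arbitrary).
bee_photoreceptors = 1e4        # assumed rough scale for a honeybee
human_photoreceptors = 1e8      # assumed ~10,000x more, as in the text
experience_per_unit = 1.0       # arbitrary units under the additive view

bee_total = bee_photoreceptors * experience_per_unit
human_total = human_photoreceptors * experience_per_unit
print(human_total / bee_total)  # -> 10000.0: totals differ by the same factor
```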
On the other side, trivially tiny computer programs we can make today could make for minimal instantiations of available theories of consciousness, with vast quantitative differences between such minimal examples and typical human cases. See also this discussion. A global workspace may broadcast to thousands of processes or to billions.
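As a concrete (and deliberately trivial) illustration of how tiny such a program could be, here is a toy global-workspace loop: a handful of hypothetical "specialist" processes compete, and the winner's content is broadcast to all of them. This is only a sketch of the structural description at minuscule scale, not a claim that anything like it is conscious.

```python
# Toy global-workspace step: specialists propose content with a salience
# score, the most salient proposal wins, and its content is broadcast back
# to every specialist. A human-scale analogue might broadcast to billions
# of processes rather than three.
def global_workspace_step(specialists, inputs):
    proposals = [spec(inp) for spec, inp in zip(specialists, inputs)]
    winner = max(proposals, key=lambda p: p["salience"])
    return {"broadcast": winner["content"], "audience": len(specialists)}

specialists = [
    lambda x: {"content": f"edge at {x}", "salience": 0.2},
    lambda x: {"content": f"color {x}", "salience": 0.9},
    lambda x: {"content": f"sound {x}", "salience": 0.5},
]
print(global_workspace_step(specialists, ["left", "red", "buzz"]))
```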
We can also consider minds much larger than humans, e.g. imagine a network of humans linked by neural interfaces, exchanging memories, sensory input, and directions to action. As the bandwidth of these connections and the degree of behavioral integration increased, eventually you might have a system that one could consider a single organism, but with vastly greater numbers of perceptions, actions, and cognitive processes than a single human. If we started with 1 billion humans who gradually joined their minds together in such a network, should we say that near the end of the process their total amount of experience or moral weight is reduced to that of 1-10 humans? I'd guess the collective mind would be at least on the same order of consciousness and impartial moral weight as the separated minds, and so there could be giant minds with vastly greater than human quantities of experience.
The usual discussions of this topic seem to assume that connecting and integrating many mental processes almost certainly destroys almost all of their consciousness and value, which seems questionable both as a view and for the extreme weight placed on it. With even a fair amount of credence on the view that the value is not almost all destroyed, the expected value of big minds is enormously greater than that of small minds.
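To spell out the expected-value point with made-up numbers (the credence and the "1-10 human-equivalents" figure are purely illustrative):

```python
# Rough expected-value sketch of the argument above, with made-up numbers.
# Suppose 1e9 humans merge into one collective mind. On the "integration
# destroys value" view the result is worth ~10 human-equivalents; on the
# additive view it is worth ~1e9. Even a modest credence in the additive
# view dominates the expectation.
credence_additive = 0.2          # assumed credence that value is preserved
value_if_additive = 1e9          # human-equivalents of experience
value_if_destroyed = 10          # human-equivalents if almost all is lost

expected_value = (credence_additive * value_if_additive
                  + (1 - credence_additive) * value_if_destroyed)
print(expected_value)            # ~2e8: enormously more than one human
```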
Some more related articles:
Is Brain Size Morally Relevant? by Brian Tomasik
Quantity of experience: brain-duplication and degrees of consciousness by Nick Bostrom
I also wrote an article about minimal instantiations of theories of consciousness: Physical theories of consciousness reduce to panpsychism.
My initial intuition is the same here; it seems like nothing is lost, only things are added. I suppose one could object that the independence of the parts is lost, and that this could make the parts less conscious than they would have been if separate (even though there is now also a greater whole that is more conscious), but I'm not sure why this should be the case; why shouldn't it make the parts more conscious? One reason to think integration would reduce the consciousness in each part (whether or not it increases the consciousness of the system as a whole) is that, if the connections are reoptimized, there is new redundancy: the parts could be readjusted to accomplish the same work while doing less (e.g. firing less frequently). Of course, the leftover resources could be used for other things, which might have decreasing or increasing returns.
It seems like the implementations of particular capabilities are necessarily pretty integrative and interdependent (especially the global workspace and selective/top-down attention), and it could be that if you try to carve up the networks without also readjusting the remaining connections, you just get garbage.
Can you throw away 99% of a human brain (99% of its neurons and synapses), not readjust what's left, and get something that's around 1% as conscious as the original brain? You might not have to throw away much to get something that doesn't do much at all, while still being orders of magnitude larger than a bee brain. 1% of a human brain would have about as many neurons as a magpie, cat, rat, or mouse, or more (and more synapses than the rat and mouse, at least). Is there any 1% of a human brain you could carve out that would be about as conscious as a magpie, who can pass a mirror test? Can you do this while also keeping all of the sensory modalities (vision, hearing, etc.) and the emotions common to humans and magpies? My guess is no, although I'm not confident.
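For rough numbers behind that comparison (these counts are approximate and contested, especially the magpie figure; treat them as order-of-magnitude assumptions only):

```python
# Back-of-the-envelope comparison for the "throw away 99%" question, using
# rough, order-of-magnitude neuron counts (approximate and contested).
neuron_counts = {
    "human": 86e9,    # ~86 billion neurons
    "cat": 0.76e9,    # rough estimate
    "magpie": 0.4e9,  # rough guess for a corvid of this size
    "rat": 0.2e9,
    "mouse": 0.07e9,
}

one_percent_human = 0.01 * neuron_counts["human"]  # ~8.6e8 neurons
for animal, n in neuron_counts.items():
    if animal != "human":
        print(f"1% of a human brain has ~{one_percent_human / n:.1f}x "
              f"as many neurons as a {animal}")
```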
Of course, this isn't to say that bigger brains don't tend to matter more; I think they do. I'm also not sure that they don't often matter more than proportionally; this might happen if certain important conscious functions require significant investment that some small brains don't make at all, e.g. self-awareness and metacognition.