MichaelStJules

I try to brush them off gently or blow or push them away, but often kill them reflexively. I guess I sometimes feel mildly disappointed when I kill them.

Killing mosquitoes and reducing their populations might be good for mosquitoes overall (if their lives are net negative, or on suffering-focused views), but that doesn't mean the death itself, or any pain we cause, isn't regrettable. That individual mosquito had her own interests (assuming she was a moral patient at all). But those interests could be outweighed by others'.

Should EAs feel bad? I don't know. I think the main effects of EAs feeling bad will be indirect, through our work and donations, not through the effects on mosquitoes. Maybe getting us to care about mosquitoes will make us more inclined to care about invertebrate welfare more generally, which Open Phil has decided to stop supporting with grants.

Also, actually giving the fish food could reduce the unpleasantness and give them pleasure when caught.

But there's probably still some risk of longer term injury and pain. And it might condition them to be less risk-averse, and make them more prone to being caught by less humane fishers.

Or, you could add, in large print, that more would go to their Favorite Charity if they just donated the same amount to it directly, not through FarmKind. That should totally dispel any misconception otherwise, if they actually read, understand and believe it.

(b) Causes: The regular donor gets to pick any Favorite Charity, from any cause, and their donation will cause money from the Bonus Fund to go to it. Unless, by some miracle, the Bonus Fund supporters would otherwise have collectively donated to the same causes as the regular donors in the same proportions, regular donations do have a "direct counterfactual impact on donations to different causes ✅"

The money moved to their Favorite Charity isn't counterfactually positive if their Favorite Charity gets less than the donor would otherwise have donated to it on their own, without FarmKind. I expect that, more often than not, it will mean less for their Favorite Charity, so the counterfactual is actually negative for their Favorite Charity.

My guess for the (more direct) counterfactual effects of FarmKind on where money goes is:

  1. Shift some money from Favorite Charities to EAA charities.
  2. Separately increase funding for EAA charities by incentivizing further (EAA) donation overall. (Shift more money from donors to EAA charities.)

It is possible FarmKind will incentivize enough further overall donation from donors to get even more to their Favorite Charities than otherwise, but that's not my best guess.

FWIW, I agree with point (c) Charities, and I think that's a way in which this is counterfactually positive from the perspective of donors: they get to decide which EAA charities the bonus funding goes to.

But something like DoubleUpDrive would be the clearest and simplest way to do this without potentially confusing or (unintentionally) misleading people about whether their Favorite Charity will get more than it would have otherwise. You'd cut everything about their Favorite Charities and donating to them, and just let donors pick among a set of EAA charities to donate to, matching those donations to whichever they choose.

I agree that anyone examining how the system works could see that if they give $150 directly to their Favorite Charity, more will go to their Favorite Charity than if they gave that $150 through FarmKind and split it. But they might not realize it, because the fact that FarmKind also gives to their Favorite Charity confuses them.
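To make the arithmetic concrete, here's a minimal sketch of the comparison in Python. The 50/50 split and the routing of Bonus Fund money are hypothetical parameters for illustration, not FarmKind's actual mechanics:

```python
# Hypothetical model: how much reaches the Favorite Charity under each route.
# The split fraction and bonus routing below are illustrative assumptions,
# not FarmKind's actual numbers.

def direct_donation(amount):
    """Dollars reaching the Favorite Charity when the donor gives directly."""
    return amount

def via_farmkind(amount, split_to_favorite=0.5, bonus_to_favorite=0.0):
    """Dollars reaching the Favorite Charity when the donation is split
    through FarmKind; assumes Bonus Fund money goes to the farmed-animal
    charity unless bonus_to_favorite says otherwise."""
    return amount * split_to_favorite + bonus_to_favorite

print(direct_donation(150))  # 150
print(via_farmkind(150))     # 75.0, less than giving directly
```

Under these assumptions, unless the Bonus Fund routes more than $75 back to the Favorite Charity, the direct route leaves the Favorite Charity better off.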

I think you can have involuntary attention that isn't particularly related to wanting anything (I'm not sure if you're denying that).

I agree you can, but that's not motivational salience. The examples you give of the watch beeping and a sudden loud sound are stimulus-driven or bottom-up salience, not motivational salience. There are apparently different underlying brain mechanisms. A summary from Kim et al., 2021:

Traditionally, the allocation of limited attentional resources had been thought to be governed by task goals (Wolfe, Cave, & Franzel, 1989) and physical salience (Theeuwes, 2010). A newer construct, selection history, challenges this dichotomy and suggests previous episodes of attentional orienting are capable of independently biasing attention in a manner that is neither top-down nor bottom-up (Awh, Belopolsky, & Theeuwes, 2012). One component of selection history is reward history. Via associative learning, initially neutral stimuli come to predict reward and thus acquire heightened attentional priority, consequently capturing attention even when non-salient and task-irrelevant (referred to as value-driven attentional capture; e.g., Anderson, Laurent, & Yantis, 2011).

I'd say there is some "innate" motivational salience, e.g. probably for innate drives, physical pains, innate fears and perhaps pleasant sensations, but then reinforcement (when it's working typically) biases your systems for motivational salience and action towards things associated with those, to get more pleasure and less unpleasantness.

 

I'll address two things you said in opposite order.

The thing you wrote is kinda confusing in my ontology. I’m concerned that you’re slipping into a mode where there’s a soul / homunculus “me” that gets manipulated by the exogenous pressures of reinforcement learning. If so, I think that’s a bad ontology—reinforcement learning is not an exogenous pressure on the “me” concept, it is part of how the “me” thing works and why it wants what it wants. Sorry if I’m misunderstanding.

I don't have in mind anything like a soul / homunculus. I think it's mostly a moral question, not an empirical one, to what extent we should consider the mechanisms for reinforcement to be a part of "you", and to what extent your identity persists through reinforcement. Reinforcement basically rewires your brain and changes your desires. I definitely consider your desires, as motivational salience, which have been shaped by past reinforcement, to be part of "you" now and (in my view) morally important.

In my ontology, voluntary actions (both attention actions and motor actions) happen if and only if the idea of doing them is positive-valence, while involuntary actions (again both attention actions and motor actions) can happen regardless of their valence. In other words, if the reinforcement learning system is the reason that something is happening, it’s “voluntary”.

From my understanding of the cognitive (neuro)science literature and their use of terms, attentional and action biases/dispositions caused by reinforcement are not necessarily "voluntary".

I think they use "voluntary", "endogenous", "top-down", "task-driven/directed" and "goal-driven/directed" (roughly) interchangeably for a type of attentional mechanism. For example, you have a specific task in mind, and then things related to that task become salient and your actions are biased towards actions that support that task. This is what focusing/concentration is. But then other motivationally salient stimuli (pain, hunger, your phone, an attractive person) and intense stimuli or changes in background stimuli (a beeping watch, a sudden loud noise) can get in the way.

My impression is that there is indeed a distinct mechanism describable as voluntary/endogenous/top-down attention, which lets you focus and block irrelevant but otherwise motivationally salient stimuli. It might also recruit motivational salience towards relevant stimuli. It's an executive function. And I'm inclined to reserve the term "voluntary" for executive functions.

In this way, we can say:

  1. a drug addict's behaviour is often (largely) involuntarily driven, specifically by high motivational salience, like cravings (and perhaps also dysfunction of top-down attention control), and
  2. the distractibility of someone with ADHD by their phone or random websites, for example, is involuntary, driven by a dysfunction of top-down attention control, which lets task-irrelevant stimuli, including task-irrelevant motivationally salient stimuli, pull the person's attention.

In both cases, reinforcement for motivational salience is partly the reason for the behaviour. But they seem less voluntary than when executive/top-down control works better.

Motivational salience can also be manipulated in experiments so that it dissociates from remembered, predicted and actual reward (Baumgartner et al., 2021):

These hyper-reactive states of mesolimbic systems can even cause ‘wanting for what hurts’, such as causing a laboratory rat to compulsively seek out electric shocks repeatedly. In such cases, the ‘miswanted’ shock stimulus is remembered to hurt, predicted to hurt, and actually does hurt—yet is still positively sought as a target of incentive motivation.

I'm pretty sympathetic to suffering ≈ displeasure + involuntary attention to the displeasure, or something similar.

I think wanting is downstream from the combination of displeasure + attention.

I think wanting, or at least the relevant kind here, just is involuntary attention effects, specifically motivational salience. Or, at least, motivational salience is a huge part of what it is. This is how Berridge often uses the terms.[1] Maybe a conscious 'want' is just when the effects on our attention are noticeable to us, e.g. captured by our model of our own attention (attention schema), or somehow make it into the global workspace. You can feel the pull of your attention, or resistance against your voluntary attention control. Maybe it also feels different from just strong sensory stimuli (bottom-up, stimulus-driven attention).

Well, when you do think about it, you still immediately want it to stop!

Where I might disagree with "involuntary attention to the displeasure" is that the attention effects could sometimes force your attention away from an unpleasant thought, rather than focus it on the thought. Unpleasant signals reinforce and bias attention towards actions and things that could relieve the unpleasantness, and/or disrupt your focus so that you will find something to relieve it. Sometimes the thing that works could just be forcing your attention away from the thing that seems unpleasant, and your attention will be biased against thinking about unpleasant things. Other times, you can't ignore it well enough, so your attention will force you towards addressing it. Maybe there's some inherent bias towards focusing on the unpleasant thing.

But maybe suffering just is the kind of thing that can't be ignored this way. Would we consider an unpleasant thought that's easily ignored to be suffering?

  1. ^

    Berridge and Robinson (2016) distinguish different kinds of wanting/desires, and equate one kind with motivational (incentive) salience:

    Ordinarily, cognitive wanting and incentive salience ‘wanting’ go together, so that incentive salience can give heightened urgency to feelings of cognitive desire. But the two forms of wanting vs. ‘wanting’ can sometimes dissociate, so that incentive salience can occur either in opposition to a cognitive desire or even unconsciously in absence of any cognitive desire. Incentive salience ‘wanting’ in opposition to cognitive wanting, for example, occurs when a recovering addict has a genuine cognitive desire to abstain from taking drugs, but still ‘wants’ drugs, so relapses anyway when exposed to drug cues or during vivid imagery about them. Nonconscious ‘wants’ can be triggered in some circumstances by subliminal stimuli, even though the person remains unable to report any change in subjective feelings while motivation increases are revealed in their behavior (Childress et al., 2008; Winkielman, Berridge, & Wilbarger, 2005).

Unpleasantness doesn't only apply to sensations. I think sadness, such as an empathetic response to the bug struggling, involves unpleasantness/negative affect. That's the case on most models, AFAIK. I agree (or suspect) it's not the sensations (the visual experience) that are unpleasant.

To add to this, there's evidence negative valence depends on brain regions common to unpleasant physical pain, empathic pains and social pains (from social rejection, exclusion, or loss) (Singer et al., 2004; Eisenberger, 2015). In particular, the title of Singer et al., 2004 is "Empathy for Pain Involves the Affective but not Sensory Components of Pain".

I'm not sure either way whether I'd generally consider sadness to be suffering, though. I'd say suffering is at least unpleasantness + desire (or maybe unpleasantness + aversive desire specifically), but I'm not sure that's all it is. I might also be inclined to use some desire (and unpleasantness) intensity cutoff to call something suffering, but that might be too arbitrary.

You're right that I didn't define desire. The kind of desire I had in mind in this post basically just is motivational (incentive + aversive) salience, which is a neurological mechanism that affects (biases) your attention.[1] There might be more to this kind of desire, but I think motivational salience is a lot of it, and could be all of it. (There are other types of desires, like goals, but those are not what I have in mind here.)

Brody (2018, 2023) defines suffering so that an individual suffers when "she has an unpleasant or negative affective experience that she minds, where to mind some state is to have an occurrent desire that the experience not be occurring." Unpleasantness and aversion wouldn't be enough for suffering: there must be aversion to the experience itself. Whereas we could instead be averse to things outside of us, like a bear we're afraid of, rather than to the experience itself.

I have some sympathy for this definition, but I'm also not sure it even makes sense. If this kind of "occurrent desire" is just aversive salience, how exactly would it apply differently from when we're afraid of a bear, say? If it's not aversive salience, then what kind of desire is it and how exactly does that work?

  1. ^

    It's a different mechanism from the one for (bottom-up) stimulus intensity, and the one for (top-down) task-based attention control. From Kim et al., 2021:

    Traditionally, the allocation of limited attentional resources had been thought to be governed by task goals (Wolfe, Cave, & Franzel, 1989) and physical salience (Theeuwes, 2010). A newer construct, selection history, challenges this dichotomy and suggests previous episodes of attentional orienting are capable of independently biasing attention in a manner that is neither top-down nor bottom-up (Awh, Belopolsky, & Theeuwes, 2012). One component of selection history is reward history. Via associative learning, initially neutral stimuli come to predict reward and thus acquire heightened attentional priority, consequently capturing attention even when non-salient and task-irrelevant (referred to as value-driven attentional capture; e.g., Anderson, Laurent, & Yantis, 2011).

If you agree that suffering is (at least) unpleasantness + desire, and both components can vary separately in intensity, then there isn't really "one intensity" for an experience of suffering, but two, one for each component. Suffering would vary in intensity along the two dimensions of unpleasantness intensity and desire intensity.

You could have some function that takes both component intensities and spits out a "suffering intensity" or "suffering badness", though.
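As a purely illustrative sketch, such a function might look like the following; the multiplicative form and the cutoff are assumptions for illustration, not a claim about the right way to combine the two dimensions:

```python
# Hypothetical aggregator: combine the two component intensities into one
# "suffering intensity". The product form and cutoff are illustrative
# assumptions, not a proposed theory.

def suffering_intensity(unpleasantness, desire, cutoff=0.0):
    """Return a single suffering intensity from unpleasantness and (aversive)
    desire intensities, treating anything below the cutoff as not suffering
    (cf. the intensity cutoff mentioned above)."""
    combined = unpleasantness * desire  # one choice among many (min, sum, ...)
    return combined if combined >= cutoff else 0.0
```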

Also, Jeff hinted at the issue there, too, and seemed to have gotten downvotes, although still net positive karma (3 karma with 8 votes, at the time of writing this comment).
