Epistemic status: speculating, hypothesizing

To a first approximation, there are two types of motivation for acting – egoistic & altruistic.

Almost immediately, someone will come along and say "Wait! In fact, there's only one type of motivation for acting – egoistic motivation. All that 'altruistic' stuff you see is just people acting in their own self-interest along some dimension, and those actions happen to help out others as a side effect."

(cf. The Elephant in the Brain, which doesn't say exactly this but does say something like this.)

In response, many people are moved to defend the altruistic type of motivation (because they want to believe in altruism as a thing, because it better matches their internal experience, because of idealistic attachments; motivations vary).

I'm definitely one of these people – I think the altruistic motivation is a thing, distinct from the egoistic motivation. Less fancily – I think that people sometimes work to genuinely help other people, without trying to maximize some aspect of their self-interest.

Admittedly, it can be difficult to suss out a person's motivations. There are strong incentives for appearing to act altruistically when in fact one is acting egoistically. And beyond that, there's a fair bit of self-deception – people believing / rationalizing that they're acting altruistically when in reality their motivations are self-serving (this gets confusing to think about, as it's not clear when to disbelieve self-reports about a person's internal state).

Here's a potential heuristic to help determine when you're acting altruistically or egoistically – altruistic action tends to be dispassionate. The altruist tends not to care very much about their altruistic actions. They are unattached to them.

It's a bit subtle – an altruistic actor still wants things to go well in the situation they're acting upon. They're motivated to act, after all. But that care seems distinct from caring about the actions themselves – about how they will be received & perceived.

The locus of their care is in the other people involved in the situation – if things go better for those people, the altruist is happy. If things go worse, the altruist is sad. It doesn't matter who helped those people, or what third parties thought of the situation. It doesn't matter who got the credit. Those considerations are immaterial to the altruist. They aren't the criteria by which the altruist is judging their success.

This heuristic doesn't help very much for determining whether other people are acting from altruistic or egoistic bases (though if you see someone paying particular attention to optics, PR, etc., that may be a sign that they are being moved more by egoistic considerations in that particular instance).

I think this heuristic does help introspectively – I find that it helps me sort out the things I do for (mostly) altruistic reasons from the things I do for (mostly) egoistic reasons. (I do a large measure of both.)


Cross-posted to my blog.

Comments



I agree with this paragraph: "The locus of their care is in the other people involved in the situation – if things go better for those people, the altruist is happy. If things go worse, the altruist is sad. It doesn't matter who helped those people, or what third parties thought of the situation...."

But I don't think of the word 'dispassionate' ('not influenced by strong emotion') when I try to describe those behaviours. An altruistic person could have very strong feelings about the outcomes for the other person – I don't see how 'dispassionate' comes into it at all.

Yeah, I think my title is too lossy. (Open to suggestions for alternatives!)

I'm trying to point to this thing where the altruist has basically no feelings / emotions about their particular actions. They have feelings about the situation in which they're acting, and/or about other people in the situation.

So regarding their actions, the altruist is dispassionate.

My dictionary backs me up, somewhat: "Altruistic – showing a disinterested and selfless concern for the well-being of others..."

I disagree with the claim 'Altruists are dispassionate' because it suggests that altruists have no feelings about anything, including outcomes for people they want to help.

I'd agree with the claim 'Altruists don't care who gets the credit.'

Agreed, I think.

What do you think about the claim "Altruists are dispassionate regarding the particular actions they take"?

I'd agree with that, although it's not very catchy :P

:-)

Yeah, it'll have to pass through the dank EA memes filter for any hope of catchiness.

I understand that you aren't saying that altruism is completely unemotional, but I still want to emphasize the role that emotion plays. I do not distinguish too sharply between things that I want for personal reasons and things that I want out of altruistic concern. Personally, when I learned about utility functions, it was a watershed moment for my understanding of ethics.

If you describe an agent as having a utility function, it means that all of its preferences are commensurate. For example, the agent might want to have a cup of coffee and also want world peace. Importantly, the two preferences are the same type – I don't distinguish between moral wants and non-moral wants.
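(A minimal formal sketch, in my own notation rather than anything from the comment: a utility function U scores whole world-states w on a single scale, e.g. U(w) = u_coffee(w) + u_peace(w), so the two wants are commensurate by construction – there is always some exchange rate at which a bit more of one compensates for a bit less of the other.)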

Therefore, when I say that I am altruistic, I am not saying that it is my duty to be so. If I put my biases aside and dispassionately calculate the action with the highest utility, it is because I truly believe that being dispassionate is the best way to get what I want. I would do the same for actions which concern my own life and feelings.

Splitting our motivation into two pieces, one personal and one moral, seems like a remnant of our evolutionary past. It seems to me that people naturally believe in social norms, moral standards, duty, and virtues, and these don't always align with what they personally want. I seek to dissolve this whole dichotomy: there is simply a world that I want to be in, and I am trying to do whatever is necessary to make that world the real one.

I'd argue that humans would actually be better understood as an aggregate of agents, each with their own utility function. In your case, these agents might cooperate so well that your internal experience is that you're just one agent, but that's certainly not a human universal.
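(A hedged sketch of that framing, again in my own notation: give each subagent i its own utility function u_i. The person's behavior looks like maximizing a single aggregate U = Σ_i w_i u_i only when the weights w_i stay stable; if the weights shift with context, no single utility function describes the whole person.)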

Yeah, there are many possible ways to frame this. I like the idea of a coherent agent, but that might just be the part of me capable of putting verbal thoughts on a forum page. In any case, over time I've experienced a shift from viewing preferences as different types which compete to viewing them all as existing together in one coherent thread. Of course, my introspection is not perfect, but this is how I feel when I look inward to find what I really want.

I do not claim that this is what other people feel. However, to the extent that I find the idea pleasing, I certainly would like it if people shared my view.

At the very least, I agree that one coherent thread is more healthy and something to strive for, but in choosing a thread you might want to be aware of the various stakeholders and their incentives. I find that counting myself and my needs into my moral framework makes my moral framework more robust.

I realise that I've been implicitly assuming this is true, which made me resist optimizing for impressions – doing that, I could no longer convince myself that I was acting altruistically. The awful and hard-to-accept reality is that you sometimes do have to convince people in order for your work to be supported.

For sure.

I think there's a complicated relationship between altruistic & egoistic motivations. Oftentimes you can have a larger eventual positive impact by acting egoistically (because this increases your reputation, your deployable capital, and/or other relevant resources).

So the egoistic motivation seems super important! I'm just pointing out that I've found it helpful to get more internal clarity on when I'm acting out of self-interest versus when I'm acting altruistically.

If my values say "I should help lots of people", and I work to maximize my values (which makes my life meaningful), which category does that fit into? Does it matter if I'm doing it "because" it makes my life meaningful, or because it helps other people?

To me that last distinction doesn't even make a lot of sense – I try to maximize my values BECAUSE they're my values. Sometimes I think the egoists are just saying "maximize your values" and the altruists are just saying "my values are helping others", and the whole thing is just a framing argument.

Eh, I think there are probably two separate motivations here:

  • Doing things that help other people
  • Doing things that make you believe that you are helping other people (and thus making your life meaningful)

And I think those motivations overlap substantially, such that you can take actions that fulfill both motives. But they do strike me as separate, such that you can take actions that fulfill one but not the other.

Sure, but if one has the value of actually helping other people, that distinction disappears, yes?

As an example of a famous egoist, I think someone like Ayn Rand would say that fooling yourself about your values is doing it wrong.

I'm not clear on the crux of our disagreement, or if we're even disagreeing at all.

I think my crux is something like "this is a question to be dissolved, rather than answered".

To me, trying to figure out whether a goal is egoistic or altruistic is like trying to figure out whether a whale is a mammal or a fish – it depends heavily on my framing and why I'm asking the question, and points to two different useful maps that are both correct in different situations, rather than to something in the territory.

Another useful map might be something like "is this eudaimonic or hedonic egoism", which I think can get less squirrelly answers than the "egoistic or altruistic" frame. Another useful one might be the "Rational Compassion" frame of "Am I working to rationally optimize the intuitions that my feelings give me?"

I think most actions-in-the-world result from a (very) complicated matrix of motivations inside the actor's head.

I think it's very rare for an action to be entirely driven by altruistic motivations, or entirely by egoistic motivations.

I do think that many actions are mostly driven by altruistic motivations, and many others mostly driven by egoistic motivations. (And I've found it personally helpful to get more clarity on when I'm acting from a mostly altruistic basis, versus when I'm acting from a mostly egoistic basis.)
