
Let's say you are unsure if you are the only conscious being who exists. You end up assigning it a probability of 50%.

How would you decide what actions to take, as someone in the EA community? Should 50% of your "doing good" time/energy/resources be applied with the mindset that the world is filled with other beings who feel pain and pleasure, and the other 50% of your "doing good" time/energy/resources go towards benefitting yourself?

3 Answers

(Edited for clarity and rigour.)

Basically, the answer can depend on why you assign such a high probability to solipsism, and specifically on the extent to which your reasons are normative or empirical. If they're mostly empirical, that favours giving more to others; if they're mostly normative, that can favour something closer to an even split.

If you assign X% credence to (conjunctions of) normative views such that, conditional on each of those views, all of the following hold simultaneously:

  1. your leftover uncertainty about solipsism is purely empirical (not also normative),
  2. you assign at most, say, 95% probability to solipsism conditional on each of the normative views (and the rest to basically standard views about consciousness and existence),
  3. it seems better to help, say, 40 other potentially conscious beings, even if solipsism is up to 95% likely to be true, than to definitely help yourself (see the worked example below), and
  4. the normative views all agree on enough cases like 3 that are actually available to you,

then those X% of your normative views (by credence) would recommend helping others, and on most approaches to normative/moral uncertainty,[1] it would be better to use X% to 100% of your effort to help others.[2]
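
To see why a condition like 3 can hold even at high credence in solipsism, here is a minimal worked example. It assumes (these are my illustrative assumptions, not part of the question) a risk-neutral, expected value-maximizing total utilitarian view and roughly equal benefit per being helped:

\[
\mathbb{E}[\text{help 40 others}] = 0.95 \times 0 + 0.05 \times 40 = 2 > 1 = \mathbb{E}[\text{help yourself}].
\]

On those assumptions, helping the 40 others only stops beating helping yourself once your conditional credence in solipsism exceeds about 1 - 1/40 = 97.5%.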

Under something like the property rights approach to moral uncertainty, where resources are allocated in proportion to your credence in normative views, if the other (100-X)% of your normative views (by credence) were fully committed to (near-)egoism or solipsism, then I think you would spend (100-X)% of your resources on yourself.

For example, you could take X to be at least 99 and assign exactly 95% probability to solipsism conditional on those normative views, and so around 95% probability to solipsism overall, yet still be required to spend at least 99% of your effort on others.
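
To make that arithmetic explicit, here is a rough sketch, assuming X = 99 and that the remaining 1% of your credence goes to (near-)egoist or solipsist views on which solipsism is certain or nearly certain:

\[
P(\text{solipsism}) \approx 0.99 \times 0.95 + 0.01 \times 1 \approx 0.95,
\]
\[
\text{effort on others} \geq 99\%, \qquad \text{effort on yourself} \leq 1\%.
\]

So your overall credence in solipsism and the share of effort most approaches would have you spend on yourself can come apart dramatically.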

 

If you aren't familiar with normative/moral uncertainty (moral uncertainty is a type of normative uncertainty), then the above probably won't make sense, and I'd recommend taking a look at some of the following:

  1. Making decisions under moral uncertainty and Making decisions when both morally and empirically uncertain
  2. https://www.moraluncertainty.com/
  3. https://reducing-suffering.org/two-envelopes-problem-for-brain-size-and-moral-uncertainty/
  4. https://www.happierlivesinstitute.org/report/property-rights/ - for the approach of splitting resources in proportion to your credences in normative views, which isn't really covered in detail elsewhere
  5. https://forum.effectivealtruism.org/topics/normative-uncertainty-1 and https://forum.effectivealtruism.org/topics/moral-uncertainty, plus their references and tagged posts.
  1. I could see this not being the case under some approaches, though, maybe variance voting?

  2. I don't mean spending X% of your money on others. To be able to help others, it's important to take care of yourself, and some apparently selfish expenses are (also) investments in your ability to help others.

  3. For example, conditional on those views, whether or not other beings exist is not a matter of normative uncertainty, and you're a risk-neutral, expected value-maximizing total utilitarian. This rules out some solipsistic wagers for egoism (Oesterheld, 2017; Tarsney, 2021; Shiller, 2022), which result from the marginal value of helping a moral patient decreasing as the number of other people in the universe increases, to the point that it's tiny when there are billions of conscious beings, relative to when it's just you (a toy illustration follows these footnotes). I think you can discount tiny probabilities or be difference-making risk averse (in specific ways). Person-affecting views are also usually okay. Non-aggregative views will usually be okay, too. You can also include non-consequentialist considerations, as long as they don't significantly shift the balance towards focusing on yourself.

  4. And there are options where the upsides predictably outweigh the downsides/risks by a non-tiny amount, e.g. because you're generally right about the direction of people's interests.
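
To illustrate the kind of solipsistic wager footnote 3 is ruling out, suppose (purely as a toy assumption of mine, not something the cited papers commit to in this exact form) that the value of helping one being scales like 1/N when there are N conscious beings, and that N is around 10^10 if solipsism is false. Then even a small credence in solipsism dominates:

\[
\mathbb{E}[\text{help yourself}] \approx 0.05 \times 1 + 0.95 \times 10^{-10} \approx 0.05,
\]
\[
\mathbb{E}[\text{help 40 others}] \approx 0.05 \times 0 + 0.95 \times 40 \times 10^{-10} \approx 4 \times 10^{-9},
\]

so on such a view you would focus on yourself even with only 5% credence in solipsism. The conditions above, plus moves like discounting tiny probabilities or difference-making risk aversion, are what block results like this.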

You would donate as much of your income as possible to effective charities and/or dedicate your life to effective altruist causes (however you determine those), because the utility derived from your income/work is likely hundreds of times greater when applied rationally for the benefit of others than when spent on your own welfare, even after a 50% probability discount. You would need very high confidence in solipsism before it would be rational to live selfishly.
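
As a rough sketch of that arithmetic, treating "hundreds of times" as an illustrative 100x effectiveness ratio (my stand-in figure, not a claim from the answer):

\[
\mathbb{E}[\text{spend on others}] = 0.5 \times 100 + 0.5 \times 0 = 50 \gg 1 = \mathbb{E}[\text{spend on yourself}],
\]

and spending on yourself only wins in expectation once your credence in solipsism exceeds about 1 - 1/100 = 99%.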
