MichaelDickens

4138 karma · Joined Sep 2014

Bio

I do independent research on EA topics. I write about whatever seems important, tractable, and interesting (to me). Lately, I mainly write about EA investing strategy, but my attention span is too short to pick just one topic.

I have a website: https://mdickens.me/. Most of the content on my website gets cross-posted to the EA Forum.

My favorite things that I've written: https://mdickens.me/favorite-posts/

I used to work as a software developer at Affirm.

Sequences (1)

Quantitative Models for Cause Selection

Comments (649)

I was just thinking about this a few days ago when I was flying for the holidays. Outside the plane was a sign that said something like

Warning: Jet fuel emits chemicals that may increase the risk of cancer.

And I was thinking about whether this was a justified double-hedge. The author of that sign has a subjective belief that exposure to those chemicals increases the probability that you get cancer, so you could say "may give you cancer" or "increases the risk of cancer". On the other hand, perhaps the double-hedge is reasonable in cases like this because there's some uncertainty about whether a dangerous thing will cause harm, and there's also uncertainty about whether a particular thing is dangerous, so I suppose it's reasonable to say "may increase the risk of cancer". It means "there is some probability that this increases the probability that you get cancer, but also some probability that it has no effect on cancer rates."
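The two layers of uncertainty can be made concrete with a toy calculation. All of the numbers below are invented for illustration; nothing about actual jet-fuel carcinogenicity is being claimed.

```python
# Toy model of the "double hedge": one layer of uncertainty about whether the
# chemical is carcinogenic at all, and a second layer about harm given that it is.
# All probabilities are made up for illustration.

p_carcinogenic = 0.3        # P(the chemical actually increases cancer risk)
p_cancer_baseline = 0.05    # P(cancer) with no effect from the chemical
p_cancer_if_harmful = 0.07  # P(cancer | exposed, and the chemical is carcinogenic)

# Unconditional cancer probability for an exposed person, marginalizing over
# whether the chemical is carcinogenic:
p_cancer_exposed = (p_carcinogenic * p_cancer_if_harmful
                    + (1 - p_carcinogenic) * p_cancer_baseline)

print(round(p_cancer_exposed, 4))  # 0.056, vs. a 0.05 baseline
```

So "may increase the risk" unpacks into a mixture: with probability 0.3 your risk is elevated, and with probability 0.7 it's unchanged, giving a small increase in the overall probability.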

I may be misinterpreting your argument, but it sounds like it boils down to:

  1. Given that we don't know much about qualia, we can't be confident that shrimp have qualia.
  2. [implicit] Therefore, shrimp have an extremely low probability of having qualia.
  3. Therefore, it's ok to eat shrimp.

The jump from step 1 to step 2 looks like a mistake to me.

You also seemed to suggest (although I'm not quite sure whether you were actually suggesting this) that if a being cannot in principle describe its qualia, then it does not have qualia. I don't see much reason to believe this to be true—it's one theory of how qualia might work, but it's not the only theory. And it would imply that, e.g., human stroke victims who are incapable of speech do not have qualia because they cannot, even in principle, talk about their qualia.

(I think there is a reasonable chance that I just don't understand your argument, in which case I'm sorry for misinterpreting you.)

I have only limited resources with which to do good. If I'm not doing good directly through a full-time job, I budget 20% of my income toward doing as much good as possible, and then I don't worry about it after that. If I spend time and money on advocating for a ceasefire, that's time and money that I can't spend on something else.

If you ask me my opinion about whether Israel should attack Gaza, I'd say they shouldn't. But I don't know enough about the issue to say what should be done about it, and I doubt advocacy on this issue would be very effective—"Israel and Palestine should stop fighting" has been more or less the consensus position among the general public for ~70 years, and it still hasn't happened, and I doubt anything I do will have an impact on the same scale as a donation to a GiveWell top charity.

To convince me to advocate for a ceasefire, you have to argue not just that it's good, but that it's the best thing I could be doing. All you've said is that it's good. Why is it the best thing that I could be doing? I'd like this post better if you said more about why it's the best thing. (I doubt I'd end up agreeing, but I appreciate when people make the argument.)

the results here don't depend on actual infinities (infinite universe, infinitely long lives, infinite value)

This seems pretty important to me. You can handwave away standard infinite ethics by positing that everything is finite with 100% certainty, but you can't handwave away the implications of a finite-everywhere distribution with infinite EV.

(Just an offhand thought, I wonder if there's a way to fix infinite-EV distributions by positing that utility is bounded, but that you don't know what the bound is? My subjective belief is something like, utility is bounded, I don't know the bound, and the expected value of the upper bound is infinity. If the upper bound is guaranteed finite but with an infinite EV, does that still cause problems?)
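The "finite everywhere but infinite EV" situation can be sketched with a St. Petersburg-style distribution: every outcome is a finite number, yet the expected value diverges. (This is just the standard construction, not anything specific to the post.)

```python
# St. Petersburg-style distribution: P(X = 2**n) = 2**-n for n = 1, 2, 3, ...
# Every outcome is finite, but each term of the EV sum contributes
# 2**-n * 2**n = 1, so the partial sums grow without bound.

def partial_ev(n_terms):
    """Expected value truncated to the first n_terms outcomes."""
    return sum(2**-n * 2**n for n in range(1, n_terms + 1))

for n in (10, 100, 1000):
    print(n, partial_ev(n))  # the truncated EV equals n: it diverges linearly
```

The same construction works for the "bounded utility with unknown bound" idea: if the bound B itself follows this distribution, B is finite with certainty but E[B] is infinite, which is the case the parenthetical asks about.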

I think this subject is very important and underrated, so I'm glad you wrote the post; you raised some points that I wasn't aware of, and I would like to see people write more posts like this one. The post didn't do as much for me as it could have because I found two of its three main arguments hard to understand:

  1. For your first argument ("Unbounded utility functions are irrational"), the post spends several paragraphs setting up a specific function that I could have easily constructed myself (for me it's pretty obvious that there exist finite utility functions with infinite EV), and then ends by saying utilitarianism "lead[s] to violations of generalizations of the Independence axiom and the Sure-Thing Principle", which I take to be the central argument, but I don't know what the Sure-Thing Principle is. I think I know what Independence is, but I don't know what you mean by "generalizations of Independence". So it feels like I still have no idea what your actual argument is.
  2. I had no difficulty following your money pump argument.
  3. For the third argument, the post claims that some axioms rule out expectational total utilitarianism, but the axioms aren't defined and I don't know what they mean, and I don't know how they rule out expectational total utilitarianism. (I tried to look at the cited paper, but it's not publicly available and it doesn't look like it's on Sci-Hub either.)

Some (small-sample) data on public opinion:

  1. Scott Alexander did a survey on moral weights: https://slatestarcodex.com/2019/03/26/cortical-neuron-number-matches-intuitive-perceptions-of-moral-value-across-animals/
  2. SlateStarCodex commenter Tibbar's Mechanical Turk survey on moral weights: https://slatestarcodex.com/2019/05/01/update-to-partial-retraction-of-animal-value-and-neuron-number/

These surveys suggest that the average person gives considerably less moral weight to non-human animals than the RP moral weight estimates, although still enough weight that animal welfare interventions look better than GiveWell top charities (and the two surveys differed considerably from each other, with the MTurk survey giving much higher weight to animals across the board).

FWIW I haven't looked much into this but my surface impression is that climate change groups are eager to paint CCC as biased/bad science/climate deniers because (1) they don't like CCC's conclusion that many causes in global health and development are more cost-effective than climate change and (2) they tend to exaggerate the expected harms of climate change, and CCC doesn't.

My impression is that most of Lomborg's critics don't understand his claims—they don't understand the difference between "climate change isn't the top priority" and "climate change isn't real".

From what I've read, Lomborg's beliefs on climate change are in line with John Halstead's Climate Change & Longtermism report.

From the Australia Climate Council link, the most egregious claim I see from Lomborg is "But the [2014 IPCC] report also showed that global warming has dramatically slowed or entirely stopped in the last decade and a half." (The link in the article is broken but I found it via archive.org.) It looks to me like Lomborg's claim is literally true according to Australia Climate Council (I actually thought it was false but apparently I was wrong and Lomborg was right), but possibly misleading. In the context of Lomborg's article, it doesn't look to me like he's trying to claim global warming isn't happening, but rather that its severity is exaggerated.

A small thought that occurred to me while reading this post:

In fields where most people do a lot of independent diligence, you should defer to other evaluators more. (Maybe EA grantmaking is an example of this.)

In fields where people mostly defer to each other, you're better off doing more diligence. (My impression is VC is like this—most VCs don't want to fund your startup unless you already got funding from someone else.)

And presumably there's some equilibrium where everyone defers N% of their decisionmaking and does (100-N)% independent diligence, and you should also defer N%.
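The intuition that heavy deference degrades the consensus can be illustrated with a toy simulation. This is my own invented model, not anything from the post: each evaluator either adopts the running consensus (with probability equal to the deference level) or does independent diligence and contributes a fresh noisy signal. The more people defer, the fewer independent signals get aggregated, so the final consensus is noisier.

```python
# Toy model of deference vs. independent diligence. All parameters are invented.
# Evaluators judge a project in sequence. With probability `deference` an
# evaluator simply adopts the running consensus; otherwise they do independent
# diligence, contributing a fresh noisy signal of the true quality.

import random
import statistics

def final_error(deference, n_evaluators=200, n_trials=500, seed=0):
    """Mean absolute error of the final consensus across simulated trials."""
    rng = random.Random(seed)
    errors = []
    for _ in range(n_trials):
        true_quality = 0.0
        signals = []
        for _ in range(n_evaluators):
            # The first evaluator has no consensus to defer to.
            if not signals or rng.random() > deference:
                signals.append(true_quality + rng.gauss(0, 1))
        consensus = statistics.mean(signals)
        errors.append(abs(consensus - true_quality))
    return statistics.mean(errors)

# Heavy deference aggregates fewer independent signals, so the consensus
# ends up noisier, and your own diligence adds more value:
print(final_error(0.9) > final_error(0.2))  # True with these parameters
```

In this model the high-deference field ends up with roughly 20 independent signals instead of 160, which is (one toy version of) why your own diligence is worth more there.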

How feasible do you think this is? From my outsider perspective, I see grantmakers and other types of application-reviewers taking 3-6 months across the board and it's pretty rare to see them be faster than that, which suggests it might not be realistic to consistently review grants in <3 months.

E.g., the only job application process I've ever gone through that took <3 months was an application to a two-person startup.

Thanks, I hadn't gotten to your comment yet when I wrote this. Having read it, your argument sounds solid; my biggest question (which I wrote in a reply to your other comment) is where the eta=0.38 estimate came from.
