Bio

I am open to work. I see myself as a generalist quantitative researcher.

How others can help me

You can give me feedback here (anonymous or not).

You are welcome to answer any of the following:

  • Do you have any thoughts on the value (or lack thereof) of my posts?
  • Do you have any ideas for posts you think I would like to write?
  • Are there any opportunities you think would be a good fit for me which are either not listed on 80,000 Hours' job board, or are listed there, but you guess I might be underrating them?

How I can help others

Feel free to check my posts, and see if we can collaborate to contribute to a better world. I am open to part-time volunteering and paid work. For paid work, I typically ask for 20 $/h, which is roughly 2 times the global real GDP per capita.

Comments

I think a reasonably independent reviewer who is not perfectly trustworthy would still be better than no reviewer at all.

Thanks, Guy. I am very much for transparency in general[1], but I do not think it matters that much whether I know what happens with 70 % or 100 % of AWF's funds. Even in a worst-case scenario where there was no information about 30 % of the money granted by AWF, and the unspecified grants had a cost-effectiveness of 0, AWF's cost-effectiveness would only decrease by 30 %. This would be significant, but still small in comparison with other considerations. In particular, I estimate the Shrimp Welfare Project (SWP) has been 173 times as cost-effective as cage-free campaigns. AWF has funded both SWP and cage-free campaigns, so they implicitly estimate the marginal cost-effectiveness of SWP and cage-free campaigns has not been that different[2]. I suspect our disagreement is mostly explained by my believing excruciating pain is more intense, and by a lack of scope-sensitivity in AWF's grantmaking decisions, which are based on grantmakers' ratings of grants (from -5 to 5) instead of explicit cost-effectiveness analyses.

  1. ^

    Not necessarily in this case. I would have to know the details.

  2. ^

    If they thought SWP was way more cost-effective at the margin, they would just fund SWP and not cage-free campaigns.
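The worst-case adjustment in the comment above is just a weighted average; a minimal sketch, using the 30 % unreported fraction from the comment:

```python
# Worst case from the comment above: 30 % of AWF's grants are unreported and
# assumed to have a cost-effectiveness of 0, while the reported 70 % keep
# their estimated cost-effectiveness (normalised to 1 here).
reported_fraction = 0.7
unreported_fraction = 1 - reported_fraction
worst_case_ce = reported_fraction * 1.0 + unreported_fraction * 0.0
print(worst_case_ce)  # 0.7, i.e. a 30 % decrease in cost-effectiveness
```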

Hi Tobias.

If we accept that AI is likely to reshape the world over the next 10–15 years, this realisation will have major implications for all cause areas.

I think donating to the Shrimp Welfare Project (SWP) would still have super high cost-effectiveness even if the world was certain to end in 10 years. I estimate it has been 64.3 k times as cost-effective as GiveWell's top charities (ignoring their effects on animals) for an acceleration of the adoption of electrical stunning of 10 years, the value used by Open Philanthropy (OP). If the acceleration followed a normal distribution, SWP's cost-effectiveness would only become 50 % as high if the world was certain to end in 10 years. I think this would still be orders of magnitude more cost-effective than the best interventions in global health and development and AI safety.
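One way to read the 50 % figure above (my interpretation, not necessarily the original model): if the benefits of the acceleration arrive at a time normally distributed around 10 years from now, then, by symmetry, half of them fall before a world-ending event at year 10. The sigma below is an arbitrary assumption.

```python
import random

# Hedged interpretation: benefits of accelerating electrical stunning arrive
# at a normally distributed time centred 10 years from now (sigma assumed).
# If the world ends in 10 years, only benefits arriving before then count.
random.seed(0)
sigma = 4.0
arrival_times = [random.gauss(10.0, sigma) for _ in range(100_000)]
fraction_realised = sum(t < 10.0 for t in arrival_times) / len(arrival_times)
print(fraction_realised)  # close to 0.5, by symmetry of the normal distribution
```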

There is also the question of whether the world will actually be radically reshaped. I am happy to bet 10 k$ against short timelines for that.

Hi Michael,

I discuss that from the following sentence on.

FWI says “most additional funding right now supports our R&D [research and development] work [not their farm program], which will enable us to become more cost-effective in the future”. [...]

As in you're 100% certain, and wouldn't put weight on other considerations even as a tiebreaker?

Yes.

(If, say, you became convinced all your options were incomparable from an ETHU perspective because of cluelessness, you would presumably still all-things-considered-prefer not to do something that injures yourself for no reason.)

Injuring myself can very easily be assessed under ETHU. It directly affects my mental states, and those of others via decreasing my productivity.

Thanks, Jaime!

In the case where σ_1 = σ_2 = σ, we then have that the expected value is e^((μ_1 + μ_2)/2 + σ^2/2), which is exactly the geometric mean of the expected values of the individual predictions.

I have checked this generalises. If all the lognormals have logarithms whose standard deviation is the same, the mean of the aggregated distribution is the geometric mean of the means of the input distributions.
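The claim above can be checked numerically for two lognormals: geometric-mean pooling of the densities, sqrt(p1 * p2) renormalised, yields a mean equal to the geometric mean of the two input means. The parameters below are illustrative assumptions.

```python
import math

# Two lognormals with equal log-space standard deviation (illustrative values).
mu1, mu2, sigma = 0.3, 1.1, 0.6

def lognormal_pdf(x, mu, s):
    """Density of a lognormal with log-space mean mu and std s."""
    return math.exp(-(math.log(x) - mu) ** 2 / (2 * s ** 2)) / (x * s * math.sqrt(2 * math.pi))

# Geometric pool: normalise sqrt(p1 * p2) numerically and take its mean.
step = 0.001
xs = [i * step for i in range(1, 40_000)]
weights = [math.sqrt(lognormal_pdf(x, mu1, sigma) * lognormal_pdf(x, mu2, sigma)) for x in xs]
normaliser = sum(weights) * step
pooled_mean = sum(x * w for x, w in zip(xs, weights)) * step / normaliser

# Geometric mean of the two input means, e^(mu + sigma^2/2).
mean1 = math.exp(mu1 + sigma ** 2 / 2)
mean2 = math.exp(mu2 + sigma ** 2 / 2)
geo_mean_of_means = math.sqrt(mean1 * mean2)

print(pooled_mean, geo_mean_of_means)  # the two agree closely
```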

Thanks for the comment, and welcome to the EA Forum, Katrina! Great point. I speculated the effects on target species make cost-effectiveness 50 % as large[1], but I have little idea about how accurate this is, and which pesticides achieve a better trade-off between effects on target and non-target species. I assume WAI is doing research which can inform this.

  1. ^

    This can be thought of as the mean of a uniform distribution ranging from -0.5 to 1.5.
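The footnote's 50 % adjustment is just the mean of that uniform distribution:

```python
# Net effect as a fraction of the direct effect, modelled in the footnote
# above as uniform between -0.5 and 1.5.
low, high = -0.5, 1.5
mean_adjustment = (low + high) / 2
print(mean_adjustment)  # 0.5, i.e. cost-effectiveness is halved
```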

Thanks, Michael! Nitpick, E((X)) in the 3rd line from the bottom should be E(u(X)).

Thanks, Michael.

If you allow arbitrarily large values and prospects with infinitely many different possible outcomes, then you can construct St Petersburg-like prospects, which have infinite expected value but only take finite value in every outcome. These violate Continuity (if it's meant to apply to all prospects, including ones with infinitely many possible outcomes). So from arbitrary large values, we violate Continuity.

Sorry for the lack of clarity. In principle, I am open to lotteries with arbitrarily large expected utility, but not infinite, and continuity does not rule out arbitrarily large expected utilities. I am open to lotteries with arbitrarily many outcomes (in principle), but not to lotteries with infinitely many outcomes (not even in principle).
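The St Petersburg-like prospect mentioned above can be sketched numerically: every outcome 2^n is finite, but each term contributes 1 to the expected value, so the partial sums of the EV grow without bound.

```python
# St Petersburg-style prospect: win 2**n with probability 2**-n, n = 1, 2, ...
# Each term contributes (2**n) * (2**-n) = 1 to the expected value, so the
# partial sums diverge even though every single outcome is finite.
def partial_expected_value(n_terms):
    return sum((2 ** n) * (2 ** -n) for n in range(1, n_terms + 1))

print(partial_expected_value(10), partial_expected_value(1000))  # 10.0 1000.0
```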

We've also discussed this a bit before, and I don't expect to change your mind now, but I think actually infinite effects are quite plausible (mostly through acausal influence in a possibly spatially infinite universe), and I think it's unwarranted to assign them probability 0.

I think empirical evidence can take us from a very large universe to an arbitrarily large universe (for arbitrarily strong evidence), but never to an infinite universe. An arbitrarily large universe would still be infinitely smaller than an infinite universe, so I would say the former provides no empirical evidence for the latter. So I am confused about why discussions about infinite ethics often mention there is empirical evidence pointing to the existence of infinity[1]. Assigning a probability of 0 to something for which there is no empirical evidence at all makes sense to me.

There are decision rules that are consistent with violations of Completeness. I'm guessing you want to treat incomparable prospects/lotteries as equivalent or that whenever you pick one prospect over another, the one you pick is at least as good as the latter, but this would force other constraints on how you compare prospects/lotteries that these decision rules for incomplete preferences don't.

I have not looked into the post you linked, but you guessed correctly. Which constraints would be forced as a result? I do not think preferential gaps make sense in principle.

You could read more about the relevant accounts of risk aversion and difference-making risk aversion, e.g. discussed here and here. Their motivations would explain why and how Independence is violated. To be clear, I'm not personally sold on them.

Thanks for the links. The Stanford Encyclopedia of Philosophy's section The Challenge from Risk Aversion argues for risk aversion based on observed risk aversion with respect to resources like cups of tea and money. I guess the same applies to Rethink Priorities' section. I am very much on board with risk aversion with respect to resources, but I still think it makes total sense to be risk neutral relative to total hedonistic welfare.

  1. ^

    From Bostrom (2011), "Recent cosmological evidence suggests that the world is probably infinite".
