Vasco Grilo

3644 karma · Joined Jul 2020 · Working (0-5 years) · Lisbon, Portugal


  • Organizer of EA Lisbon
  • Completed the Precipice Reading Group
  • Completed the In-Depth EA Virtual Program
  • Attended more than three meetings with a local EA group


Topic Contributions

Hi Jason,

If I remember correctly, GHDF predated GW's creation of the Top Charities Fund and the All Grants Fund.

The All Grants Fund was launched in August 2022, and GW's Maximum Impact Fund was renamed to Top Charities Fund one month later. GHDF made its 1st grant in 2017. The Maximum Impact Fund had been making grants since 2014.

In addition, I think GiveWell UK is of fairly recent origin, so EA Funds would have offered UK tax advantages that were not then (at least readily) available through GiveWell.

Good point! GiveWell UK was launched in August 2022.

So I think at least some of the original advantages of GHDF may have become much less significant with subsequent developments at GiveWell?

I think so.

Nice points on GHDF, Jason! I will publish a related post in the next few days, following up on this comment I made recently.

I'm not planning on continuing a long thread here, I mostly wanted to help address the questions about my previous comment, so I'll be moving on after this.

Fair, as this is outside the scope of the original post. I noticed you did not comment on RP's neuron counts post. I think it would be valuable if you commented there about the concerns you expressed here. Or did you already express them elsewhere, in another post of RP's moral weight project sequence?

First, this effect (computational scale) is smaller for chickens but progressively enormous for e.g. shrimp or lobster or flies.

I agree that is the case if one combines the 2 wildly different estimates for the welfare range (e.g. one based on the number of neurons, and another corresponding to RP's median welfare ranges) with a weighted mean. However, as I commented above, using the geometric mean would cancel the effect.

Suppose we compared the mass of the human population of Earth with the mass of an individual human. We could compare them on 12 metrics, like per capita mass, per capita square root mass, per capita foot mass... and aggregate mass. If we use the equal-weighted geometric mean, we will conclude the individual has a mass within an order of magnitude of the total Earth population, instead of billions of times less.

Is this a good analogy? Maybe not:

  • Broadly speaking, giving the same weight to multiple estimates only makes sense if there is wide uncertainty with respect to which one is more reliable. In the example above, it would make sense to give negligible weight to all metrics except the aggregate mass. In contrast, there is arguably wide uncertainty with respect to which models best measure welfare ranges, so distributing weights evenly is more appropriate.
  • One particular model on which we can put lots of weight is that mass is straightforwardly additive (at least at the macro scale). So we can say the mass of all humans equals the number of humans times the mass per human, and then just estimate this for a typical human. In contrast, it is arguably unclear whether one can obtain the welfare range of an animal by e.g. just adding up the welfare ranges of its individual neurons.


I was also curious to understand why superforecasters' nuclear extinction risk was so high. Sources of agreement, disagreement and uncertainty, and arguments for low and high estimates are discussed on pp. 298 to 303. I checked these a few months ago, and my recollection is that the forecasters have the right qualitative considerations in mind, but I do believe they are arriving at an overly high extinction risk. I recently commented about this.

Note domain experts guessed an even higher nuclear extinction probability by 2100 of 0.55 %, 7.43 (= 0.0055/0.00074) times that of the superforecasters. This is especially surprising considering:

  • The pool of experts drew more heavily from the EA community than the pool of superforecasters. "The sample drew heavily from the Effective Altruism (EA) community: about 42% of experts and 9% of superforecasters reported that they had attended an EA meetup".
  • I would have expected people in the EA community to guess a lower nuclear extinction risk. 0.55 % is 5.5 times Toby Ord's guess given in The Precipice for nuclear existential risk from 2021 to 2120 of 0.1 %, and extinction risk should be lower than existential risk.
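The two ratios above can be checked in a couple of lines (figures as stated in the comment):

```python
# Probabilities from the comment (nuclear extinction/existential risk guesses).
experts = 0.0055            # domain experts' extinction probability by 2100 (0.55 %)
superforecasters = 0.00074  # superforecasters' extinction probability by 2100
toby_ord = 0.001            # Toby Ord's nuclear existential risk guess, 2021 to 2120

print(round(experts / superforecasters, 2))  # 7.43
print(round(experts / toby_ord, 2))          # 5.5
```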

Thanks for elaborating, Carl!

Luke says in the post you linked that the numbers in the graphic are not usable as expected moral weights, since ratios of expectations are not the same as expectations of ratios.

Let me try to restate your point, and suggest why one may disagree. If one puts weight w on the welfare range (WR) of humans relative to that of chickens being N, and 1 - w on it being n, the expected welfare range of:

  • Humans relative to that of chickens is E("WR of humans"/"WR of chickens") = w*N + (1 - w)*n.
  • Chickens relative to that of humans is E("WR of chickens"/"WR of humans") = w/N + (1 - w)/n.

You are arguing that N can plausibly be much larger than n. For the sake of illustration, we can say N = 389 (ratio between the 86 billion neurons of a human and the 221 M of a chicken), n = 3.01 (reciprocal of RP's median welfare range of chickens relative to humans of 0.332), and w = 1/12 (since the neuron count model was one of the 12 RP considered, and all of them were weighted equally). Having the welfare range of:

  • Chickens as the reference, E("WR of humans"/"WR of chickens") = 35.2. So 1/E("WR of humans"/"WR of chickens") = 0.0284.
  • Humans as the reference (as RP did), E("WR of chickens"/"WR of humans") = 0.305.
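The two bullet points above can be reproduced directly (N, n and w as defined in the comment):

```python
N = 389     # WR of humans relative to chickens under the neuron-count model
n = 3.01    # reciprocal of RP's median WR of chickens relative to humans (0.332)
w = 1 / 12  # equal weight on the neuron-count model (1 of RP's 12 models)

# Chickens as the reference: expected WR of humans relative to chickens.
e_humans_over_chickens = w * N + (1 - w) * n  # weighted mean of N and n
# Humans as the reference (RP's approach): expected WR of chickens relative to humans.
e_chickens_over_humans = w / N + (1 - w) / n  # weighted mean of 1/N and 1/n

print(round(e_humans_over_chickens, 1))      # 35.2
print(round(1 / e_humans_over_chickens, 4))  # 0.0284
print(round(e_chickens_over_humans, 3))      # 0.305
```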

So, as you said, determining welfare ranges relative to humans results in animals being weighted more heavily. However, I think the difference is much smaller than that suggested above. Since N and n are quite different, I guess we should combine them using a weighted geometric mean, not the weighted mean as I did above. If so, both approaches output exactly the same result:

  • E("WR of humans"/"WR of chickens") = (N^w*n^(1 - w))^0.5 = 2.12. So 1/E("WR of humans"/"WR of chickens") = (N^w*n^(1 - w))^-0.5 = 0.471.
  • E("WR of chickens"/"WR of humans") = ((1/N)^w*(1/n)^(1 - w))^0.5 = 0.471.

The reciprocal of the expected value is not the expected value of the reciprocal, so using the mean leads to different results. However, I think we should be using the geometric mean, and the reciprocal of the geometric mean is the geometric mean of the reciprocal. So the 2 approaches (using humans or chickens as the reference) will output the same ratios regardless of N, n and w as long as we aggregate N and n with the geometric mean. If N and n are similar, it no longer makes sense to use the geometric mean, but then both approaches will output similar results anyway, so RP's approach looks fine to me as a 1st pass. Does this make any sense?
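A minimal sketch of the reciprocal symmetry claimed above, using the comment's own formulas (including the 0.5 exponent) and values:

```python
N = 389     # WR of humans relative to chickens under the neuron-count model
n = 3.01    # reciprocal of RP's median WR of chickens relative to humans
w = 1 / 12  # weight on the neuron-count model

# Aggregation as written in the comment: weighted geometric mean, then ^0.5.
g_humans_over_chickens = (N**w * n**(1 - w))**0.5            # chickens as reference
g_chickens_over_humans = ((1/N)**w * (1/n)**(1 - w))**0.5    # humans as reference

# Reciprocal symmetry: the two reference choices agree (up to floating point).
assert abs(1 / g_humans_over_chickens - g_chickens_over_humans) < 1e-12

print(round(g_humans_over_chickens, 2))  # 2.12
print(round(g_chickens_over_humans, 3))  # 0.471
```

The symmetry holds for any N, n and w, because the reciprocal of a (weighted) geometric mean is the geometric mean of the reciprocals; it does not hold for the weighted arithmetic mean.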

Of course, it would still be good to do further research (which OP could fund) to adjudicate how much weight should be given to each model RP considered.

I had argued for many years that insects met a lot of the functional standards one could use to identify the presence of well-being, and that even after taking two-envelopes issues and nervous system scale into account expected welfare at stake for small wild animals looked much larger than for FAW.


I happen to be a fan of animal welfare work relative to GHW's other grants at the margin because animal welfare work is so highly neglected

Thanks for sharing your views!


Would it make sense to have Docs or pages where you explain how you got all your default parameters (which could then be linked in the CCM)?


According to the CCM, the cost-effectiveness of direct cash transfers is 2 DALY/k$. However, you calculated a significantly lower cost-effectiveness of 1.20 DALY/k$ (or 836 $/DALY) based on GiveWell's estimates. The upper bound of the 90 % CI you use in the CCM actually matches the point estimate of 836 $/DALY you inferred from GiveWell's estimates. Do you have reasons to believe GiveWell's estimate is pessimistic?
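A quick unit-conversion check of the figures above (numbers from the comment):

```python
# GiveWell-based estimate: 836 $/DALY, converted to DALYs averted per $1000.
givewell_ce = 1000 / 836   # ~1.20 DALY/k$
ccm_ce = 2                 # DALY/k$, the CCM's figure for direct cash transfers

print(round(givewell_ce, 2))           # 1.2
print(round(ccm_ce / givewell_ce, 2))  # 1.67, the CCM's optimism factor
```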

We adopt a (mostly arbitrary) uncertainty distribution around this central estimate [inferred from GiveWell's estimates].

I agree the distribution will have some arbitrariness, but I think the mean cost-effectiveness should still match the one corresponding to GiveWell's estimates, as these are supposed to be interpreted as best guesses?

Because direct cash transfers are about 2 times as cost-effective in the CCM as for GiveWell, Open Phil's bar equals GiveWell's bar in the CCM, whereas I thought Open Phil's bar was supposed to become 2 times as high as GiveWell's after this update.

This gives it an average cost per expected DALY averted of $611.00 with a median cost per expected DALY averted of $649.1690% of simulations fall between $1.13 and $2.45.

Nitpick, ". " is missing before "90%".

Hi Ariel,

Not strictly related to this post, but just in case you need ideas for further posts ;), here are some very quick thoughts on 80,000 Hours.

I wonder whether 80,000 Hours should present "factory-farming" and "easily preventable [human] diseases" as having the same level of pressingness.

80,000 Hours' presenting the above as having similar pressingness is probably in tension with a list they did in 2017, in which factory-farming came out 2 points above (i.e. 10 times as pressing as) developing world health.

It is also interesting that 3 of 80,000 Hours' current top 5 most pressing problems came out as similarly or less pressing than factory-farming. More broadly, it would be nice if 80,000 Hours were more transparent about how their rankings of problems and careers are produced, as I guess these have a significant impact on the career choices of many people. I will post a question about this on the EA Forum in a few weeks.

I am glad what I said here no longer applies to your organisation. Thanks for assessing your cost-effectiveness!
