VG

Vasco Grilo

5384 karma · Joined Jul 2020 · Working (0-5 years) · Lisbon, Portugal
sites.google.com/view/vascogrilo?usp=sharing

Bio

Participation: 4

How others can help me

You can give me feedback here (anonymous or not). You are welcome to answer any of the following:

  • Do you have any thoughts on the value (or lack thereof) of my posts?
  • Do you have any ideas for posts you think I would like to write?
  • Are there any opportunities you think would be a good fit for me which are either not listed on 80,000 Hours' job board, or are listed there, but you guess I might be underrating them?

How I can help others

Feel free to check my posts, and see if we can collaborate to contribute to a better world. I am open to part-time volunteering, and to part-time or full-time paid work. For paid work, I typically ask for 20 $/h, which is roughly equal to 2 times the global real GDP per capita.

Comments: 1235

Topic contributions: 25

To what extent does Open Philanthropy use Rethink Priorities' welfare ranges to compare interventions targeting different species? What else does OP use?

Thanks, David! Strongly upvoted.

To clarify, are those numbers relative to the people who got to know about EA in 2023 (via 80,000 Hours or any source)?

Thanks for following up, Matthew.

Plenty of people want wealth and power, which are "conducive to gaining control over [parts of] humanity".

I agree, but I think very few people want to acquire e.g. 10 T$ of resources without broad consent of others. In addition, if a single AI system expressed such a desire, humans would not want to scale up its capabilities.

I agree with Robin Hanson on this question. However, I think humans will likely become an increasingly small fraction of the world over time, as AIs become a larger part of it. Just as hunter-gatherers are threatened by industrial societies, so too may biological humans one day become threatened by future AIs. Such a situation may not be very morally bad (or deserving the title "existential risk"), because humans are not the only morally important beings in the world. Yet, it is still true that AI carries a great risk to humanity.

I agree biological humans will likely become an increasingly small fraction of the world, but it does not follow that AI carries a great risk to humans[1]. I would not say people born after 1960 carry a great risk to people born before 1960, even though the fraction of global resources controlled by the latter is becoming increasingly small. I would only consider AI a great risk to humans if, in the process of losing control over resources, they were expected to suffer significantly more than they do in their typical lives, which also involve suffering.

  1. ^

    You said "risk to humanity" instead of "risk to humans". I prefer the latter because "humanity" is sometimes used to include other beings.

Hi David,

There is a significant overlap between EA and AI safety, and it is often unclear whether people supposedly working on AI safety are increasing or decreasing AI risk. So I think it would be helpful if you could point to some (recent) data on how many people are being introduced to global health and development, and to animal welfare, via 80,000 Hours.

Thanks for the clarifications, Joel.

At the end of the day, I apologize if the publication was not in line with what EA Funds would have wanted - I agree it's probably a difference in norms. In a professional context, I'm generally comfortable with people relaying that I said X in private, unless there was an explicit request not to share.

It is clear from Linch's comment that he would have liked to see a draft of the report before it was published. Did you underestimate the interest of EA Funds in reviewing the report before its publication, or did you think their interest in reviewing the report was not too relevant? I hope the former.

Thanks for the comment, Richard.

So I don't think the headline numbers are very useful here (especially because we could make them far far higher by counting future lives).

I used to prefer focussing on tail risk, but I now think expected deaths are a better metric.

  • Interventions in the effective altruism community are usually assessed under 2 different frameworks, existential risk mitigation, and nearterm welfare improvement. It looks like 2 distinct frameworks are needed given the difficulty of comparing nearterm and longterm effects. However, I do not think this is quite the right comparison under a longtermist perspective, where most of the expected value of one’s actions results from influencing the longterm future, and the indirect longterm effects of saving lives outside catastrophes cannot be neglected.
  • In this case, I believe it is better to use a single framework for assessing interventions saving human lives in catastrophes and normal times. One way of doing this, which I consider in this post, is supposing the benefits of saving one life are a function of the population size.
  • Assuming the benefits of saving a life are proportional to the ratio between the initial and final population, and that the cost to save a life does not depend on this ratio, it looks like saving lives in normal times is better to improve the longterm future than doing so in catastrophes.

Thanks for pointing that out, Ted!

1/10,000 * 8 billion people = 800,000 current lives lost in expectation

The expected death toll would be much greater than 800 k assuming a typical tail distribution. The 800 k is the expected death toll linked solely to the maximum severity; lower levels of severity would add to it. Assuming deaths follow a Pareto distribution with a tail index of 1.60, which characterises war deaths, the minimum deaths would be 25.3 M (= 8*10^9*(10^-4)^(1/1.60)). Consequently, the expected death toll would be 67.5 M (= 1.60/(1.60 - 1)*25.3*10^6), i.e. 1.11 (= 67.5/61) times the number of deaths in 2023, or 111 (= 67.5/0.608) times the number of malaria deaths in 2022. I certainly agree undergoing this risk would be wild.
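
To make the arithmetic easy to check, here is a minimal Python sketch of the calculation above. It only encodes the stated assumptions (a Pareto distribution with tail index 1.60, and a 10^-4 chance of the maximum severity of 8 billion deaths), so it is an illustration rather than a standalone estimate.

```python
# Minimal sketch of the expected death toll calculation above, assuming deaths
# follow a Pareto distribution with tail index 1.60, and that the 1/10,000 risk
# is the probability of reaching the maximum severity of 8 billion deaths.
alpha = 1.60            # tail index characterising war deaths
max_deaths = 8e9        # roughly the current world population
p_max_severity = 1e-4   # probability of reaching the maximum severity

# Minimum of the Pareto distribution such that P(deaths >= max_deaths) = p_max_severity.
min_deaths = max_deaths * p_max_severity**(1 / alpha)

# Mean of a Pareto distribution with tail index alpha > 1: alpha/(alpha - 1) times the minimum.
expected_deaths = alpha / (alpha - 1) * min_deaths

print(f"Minimum deaths: {min_deaths / 1e6:.1f} M")                        # 25.3 M
print(f"Expected deaths: {expected_deaths / 1e6:.1f} M")                  # 67.5 M
print(f"Ratio to the 61 M deaths in 2023: {expected_deaths / 61e6:.2f}")  # 1.11
print(f"Ratio to the 0.608 M malaria deaths in 2022: {expected_deaths / 0.608e6:.0f}")  # 111
```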

Side note. I think the tail distribution will eventually decay faster than that of a Pareto distribution, but this makes my point stronger. In that case, the product of the deaths and their probability density would be lower for higher levels of severity, which means the expected deaths linked to such levels would represent a smaller fraction of the overall expected death toll.

Thanks for elaborating, Joseph!

A major consideration here is the use of AI to mitigate other x-risks. Some of Toby Ord's x-risk estimates

I think Toby's existential risk estimates are many orders of magnitude higher than warranted. I estimated an annual extinction risk of 5.93*10^-12 for nuclear wars, 2.20*10^-14 for asteroids and comets, 3.38*10^-14 for supervolcanoes, a prior of 6.36*10^-14 for wars, and a prior of 4.35*10^-15 for terrorist attacks. These values are already super low, but I believe existential risk would still be orders of magnitude lower. I think a repetition of the last mass extinction 66 M years ago, the Cretaceous–Paleogene extinction event, would only have a 0.0513 % (= e^(-10^9/(132*10^6))) chance of being existential. I got my estimate assuming the following (see the sketch after this list):

  • An exponential distribution with a mean of 132 M years (= 66*10^6*2) represents the time between i) human extinction in such catastrophe and ii) the evolution of an intelligent sentient species after such a catastrophe. I supposed this on the basis that:
    • An exponential distribution with a mean of 66 M years describes the time between:
      • 2 consecutive such catastrophes.
      • i) and ii) if there are no such catastrophes.
    • Given the above, following i), another such catastrophe and ii) are equally likely to come first. So the probability of an intelligent sentient species evolving after human extinction in such a catastrophe is 50 % (= 1/2).
    • Consequently, one should expect the time between i) and ii) to be 2 times (= 1/0.50) as long as it would be if there were no such catastrophes.
  • An intelligent sentient species has 1 billion years to evolve before the Earth becomes uninhabitable.
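
As a check, here is a minimal Python sketch of the estimate above. It only encodes the assumptions listed (a mean re-evolution time of 132 M years and a 1 billion year habitable window), so it stands or falls with them.

```python
# Minimal sketch of the estimate above. It assumes the time between human
# extinction in a Cretaceous-Paleogene-like catastrophe and the evolution of a
# new intelligent sentient species follows an exponential distribution with a
# mean of 132 M years, and that 1 billion years of habitability remain.
import math

mean_reevolution_time = 132e6  # years (= 2*66 M years, per the assumptions above)
habitable_window = 1e9         # years left before the Earth becomes uninhabitable

# For an exponential distribution, P(time > window) = exp(-window/mean).
# This is the chance no intelligent sentient species evolves in time, i.e. the
# chance the catastrophe is existential.
p_existential = math.exp(-habitable_window / mean_reevolution_time)

print(f"{p_existential:.4%}")  # 0.0513 %
```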

Do you think Open Philanthropy's animal welfare grants should have write-ups whose main text is longer than 1 paragraph? I think it would be great if you shared the cost-effectiveness analyses you seem to be doing. In your recent appearance on The 80,000 Hours Podcast (which I liked!), you said (emphasis mine):

Lewis Bollard: Our goal is to help as many animals as we can, as much as we can — and the challenge is working out how to do that.

[...]

If there’s not a track record, if this is maybe more speculative or a longer-term play, we try to vet the path to impact. So we try to look at what are the steps that would be required to get to the long-term goal. How realistic are those steps? Do they logically lead to one another? And what evidence is there about whether we’re on that path, about whether the group has achieved those initial steps? But then there is also some degree of needing to look at plans and just assess plausibly how strong do these plans look? It’s not always possible to pin down the exact numbers. We try as hard as we can to do that, though.

To be clear, the main text of the write-ups of Open Philanthropy’s large grants is 1 paragraph long across all areas, not just those related to animal welfare. However, I wonder whether a given area would have the freedom to share more information (in grant write-ups or reports) if the people leading it thought that would be valuable.
