R

Ramiro

Civil service @ BCB
1632 karma · Joined Jun 2017 · Working (6-15 years) · Lisboa, Portugal
www.admonymous.co/ramiroap

Bio

Brazilian legal philosopher, postdoc in intergenerational justice, financial supervisor, GWWC Pledger. Bachelor of Laws, Master and Doctor of Philosophy from the Federal University of Rio Grande do Sul (UFRGS); has published articles and translations in Political Philosophy, Applied Ethics and Philosophy of Economics, with a recent focus on climate risks, Environmental and Social Responsibility, and intergenerational justice. Post-Doctoral Researcher at the Institute of Philosophy, Faculty of Social and Human Sciences, Universidade Nova de Lisboa, a member of the Ethics and Political Philosophy Laboratory (EPLAB) and of the project Present Democracy for Future Generations. Also a member of the Graduate Committee and a Special Studies Analyst in the supervision of non-banking institutions at the Central Bank of Brazil (BCB). Member of the Inclusive and Sustainable Solutions association (SIS) and of the Effective Altruism community in Brazil (AE Brasil). https://philpeople.org/profiles/ramiro-avila-peres

How I can help others

All my public forum posts should be considered to be under a CC-BY license.

Suggestions for new cause areas: let's pay people so that every podcast episode is shorter than 40 min, every PDF book is compressed to as light a file as possible, and every EA thinks twice before spending their day on EA meta and EA criticism.

Comments
508

Topic contributions
4

Answer by Ramiro · Feb 22, 2024

I think there's a relevant distinction to be made between field building (i.e., developing a new area of expertise to provide advice to decision-makers - think about the history of gerontology) and movement building (which makes me think of advocacy groups, freemasons, etc.). Of course, many things lie in between, such as the neoliberals and the Mont Pelerin Society.

Thinking about this one year later, I realize that Global Catastrophic events are much like Carnival in Brazil: unlivable climatic conditions, public services are shut down, traffic becomes impossible, crowds of crazy people roam randomly through the streets... but without Samba and beaches, of course (or, in the case of Curitiba, without zombies selling you beer)

How consistent are "global risk reports"?

We know that the track record of pundits is terrible, but many international consultancy firms have been publishing annual "global risks reports" like the WEF's, where they list the main global risks (e.g. top 10) for a certain period (e.g., 2y). Well, I was wondering if someone has measured their consistency; I mean, I suppose that if you publish in 2018 a list of the top 10 risks for 2019 & 2020, you should expect many of the same risks to show up in your 2019 report (i.e., if you are a reliable predictor, risks in report y should appear in report y+1). Hasn't anyone checked this yet?
If not, I'll file this under "a pet project I'll probably not have time to take on in the foreseeable future".
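The check proposed above can be sketched very simply: compute the overlap between consecutive years' top-risk lists, where a reliable predictor should show high year-over-year overlap. The report contents below are hypothetical placeholders, not actual WEF data.

```python
# Sketch of the proposed consistency check for "global risks reports":
# measure the overlap (Jaccard similarity) between the top-risk lists
# of consecutive annual reports. Risk names here are made up.

def jaccard(a, b):
    """Jaccard similarity between two collections of named risks."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# Hypothetical top-5 lists from two consecutive reports
report_2018 = ["cyberattacks", "extreme weather", "data fraud",
               "natural disasters", "climate action failure"]
report_2019 = ["extreme weather", "climate action failure",
               "natural disasters", "cyberattacks", "water crises"]

overlap = jaccard(report_2018, report_2019)
print(f"Year-over-year overlap: {overlap:.2f}")
```

With real data, one would run this over every pair of consecutive reports and see whether the overlap is as high as the reports' own multi-year forecasts imply.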

Let me briefly try to reply or clarify this:

I think there is a massive difference between one's best guess for the annual extinction risk[1] being 1 % or 10^-10 (in policy and elsewhere). I guess you were not being literal? In terms of risk of personal death, that would be the difference between a non-Sherpa first-timer climbing Mount Everest[2] (risky), and driving for 1 s[3] (not risky).

I did say that I'm not very concerned with the absolute values of precise point-estimates, and more interested in proportional changes and in relative probabilities; allow me to explain:

First, as a rule of thumb, ceteris paribus, a decrease in the average x-risk implies an increase in the expected duration of human survival - thus yielding a proportionally higher expected value for reducing x-risk. I think this can be inferred from Thorstad's toy model in Existential risk pessimism and the time of perils. So, if something reduces x-risk by 100x, I'm assuming it doesn't make much difference, from my POV, whether the prior x-risk is 1% or 10^-10 - because I'm assuming the EV will stay the same. This is not always true; I should have clarified this.
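The proportionality claim above can be illustrated with a minimal sketch, loosely inspired by Thorstad's toy model (this is not his actual model, just the simplest constant-hazard version): with a constant per-period extinction risk r, expected future survival is about 1/r periods, so cutting r by a factor k multiplies expected survival by about k regardless of r's starting level.

```python
# Minimal sketch: under a constant per-period extinction risk r,
# expected survival is ~1/r periods, so the proportional gain from a
# 100x risk reduction is the same whether r starts at 1% or 10^-10.

def expected_survival(r):
    """Expected number of future periods under constant per-period risk r."""
    return 1.0 / r

for r in (1e-2, 1e-10):
    gain = expected_survival(r / 100) / expected_survival(r)
    print(f"r = {r:g}: 100x risk reduction multiplies expected survival by ~{gain:g}")
```

In both cases the multiplier is 100, which is the sense in which the EV of the reduction is insensitive to the absolute level of the prior risk.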

Second, it's not that I don't see any difference between "1%" vs. "10^-10"; I just don't take sentences of the type “the probability of p is 10^-14” at face value. For me, the reference for such measures might be quite ambiguous without additional information - in the excerpt I quoted above, you do provide that when you say that this difference would correspond to the distance between the risk of death for Everest climbing vs. driving for 1s – which, btw, are extrapolated from frequencies (according to the footnotes you provided).

Now, it looks like you say that, given your best estimate, the probability of extinction due to war is really approximately like picking a certain number from a lottery with 10^14 possibilities, or the probability of tossing a fair coin 46-47 times and getting only heads; it’s just that, because it’s not resilient, there are many things that could make you significantly update your model (unlike the case of the lottery and the fair coin). I do have something like a philosophical problem with that, which is unimportant; but I think it might result in a practical problem, which might be important. So...
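The coin-toss equivalence above is just arithmetic: a probability of 10^-14 corresponds to getting all heads in log2(10^14) tosses of a fair coin, which lands between 46 and 47.

```python
import math

# Checking the equivalence used above: 10^-14 = (1/2)^n for n = log2(10^14).
n = math.log2(1e14)
print(f"10^-14 = (1/2)^{n:.1f}")  # n is between 46 and 47
```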

It reminds me of a paper by the epistemologist Duncan Pritchard, where he supposes that a bomb will explode if (i) in a lottery, a specific number out of 14 million is drawn, or if (ii) a conjunction of bizarre events (e.g., the spontaneous pronouncement of a certain Polish sentence during the Queen's next speech, the victory of an underdog at the Grand National...) occurs, with an assigned probability of 1 in 14 million. Pritchard concludes that, though both conditions are equiprobable, we consider the latter to be a lesser risk because it is "modally farther away", in a "more distant world". I think that's a terrible solution: people usually prefer to toss a fair coin rather than a coin they know is biased (but whose precise bias they ignore), even though both scenarios have the same "modal distance". Instead, the problem is, I think, that reducing our assessment to a point-estimate might fail to convey our uncertainty regarding the differences in the two information sets - and one of the goals of subjective probabilities is actually to provide a measurement of uncertainty (and the expectation of surprise). That's why, when I'm talking about very different things, I prefer statements like "both probability distributions have the same mean" to claims such as "both events have the same probability".

Finally, I admit that the financial crisis of 2008 might have made me a bit too skeptical of sophisticated models yielding precise estimates with astronomically tiny odds when applied to events that require no farfetched assumptions - particularly if minor correlations are neglected, and if underestimating the probability of a hazard might make people more lenient regarding it (and so unnecessarily make it more likely). I'm not sure how epistemically sound my behavior is; and I want to emphasize that this skepticism is not quite applicable to your analysis - as you make clear that your probabilities are not resilient, and point out the main caveats involved (particularly that, e.g., a lot depends on what type of distribution is a better fit for predicting war casualties, or on what role tech plays).

Something that surprised me a bit, but that is unlikely to affect your analysis:

I used Correlates of War’s data on annual war deaths of combatants due to fighting, disease, and starvation. The dataset goes from 1816 to 2014, and excludes wars which caused less than 1 k deaths of combatants in a year.

Actually, I'm not sure if this dataset is taking into account average estimates of excess deaths in Congo Wars (1996-2003, 1.5 million - 5.4 million) - and I'd like to check how it takes into account Latin American Wars of the 19th century.

Thanks for the post. I really appreciate this type of modeling exercise.

I've been thinking about this for a while, and there are some reflections it might be proper to share here. In summary, I'm afraid a lot of effort in x-risks might be misplaced. Let me share some tentative thoughts on this:

a) TBH, I'm not very concerned with the precise values of point-estimates for the probability of human extinction. Because of anthropic bias, the fact that this is necessarily a one-time event, the incredible values involved, doubts about how to extrapolate from past events, etc., there are so many degrees of freedom that I don't expect the uncertainties in question to be properly expressed. Thus, whether the overall "true" x-risk is 1% or 0.00000001% doesn't make a lot of difference to me - at least in terms of policy recommendations.

I'm rather more concerned with odds ratios. If one says that every x-risk estimate is off by n orders of magnitude, I have nothing to reply; instead, I'm interested in knowing if, e.g., one specific type of risk is off, or if it makes human extinction 100 times more likely than the "background rate of extinction" (I hate this expression, because it suggests we are talking about frequencies).

b) So I have been wondering if, instead of trying to compute a causal chain leading from now to extinction, it'd be more useful to do backward reasoning instead: suppose that humanity is extinct (or reduced to a locked-in state) by 3000 CE (or any other period you choose); how likely is it that factor x figures in a causal chain leading to that?

When I try to consider this, I think that a messy, unlucky narrative where many catastrophes concur is at least on a par with a "paperclip-max" scenario. Thus, even though WW3 would not wipe us out, it would make it much more likely that something else would destroy us afterwards. I'll someday try to properly model this.

Of course, I admit that this type of reasoning "makes" x-risks less comparable with near-termist interventions - but I'm afraid that's just the way it is.

c) I suspect that some confusion might be due to Parfit's thought experiment: because extinction would be much worse than an event that killed 99% of humanity, people often think about events that could wipe us out once and for all. But, in the real world, an event that killed 99% of humanity at once is far more likely than extinction at once, and the former would probably increase extinction risk by many orders of magnitude (especially if most survivors were confined to a state where they would be fragile against local catastrophes). The last human will possibly die of something quite ordinary.

d) There's an interesting philosophical discussion to be had about what "the correct estimate of the probability of human extinction" even means. It's certainly not an objective probability; so the grounds for saying that one such estimate is better than another might be something like its converging towards what an ideal prediction market or logical inductor would output. But then, I am quite puzzled about how such a mechanism could work for x-risks (how would one define prices? Well, one could perhaps value lives with the statistical value of life, like Martin & Pindyck).

Thanks for this report. It'll be quite useful.
I'd like to share some critical remarks I had previously sent RCG by e-mail:

  1. Definition of “RCG”

"RCGs are defined as those with the potential to inflict serious harm on human well-being at a global scale." (p. 2; cf. p. 6)

This definition might be too wide - it could include the global financial crisis of 2008, for instance. It is constrained, though, by the subsequent sentence: "While several risks meeting this definition have been identified, this work focuses on the risks associated with artificial intelligence, biological risks, and abrupt sunlight reduction scenarios."

However, much of the subsequent material concerns scientific diplomacy, preparedness for local disasters, and insurance that is not directly related to these types of events. But then it's not clear why other risks are not considered, such as the threat of conflict, extreme global warming, or other risks with cascading effects. They provide historical examples of catastrophes; an ERAL like the Tambora eruption (1815-16) caused "the year without a summer" but didn't kill more than 250k people, while the ENSO event of 1876-79 killed around 30-50 million people (see Our World in Data; https://doi.org/10.1175/JCLI-D-18-0159.1).

Also, I don’t get what you mean by “seguros por riesgos catastróficos” (catastrophic risk insurance, p. 8). If you mean insurance for local disasters, sure, people should buy it more often - there’s probably a market failure; on the other hand, there is also significant moral hazard here: people will often fail to avoid risky regions because they are insured. However, if you mean insurance against RCGs… I really don’t know how this could work, as no current insurance system could be expected to survive such a loss in global output - but it would be interesting to explore some possible arrangements in depth[1].

 

2. Things I missed most:

a) More emphasis on geographical and economic aspects of Latin America

According to Wikipedia, Latin America has 656 million people, about 20 million km², a combined nominal GDP of US$5.188 trillion, and a GDP (PPP) of US$10.285 trillion; but more than half of that is in Mexico and Brazil, which together amount to 350 million people, about 10.5 million km², and a combined GDP of approx. US$4 trillion. And yet they barely show up in the assessment; Brazilian policies are totally absent from appendix II.

Also, I think the report would have greatly benefitted from an assessment of the state capacity and fiscal space in Latin American countries (perhaps you considered it unnecessary, as it is taken into account by the INFORM index and by Dahl’s GCR Index?) 

b) Historical examples of relevant disasters:
Such as the Grande Seca (part of the ENSO event of 1876-79), Haiti's earthquake and cholera epidemics, Andean seismic and volcanic events, etc.

3. Outdated reference?

"It is estimated that disaster damage in Latin America and the Caribbean has amounted to some US$20 billion annually over a decade, with more than 45,000 deaths and 40 million people affected (Kiepi and Tayson, 2002)."
Could we find a more up-to-date source? This one is more than twenty years old, from when the region's GDP and population were considerably smaller. By way of comparison, the National Confederation of Municipalities in Brazil estimates that natural disasters have caused losses of R$400 billion (US$80 billion) in the last decade in Brazil alone (more conservative estimates put that value at around half of this). If that sounds like a lot, consider that Newman and Noy (2023) estimate that global warming alone causes US$143 billion in damage per year worldwide (of which 63% refers to the value of deaths), and that Latin America accounts for 8.4% of the world's population and 7.5% of its GDP - from which we could expect at least US$7 billion to US$13 billion of annual damage in the region just because of global warming.
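The back-of-the-envelope scaling behind those figures is just proportional: apply Latin America's shares of world population (8.4%) and world GDP (7.5%) to Newman and Noy's (2023) global estimate of US$143 billion/year.

```python
# Back-of-the-envelope: scaling the global warming-damage estimate
# (US$143 bn/year, Newman and Noy 2023) by Latin America's shares of
# world GDP (7.5%) and world population (8.4%).
global_damage_bn = 143
by_gdp = global_damage_bn * 0.075         # scaled by GDP share
by_population = global_damage_bn * 0.084  # scaled by population share
print(f"Scaled by GDP share: US${by_gdp:.1f} bn/year")
print(f"Scaled by population share: US${by_population:.1f} bn/year")
```

Both results (roughly US$11-12 billion/year) sit within the range quoted above.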


 


[1] What one usually wants from an insurance scheme is: a) pooling the risk between different agents; b) internalizing ex ante the costs of risks; and c) hedging or protection against uncertain events. There are some proposed mechanisms along these lines: (i) a World Climate Bank (Broome & Foley, 2016); (ii) the Glasgow Loss and Damage Mechanism; (iii) Cotton-Barratt's proposal of insurance for dual-use pathogen research; etc.

Opportunity for Austrians
Article by Seána Glennon: “In the coming week, thousands of households across Austria will receive an invitation to participate in a citizens’ assembly with a unique goal: to determine how to spend the €25 million fortune of a 31-year-old heiress, Marlene Engelhorn, who believes that the system that allowed her to inherit such a vast sum of money (tax free) is deeply flawed."
