This is a special post for quick takes by Evan R. Murphy. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Open Phil claims that campaigns to make more Americans go vegan and vegetarian haven't been very successful. But does this analysis account for immigration?

If people who already live in the US are shifting their diets, but new immigrants skew omnivore, a simple analysis could easily miss the former shift because immigration is fairly large in the US.

Source of Open Phil claim at https://www.openphilanthropy.org/research/how-can-we-reduce-demand-for-meat/ :

But these advocates haven’t achieved the widespread dietary changes they’ve sought — and that boosters sometimes claim they have. Despite the claims, 6% of Americans aren’t vegan and vegetarianism hasn’t risen fivefold lately: Gallup polls show a constant 5-6% of Americans have identified as vegetarians since 1999 (Gallup found 2% identified as vegans the only time it asked, in 2012). The one credible poll showing vegetarianism doubling in recent years still found only 5-7% of Americans identifying as vegetarian in 2017 — consistent with the stable Gallup numbers.

Although the cited Gallup report doesn't explicitly distinguish on immigrant status or ethnicity, it does say that "[a]lmost all segments of the U.S. population have similar percentages of vegetarians" while noting a larger difference in marital status.

Even if one assumes that almost no immigrants are vegetarian, the rate of immigration isn't so high as to really move a low percentage very much. As of 2018, there were ~45M people in the US who were born in another country. [https://www.pewresearch.org/short-reads/2020/08/20/key-findings-about-u-s-immigrants/]

As a brief example with round numbers: 15M vegetarians out of 300M people is 5%, and adding 30M extra meat eaters (15M out of 330M) only drops it to ~4.5%. Those 30M non-v*gan immigrants would mask a 1.5M increase in the number of non-immigrant vegetarians, since 16.5M/330M is still 5%. Without the 30M immigrants, the vegetarian share would have risen from 5% (15M/300M) to 5.5% (16.5M/300M). And since the assumption that no immigrants are vegetarian is unrealistic anyway, this shows that adding a substantial number of meat-eaters to the denominator doesn't move the percentage much at all.
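The dilution arithmetic above can be sketched in a few lines of Python. The numbers (15M vegetarians, 300M base population, 30M immigrants) are the illustrative figures from the example, not actual survey data:

```python
# Illustrative figures from the example above, not real survey data:
# how much could 30M non-vegetarian immigrants dilute the vegetarian share?

def veg_share(vegetarians_m, population_m):
    """Vegetarian share of the population in percent (inputs in millions)."""
    return 100 * vegetarians_m / population_m

base = veg_share(15, 300)               # 15M of 300M -> 5.0%
diluted = veg_share(15, 330)            # add 30M meat-eating immigrants -> ~4.55%
masked = veg_share(16.5, 330)           # 1.5M new non-immigrant vegetarians -> back to 5.0%
counterfactual = veg_share(16.5, 300)   # same growth, no immigration -> 5.5%

print(base, round(diluted, 2), masked, counterfactual)
```

So even under the extreme assumption that no immigrant is vegetarian, immigration on this scale can only hide a shift of about half a percentage point.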

People in bunkers, "sardines" and why biorisks may be overrated as a global priority

I'm going to make the case here that certain problem areas currently prioritized highly in the longtermist EA community are overweighted in their importance/scale. In particular I'll focus on biorisks, but this could also apply to other risks such as non-nuclear global war and perhaps other areas as well.

I'll focus on biorisks because that is currently highly prioritized by both Open Philanthropy and 80,000 Hours and probably other EA groups as well. If I'm right that biotechnology risks should be deprioritized, that would relatively increase the priority of other issues like AI, growing Effective Altruism, global priorities research, nanotechnology risks and others by a significant amount. So it could help allocate more resources to those areas which still pose existential threats to humanity.

I won't be taking issue with the longtermist worldview here. In fact, I'll assume the longtermist worldview is correct. Rather, I'm questioning whether biorisks really pose a significant existential/extinction risk to humanity. I don't doubt that they could lead to major global catastrophes which it would be really good to avert. I just think that it's extremely unlikely for them to lead to total human extinction or permanent civilization collapse.

This started when I was reading about disaster shelters. Nick Beckstead has a paper considering whether they could be a useful avenue for mitigating existential risks [1]. He concludes there could be a couple of special scenarios where they are that need further research, but by and large new refuges don't seem like a great investment because there are already so many existing shelters and other things which could serve to protect people from many global catastrophes. Specifically, the world already has a lot of government bunkers, private shelters, people working on submarines, and 100-200 uncontacted peoples which are likely to produce survivors from certain otherwise devastating events. [1]

A highly lethal engineered pandemic is among the biggest risks considered from biotechnology. It could potentially wipe out billions of people and lead to a collapse of civilization. But it would almost certainly spare at least a few hundred or a few thousand people: those with access to existing bunkers or other disaster shelters, people working on submarines, and members of the dozens of tribes and other peoples living in remote isolation. Repopulating the Earth and rebuilding civilization would not be fast or easy, but these survivors could probably do it over many generations.

So are humans then immune to all existential risks, thanks to preppers, "sardines" [2] and uncontacted peoples? No. There are certain globally catastrophic events which would likely spare no one. A superintelligent malevolent AI could probably hunt everyone down. The feared nanotechnological "gray goo" scenario could wreck all matter on the planet. A nuclear war extreme enough to contaminate all land on the planet with radioactivity - even though it would likely have immediate survivors - might create such a mess that no humans would last long-term. There are probably others as well.

I've gone out on a bit of a limb here in claiming that biorisks aren't an existential risk. I'm not a biotech expert, so there could be some biorisks I'm not aware of. For example, could there be some kind of engineered virus that contaminates all food sources on the planet? I don't know and would be interested to hear from folks about that. It would be similar to long-lasting global nuclear fallout in that it would have immediate survivors but no long-term survivors. However, most of the biorisks I have seen people focus on seem to be lethal, virulent engineered pandemics that target humans. As I've said, it seems unlikely such a pandemic would kill all the humans in bunkers/shelters, on submarines and in remote parts of the planet.

Even if there is some kind of lesser-known biotech risk which could be existential, my bottom-line claim is that there seems to be an important line between real existential risks that would annihilate all humans and near-existential risks that would spare some people in disaster shelters and shelter-like situations. I haven't seen this line discussed much and I think it could help with better prioritizing global problem areas for the EA community.

--

[1]: "How much could refuges help us recover from a global catastrophe?" https://web.archive.org/web/20181231185118/https://www.fhi.ox.ac.uk/wp-content/uploads/1-s2.0-S0016328714001888-main.pdf

[2]: I just learned that sailors use this term for submariners which is pretty fun. https://www.operationmilitarykids.org/what-is-a-navy-squid-11-slang-nicknames-for-navy-sailors/

It's hard to have a discussion about this in the open because many EAs (and presumably some non-EAs) with biosecurity expertise strongly believe that this is too dangerous a topic to talk about in detail in the open, because of information hazards and related issues. 

Speaking for myself, I briefly looked into the theory of information hazards and thought through some of the empirical consequences. My personal view is that while the costs of having public dialogue about various x-risk topics (including biorisk) are likely underestimated, the benefits are likely underestimated as well, and on balance more things should be shared rather than less. I think Will Bradshaw and (I'm less confident) Anders Sandberg share* this view.

Unfortunately, it's hard to have a frank open conversation about biorisk before having a frank meta-conversation about the value of open conversations about biorisk, so here we are.

(EDIT: Note however that I am likely personally not aware of many of the empirical considerations that pro-secrecy biorisk people are aware of, which makes this conversation somewhat skewed)

*both of whom, unlike me, did nontrivial work in advancing the theory of infohazards, in addition to having biosecurity expertise.

Thanks, Linch. I didn't realize I might be treading near information hazards. It's good to know and an interesting point about the pros and cons of having such conversations openly.  
