
Peter Thiel & Eric Weinstein discuss global catastrophic risks, including biosecurity and AI alignment, starting at around the 2:33:00 mark of Thiel's interview on Weinstein's new podcast.

tl;dl – Thiel thinks GCRs are a concern, but is also very worried about political violence / violence perpetrated by strong states. He thinks catastrophic political violence is much more likely than GCRs like AI misalignment.

He has some story about political violence becoming more likely when there's no economic growth, and so is worried about present stagnation. (Not 100% sure I'm representing that correctly.)


Also there's an interesting bit about transparency & how transparency often becomes weaponized when put into practice, soon after the GCR discussion.

Comments (11)



Economic growth likely isn't stagnating; it just looks that way due to catch-up growth effects:

https://rhsfinancial.com/2019/01/economic-growth-speeding-up-or-slowing/

I think how the 'middle class' (a relative measure) of the USA is doing is fairly uninteresting overall. I think the most meaningful progress at the grand scale (decades to centuries) is how fast the bottom is getting pulled up and how high the very top end (bleeding-edge researchers) can go. Shuffling in the middle results in much wailing and gnashing of teeth but doesn't move the needle much. Their main impact is just voting for dumb stuff that harms the top and bottom.

Great point.

I like the Russ Roberts videos as demonstrations of how complicated macro is / how malleable macroeconomic data is.

Thiel thinks GCRs are a concern, but is also very worried about political violence / violence perpetrated by strong states.

Robin Hanson's latest (a) is related.

Given the stakes, it's a bit surprising that "has risk of war secularly declined or are we just in a local minimum?" hasn't received more attention from EA.

Holden looked at this (a) a few years ago and concluded:


I conclude that [The Better Angels of Our Nature's] big-picture point stands overall, but my analysis complicates the picture, implying that declines in deaths from everyday violence have been significantly (though probably not fully) offset by higher risks of large-scale, extreme sources of violence such as world wars and oppressive regimes.

If I recall correctly, Pinker also spent some time noting that violence appears to be moving toward more of a power-law distribution since the early 20th century (fewer episodes, but each episode much more severe).
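A toy simulation (hypothetical numbers, not Pinker's data) shows what a power-law distribution of conflict severity implies: a tiny fraction of episodes accounts for a disproportionate share of the total harm.

```python
# Toy illustration of a heavy-tailed (power-law) severity distribution.
# The alpha value and sample size are made up for illustration only.
import random

random.seed(0)

def pareto_sample(alpha, n):
    """Draw n samples from a Pareto (power-law) distribution with x_min = 1."""
    return [random.paretovariate(alpha) for _ in range(n)]

# Sort episode severities from largest to smallest.
severities = sorted(pareto_sample(1.5, 100_000), reverse=True)

# Share of total severity contributed by the top 1% of episodes.
top_share = sum(severities[:1000]) / sum(severities)
print(f"Top 1% of episodes account for {top_share:.0%} of total severity")
```

With a tail exponent this low, the largest handful of draws dominates the sum, which is the "fewer episodes, far more severe" pattern in miniature.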

"War aversion" seems like a plausible x-risk reduction focus area in its own right (it sorta bridges AI risk, biosecurity, and nuclear security).

This chart really conveys the concern at a glance:

[chart]

(source) (a)

... what if the curve swings upward again?

Hacker News comments about the interview, including several by Thiel skeptics.

Also Nintil has some good notes (a). (Notes at bottom of post.)

I have been working on my billionaire VC / EA elevator pitch.

“Money me. Money now. Me a money, needing a lot now.”

What do you think?

The Fed should lower interest rates soon, which will help create a tighter labor market and increase wages. The natural rate of unemployment may be a lot lower than previously thought.

Personally, I think this is due to dollarization and the way the US exports its inflation to other countries. Our M0 money is often used for currency substitution in countries with poorly managed central banks. Removing that M0 from the banking system reduces the money supply created through fractional-reserve banking. The US can, and has to, keep printing money to satisfy world demand for dollars.

Nonetheless, higher wages will follow once lower interest rates reduce unemployment. Unemployment is below what was thought to be the natural rate, yet inflation is absent, which I believe is because of dollarization. A tighter labor market and higher wages will incentivize more research into technology that raises productivity and increase the payoffs from such innovations. Why build steam engines if slaves are cheap?

Are these predictions informing your investments? Seems like you could make a lot of money if you're able to predict upcoming macro trends.

Even if I nailed the macro trend prediction and the Fed lowered interest rates, I cannot predict presidential tweets. Realistically, starting from the bottom, you want to invest in low-cost index funds.

VCs have a lot of capital to invest, and only a few plays can make up for all their losses and then some. Most people cannot beat the market. I could spend all my time trying to squeeze out a few extra percent; however, I still would not know whether I am a good investor with smart money or a dumb one who got lucky.

Historically, I can compound my investments at around 10% per year; accounting for inflation puts the real return at about 8% per year. If I want more growth, I really need to earn a higher salary. A tighter job market, from lower interest rates and a lower natural rate of unemployment, means switching jobs creates double-digit raises. The trend in business is wage compression: people with more experience who stay with the same employer get only inflation adjustments, never any real wage growth.

https://www.forbes.com/sites/cameronkeng/2014/06/22/employees-that-stay-in-companies-longer-than-2-years-get-paid-50-less/#6a133b87e07f
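The nominal-versus-real compounding point above can be sketched quickly (a toy calculation using the comment's rough figures of ~10% nominal and ~8% real, which implies roughly 2% inflation; the principal and horizon are made up):

```python
# Toy sketch of nominal vs. real compound growth.
def compound(principal, rate, years):
    """Value of `principal` growing at `rate` per year for `years` years."""
    return principal * (1 + rate) ** years

nominal_value = compound(10_000, 0.10, 20)  # 20 years at ~10% nominal
real_value = compound(10_000, 0.08, 20)     # same period at ~8% real
print(f"Nominal: ${nominal_value:,.0f}  Real: ${real_value:,.0f}")
```

Even a 2-point gap between nominal and real rates compounds into a large difference over a couple of decades, which is why the comment treats salary growth as the bigger lever.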

People should invest in index funds, since they require no thought and do better than most managed investments. This also frees up time to change careers and grow your income, which is often easier to do, has a better return, and is under your direct control.

The excess income should go into index funds until someone can choose if they want to continue to work.

Index altruism might be a better strategy for most people too. If someone can identify a charity that does more good, then the efficient market hypothesis suggests the playing field should quickly level. Maybe smart money in investing becomes dumb money when it is given away?
