
Peter Thiel & Eric Weinstein discuss global catastrophic risks, including biosecurity and AI alignment, starting at around the 2:33:00 mark of Thiel's interview on Weinstein's new podcast.

tl;dl – Thiel thinks GCRs are a concern, but is also very worried about political violence / violence perpetrated by strong states. He thinks catastrophic political violence is much more likely than GCRs like AI misalignment.

He has some story about political violence becoming more likely when there's no economic growth, and so is worried about present stagnation. (Not 100% sure I'm representing that correctly.)


Also there's an interesting bit about transparency & how transparency often becomes weaponized when put into practice, soon after the GCR discussion.

Comments



Economic growth likely isn't stagnating; it just looks that way because of catch-up growth effects:

https://rhsfinancial.com/2019/01/economic-growth-speeding-up-or-slowing/

I think how the 'middle class' (a relative measure) of the USA is doing is fairly uninteresting overall. At the grand scale (decades to centuries), most meaningful progress is about how fast the bottom is getting pulled up and how high the very top end (bleeding-edge researchers) can go. Shuffling in the middle results in much wailing and gnashing of teeth but doesn't move the needle much. The middle's main impact is voting for dumb stuff that harms the top and the bottom.

Great point.

I like the Russ Roberts videos as demonstrations of how complicated macro is / how malleable macroeconomic data is.

Thiel thinks GCRs are a concern, but is also very worried about political violence / violence perpetrated by strong states.

Robin Hanson's latest (a) is related.

Given the stakes, it's a bit surprising that "has the risk of war secularly declined, or are we just in a local minimum?" hasn't received more attention from EA.

Holden looked at this (a) a few years ago and concluded:


I conclude that [The Better Angels of Our Nature's] big-picture point stands overall, but my analysis complicates the picture, implying that declines in deaths from everyday violence have been significantly (though probably not fully) offset by higher risks of large-scale, extreme sources of violence such as world wars and oppressive regimes.

If I recall correctly, Pinker also spent some time noting that violence appears to be shifting toward more of a power-law distribution since the early 20th century: fewer episodes, but each episode is far more severe.
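A toy simulation can make that power-law point concrete (a sketch only: no real conflict data, and the Pareto shape parameter is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative only: sample 10,000 hypothetical "conflict sizes" from a
# heavy-tailed Pareto distribution (shape alpha near 1 means a very fat tail).
alpha = 1.1
sizes = rng.pareto(alpha, size=10_000) + 1  # shift so the minimum size is 1

sizes.sort()
top_share = sizes[-100:].sum() / sizes.sum()  # share from the largest 1% of events
print(f"Largest 1% of events account for {top_share:.0%} of total severity")
# With a tail this heavy, a handful of enormous events dominate the total:
# the "fewer episodes, far more severe" pattern described above.
```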

"War aversion" seems like a plausible x-risk reduction focus area in its own right (it sorta bridges AI risk, biosecurity, and nuclear security).

This chart really conveys the concern at a glance:

[chart]

(source) (a)

... what if the curve swings upward again?

Hacker News comments about the interview, including several by Thiel skeptics.

Also Nintil has some good notes (a). (Notes at bottom of post.)

I have been working on my billionaire VC / EA elevator pitch.

“Money me. Money now. Me a money, needing a lot now.”

What do you think?

The Fed should lower interest rates soon, and that will help create a tighter labor market, which will increase wages. The natural rate of unemployment may be a lot lower than previously thought.

Personally, I think this is due to dollarization and the way the US exports its inflation to other countries. Our M0 money is often used for currency substitution in countries with poorly managed central banks. Removing that M0 from the domestic banking system reduces the broad money supply created through fractional-reserve banking. The US can, and has to, keep printing money to satisfy world demand for dollars.
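As a rough sketch of that mechanism, here is a textbook-style money-multiplier calculation (made-up numbers; the reserve ratio and the share of M0 held abroad are purely illustrative):

```python
# Textbook money-multiplier sketch with made-up numbers, just to illustrate the
# mechanism: base money held abroad as physical cash is never deposited at home,
# so it is not expanded by fractional-reserve lending.
def broad_money(base_money: float, reserve_ratio: float, share_held_abroad: float) -> float:
    cash_abroad = base_money * share_held_abroad
    domestic_base = base_money - cash_abroad
    # Deposited base money expands by the simple multiplier 1 / reserve_ratio.
    return domestic_base / reserve_ratio + cash_abroad

base = 1_000     # arbitrary units of M0
reserve = 0.10   # assumed 10% reserve ratio

print(broad_money(base, reserve, share_held_abroad=0.0))  # 10000.0: all base money is multiplied
print(broad_money(base, reserve, share_held_abroad=0.5))  # 5500.0: half the base sits abroad as cash
```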

Nonetheless, higher wages will follow once lower interest rates bring unemployment down. Given estimates of the natural rate of unemployment, inflation should be higher by now, but it isn't, which I believe is due to dollarization. A tighter labor market and higher wages will incentivize more research into productivity-enhancing technology and increase the payoff from innovations that raise productivity. Why build steam engines if slaves are cheap?

Are these predictions informing your investments? Seems like you could make a lot of money if you're able to predict upcoming macro trends.

Even if I nailed the macro trend prediction and the Fed lowered interest rates, I still cannot predict presidential tweets. Realistically, if you're starting from the bottom, you want to invest in low-cost index funds.

VCs have a lot of capital to invest, and just a few winning plays can make up for all their losses and then some. Most people cannot beat the market. I could spend all my time trying to squeeze out a few extra percent, but I still would not know whether I am a good investor with smart money or a dumb one who got lucky.

Historically, I can compound my investments at around 10% per year. Accounting for inflation puts the real return at about 8% per year. If I want more growth, I really need to earn a higher salary. A tighter job market, driven by lower interest rates and a lower natural rate of unemployment, means that switching jobs can produce double-digit raises. The trend in business is wage compression: people with more experience who stay with the same employer get only inflation adjustments, never any real wage growth.

https://www.forbes.com/sites/cameronkeng/2014/06/22/employees-that-stay-in-companies-longer-than-2-years-get-paid-50-less/#6a133b87e07f
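Spelling out the compounding figures mentioned above (10% nominal, 8% real; the 20-year horizon is just an example):

```python
# Quick check of the compounding arithmetic above (horizon chosen arbitrarily).
nominal_rate, real_rate = 0.10, 0.08
years = 20

nominal_multiple = (1 + nominal_rate) ** years  # ~6.7x in nominal dollars
real_multiple = (1 + real_rate) ** years        # ~4.7x in today's purchasing power

print(f"Over {years} years: {nominal_multiple:.1f}x nominal vs {real_multiple:.1f}x real")
```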

People should invest in index funds since they require no thought and do better than most managed investments. This also frees up time to change careers and grow your income, which is often easier to do, has a better return, and is under your direct control.

Excess income should go into index funds until you can choose whether you want to continue working.

Index altruism might be a better strategy for most people too. If someone can identify a charity that does more good, then the efficient market hypothesis suggests the playing field should quickly be leveled. Maybe smart money in investing becomes dumb money when it is given away?
