Jackson Wagner

Scriptwriter for RationalAnimations @ https://youtube.com/@RationalAnimations
3612 karma · Joined · Working (6-15 years) · Fort Collins, CO, USA

Bio

Scriptwriter for RationalAnimations!  Interested in lots of EA topics, but especially ideas for new institutions like prediction markets, charter cities, georgism, etc.  Also a big fan of EA / rationalist fiction!

Comments (351)

To answer with a sequence of increasingly "systemic" ideas (naturally the following will be tinged by my own political beliefs about what's tractable or desirable):

There are lots of object-level lobbying groups that have strong EA endorsement. This includes organizations advocating for better pandemic preparedness (Guarding Against Pandemics), better climate policy (like CATF and others recommended by Giving Green), or beneficial policies in third-world countries like salt iodization or lead paint elimination.

Some EAs are also sympathetic to the "progress studies" movement and to the modern neoliberal movement connected to the Progressive Policy Institute and the Niskanen Center (which are both tax-deductible nonprofit think-tanks). This often includes enthusiasm for denser ("yimby") housing construction, reforming how science funding and academia work in order to speed up scientific progress (such as advocated by New Science), increasing high-skill immigration, and having good monetary policy. All of those cause areas appear on Open Philanthropy's list of "U.S. Policy Focus Areas".

Naturally, there are many ways to advocate for the above causes -- some are more object-level (like fighting to get an individual city to improve its zoning policy), while others are more systemic (like exploring the feasibility of "Georgism", a totally different way of valuing and taxing land which might do a lot to promote efficient land use and encourage fairer, faster economic development).

One big point of hesitancy is that, while some EAs have a general affinity for these cause areas, in many areas I've never heard any particular standout charities being recommended as super-effective in the EA sense... for example, some EAs might feel that we should do monetary policy via "nominal GDP targeting" rather than inflation-rate targeting, but I've never heard anyone recommend that I donate to some specific NGDP-targeting advocacy organization.

I wish there were more places like Center for Election Science, living purely on the meta level and trying to experiment with different ways of organizing people and designing democratic institutions to produce better outcomes. Personally, I'm excited about Charter Cities Institute and the potential for new cities to experiment with new policies and institutions, ideally putting competitive pressure on existing countries to better serve their citizens. As far as I know, there aren't any big organizations devoted to advocating for adopting prediction markets in more places, or adopting quadratic public goods funding, but I think those are some of the most promising areas for really big systemic change.

The Christians in this story who lived relatively normal lives ended up looking wiser than the ones who went all-in on the imminent-return-of-Christ idea. But of course, if Christianity had been true and Christ had in fact returned, maybe the crazy-seeming, all-in Christians would have had huge amounts of impact.

Here is my attempt at thinking up other historical examples of transformative change that went the other way:

  • Muhammad's early followers must have been a bit uncertain whether this guy was really the Final Prophet. Do you quit your day job in Mecca so that you can flee to Medina with a bunch of your fellow cultists? In this case, it probably would've been a good idea: seven years later you'd be helping lead an army of 100,000 holy warriors to capture the city of Mecca. And over the next thirty years, you'll help convert/conquer all the civilizations of the Middle East and North Africa.

  • Less dramatic versions of the above story could probably be told about joining many fast-growing charismatic social movements (like joining a political movement or revolution). Or, more relevantly to AI, about joining a fast-growing bay-area startup whose technology might change the world (like early Microsoft, Google, Facebook, etc).

  • You're a physics professor in 1940s America. One day, a team of G-men knock on your door and ask you to join a top-secret project to design an impossible superweapon capable of ending the Nazi regime and stopping the war. Do you quit your day job and move to New Mexico?...

  • You're a "cypherpunk" hanging out on online forums in the mid-2000s. Despite the demoralizing collapse of the dot-com boom and the failure of many of the most promising projects, some of your forum buddies are still excited about the possibilities of creating an "anonymous, distributed electronic cash system", such as the proposal called B-money. Do you quit your day job to work on weird libertarian math problems?...

People who bet everything on transformative change will always look silly in retrospect if the change never comes. But the thing about transformative change is that it does sometimes occur.

(Also, fortunately our world today is quite wealthy -- AI safety researchers are pretty smart folks and will probably be able to earn a living for themselves to pay for retirement, even if all their predictions come up empty.)

Sorry about that!  I think I just intended to link to the same place I did for my earlier use of the phrase "AI-enabled coups", namely this Forethought report by Tom Davidson and pals, subtitled "How a Small Group Could Use AI to Seize Power": https://www.forethought.org/research/ai-enabled-coups-how-a-small-group-could-use-ai-to-seize-power

But also relevant to the subject is this Astral Codex Ten post about who should control an LLM's "spec": https://www.astralcodexten.com/p/deliberative-alignment-and-the-spec

The "AI 2027" scenario is pretty aggressive on timelines, but also features a lot of detailed reasoning about potential power-struggles over control of transformative AI which feels relevant to thinking about coup scenarios.  (Or classic AI takeover scenarios, for that matter. Or broader, coup-adjacent / non-coup-authoritarianism scenarios of the sort Thiel seems to be worried about, where instead of getting taken over unexpectedly by China, Trump, or etc, today's dominant western liberal institutions themselves slowly become more rigid and controlling.)

For some of the shenanigans that real-world AI companies are pulling today, see the 80,000 Hours podcast on OpenAI's clever ploys to do away with its non-profit structure, or Zvi Mowshowitz on xAI's embarrassingly blunt, totally not-thought-through attempts to manipulate Grok's behavior on various political issues (or a similar, earlier incident at Google).

it could be the case that he is either lying or cognitively biased to believe in the ideas he also thinks are good investments

Yeah.  Thiel is often, like, so many layers deep into metaphor and irony in his analysis, that it's hard to believe he keeps everything straight inside his head.  Some of his investments have a pretty plausible story about how they're value-aligned, but notably his most famous and most lucrative investment (he was the first outside investor in Facebook, and credits Girardian ideas for helping him see the potential value) seems ethically disastrous!  And not just from the commonly-held liberal-ish perspective that social media is bad for people's mental health and/or seems partly responsible for today's unruly populist politics.  From a Girardian perspective it seems even worse!!  Facebook/instagram/twitter/etc are literally the embodiment of mimetic desire, hugely accelerating the pace and intensity of the scapegoat process (cancel culture, wokeness, etc -- the very things Thiel despises!) and hastening a catastrophic Girardian war of all against all as people become too similar in their desires and patterns of thinking (the kind of groupthink that is such anathema to him!).

Palantir also seems like a dicey, high-stakes situation where its ultimate impact could be strongly positive or strongly negative, very hard to figure out which.

If you take seriously either of these donations, they directly contradict your claim that he is worried about stable totalitarianism and certainly personal liberty

I would say it seems like there are three potential benefits that Thiel might see in his support for Vance / Masters:

  1. Grim neoreactionary visions of steering the future of the country by doing unlawful, potentially coup-like stuff at some point in the future. (I think this is a terrible idea.)
  2. A kind of vague, vibes-based sense that we need to support conservatives in order to shake up the stagnant liberal establishment and "change the conversation" and shift the culture.  (I think this is a dumb idea that has backfired so far.)
  3. The normal concept of trying to support people who agree with you on various policies, in the hopes they pass those policies -- maybe now, or maybe only after 2028 on the off chance that Vance becomes president later.  (I don't know much about the details here, but at least this plan isn't totally insane?)

Neoreaction: In this comment I try to map out the convoluted logic by which Thiel might be reconciling his libertarian beliefs like "I am worried about totalitarianism" with neoreactionary ideas like "maybe I should help overthrow the American government".  (Spoilers: I really don't think his logic adds up; any kind of attempt at a neoreactionary power-grab strikes me as extremely bad in expectation.)  I truly do think this is at least some part of Thiel's motivation here.  But I don't think that his support for Vance (or Blake Masters) was entirely or mostly motivated by neoreaction.  There are obviously a lot of reasons to try and get one of your buddies to become a senator!  If EA had any shot at getting one of "our guys" to be the next Dem vice president, I'm sure we'd be trying hard to do that!

"Shifting the conversation": In general, I think Thiel's support for Trump in 2016 was a dumb idea that backfired and made the world worse (and not just by Dem lights -- Thiel himself now seems to regret his involvement).  He sometimes seems so angry at the stagnation created by the dominant liberal international order, that he assumes if we just shake things up enough, people will wake up and the national conversation will suddenly shift away from culture-war distractions to more important issues.  But IMO this hasn't happened at all. (Sure, Dems are maybe pivoting to "abundance" away from wokeness, which is awesome.  But meanwhile, the entire Republican party has forgotten about "fiscal responsibility", etc, and fallen into a protectionist / culture-war vortex.  And most of all, the way Trump's antics constantly saturate the news media seems like the exact opposite of a healthy national pivot towards sanity.)  Nevertheless, maybe Thiel hasn't learned his lesson here, so a misguided desire to generally oppose Dems even at the cost of supporting Trump probably forms some continuing part of his motivation.

Just trying to actually get desired policies (potentially after 2028): I'd be able to say more about this if I knew more about Vance and Masters' politics.  But I'm not actually an obsessive follower of JD Vance Thought (in part because he just seems to lie all the time) the way I am with Thiel.  Anyway, idk, some thoughts on this, which seems like it probably makes up the bulk of the motivation:

  • Vance does seem to just lie all the time, misdirecting people and distracting from one issue by bringing up another in a totally scope-insensitive way.  (Albeit this lying takes a kind of highbrow, intellectual, right-wing-substacker form, rather than Trump's stream-of-consciousness narcissistic confabulation style.)  He'll say stuff like "nothing in this budget matters at all, don't worry about the deficit or the benefit cuts or etc -- everything will be swamped by the importance of [some tiny amount of increased border enforcement funding]".
    • The guy literally wrote a whole book about all the ways Trump is dumb and bad, and now has to constantly live a lie to flatter Trump's whims, and is apparently pulling that trick off successfully!  This makes me feel like "hmm, this guy is the sort of smart machiavellian type dude who might have totally different actual politics than what he externally espouses".  So, who knows, maybe he is secretly 100% on board with all of Thiel's transhumanist libertarian stuff, in which case Thiel's support would be easily explained!
    • Sometimes (like deficit vs border funding, or his anti-Trump book vs his current stance) it's obvious that he's knowingly lying.  But other times he seems genuinely confused and scope-insensitive.  Like, maybe one week he's all on about how falling fertility rates are a huge crisis and #1 priority.  Then another week he's crashing the Paris AI summit and explaining how America is ditching safetyism and going full-steam ahead since AI is the #1 priority.  (Oh yeah, but also he claims to have read AI 2027 and to be worried about many of the risks...)  Then it's back to cheerleading for deportations and border control, since somehow stopping immigrants is the #1 priority.  (He at least knows it's Trump's #1 best polling issue...)  Sometimes all this jumping-around seems to happen within a single interview conversation, in a way that makes me think "okay, maybe this guy is not so coherent".
  • All the lying makes it hard to tell where Vance really stands on various issues.  He seems like he was pushing to be less involved in fighting against the Houthis and Iran?  (Although he lost those internal debates.)  Does he actually care about immigration, or is that fake?  What does he really think about tariffs and various budget battles?
  • Potential Thiel-flavored wins coming out of the white house:
    • Zvi says that "America's AI Action Plan is Pretty Good"; whose doing is that?  Not Trump.  Probably not Elon.  If this was in part due to Vance, then this is probably the biggest Vance-related payoff Thiel has gotten so far.
      • The long-threatened semiconductor tariff might be much weaker than expected; probably this was the work of Nvidia lobbyists or something, but again, maybe Vance had a finger on the scale here?
      • Congress has also gotten really pro-nuclear-power really quickly, although again this is probably at the behest of AI-industry lobbyists, not Vance.
      • But it might especially help to have a cheerleader in the executive branch when you are trying to overhaul the government with AI technology, eg via big new Palantir contracts or providing chatGPT to federal workers.
    • Thiel seems to be a fan of cryptocurrency; the republicans have done a lot of pro-crypto stuff, although maybe they would have done all this anyways without Vance.
    • Hard to tell where Thiel stands on geopolitical issues, but I would guess he's in the camp of people who are like "ditch Russia/Ukraine and ignore Iran/Israel, but be aggressive on containing China".  Vance seems to be a dove on Iran and the Houthis, and his perennial Europe-bashing is presumably seen as helpful as regards Russia, trying to convince Europe that they can't always rely on the USA to back them up, and therefore need to handle Russia themselves.
    • Tragically, RFK is in charge of all the health agencies and is doing a bunch of terrible, stupid stuff.  But Marty Makary at the FDA and Jim O'Neill at the HHS are Thiel allies and have been scurrying around amidst the RFK wreckage, doing all kinds of cool stuff -- trying to expedite pharma manufacturing build-outs, building AI tools to accelerate FDA approval processes, launching a big new ARPA-H research program for developing neural interfaces, et cetera.  This doesn't have anything to do with Vance, but definitely represents return-on-investment for Thiel's broader influence strategy.  (One of the few arguable bright spots for the tech right, alongside AI policy, since Elon's DOGE effort has been such a disaster, NASA lost an actually-very-promising Elon-aligned administrator, Trump generally has been a mess, etc.)
  • Bracketing the ill effects of generally continuing to support Trump (which are maybe kind of a sunk cost for Thiel at this point), the above wins seem easily worth the $30m or so spent on Vance and Masters' various campaigns.
    • And then of course there's always the chance he becomes president in 2028, or otherwise influences the future of a hopefully-post-Trump republican party, and therefore gets a freer hand to implement whatever his actual politics are.
    • I'm not sure how the current wins (some of them, like crypto deregulation or abandoning Ukraine or crashing the Paris AI summit, are only wins from Thiel's perspective, not mine) weigh up against the bad things Vance has done (in the sense of bad-above-replacement of the other vice-presidential contenders like Marco Rubio) -- compared to more normal Republicans, Vance seems potentially more willing to flatter Trump's idiocy on stuff like tariffs, or trying to annex Greenland, or riling people up with populist anti-immigrant rhetoric.

I am a biased center left dem though

I am a centrist dem too, if you can believe it!  I'm a big fan of Slow Boring, and in recent months I have also really enjoyed watching Richard Hanania slowly convert from a zealous alt-right anti-woke crusader into a zealous neoliberal anti-Trump dem and shrimp-welfare-enjoyer.  But I like to hear a lot of very different perspectives about life (I think it's very unclear what's going on in the world, and getting lots of different perspectives helps for piecing together the big picture and properly understanding / prioritizing things), which causes me to be really interested in a handful of "thoughtful conservatives".  There are only a few of them, especially when they keep eventually converting to neoliberalism / georgism / EA / etc, so each one gets lots of attention...

I think Thiel really does have a variety of strongly held views.  Whether these are "ethical" views, ie views that are ultimately motivated by moral considerations... idk, kinda depends on what you are willing to certify as "ethical".

I think you could build a decent simplified model of Thiel's motivations (although this would be crediting him with WAY more coherence and single-mindedness than he or anyone else really has IMO) by imagining he is totally selfishly focused on obtaining transhumanist benefits (immortality, etc) for himself, but realizes that even if he becomes one of the richest people on the planet, you obviously can't just go out and buy immortality, or even pay for a successful immortality research program -- it's too expensive, there are too many regulatory roadblocks to progress, etc.  You need to create a whole society that is pro-freedom and pro-property-rights (so it's a pleasant, secure place for you to live) and radically pro-progress.  Realistically it's not possible to just create an offshoot society, like a charter city in the ocean or a new country on Mars (the other countries will mess with you and shut you down).  So this means that just to get a personal benefit to yourself, you actually have to influence the entire trajectory of civilization, avoiding various apocalyptic outcomes along the way (nuclear war, stable totalitarianism), etc.  Is this an "ethical" view?

  • Obviously, creating a utopian society and defeating death would create huge positive externalities for all of humanity, not just Mr Thiel.
    • (Although longtermists would object that this course of action is net-negative from an impartial utilitarian perspective -- he's short-changing unborn future generations of humanity, running a higher level of extinction risk in order to sprint to grab the transhumanist benefits within his own lifetime.)
  • But if the positive externalities are just a side-benefit, and the main motivation is the personal benefit, then it is a selfish rather than altruistic view.  (Can a selfish desire for personal improvement and transcendence still be "ethical", if you're not making other people worse off?)
    • Would Thiel press a button to destroy the whole world if it meant he personally got to live forever?  I would guess he wouldn't, which would go to show that this simplified monomaniacal model of his motivations is wrong, and that there's at least a substantial amount of altruistic motivation in there.

I also think that lots of big, world-spanning goals (including altruistic things like "minimize existential risk to civilization", or "minimize animal suffering", or "make humanity an interplanetary species") often problematically route through the convergent instrumental goal of "optimize for money and power", while also being sincerely-held views.  And none more so than a personal quest for immortality!  But he doesn't strike me as optimizing for power-over-others as a sadistic goal for its own sake (as it may have been for, say, Stalin) -- he seems to have such a strong belief in the importance of individual human freedom and agency that it would be surprising if he's secretly dreaming of enslaving everyone and making them do his bidding.  (Rather, he consistently sees himself as trying to help the world throw off the shackles of a stultifying, controlling, anti-progress regime.)

But getting away from this big-picture philosophy, Thiel also seems to have lots of views which, although they technically fit nicely into the overall "perfect rational selfishness" model above, seem to at least in part be fueled by an ethical sense of anger at the injustice of the world.  For example, sometime in the past few years Thiel started becoming a huge Georgist.  (Disclaimer: I myself am a huge Georgist, and I think it always reflects well on people, both morally and in terms of the quality of their world-models / ability to discern truth.)

  • Here is a video lecture where Thiel spends half an hour at the National Conservatism Conference, desperately begging Republicans to stop just being obsessed with culture-war chum and instead learn a little bit about WHY California is so messed up (ie, the housing market), and therefore REALIZE that they need to pass a ton of "Yimby" laws right away in all the red states, or else red-state housing markets will soon become just as dysfunctional as California's, and hurt middle class and poor people there just like they do in California.  There is some mean-spiritedness and a lot of Republican in-group signaling throughout the video (like when he is mocking the 2020 dem presidential primary candidates), but fundamentally, giving a speech trying to save the American middle class by Yimby-pilling the Republicans seems like a very good thing, potentially motivated by sincere moral belief that ordinary people shouldn't be squeezed by artificial scarcity creating insane rents.
  • Here's a short, two-minute video where Thiel is basically just spreading the Good News about Henry George, wherein he says that housing markets in anglosphere countries are a NIMBY catastrophe which has been "a massive hit to the lower-middle class and to young people".

Thiel's georgism ties into some broader ideas about a broken "inter-generational compact", whereby the boomer generation has unjustly stolen from younger generations via housing scarcity pushing up rents, via ever-growing Medicare / Social Security spending and growing government debt, via shutting down technological progress in favor of safetyism, via a "corrupt" higher-education system that charges ever-higher tuition without providing good enough value for money, and via various other means.

The cynical interpretation of this is that this is just a piece of his overall project to "make the world safe for capitalism", which in turn is part of his overall selfish motivation:  He realizes that young people are turning socialist because the capitalist system seems broken to them.  It seems broken to them, not because ALL of capitalism is actually corrupt, but specifically because they are getting unjustly scammed by NIMBYism.  So he figures that to save capitalism from being overthrown by angry millennials voting for Bernie, we need to make America YIMBY so that the system finally works for young people and they have a stake in the system.  (This is a broadly correct analysis, IMO.)  Somewhere I remember Thiel explicitly explaining this (ie, saying "we need to repair the intergenerational compact so all these young people stop turning socialist"), but unfortunately I don't remember where he said this so I don't have a link.

So you could say, "Aha!  It's really just selfishness all the way down, the guy is basically voldemort."  But, idk... altruistically trying to save young people from the scourge of high housing prices seems like going pretty far out of your way if your motivations are entirely selfish.  It seems much more straightforwardly motivated by caring about justice and about individual freedom, and wanting to create a utopian world of maximally meritocratic, dynamic capitalism rather than a world of stagnant rent-seeking that crushes individual human agency. 

Thiel seems to believe that the status-quo "international community" of liberal western nations (as embodied by the likes of Obama, Angela Merkel, etc) is currently doomed to slowly slide into some kind of stagnant, inescapable, communistic, one-world-government dystopia.

Personally, I very strongly disagree with Thiel that this is inevitable or even likely (although I see where he's coming from insofar as IMO this is at least a possibility worth worrying about).  Consequently, I think the implied neoreactionary strategy (not sure if this is really Thiel's strategy since obviously he wouldn't just admit it) -- something like "have somebody like JD Vance or Elon Musk coup the government, then roll the dice and hope that you end up getting a semi-benevolent libertarian dictatorship that eventually matures into a competent normal government, like Singapore or Chile, instead of ending up getting a catastrophic outcome like Nazi Germany or North Korea or a devastating civil war" -- is an incredibly stupid strategy that is likely to go extremely wrong.

I also agree with you that Christianity is obviously false and thus reflects poorly on people who sincerely believe it.  (Although I think Ben's post exaggerates the degree to which Thiel is taking Christian ideas literally, since he certainly doesn't seem to follow official doctrine on lots of stuff.)  Thiel's weird reasoning style that he brings not just to Christianity but to everything (very nonlinear, heavy on metaphors and analogies, not interested in technical details) is certainly not an exemplar of rationalist virtue.  (I think it's more like... heavily optimized for trying to come up with a different perspective than everyone else, which MIGHT be right, or might at least have something to it.  Especially on the very biggest questions where, he presumably believes, bias is the strongest and cutting through groupthink is the most difficult.  Versus normal rationalist-style thinking is optimized for just, you know, being actually fully correct the highest % of the time, which involves much more careful technical reasoning, lots of hive-mind-style "deferring" to the analysis of other smart people, etc)

Agreed that it is weird that a guy who seems to care so much about influencing world events (politics, technology, etc) has given away such a small percentage of his fortune as philanthropic + political donations.

But I would note that since Thiel's interests are less altruistic and more tech-focused, a bigger part of his influencing-the-world portfolio can happen via investing in the kinds of companies and technologies he wants to create, or simply paying them for services.  Some prominent examples of this strategy are founding Paypal (which was originally going to try and be a kind of libertarian proto-crypto alternate currency, before they realized that wasn't possible), founding Palantir (allegedly to help defend western values against both terrorism and civil-rights infringement) and funding Anduril (presumably to help defend western values against a rising China).  A funnier example is his misadventures trying to consume the blood of the youth in a dark gamble for escape from death, via blood transfusions from a company called Ambrosia.  Thiel probably never needed to "donate" to any of these companies.

(But even then, yeah, it does seem a little too miserly...)

He certainly seems very familiar with the arguments involved, the idea of superintelligence, etc, even if he disagrees in some ways (hard to tell exactly which ways).  He seems really averse to talking about AI in the familiar rationalist style (scaling laws, AI timelines, p-dooms, etc), and kinda thinks about everything in his characteristic style: vague, vibes- and political-alignment-based, lots of jumping around and creative metaphors, not interested in detailed chains of technical arguments.

  • Here is a Wired article tracing Peter Thiel's early funding of the Singularity Institute, way back in 2005.  And here's a talk from two years ago where he is talking about his early involvement with the Singularity Institute, then mocking the bay-area rationalist community for devolving from a proper transhumanist movement into a "burning man, hippie luddite" movement (not accurate IMO!), culminating in the hyper-pessimism of Yudkowsky's "Death with Dignity" essay.
  • When he is bashing EA's focus on existential risk (like in that "anti-anti-anti-anti classical liberalism" presentation), he doesn't do what most normal people do and say that existential risk is a big fat nothingburger.  Instead, he acknowledges that existential risk is at least somewhat real (even if people have exaggerated fears about it -- eg, he relates somewhere that people should have been "afraid of the blast" from nuclear weapons, but instead became "afraid of the radiation", which leads them to ban nuclear power), but that the real existential risk is counterbalanced by the urgent need to avoid stagnation and one-world-government (and presumably, albeit usually unstated, the need to race ahead to achieve transhumanist benefits like immortality).
  • His whole recent schtick about "Why can we talk about the existential-risk / AI apocalypse, but not the stable-totalitarian / stagnation Antichrist?", which of course places him squarely in the "techno-optimist" / accelerationist part of the tech right, is actually quite the pivot from a few years ago, when one of his most common catchphrases went along the lines of "If technologies can have political alignments, since everyone admits that cryptocurrency is libertarian, then why isn't it okay to say that AI is communist?"  (Here is one example.)  Back then he seemed mainly focused on an (understandable) worry about the potential for AI to be a hugely power-centralizing technology, performing censorship and tracking individuals' behavior and so forth (for example, how China uses facial and gait recognition against Hong Kong protestors, Xinjiang residents, etc).
    • (Thiel's positions on AI, on government spying, on libertarianism, etc, coexist in a complex and uneasy way with the fact that of course he is a co-founder of Palantir, the premier AI-enabled-government-spying corporation, which he claims to have founded in order to "reduce terrorism while preserving civil liberties".)
  • Thiel describing a 2024 conversation with Elon Musk and Demis Hassabis, where Elon is saying "I'm working on going to mars, it's the most important project in the world" and Demis argues "actually my project is the most important in the world; my superintelligence will change everything, and it will follow you to mars".  (This is in the context of Thiel's long pivot from libertarianism to a darker strain of conservatism / neoreaction, having realized that "there's nowhere else to go" to escape mainstream culture/civilization -- that you can't escape to outer space, cyberspace, or the oceans as he once hoped, but can only stay and fight to seize control of the one future.  Hence all these musings about Carl Schmitt and etc that make me wary he is going to be egging on JD Vance to try and auto-coup the government.)
    • Followed by (correctly IMO) mocking Elon for being worried about the budget deficit, which doesn't make any sense if you really are fully confident that superintelligent AI is right around the corner as Elon claims.


A couple more quotes on the subject of superintelligence from the recent Ross Douthat conversation (transcript, video):

  • Thiel claims to be one of those people who (very wrongly IMO) thinks that AI might indeed achieve 3000 IQ, but that it'll turn out that having 3000 IQ doesn't actually help you do amazing things like design nanotech or take over the world:

    PETER THIEL: It’s probably a Silicon Valley ideology and maybe, maybe in a weird way it’s more liberal than a conservative thing, but people are really fixated on IQ in Silicon Valley and that it’s all about smart people. And if you have more smart people, they’ll do great things.  And then the economics anti IQ argument is that people actually do worse. The smarter they are, the worse they do. And they, you know, it’s just, they don’t know how to apply it, or our society doesn’t know what to do with them and they don’t fit in. And so that suggests that the gating factor isn’t IQ, but something, you know, that’s deeply wrong with our society.

    ROSS DOUTHAT: So is that a limit on intelligence or a problem of the sort of personality types human superintelligence creates? I mean, I’m very sympathetic to the idea and I made this case when I did an episode of this, of this podcast with a sort of AI accelerationist that just throwing, that certain problems can just be solved if you ramp up intelligence.  It’s like, we ramp up intelligence and boom, Alzheimer’s is solved. We ramp up intelligence and the AI can, you know, figure out the automation process that builds you a billion robots overnight. I, I’m an intelligent skeptic in the sense I don’t think, yeah, I think you probably have limits.

    PETER THIEL: It’s, it’s, it’s hard to prove one way or it’s always hard to prove these things.
     

  • Thiel talks about transhumanism for a bit (albeit he devolves into making fun of transgender people for being insufficiently ambitious) -- see here for the Dank EA Meme version of this exchange:

    ROSS DOUTHAT: But the world of AI is clearly filled with people who at the very least seem to have a more utopian, transformative, whatever word you want to call it, view of the technology than you’re expressing here, and you were mentioned earlier the idea that the modern world used to promise radical life extension and doesn’t anymore.  It seems very clear to me that a number of people deeply involved in artificial intelligence see it as a kind of mechanism for transhumanism, for transcendence of our mortal flesh and either some kind of creation of a successor species, or some kind of merger of mind and machine.  Do you think that’s just all kind of irrelevant fantasy? Or do you think it’s just hype? Do you think people are trying to raise money by pretending that we’re going to build a machine god?  Is it delusion? Is it something you worry about?  I think you, you would prefer the human race to endure, right?  You’re hesitating.

    PETER THIEL: I don’t know. I, I would... I would...

    ROSS DOUTHAT: This is a long hesitation.

    PETER THIEL: There’s so many questions and pushes.

    ROSS DOUTHAT: Should the human race survive?

    PETER THIEL: Yes.

    ROSS DOUTHAT: Okay.

    PETER THIEL: But, but I, I also would. I, I also would like us to, to radically solve these problems. Transhumanism is this, you know, the ideal was this radical transformation where your human natural body gets transformed into an immortal body.  And there’s a critique of, let’s say, the trans people in a sexual context or, I don’t know, transvestite is someone who changes their clothes and cross dresses, and a transsexual is someone where you change your, I don’t know, penis into a vagina. And we can then debate how well those surgeries work, but we want more transformation than that.  The critique is not that it’s weird and unnatural. It’s man, it’s so pathetically little. And okay, we want more than cross dressing or changing your sex organs. We want you to be able to change your heart and change your mind and change your whole body.
     

  • Making fun of Elon for simultaneously obsessing over budget deficits while also claiming to be confident that a superintelligence-powered industrial explosion is right around the corner:

    PETER THIEL: A conversation I had with Elon a few weeks ago about this was, he said, “We’re going to have a billion humanoid robots in the US in 10 years.” And I said, “Well, if that’s true, you don’t need to worry about the budget deficits because we’re going to have so much growth. The growth will take care of this.” And then, well, he’s still worried about the budget deficits. And then this doesn’t prove that he doesn’t believe in the billion robots, but it suggests that maybe he hasn’t thought it through or that he doesn’t think it’s going to be as transformative economically, or that there are big error bars around it.

From a podcast conversation with Ross Douthat, trying to explain why his interest in transhumanism and immortality is not heresy:

ROSS DOUTHAT: I generally agree with what I think is your belief that religion should be a friend to science and ideas of scientific progress. I think any idea of divine providence has to encompass the fact that we have progressed and achieved and done things that would have been unimaginable to our ancestors.  But it still also seems like, yeah, the promise of Christianity in the end is you get the perfected body and the perfected soul through God’s grace. And the person who tries to do it on their own with a bunch of machines is likely to end up as a dystopian character.

PETER THIEL: Well, it’s. Let’s, let’s articulate this and you can.

ROSS DOUTHAT: Have a heretical form of Christianity. Right. That says something else.

PETER THIEL: I don’t know. I think the word nature does not occur once in The Old Testament. And so if you, and there is a word in which, a sense in which the way I understand the Judeo Christian inspiration is it is about transcending nature. It is about overcoming things.

And the closest thing you can say to nature is that people are fallen. And that that’s the natural thing in a Christian sense is that you’re messed up. And that’s true. But, you know, there’s some ways that, you know, with God’s help, you are supposed to transcend that and overcome that.

Thiel is definitely not following "standard theology" on some of the stuff you mention!

"Jesus will win for certain."  "If chaos is inevitable... why [bother trying to accelerate economic growth]?"  Peter Thiel is constantly railing against this kind of sentiment.  He literally will not shut up about the importance of individual human agency, so much so that he has essentially been pascal's mugged by the idea of the centrality of human freedom and the necessity of believing in the indeterminacy of the future.  Some quotes of his:

"At the extreme, optimism and pessimism are the same thing. If you're extremely pessimistic, there's nothing you can do. If you're extremely optimistic, there's nothing you need to do. Both extreme optimism and extreme pessimism converge on laziness."

"I went to the World Economic Forum in Davos the last time in 2013... And people are there only in their capacity as representatives of corporations or of governments or of NGOs. And it really hit me: There are simply no individuals. There are no individuals in the room. There’s nobody there who’s representing themselves. And it’s this notion of the future I reject. A picture of the future where the future will be a world where there are no individuals. There are no people with ideas of their own."

"The future of technology is not pre-determined, and we must resist the temptation of technological utopianism — the notion that technology has a momentum or will of its own, that it will guarantee a more free future, and therefore that we can ignore the terrible arc of the political in our world.  A better metaphor is that we are in a deadly race between politics and technology. The future will be much better or much worse, but the question of the future remains very open indeed. We do not know exactly how close this race is, but I suspect that it may be very close, even down to the wire. Unlike the world of politics, in the world of technology the choices of individuals may still be paramount. The fate of our world may depend on the effort of a single person who builds or propagates the machinery of freedom that makes the world safe for capitalism."

"COWEN: What number should I keep my eye on? Let’s say you’re going to take a long nap and I need someone to tell me, “Tyler, we’re out of the great stagnation now.” What’s the impersonal indicator that I should look at?

THIEL: I disagree with the premise of that question. I don’t think the future is this fixed thing that just exists. I don’t think there’s something automatic about the great stagnation ending or not ending. I think — I always believe in human agency and so I think it matters a great deal whether people end it or not.  There was this sort of hyperoptimistic book by Kurzweil, The Singularity Is Near; we had all these sort of accelerating charts. I also disagree with that, not just because I’m more pessimistic, but I disagree with the vision of the future where all you have to do is sit back, eat popcorn, and watch the movie of the future unfold.  I think the future is open to us to decide what to do. If you take a nap, if you encourage everybody else to take a nap, then the great stagnation is never going to end."

He is constantly on about this, mentioning the point about optimism/pessimism both leading to inaction in almost every interview.  In some of his Christian stuff he also talks about the importance of how God gave us free will, etc.  Not sure exactly how all the theology adds up in his head, since, as you point out, it seems very hard to square this with taking Christian ideas about the end times 100% literally.


Similar situation regarding longevity and human flourishing versus a literalist take on tallying up "number of souls saved" -- he definitely doesn't seem to be tallying souls in the usual way, where it's just about telling people the Good News; rather, he seems to think of the kingdom of heaven as something more material that humanity will potentially help bring about (perhaps something like, e.g., a future transhumanist utopia of immortal uploaded super-minds living in a Dyson swarm, although he doesn't come out and say this).  When Christian interviewers ask him about his interest in life extension, he talks about how Christianity is very pro-life: it says that life is good and more life is better, that death is bad and, importantly, that death is something to be overcome, not something to be accepted.  (The Christian interviewers usually don't seem to buy it, lol...)


"Isn't that goal quite similar to more standard goals of keeping societies open, innovative and prosperous?"

I think Thiel might fairly argue that his quest to conquer death, achieve transcendence, and build a utopian society has a pretty strong intrinsic spiritual connotation even when pursued by modern Bay Area secular-rationalist programmer types who say they are nonreligious.

He might also note that (sadly) these transhumanist goals (or even the milder goals of keeping society "innovative and prosperous", if you interpret that as "very pro-tech and capitalistic") are very far from universal or "standard" goals held by most people or governments.  (FDA won't even CONSIDER any proposed treatments for aging because they say aging isn't a disease!  If you even try, journalists will write attack articles calling you a eugenicist.  (Heck, just look at what happened to poor Dustin Moskovitz... guy is doing totally unobjectionable stuff, just trying to save thousands of lives and minimize existential risk entirely out of the goodness of his own heart, and some unhinged psycho starts smearing him as the Antichrist!)  A man can't even build a simple nuclear-battery-powered flying car without the FAA, NRC, and NHTSA all getting upset and making absurdly safetyist tradeoffs that destroy immense amounts of economic value.  And if you want to fix any of that, good luck getting any nation to give you even the tiniest speck of land on which to experiment with your new constitution outlining an AI-prediction-market-based form of government... you'd have better odds trying to build a city at the bottom of the ocean!)
