I skimmed this Medium post regarding the "TESCREAL" discussions that have been happening, and I figured people here would find it interesting. It talks about EA, AI Safety, Longtermism, etc., and is a critique of those who created the term TESCREAL.

They have begun to promote the theory that Silicon Valley elites, and a global network of corporate, academic and nonprofit institutions, are invested in a toxic bundle of ideologies that they call TESCREAL, short for Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism. Guided by this conspiracy framework they have tried to connect the dots between the advocates for these ideas, and support for eugenics, racism and right-wing politics.

I don't agree with everything in the post and think it's too harsh on EA (it has the common intro critique that EAs don't care about social change) and longtermism (it just assumes every longtermist subscribes to hardcore strong longtermism and has no uncertainty in their position), but it's worth sharing overall.

Here are some selected quotes:

Some critics have decided futurist philosophies and their advocates are bound together in a toxic, reactionary bundle, promoted by a cabal of Silicon Valley elites. This style of conspiracy analysis has a long history and is always a misleading way to understand the world. We need a Left futurism that can address the flaws of these philosophies and frame a liberatory vision.

...

Ultimately, the TESCREAL label is an excuse for lazy scholarship and bad arguments. It allows people to thingify complex constellations of ideas, and then criticize them based on the ideas of one member of the group. It imagines that all the guys who think hard about the future and tech must hang out in secret bars plotting to bring about 10²⁰⁰⁰⁰⁰⁰⁰ digital minds. No evidence is needed for such claims. (Bentham’s Bulldog, 2023)

...

While we share some of their criticisms of the ideologies, and certainly of the elites in question, a better sociology and intellectual history reveals that each philosophy has a progressive political wing that has been ignored. Moreover the wholesale condemnation of these ideas has cast a pall over all thinking about humanity’s future when we desperately need to grapple with the implications of emerging technologies.


I tend to think common knowledge of the overall ambivalence or laziness in vetting writers, evidenced by the magazines behind Torres' clickbait career, is worth promoting: https://markfuentes1.substack.com/p/emile-p-torress-history-of-dishonesty, though I don't know anything about this Mark Fuentes character or whether he's trustworthy.

I guess I can say what I've always said: that the value of sneer/dunk culture is publicity and anti-selection. People who think sneer/dunk culture makes for bad writing become attracted to us, and people who think it's good writing don't.

It's a genuine problem for (that is, evidence against) the sort of utilitarian and consequentialist views common in EA that they in principle justify killing arbitrary numbers of innocents, if those innocents are replaced by people with better lives (or, for that matter, by enough people with worse-but-still-net-positive lives; the latter is one reason total utilitarianism isn't *really* that similar to fascist ideas about master races, in my view). It's not surprising this reminds people of historical genocides carried out under racist justifications, even though in itself it implies nothing about one human ethnic group being better than another. The problem is particularly serious in my view, because:

A) If your theory gives any weight to the idea that creating happy people is good, and holds that a large enough gain in goodness can outweigh any deontic constraint on murder and other bad actions, then you will get some (hypothetical, unrealistic) cases where killing to replace is morally right. It's hard to think of any EA moral philosophy that doesn't allow that deontic constraints can sometimes be overridden if the stakes are high enough. Though of course there are EAs who reject the view that creating happy people is good rather than neutral.

B) Whilst I struggle to imagine a realistic situation where actual murder looks like the best course of action from a utilitarian perspective, there is a fairly closely related problem involving AI safety and total utilitarianism. Pushing ahead with AI, at least if you believe the arguments for AI X-risk*, carries some chance that everyone will be suddenly murdered. But delaying highly advanced AI carries some risk that we will never reach highly advanced AI, with the concomitant loss of a large future population of humans and digital minds. (If nothing else, there could be a thermonuclear war that collapses civilization, and then we could fail to re-industrialize because we've used up the most easily accessible fossil fuels.) Thinking through how much risk of being suddenly murdered it is okay to impose on the public (again, assuming you buy the arguments for AI X-risk) involves deciding how to weigh the interests of current people against the interests of potential future people. It's a problem if our best theories for doing that look like they'd justify atrocity in imagined cases, and justify imposing counter-intuitively high and publicly unacceptable levels of risk in the actual case. (Of course, advanced AI might also bring big benefits to currently existing people, so it's more complicated than just "bad for us, good for the future".)


It's probably more worthwhile for people to (continue) think(ing)** through the problems for the moral theories here than to get wound up about the fact that some American left-wingers sometimes characterize us unfairly in the media. (And I do agree that people like Torres and Gebru sometimes say stuff about us that is false or misleading.)

*I am skeptical, but enough people in AI are worried that I think it'd be wrong to be sure enough that there's no X-risk to just ignore it. 

**I'm well aware a lot of hard thought has gone into this already. 

I think this article deploys accusations of "conspiracy theorists" in a bad-faith manner. Besides "cosmism", which I agree is a stretch, there are very obvious links and overlaps between transhumanists, extropians, singularitarians, rationalists and effective altruists. All of these labels would apply to Eliezer Yudkowsky and Nick Bostrom, for example. Is it a conspiracy theory to loosely categorise people in an unflattering manner?

The only evidence for "conspiracy" in this article is that people are claiming an association between TESCREAL and support for eugenics. But there is a lot of support for what could be described as "liberal eugenics" among these communities. There's a whole chapter in Superintelligence on human intelligence enhancement via genetic selection (edit: I originally said "selective breeding"; I misremembered this, thanks to the comments pointing it out). Or see this upvoted post on this very forum defending liberal eugenics.

You might think they are exaggerating the harms, or being unfair in their categorisations (not distinguishing between liberal eugenics and the more violent eugenics that comes to mind with the term), but it's not conspiracy theorizing. This is just the standard sort of word twisting and political point scoring done by every political movement on the planet. 

There's a whole chapter in superintelligence on human intelligence enhancement via selective breeding

This is false and should be corrected. There is a section (not a whole chapter) on biological enhancement, within which there is a single paragraph on selective breeding:

A third path to greater-than-current-human intelligence is to enhance the functioning of biological brains. In principle, this could be achieved without technology, through selective breeding. Any attempt to initiate a classical large-scale eugenics program, however, would confront major political and moral hurdles. Moreover, unless the selection were extremely strong, many generations would be required to produce substantial results. Long before such an initiative would bear fruit, advances in biotechnology will allow much more direct control of human genetics and neurobiology, rendering otiose any human breeding program. We will therefore focus on methods that hold the potential to deliver results faster, on the timescale of a few generations or less.

Small point, but it's not a chapter; it's a section of a chapter.
