
Matrice Jacobine

Student in fundamental and applied mathematics
177 karma · Joined · Pursuing a graduate degree (e.g. Master's)

Comments (25)

Most surveys of AI/ML researchers (with significant selection effects and very high variance) indicate p(doom)s of ~10% (spread among a variety of global risks beyond the traditional AI-go-foom), and (like Ajeya Cotra's report on AI timelines) a predicted AGI date around mid-century according to one definition, and in the next century by another.

Pausing the scaling of LLMs above a given magnitude will do ~nothing for non-x-risk AI worries. Pausing any subcategory below that (e.g. AI art generators, open-source AI) will do ~nothing (and indeed probably be a net negative) for x-risk AI worries.

Those are meta-level epistemological/methodological critiques for the most part, but meta-level epistemological/methodological critiques can still be substantive critiques and not reducible to mere psychologization of adversaries.

In addition to what @gw said on the public being in favor of slowing down AI, I'm mostly basing this on reactions to news about PauseAI protests on generic social media websites. The idea that LLM scaling without further technological breakthroughs will for sure lead to superintelligence in the coming decade is controversial by EA standards, fringe by general AI community standards, and roundly mocked by the general public.

If other stakeholders agree with the existential risk perspective then that is of course great and should be encouraged. To develop further on what I meant (though see also the linked post), I am extremely skeptical that allying with copyright lobbyists is good by any EA/longtermist metric, when ~nobody thinks art generators pose any existential risk and big AI companies are already negotiating deals with copyright giants (or the latter are even creating their own AI divisions, as with Adobe Firefly or Disney's new AI division), while independent EA-aligned research groups like EleutherAI are heavily dependent on the existence of open-source datasets.

https://nickbostrom.com/papers/astronomical-waste/

In light of the above discussion, it may seem as if a utilitarian ought to focus her efforts on accelerating technological development. The payoff from even a very slight success in this endeavor is so enormous that it dwarfs that of almost any other activity. We appear to have a utilitarian argument for the greatest possible urgency of technological development.

However, the true lesson is a different one. If what we are concerned with is (something like) maximizing the expected number of worthwhile lives that we will create, then in addition to the opportunity cost of delayed colonization, we have to take into account the risk of failure to colonize at all. We might fall victim to an existential risk, one where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential. Because the lifespan of galaxies is measured in billions of years, whereas the time-scale of any delays that we could realistically affect would rather be measured in years or decades, the consideration of risk trumps the consideration of opportunity cost. For example, a single percentage point of reduction of existential risks would be worth (from a utilitarian expected utility point-of-view) a delay of over 10 million years.

Therefore, if our actions have even the slightest effect on the probability of eventual colonization, this will outweigh their effect on when colonization takes place. For standard utilitarians, priority number one, two, three and four should consequently be to reduce existential risk. The utilitarian imperative “Maximize expected aggregate utility!” can be simplified to the maxim “Minimize existential risk!”.

This argument is highly dependent on your population ethics. From a longtermist, total positive utilitarian perspective, existential risk is many, many orders of magnitude worse than delaying progress, as it affects many, many orders of magnitude more (potential) people.
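A back-of-the-envelope formalization of that comparison (a sketch built only from the figures in the quoted passage, not Bostrom's own model): let $V$ be the total value of a colonized future lasting $T \gtrsim 10^9$ years, $\Delta p$ a reduction in extinction probability, and $d$ a delay in years. Then

\[
\underbrace{\Delta p \cdot V}_{\text{expected gain from risk reduction}} \;>\; \underbrace{d \cdot \tfrac{V}{T}}_{\text{expected cost of delay}} \quad\Longleftrightarrow\quad d < \Delta p \cdot T,
\]

so with $\Delta p = 0.01$ and $T \geq 10^9$ years, the risk reduction outweighs any delay shorter than $10^7$ years, i.e. the "over 10 million years" in the quote.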

@EliezerYudkowsky famously called this position requiredism. A common retort is that most self-identified compatibilist philosophers are in fact requiredists, making the word "compatibilism" indeed a bit of a misnomer.

PauseAI largely seeks to emulate existing social movements (like the climate justice movement) but has an essentially cargo-cult approach to how social movements work. For a start, there is currently no scientific consensus around AI safety the way there is around climate change, so all actions trying to imitate the climate justice movement are extremely premature. Blockading an AI company's office while talking about existential risk from artificial general intelligence won't convince any bystander; it will just make you look like a doomsayer caricature. It would be comparable to staging an Extinction Rebellion protest in the mid-19th century.

Due to this, many in PauseAI are trying to do coalition politics, bringing together all opponents of work on AI (neo-Luddites, SJ-oriented AI ethicists, environmentalists, intellectual property lobbyists). But the space of possible AI policies is highly multidimensional, so any such coalition, assembled with little understanding of political strategy, risks focusing on policies and AI systems that have little to do with existential risk (such as image generators), or that might even prove entirely counter-productive (by further entrenching centralization in the hands of the Big Four¹ and discouraging independent research by EA-aligned groups like EleutherAI).

¹: Microsoft/OpenAI, Amazon/Anthropic, Google/DeepMind, Facebook/Meta

It seems plausible that, much like Environmental Political Orthodoxy (reverence for simple rural living as expressed through localism, anti-nuclear sentiment, etc.) ultimately led the environmental movement to be harmful to its own professed goals, EA Political Orthodoxy (technocratic liberalism, "mistake theory", general disdain for social science) could (and maybe already has, with the creation of OpenAI) ultimately lead EA efforts on AI to be a net negative by its own standards.

I identify with your asterisk quite a bit. I used to be much more strongly involved in rationalist circles in 2018-2020, including the infamous Culture War Thread. I distanced myself from it around 2020, at the time of the NYT controversy, mostly just remaining on Rationalist Tumblr. (I kinda got out at the right time, because after I left everyone moved to Substack, which positioned itself against the NYT by personally inviting Scott, and which was seemingly designed to encourage every reactionary tendency of the community.)

One of my most salient memories of the alt-right infestation in the SSC fandom was this comment by a regular SSC commenter with an overtly antisemitic username, bluntly stating the alt-right strategy for recruiting ~rationalists:

[IQ arguments] are entry points to non-universalist thought.

Intelligence and violence are important, but not foundational; Few people disown their kin because they're not smart. The purpose of white advocacy is not mere IQ-maximization to make the world safe for liberal-egalitarianism; Ultimately, we value white identity in large part because of the specific, subjective, unquantifiable comfort and purpose provided by unique white aesthetics and personalities as distinct from non-whites and fully realized in a white supermajority civilization.

However, one cannot launch into such advocacy straight away, because it is not compatible with the language of universalism that defines contemporary politics among white elites. That shared language, on both left and right, is one of humanist utilitarianism, and fulfillment of universalist morals with no particular tribal affinity. Telling the uninitiated Redditor that he would experience greater spiritual fulfillment in a white country is a non-starter, not on the facts, but because this statement is orthogonal to his modes of thinking.

Most people come into the alt-right from a previous, universalist political ideology, such as libertarianism. At some point, either because they were redpilled externally or they had to learn redpill arguments to defend their ideology from charges of racism/sexism/etc, they come to accept the reality of group differences. Traits like IQ and criminality are the typical entry point here because they are A) among the most obvious and easily learned differences, and B) are still applicable to universalist thinking; that is, one can become a base-model hereditarian who believes in race differences on intelligence without having to forfeit the mental comfort of viewing humans as morally fungible units governed by the same rules.

This minimal hereditarianism represents an ideological Lagrange point between liberal-egalitarian and inegalitarian-reactionary thought; The redpilled libertarian or liberal still imagines themselves as supporting a universal moral system, just one with racial disparate impacts. Some stay there and never leave. Others, having been unmoored from descriptive human equality, cannot help but fall into the gravity well of particularism and "innate politics" of the tribe and race. This progression is made all but inevitable once one accepts the possibility of group differences in the mind, not just on mere gross dimensions of goodness like intelligence, but differences-by-default for every facet of human cognition.

The scope of human inequality being fully internalized, the constructed ideology of a shared human future cedes to the reality of competing evolutionary strategies and shared identities within them, fighting to secure their existence in the world.

There isn't really much more to say; he essentially spilled the beans – but in front of an audience that prides itself so much on "high-decoupling" that it can't wrap its mind around the idea that overt neo-Nazis might in fact be bad people who abuse social norms of discussion to their advantage – even when said neo-Nazis are openly bragging about it to their face.

If one is a rationalist who seeks to raise the sanity waterline and widely spread the tools of sound epistemology, and even more so if one is an effective altruist who seeks to expand the moral circle of humanity, then there is zero benefit to encouraging discussion of the currently unknowable etiology of a correlation between two scientifically dubious categories, when the overwhelming majority of people writing about it don't actually care about it, and only seek to use it as a gateway to rehabilitating a pseudoscientific concept universally rejected by biologists and geneticists, on explicitly epistemologically subjectivist and irrationalist grounds, to advance a discriminatory-to-genocidal political project.
