magic9mushroom

-7 karma · Joined

Comments (5)

A lot of these orgs are IMO -EV (negative expected value):

- I'm opposed to (most) animal rights

- some of the choices regarding AI look like potential backfires via feeding capabilities (PauseAI is the only one where this is completely implausible)

If someone has information suggesting that the Nucleic Acid Observatory and/or the Midas Project might be -EV, please tell me (biorisk and AI risk are the areas most susceptible to this kind of backfire).

I don't think world dystopia is entirely necessary, but a successful long stop for AI (the ~30+ years it'll probably take) is probably going to require knocking over a couple of countries that refuse to play ball. It seems fairly hard to keep even small countries from setting up datacentres and chip factories except by threatening or using military force.

To be clear, I think that's worth it. Heck, nuclear war would be worth it if necessary, although I'm not sure it will be - the PRC in particular I rate as >50% to a) agree to a stop and/or b) be destroyed in a non-AI-related nuclear war in the next few years.

Two criticisms:

  1. On two occasions you referred to nuclear war as an "existential risk". It's not. You also referred to 1970s-tier bioweapons as an "existential risk"; they weren't. Both are global catastrophic risks (GCRs), but not existential ones: there have never been enough nukes to kill all humans, and even an infectious disease will see its effective reproduction number R drop below 1 before population density drops to zero (see the sketch after this list). We are at a point now where biotechnology is beginning to pose notable X-risk, but we weren't then.
  2. You mentioned that the communities you reference, and EA/Rats, are overwhelmingly male, but you never actually argue for why this is relevant. Do remember that a non-trivial fraction of Rats are not feminists; raising the gender ratio pings their "hostile politics" detectors (as does the editing of the quote from "men" to "people"). That's a loss in persuasiveness, which should be avoided unless you need it to make some sort of point.
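On the R claim in (1), a minimal SIR-style sketch, assuming density-dependent ("mass-action") transmission; the $\beta$ and $\gamma$ here are the generic transmission and recovery rates, not estimates for any particular agent:

$$R_{\text{eff}} = \frac{\beta S}{\gamma}$$

so $R_{\text{eff}}$ falls below 1 once the susceptible population $S$ drops under $\gamma/\beta$, which happens while $\gamma/\beta > 0$ hosts still remain. The epidemic burns out before the host population does.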

Deceptive alignment is a convergent instrumental subgoal. If an AI is clearly misaligned while its creator still has the ability to pull the plug, the plug will be pulled; ergo, pretending to be aligned is worthwhile ~regardless of terminal goal.

Thus, the prior would seem to be that all sufficiently-smart AI appear aligned, but only a proportion X of them are truly aligned, where X is the chance of a randomly-selected value system being aligned; the remaining 1-X are deceptively aligned.

GPT-4 being the smartest AI we have and also appearing aligned is not really evidence against this; it's plausibly smart enough in the specific domain of "predicting humans" for its apparent alignment to be deceptive.
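To make the update explicit, a minimal Bayes sketch, assuming (per the above) that a sufficiently smart AI appears aligned whether or not it actually is, so both likelihoods are ~1:

$$P(\text{aligned} \mid \text{appears aligned}) = \frac{1 \cdot X}{1 \cdot X + 1 \cdot (1 - X)} = X$$

The likelihood ratio is 1, so observing "GPT-4 appears aligned" leaves the posterior at the prior X; once the relevant capability threshold is crossed, apparent alignment is no evidence either way.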

Drone swarms do take time to build. Also, nuclear war is "only" going to kill a large percentage of your country's citizens; if you're sufficiently convinced that any monkey getting the banana means Doom, then even nuclear war is worth it.

I think getting the great powers on-side is plausible; the Western and Chinese alliance systems already cover the majority of states. Do I think a full stop can be implemented without some kind of war? Probably not. But not necessarily WWIII (though IMO that would still be worth it).