Matrice Jacobine

Student in fundamental and applied mathematics
619 karma · Joined · Pursuing a graduate degree (e.g. Master's) · France

Bio

Technoprogressive, biocosmist, rationalist, defensive accelerationist, longtermist

Comments: 91
Topic contributions: 1

50% agree

While this is ostensibly called "strong longtermism", the precision of saying "near-best" instead of "best" makes (i) hard to deny (the opposite statement would be "one ought to choose an option that is significantly far from the best for the far future"). The best cruxes against (ii) would be epistemic ones, i.e. whether benefits rapidly diminish or wash out over time.

I agree with you on the meta case of suspicion about Open Philanthropy leadership, but in this case, AFAICT, the Center for AI Policy was funded by the Survival and Flourishing Fund, which is aligned with the rationalist cluster and also funds PauseAI.

There's a decent amount of French-speaking ~AI safety content on YouTube:

I added a bunch of relevant tags to your post that might help you search the forum better.

Do you think work on AI welfare can count as part of Cooperative AI (i.e. as fostering cooperation between biological minds and digital minds)?

It strikes me as very unlikely that a rudimentary Pong-playing AI running on biological wetware is more sentient than a modern LLM running on digital hardware.

One of the killings was, as far as we know, purely memetic in origin and (allegedly) committed by someone (@Maximilian Snyder) who never even interacted online with Ziz, so I don't think it's an invalid example to bring up, actually.

I've known EAs who have been all-consumed by abstract guilt. It has never led them to produce the greatest good for the greatest number. At best it has led them to chronic depression and an inability to do stable work. At worst it has led to highly net-negative actions like joining a cult.

Thiel describing a 2024 conversation with Elon Musk and Demis Hassabis, where Elon is saying "I'm working on going to Mars, it's the most important project in the world" and Demis argues "actually my project is the most important in the world; my superintelligence will change everything, and it will follow you to Mars". (This is in the context of Thiel's long pivot from libertarianism to a darker strain of conservatism / neoreaction, having realized that "there's nowhere else to go" to escape mainstream culture/civilization, that you can't escape to outer space, cyberspace, or the oceans as he once hoped, but can only stay and fight to seize control of the one future; hence all these musings about Carl Schmitt etc. that make me feel wary he is going to be egging on J.D. Vance to try to auto-coup the government.)

FTR: while Thiel has made this claim before, the more common version (e.g. here, here, here from Hassabis' own mouth, and more obliquely here in his lawsuit against Altman) is that Hassabis was warning Musk about existential risk from unaligned AGI, not threatening him with his own personally aligned AGI. However, this interpretation resonates interestingly with Elon Musk's creation of OpenAI being motivated by fear of Hassabis becoming an AGI dictator (a fear his co-founders apparently shared). It is certainly an interesting hypothesis that Thiel and Musk together engineered, over a decade, both the AGI race and global democratic backsliding, wholly motivated by the same single one-sentence possible slight by Hassabis in 2012.
