Matrice Jacobine🔸🏳️‍⚧️

Student in fundamental and applied mathematics
1031 karma · Joined · Pursuing a graduate degree (e.g. Master's) · France

Bio

Technoprogressive, biocosmist, rationalist, defensive accelerationist, longtermist

Posts
49

Comments
130

Topic contributions
1

Fully autonomous weapons seem to me to be a clear-cut case of differential acceleration in any case: they give no meaningful battlefield advantage to law-abiding democratic countries (human reflexes are already near the top of the sigmoid; fast reaction is one of our main evolutionarily selected skills, for obvious reasons), but they allow authoritarians to establish a military dictatorship with minimal staff (historically, "the army is ultimately made up of ordinary people who can refuse to shoot their brethren and/or shoot the dictator instead" has been an important pressure valve), or to organize genocidal massacres with automated recognition of targeted civilians (i.e. the FLI Slaughterbots scenario).

Democracy promotion is a common interest of many causes. It's highly unlikely we can do anything about global poverty, factory farming, or existential risk (and potentially we will never be able to again) if all world powers become repressive autocracies squashing any sign of moral cosmopolitanism and freethought.

"Longtermists should primarily concern themselves with the lives/welfare/rights/etc. of future non-human minds, not humans."

"AI safety advocates should primarily seek an understanding with {AI ethics advocates,AI acceleration advocates}."

"It would be preferable for progress of open-weights models to keep up with progress of closed-weights models."

"Countering democratic backsliding is now a more urgent issue than more traditional longtermist concerns."