Matrice Jacobine

Student in fundamental and applied mathematics
519 karma • Joined • Pursuing a graduate degree (e.g. Master's) • France

Bio

Technoprogressive, biocosmist, rationalist, defensive accelerationist, longtermist

Comments (67)

This isn't just abstract: historically, in the South, it was often the federal government that wanted to protect Black citizens and the state governments that wanted to avoid this under the banner of states' rights.

This is exactly what I was thinking about. I thought this was why the civil rights movement relied so heavily on the constitutional amendments passed during Reconstruction.

Non-American here: how is this constitutional? Isn't the whole point of US federalism to not allow that kind of law to exist?

I'm on desktop, with my ad-blocker off and "hide Intercom" unchecked in the options, and I still can't see Intercom. I even tried different browsers.

Stylistic pastiche is unambiguously protected by the First Amendment; it is not "forgery".

Digital rights organizations like the Electronic Frontier Foundation might be of particular interest. They not only combat anti-democratic abuses by both state and corporate powers, but also focus on protecting spaces of communication from surveillance and censorship. That protection seems especially important for making society resilient to authoritarianism in the long term, including in the least convenient possible world where democratic backsliding throughout the West turns out to be a durable trend (in which case the more traditional organizations you cite will probably be useless).

This seems to complement @nostalgebraist's complaint that much of the work on AI timelines (Bio Anchors, AI 2027) relies on a few load-bearing assumptions (e.g. the permanence of Moore's law, the possibility of a software intelligence explosion) and then does a lot of work crunching statistics and Fermi estimates to "predict" an AGI date, when the end result is really overdetermined by those initial assumptions and barely affected by changes to the secondary estimates. It is thus largely a waste of time to focus on improving those estimates when there is far more research to be done on the actual load-bearing assumptions:

  • Is Moore's law going to continue indefinitely?
  • Is software intelligence explosion plausible? (If yes, does it require concentration of compute?)
  • Is technical alignment easy?
  • ...

These, not the secondary estimates, are the actual cruxes for the most controversial AI governance questions, such as:

  • How much should we worry about regulatory capture?
  • Is it more important to reduce the rate of capabilities growth or for the US to beat China?
  • Should base models be open-sourced?
  • How much can friction when interacting with the real world (e.g. time needed to build factories and perform experiments (poke @titotal), regulatory red tape, labor unions, etc.) prevent AGI?
  • How continuous are "short-term" AI ethics efforts (FAccT, technological unemployment, military uses) with "long-term" AI safety?
  • How important is it to enhance collaboration between US, European and Chinese safety organizations?
  • Should EAs work with, for, or against frontier AI labs?
  • ...

I'd like to thank Sam Altman, Dario Amodei, Demis Hassabis, Yann LeCun, Elon Musk, and several others who declined to be named for giving me notes on each of the sixteen drafts of this post I shared with them over the past three months. Your feedback helped me polish a rough stone of thought into a diamond of incisive criticism. 

??? Was this meant for April Fools' Day? I'm confused.

It doesn't matter what you think they should have done. The fact is that Murati and Sutskever defected to Altman's side after initially backing his firing, almost certainly because the consensus discourse quickly became focused on EA and AI safety rather than the object-level accusations of inappropriate behavior.
