"I don't usually comment on Reddit. 

But when I do, I linkpost to the Effective Altruism forum. And I hide my existential vulnerability with such wit." -Yours truly

Reddit OP: "is a possible solution to keeping up with ASI to be Matrix-like "I know kung-fu" human learning?"

Me: List of "learn to learn" self-accelerating hacks:

  • Doman method of early child education (prespeech literacy)
  • mnemonics (like world memory championship)
  • vipassana-ish meditation
  • Gregg shorthand, especially when done with toddlers after pre speech literacy
  • then you add in the "let AI teach me" stuff

The future of AI is distillation: bigger models training smaller, fresh models to be more efficient and nimble. The same goes for humans.
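In case "distillation" sounds like hand-waving, here's a minimal sketch of the standard knowledge-distillation loss (Hinton et al., 2015) in PyTorch. The function name and hyperparameters are illustrative, not from any particular codebase.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend a soft-target KL term (student mimicking the teacher's
    softened output distribution at temperature T) with ordinary
    hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale so the soft term's gradients match the hard term
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```

The big model supplies `teacher_logits`; the small, fresh model trains on `student_logits` against both the teacher's soft targets and the true labels.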

Can we haz nice thingss? Futureburger n real organk lief maybs?  <3

https://www.reddit.com/r/singularity/comments/1o4aypu/is_a_possible_solution_to_keeping_up_with_asi_to/

Now, with my cowardly fear of downvotes, I will hide my real concern at the bottom of this page. Shh![1]

  1. ^

    The "Selectorate Theory" of political economy (see CGP Grey's 18-minute "Rules for Rulers" video), combined with AI shrinking the size of the "minimum viable winning coalition", predicts with all due academic rigorousness that we are entering "the suck". Thank you for listening to my TED talk.

Comments

You give a list of "learn to learn" methods, and then you say, "Can we haz nice thingss? Futureburger n real organk lief maybs?" I'm not sure I'm interpreting you correctly, but it sounds like you're saying something like:

If we biological humans get sufficiently good at learning to learn, using methods such as the Doman method, mnemonics, etc., then perhaps we can keep up with the rate at which ASI learns things, and thus avoid bad outcomes where humans get completely dominated by ASI.

If that's what you mean, then I disagree: I don't think our current understanding of the science of learning is remotely near where it would need to be to keep up with ASI, and in fact I would guess that even a perfect-learner human brain could never keep up with ASI, no matter how good a job it did. Human brains still have physical limits. An ASI need not have such limits, because it can (e.g.) add more transistors to its brain.

Here's a more straightforward presentation; hope it helps. https://forum.effectivealtruism.org/posts/PWYQh6uhxKCswrJLy/on-selectorate-theory-and-the-narrowing-window

What I mean is that it would be super nice to be able to enjoy these human learning techniques, and to have decades of life in which to do so.

But because of the concerns about human political economy in the footnote, which Will MacAskill mentions super obliquely and quietly in his latest post, I don't think ASI is going to get the chance to kill off the first 4 billion of humanity. ASI might overrun the globe and finish off the next 4 billion, but we're going to get in the first punch 👊!

Please upload this humble cultivator; this one so totally upvoted your comment! 🙇‍♂️😅
