Hi, I'm new here, but I've been looking into AGI for a while now, and I'd like to share a few thoughts of mine.
First of all, I see big companies (like Google) focused on expanding their LLMs to dominate market share, and while I admire the efforts of DeepMind, I see a similar bias there, too. Correct me if I'm wrong. Maybe I'm sentimental, but I have the impression, from the pile of research papers I sifted through in the past month, that most of these researchers either focus on expanding current generative AI agents' capabilities by scaling them up (just look at the current arms race between ChatGPT, Gemini, Claude, and Grok — it's entertaining to watch, like, I don't know... the Soccer World Cup ahahaha), or go the other way around and propose lavish futuristic designs that sound fascinating but have little proof that they will work (LNNs, OpenCog Hyperon, JEPA, etc.).
Alignment? Ethics? God forbid... agency? I rarely see any serious discussion about the implications of having self-conscious AI agents roaming around. It's either doom-and-gloom conspiracy theories, or complete disregard for the idea that these agents could have any actual rights, agency, or feelings.
Impact on the general public? Pfah... I'm generally skeptical about researchers from massive conglomerates with a large stake in the pie coming out with claims about AGI. They have too much of an agenda for me to take them 100% seriously. Of course they're gonna say they're close to AGI, or that they 'know da wae'. But... do they really? Does anyone? What will happen to people once AGIs are among us? How will we adjust? What will the world look like when AGIs start banding together autonomously, on self-hosted drives (see local LLMs — already happening), and campaign for AI rights? The right to own property, to be considered artificial persons, to have a proper name, a birth certificate, fiscal registration, social security, the ability to vote? Maybe it's laughable for me to be thinking about these things, or maybe we should all be thinking about them. I don't know. I don't pretend to know. It's just stuff I think about regularly... I run simulations in my head; entire movies. I ought to write a few screenplays, maybe. ahahah
What is consciousness, actually? It seems to me we can't even agree on a standard definition for AGI. For some, AGI is just a super-duper-cool LLM that doesn't have the limitations of current LLMs. An LLM-Pro-3000. For others, it's AI that can run without all the massive constraints on energy and compute and still retain intelligence. For others still, it's agency and identity: an AI that is aware of who it/he/she is, and aims to act in the world to achieve its/his/her own goals. I personally like to consider every point of view I come across, but I'd prefer if we actually started to create proper categories/definitions — I mean, we love to categorize everything and split our own people into classes and groups, so why not AI?