I have been thinking about this problem for a while, and I was wondering if someone more expert on this topic than me could explain whether I am onto something, and if not, why I am wrong. I know very little about AI, but I do know a little about how reinforcement learners work. They learn by getting rewards and punishments for certain behaviors: an AI trying to complete a race might get a reward when it crosses the finish line and a punishment when it goes off the track. It then tries to maximize the rewards and minimize the punishments as much as possible.
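From what I've read, the reward is literally just a number inside the program. Here is my own toy sketch of what I imagine a racing AI's reward might look like (I made this up to illustrate the idea; it's not code from any real system):

```python
from dataclasses import dataclass

# My own toy sketch of a racing AI's reward signal (made up for
# illustration, not code from any real system).

@dataclass
class CarState:
    crossed_finish_line: bool = False
    off_track: bool = False

def reward(state: CarState) -> float:
    """The 'reward' is just a number the learner tries to maximize."""
    if state.crossed_finish_line:
        return 100.0   # reward for finishing the race
    if state.off_track:
        return -10.0   # punishment for leaving the track
    return 0.0         # nothing notable happened this step

print(reward(CarState(crossed_finish_line=True)))  # 100.0
print(reward(CarState(off_track=True)))            # -10.0
```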
This leads me to my first question: why aren't these rewards and punishments analogous to happiness and suffering in animals? After all, they seem to work in pretty much exactly the same way.
And that leads me to my second question: if these AI systems can experience happiness and suffering, couldn't we code them to receive extremely high rewards? From a utilitarian perspective, that would far outstrip any current charity in the amount of happiness produced.
And if I am right about this, then we could make a perfect charity: one that simply codes an AI to feel the equivalent of the best day of your life 1,000 times per dollar donated. (I am making this figure up off the top of my head, but just think how easy this would be compared with flying thousands of aid workers to Africa to deliver bednets, or even convincing shrimp producers to use electrical stunners!)
Also, if I am right about this: I am only 14 and do not have the means to build an effective charity myself, so if you do, please build it!
Hello! These are great questions, and I'll do my best to explain this from a technical perspective. I'm not the best at explaining complex things, so please ask if you want me to clarify anything.
I've noticed there's often quite a big difference in how people think about AI. When you interact with AI through chat, it feels very much like talking to another person, and that's intentional since these systems were designed to communicate naturally. But under the hood, from a programmer's perspective, AI works quite differently from how our minds work.
The key thing is that reward and punishment don't exist in AI systems the way they do for humans, animals, insects, or even plants. In reinforcement learning, the "reward" is literally just a number, and an optimization algorithm uses that number to nudge the system's parameters toward behaviors that score higher. The programmers didn't build a "brain" that experiences that number as pleasant or painful. And the chat AI you've likely interacted with is more like an incredibly advanced search engine that is very, VERY good at predicting what words should come next in a sentence.
Think of it this way: AI has been trained on huge amounts of text, a large slice of the internet, books, newspapers, articles, you name it. When you ask it something, it produces the statistically most likely response, one word at a time, based on patterns in all that training data. It's not actually "thinking" about the answer or feeling any specific way about it.
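To make that concrete, here's a deliberately tiny sketch of what "predicting the next word" amounts to. The word table and probabilities are made up for illustration, and a real model learns billions of parameters over tens of thousands of word pieces, but the basic operation is the same kind of thing:

```python
# A deliberately tiny sketch of next-word prediction. The words and
# probabilities below are made up for illustration; a real model
# learns them from its training data at a vastly larger scale.
next_word_probs = {
    ("the", "dog"): {"barked": 0.5, "ran": 0.3, "slept": 0.2},
    ("dog", "barked"): {"loudly": 0.6, "at": 0.4},
}

def predict_next(prev_two: tuple[str, str]) -> str:
    """Return the statistically most likely next word."""
    probs = next_word_probs[prev_two]
    return max(probs, key=probs.get)

# Generate a couple of words by repeatedly asking "what's most
# likely to come next?" There is no feeling anywhere in this loop,
# just lookups and comparisons of numbers.
text = ["the", "dog"]
for _ in range(2):
    text.append(predict_next((text[-2], text[-1])))
print(" ".join(text))  # -> "the dog barked loudly"
```

Whatever sentence comes out, nothing in that loop is any happier or sadder than a spreadsheet recalculating a cell.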
Like, imagine a dictionary. When you look up the word "puppy", the dictionary page itself doesn't feel happy or excited about teaching you that word. When you look up something sad, the page doesn't feel sad. It's a piece of paper, a tool. It's just showing you information. AI works similarly, but it's so much better at showing you information and presenting it in the form of a natural, human-like conversation.
AI systems can't experience happiness or suffering because, at their core, they are electrical circuits processing data. When you flip a light switch, the switch doesn't feel happy about turning on the light, right? It's just completing an electrical connection. AI works in the same way, except instead of a couple of wires with two possible results, you have billions of wires with a very large number of possible results.
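In fact, you can see what one of those "wires" boils down to. Here is a single artificial neuron, the basic building block that real networks repeat billions of times (the specific weights are numbers I made up for illustration):

```python
import math

# One artificial "neuron": multiply inputs by weights, add them up,
# squash the result into the range 0..1. The weights here are made
# up; a real network repeats this arithmetic billions of times.
def neuron(inputs: list[float], weights: list[float], bias: float) -> float:
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid "squashing" function

# Numbers in, a number out. No more capable of feeling than a light
# switch completing a circuit.
print(neuron([0.5, 0.2], [0.8, -0.4], bias=0.1))  # ~0.603
```

Numbers go in, a number comes out; stack billions of these together and you get something that can hold a conversation, but it is still the same arithmetic underneath.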
Some people argue that AI might be sentient. I respect that this is still being debated, but from a technical standpoint there is nothing in how current AI systems are built that would support that kind of experience. They don't have physical senses to feel pain or pleasure the way humans do; we understand pain and happiness because they come from our physical bodies reacting to different situations. Computer circuits are just ways to transport and transform information, and they do not react to the information passing through them.
That said, maybe someday we'll develop truly sentient machines, or maybe we'll have to rethink what we mean by sentience if technology and machines evolve in ways we can't yet understand or predict.
Thank you for the explanation! This clarified a lot of what I was confused about.