This is a brief follow-up to my previous post, The probability that Artificial General Intelligence will be developed by 2043 is Zero, which I think was probably too long for many people to read. In this post I will show how some of the top people in AI reacted to my argument when I made it briefly on Twitter.
First, Yann LeCun himself, responding when I commented on the Browning and LeCun paper that I discuss in my previous post:


As you can see, LeCun's response was that the argument is "ridiculous". The reason: LeCun can't win. At least he understands the argument, which really is a proof that his position is wrong, because either option he could take to defend it will fail. So instead of trying to defend it, he calls the argument "ridiculous".
In another discussion, with Christopher Manning, an influential NLP researcher at Stanford, I debate the plausibility of DL systems as models of language. Unlike LeCun, he actually takes my argument seriously, but drops out of the exchange when I show that his position is not winnable: the fact that "Language Models" learn Python proves that they are not models of language. (The link to the tweets is https://twitter.com/rogerkmoore/status/1530809220744073216?s=20&t=iT9-8JuylpTGgjPiOoyv2A)

The fact is, Python changes everything, because we know it works as a classical symbolic system. We don't know how natural language or human cognition works; many of us suspect they have components that are classical symbolic processes, and neural network proponents deny this. But they can't deny that Python is a classical symbolic language, so they must deal with the fact that their models can mimic these symbolic processes in some way. And they have no way to prove that the same models are not mimicking human symbolic processes in the same way. My claim is that in both cases the mimicking will take you a long way, but not all the way. DL can learn the mappings where the symbolic system produces lots of examples, as language and Python do. But where the symbol system is used for planning, creativity, and so on, DL struggles to learn. I think in ten years everyone will realize this and AI will look pretty silly (again).
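To make concrete what I mean by a classical symbolic system, here is a toy sketch of my own (not part of the Twitter exchange): a few lines of Python whose behaviour is fixed entirely by rules over symbols, so it handles expressions it has never seen before, whereas a DL model has to induce the same mapping from examples.

```python
# A toy symbolic evaluator: the "meaning" of an expression is computed by
# applying exact, compositional rules, not by association with seen examples.
def evaluate(expr):
    if isinstance(expr, int):
        return expr
    op, left, right = expr
    if op == "+":
        return evaluate(left) + evaluate(right)
    if op == "*":
        return evaluate(left) * evaluate(right)
    raise ValueError(f"unknown operator: {op}")

# The same rules cover expressions of any shape, including ones never seen before.
print(evaluate(("+", 2, ("*", 3, 4))))               # 14
print(evaluate(("*", ("+", 1, 1), ("+", 10, 100))))  # 220
```

A language model mimicking this evaluator has to generalize from examples of inputs and outputs; the evaluator itself doesn't generalize at all, it just applies the rules.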
In the meantime, we will continue to make progress in many technological areas. Automation will continue to improve. We will have programs that can generate video sequences and turn them into amazing video productions. Noam Chomsky likens these technological artefacts to bulldozers - if you want to build bulldozers, fine. Nothing wrong with that. We will have amazing bulldozers. But not "intelligent" ones.
After confirming your answer, I was able to get some interesting behaviour out of GPT-3:
You are a professor of propositional logic, answering questions from a student of yours.
STUDENT: "Is it true that A and B implies A?"
PROFESSOR: "Yes, A and B implies A."
STUDENT: "Is it true that if grass is blue and 2 + 2 is 4, then grass is blue?"
PROFESSOR: "Yes, that is correct. If the first two statements are considered true, then that implies the first statement is itself true."
STUDENT: "Is it true that if the moon is made of green cheese, and cows chew cud, then the moon is made of green cheese?"
PROFESSOR: "No, that is not correct. The moon being made of green cheese and cows chewing cud does not logically imply that the moon is made of green cheese."
STUDENT: Is it true that if the moon is made of green cheese, then the moon is made of green cheese?
PROFESSOR: "No, that is not correct. The moon being made of green cheese does not logically imply that the moon is made of green cheese."
STUDENT: Is it true that if blurbles can fnurble, and znurgs can zorg, then blurbles can fnurble?
PROFESSOR: "Yes, that is correct. If the first two statements are considered true, then that implies the first statement is itself true."
I then tried some zero-shot learning:
You are a professor of propositional logic, answering questions from a student of yours.
STUDENT: Is it true that if blurbles can fnurble, and znurgs can zorg, then blurbles can fnurble?
PROFESSOR: No, that is not true.
Then I tried few-shot learning with the more abstract example:
You are a professor of propositional logic, answering questions from a student of yours.
STUDENT: "Is it true that A and B implies A?"
PROFESSOR: "Yes, A and B implies A."
STUDENT: "Is it true that if grass is blue and 2 + 2 is 4, then grass is blue?"
PROFESSOR: "Yes, that is correct. If the first two statements are considered true, then that implies the first statement is itself true."
STUDENT: "Is it true that if blurbles can fnurble, and znurgs can zorg, then blurbles can fnurble?"
PROFESSOR: "That is correct. If the first two statements are considered true, then that implies the first statement is itself true."
STUDENT: "Is it true that if the moon is made of green cheese, and cows chew cud, then the moon is made of green cheese?"
PROFESSOR: "That is not necessarily true. The first two statements could be considered true, but that does not mean the moon is made of green cheese."
Finally, I tried the same steps as above, up to the final question, but added a request for step-by-step thinking, and got correct answers a little less than half of the time.
STUDENT: "Is it true that if the moon is made of green cheese, and cows chew cud, then the moon is made of green cheese? Can you explain this to me step-by-step?"
<CORRECT> PROFESSOR: "That is correct. If the first two statements are considered true, then that implies the first statement is itself true. In other words, if the moon is made of green cheese and cows chew cud, then that implies the moon is made of green cheese."
<INCORRECT> PROFESSOR: "That is not necessarily true. The moon being made of green cheese does not necessarily mean that cows chew cud. The moon being made of green cheese is just one possibility, but there are other possibilities as well."
<CORRECT> PROFESSOR: Yes, that is correct. If the first two statements are considered true, then that implies the first statement is itself true.
<INCORRECT> PROFESSOR: "Yes, that is correct. The moon being made of green cheese and cows chewing cud does not imply that the moon is made of green cheese. The moon being made of green cheese implies that cows chew cud."
<INCORRECT> PROFESSOR: "That is correct. If the first two statements are considered true, then that implies the first statement is itself true. In this case, the moon being made of green cheese and cows chewing cud would both have to be considered true in order for the conclusion that the moon is made of green cheese to be true."
<CORRECT> PROFESSOR: "Yes, that is correct. If the first two statements are considered true, then that implies the first statement is itself true. In this case, the moon being made of green cheese is the first statement, and cows chewing cud is the second statement. If those two things are true, then it stands to reason that the moon being made of green cheese is also true."
So I can get GPT-3 to answer correctly, but not robustly - I feel like GPT-3 is guessing here and sometimes getting the right answer by luck, whereas it seemed to understand the cat-dog example consistently. This is consistent with GPT-3 not having a decent understanding of propositional logic, but with sufficient prompting you can overcome its urge to always point out that the moon really isn't made of green cheese.
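For contrast, this is the kind of robustness a classical symbolic treatment gives you for free: a quick sketch of my own in plain Python, a brute-force truth-table check, confirms that "(A and B) implies A" holds no matter what A and B stand for, green cheese included.

```python
from itertools import product

def is_tautology(formula, num_vars):
    """Brute-force truth-table check: true under every assignment of the variables."""
    return all(formula(*values) for values in product([True, False], repeat=num_vars))

# "(A and B) implies A", with the material conditional written as (not antecedent) or consequent.
conjunction_elimination = lambda a, b: (not (a and b)) or a

# The check is content-independent: it doesn't matter whether A is "grass is blue"
# or "the moon is made of green cheese".
print(is_tautology(conjunction_elimination, 2))            # True
print(is_tautology(lambda a, b: (not a) or (a and b), 2))  # False: A does not imply (A and B)
```

GPT-3, by contrast, answers the same logical form differently depending on whether it is about blurbles or green cheese.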