> What is the real criterion for a machine to be considered intelligent?
This is actually the core of the debate. The answer is that there is no real criterion because we do not understand intelligence. Or, put differently, the one-dimensional understanding of intelligence we typically entertain is a simplification.
This is very inconvenient, so we just pretend intelligence is a well-defined concept. That in turn leads to all kinds of confusion and pointless arguments, as you have pointed out.
While there is no definitive criterion for intelligence, there is a lot of intuition. For example, I can find a book to be very intelligently written, but at the same time the book itself is not intelligent. No matter how clever its answers to complicated questions are, the book is still just an inanimate object created by a human. The definition of intelligence has to be stretched quite far to bypass our intuition on this.
We see similar situations when people anthropomorphize animals, or ask questions like whether plants have feelings. Biology research can look at neurons and DNA sequences and find all kinds of similarities to humans, but it still requires a bit of a jump when we start applying human philosophical concepts to non-humans.
I find it easier to look at it as metaphors: ChatGPT has intelligence the way the planet has an intelligence, or the way the internet is like a brain with every connected device as one of its neurons.
IMO we first need to clean up our language and precisely define the word intelligence so we all know what we are even talking about.
I would imagine we actually understand intelligence, but in ordinary language the word is a placeholder for a number of different meanings and concepts.
Same problem with "consciousness", same problem with "sentience".
I think when we say we don't understand these words, what we really mean is that they can't be understood as used in ordinary language, because no single objective meaning can cover all the values people assign to them.
In the face of AI, why would we not have to further define and expand our language for communication purposes? That actually seems obvious if you think about it for 2 seconds.
"sentience" has an additional problem that science fiction authors conflate it with "sapience".
I suspect that some influential science fiction editor made a mistake early on and it stuck. Outwith science fiction, I've less commonly come across the conflation, and people generally use the dictionary meanings, more in accord with the Latin roots, of "sentient" meaning being capable of sensation/feeling and "sapient" meaning being capable of intelligent thought.
GPT-4 falls apart on many tasks that humans find trivial, such as planning.
Intelligence is multi-faceted, and quite frankly the average technologist's understanding of human cognition is quite poor. Intelligence is not just information retrieval.
I'm sure, because I specifically chose GPT-4 because of its intelligence. Otherwise it wouldn't be of use to me-- I already have a slew of tools and sources available, but it takes more than that to pick out just the right solution and present it.
It's also quite good at planning its replies, from what I see.