DrZoidberg
I did not say it needs to understand, I said it has to look like it understands.
We don't want robots to pass the Turing test. That would be worthless to us. We just want them to make us happy and fix shit for us.
I think you have some sort of fantasy/sci-fi world concept of AI.
But one way of looking like it understands is actually understanding.
No, it's not. It's more a measure of how easily fooled you are. Understanding isn't a question of appearance. It's like saying that the important thing about passing a test is looking competent, not that you know wtf you're doing.
But your Google argument is irrelevant, because as long as a machine passes the Turing test, for all intents and purposes it has a mind, certainly in a practical sense.
Well, I agree with you. But it's highly controversial, and hinges on how we define mind. I have a very wide definition of mind. I'm on team Dennett. He defines mind as a collection of sensory inputs and outputs. It's the "nervous system" that makes something a mind. Not necessarily the thinking.
BTW, "passing the Turing test" is something different from being "Turing complete". The Turing test is pointless, while Turing completeness is a great goal, but something we will never fully reach, since a real computer has finite memory while a true Turing machine has an unbounded tape.
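To make the distinction concrete, here's a minimal sketch of what "Turing complete" refers to: a system is Turing complete if it can simulate any Turing machine. The simulator below and the little "flip" machine are purely illustrative (the function name and the example are my own, not from anything above).

```python
def run_turing_machine(transitions, tape, state="start", blank="_", max_steps=1000):
    """Simulate a single-tape Turing machine.

    transitions maps (state, symbol) -> (new_state, write_symbol, move),
    where move is -1 (left) or +1 (right). Halts when no rule applies.
    """
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        symbol = tape.get(head, blank)
        if (state, symbol) not in transitions:
            break  # halt: no rule for this configuration
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += move
    # Read back the written cells in order
    return state, "".join(tape[i] for i in sorted(tape))

# Example machine: flips every bit on the tape, then halts on a blank cell.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
}

print(run_turing_machine(flip, "1011"))  # -> ('start', '0100')
```

Passing the Turing test, by contrast, is about fooling a human judge in conversation, which says nothing about whether the system can compute things like this.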