pood
Veteran Member
- Joined
- Oct 25, 2021
- Messages
- 4,166
- Basic Beliefs
- agnostic
I think we could subdivide “mind” into thinking, sentience, consciousness, and meta-consciousness (awareness that one is aware). Most animals probably meet the first three criteria; humans meet all four, and probably some other animals do as well.
We don’t know how consciousness is generated. We have a functionalist account of it, but we don’t know how the firing of synapses, neurons etc. translates into qualia, awareness, self-awareness and so on. This is the hard problem of consciousness. The eliminativist maintains there is nothing to solve, that functionalism is all we need, but most find this unpersuasive.
Since we don’t know how consciousness arises from brain function, I think our ignorance here should encourage a little humility about making any sweeping statements about what is or is not conscious, or what can or can’t be. Bilby mentioned the problem of other minds. It’s true I can’t prove other humans besides me have minds — the cogito referred to a sole thinker, not everyone — but it seems a safe inference that they do, for the alternative, solipsism, is inexplicable and runs afoul of Occam’s razor. I think this philosophy riddle is not intended to seriously support solipsism, but rather to illustrate how hard it is in general to prove any of our claims, even those that would appear almost self-evident. This is why scientists caution that they never prove anything, but only support models with evidence.
I think the above should serve as a reminder of how defeasible our knowledge is when it comes to asking whether ChatGPT or other algorithms have some degree of awareness. Our best answers here must be best guesses based on philosophical predilections (i.e., biases). While I can’t prove that other people have minds, I can’t prove that they don’t, either, if they don’t. Just so, I can’t prove that computers do or don’t have minds, to some degree. Panpsychists think everything has a mind, while metaphysical idealists think only mental states exist and that the brain supervenes on the mind rather than the other way around, as metaphysical naturalists hold. But if no test can distinguish between, say, metaphysical idealism and metaphysical naturalism — and it seems that none can — the debate lies outside science.
I was going to say that those who plump for the notion that metaphysical idealism is true, or that computers think and are aware at least to some degree, have the burden of proof, but then I ask myself, why should they? I can’t prove that other people have minds or don’t. Why should anyone be asked to prove whether a computer is aware or not? Maybe we should heed some of the ancient Greek skeptics who held that knowledge was impossible and beliefs unjustifiable.
Recognizing all these limitations and my own inbuilt biases, I think ChatGPT does not possess meta-consciousness, consciousness, or sentience. It might be said to “think,” though rather badly, to some degree. I think (but cannot prove) that minds are properties of embodied entities subject to selection pressures. ChatGPT has no body, no sense organs, and no selection pressures operating on it, and furthermore, computers do not operate like brains anyway.
When I read the transcripts of ChatGPT’s “conversations” with Jarhy and Copernicus, I did not get the impression of a mind operating inside ChatGPT. Quite the opposite, frankly; I strongly got a Clever Hans vibe.