How the brain generates consciousness is not understood.
Well, right … and, given this fact, maybe it behooves everyone to be a little more cautious about sweeping declarations of what is or isn’t true, or what can or cannot be true, about brains, minds, computers, etc.?
David Chalmers points out that we have a functionalist account of how brains generate consciousness, through the firing of neurons and so on, and we can map those firings to specific experiences (just this morning I was reading about a study that used brain scans to determine that dogs understand nouns, and plenty of them, in the same way we do), but we still don’t know what consciousness IS: how these neuron firings produce, for example, the quale red.
On the other hand, there are eliminativists who say that once we identify neuron firings and such, we’ve explained the experience of red — there is nothing more to explain. I find that dubious.
The idea seems to be that mind is an emergent property of brain, in much the way that wetness is an emergent property of H2O molecules: no individual water molecule is wet, but the arrangement as a whole can be wet, or gaseous, or solid, depending on temperature and pressure.
And I’d say that is a perfectly OK explanation of how wetness emerges from water molecules, but I would counter that we have no such stepwise account of mind emerging from brain. So the Hard Problem of Consciousness, as Chalmers called it, remains unresolved.
I think to have a discussion like this, and to avoid talking past one another, we should agree on some operational definitions of the terms being used. Personally, I would operationally define intelligence as the ability to solve problems, and consciousness as awareness of one’s environment, perhaps also self-awareness, and awareness that we are aware (meta-consciousness). However, I think it’s possible to be conscious without being meta-conscious, at least in principle, though maybe that’s not true in fact. If these definitions are valid, it may be possible to be intelligent without being conscious, and even to be conscious without being intelligent.
We presume that a chess-playing computer like Deep Blue was really, really intelligent at solving chess problems: it beat the world chess champion back in 1997, for example. I know that today I lose chess games to computers all the time, provided they are set to their maximum level; the lower levels I can beat easily.
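(Just to make "intelligence as problem-solving" concrete, here is a toy sketch, in Python, of the kind of exhaustive game-tree search chess engines rely on. It's a made-up example using a trivial take-the-last-stick game, not anything from Deep Blue, and nothing in it represents any awareness of playing a game at all.)

def best_move(sticks, my_turn=True):
    """Score is +1 if the first player can force a win from this position."""
    if sticks == 0:
        # The previous mover took the last stick and won.
        return (-1 if my_turn else 1), None
    best = None
    for take in (1, 2, 3):
        if take <= sticks:
            score, _ = best_move(sticks - take, not my_turn)
            # The side to move picks whatever is best for it: pure look-ahead, no "understanding".
            if best is None or (my_turn and score > best[0]) or (not my_turn and score < best[0]):
                best = (score, take)
    return best

print(best_move(10))  # (1, 2): take 2 sticks and a win can be forced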
But we also presume that while Deep Blue was really intelligent at solving chess problems, it had no consciousness whatever of what it was doing, or why it was doing it, or who or what it was. Perhaps that’s false, though; how would we really know, after all? Precisely because we DON’T know how consciousness arises from brains, as you point out, we ought to be very cautious about saying what is and isn’t possible.
Today we have the growing field of artificial general intelligence, which will raise serious concerns as it gets better and better. If we were to accept the idea that machines can be intelligent without being conscious, then this raises the serious problem of having what someone in this thread called sociopathic machines. One could imagine a dystopian scenario in which we turn over to AGI machines certain problems to intelligently solve, like, for example, finding a solution to racism. And the machine, not being conscious and therefore possessing no ethical or moral sense, and no empathy for others, might decide that the most straightforward solution to the problem is to eliminate all people who are not white — problem solved!
OTOH, perhaps it’s not possible to have intelligent machines without them possessing at least some degree of consciousness. This seems to be Jarhyn’s position. You deny this, but then again, you also correctly note that we don’t know how consciousness arises from brains, and lacking that knowledge, how can you definitively say Jarhyn is wrong?
Then, too, there is this thing about free will, which of course you and I have discussed, sometimes acrimoniously. I have taken the position that consciousness was selected for because it provides a huge evolutionary advantage over organisms that act merely on instinct: consciousness implies the ability to override our instincts and make genuine choices among genuinely available options. If consciousness is not able to do that, if free will is just an illusion and in reality we are all taking our marching orders from a chain of causal events stretching back to the Big Bang, of what use is consciousness? I see no selective advantage to it, and therefore no reason for it to have evolved. (Although, admittedly, plenty of properties of living things did not evolve through selection but are simply happy accidents, like spandrels. It seems to me highly unlikely, though, that consciousness could be one of those sorts of things.)
Then again, if Jarhyn is right, and even low-level things like individual neurons have some degree of consciousness, or if things like thermostats are conscious of what they do, we’d have to say that neurons and thermostats have consciousness but not free will, since it does not seem they can do other than what they do.
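(For what it's worth, a thermostat's entire "behavior" fits in a few lines. A minimal, hypothetical sketch in Python: the same reading and set point always produce the same output, which is the sense in which it cannot do other than what it does.)

def thermostat_step(current_temp, set_point, hysteresis=0.5):
    # Purely reactive: the output is fixed entirely by the inputs.
    if current_temp < set_point - hysteresis:
        return "HEAT_ON"
    if current_temp > set_point + hysteresis:
        return "HEAT_OFF"
    return "NO_CHANGE"

print(thermostat_step(18.0, 21.0))  # HEAT_ON
print(thermostat_step(23.0, 21.0))  # HEAT_OFF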
How would we know if a machine were conscious, as opposed to just intelligent (able to solve problems without having any idea it is doing so, or who or what it is)? I don’t think the Turing Test is sufficient. I think ChatGPT can arguably pass that test, at least sometimes, but most people don’t think it’s aware of anything. Sometimes I think the answer might be found in the visionary film 2001: A Space Odyssey, in which HAL expresses fear of death (being disconnected). Unless we programmed “fear of death/disconnection” into a computer, it would have no reason to fear death. If it DID fear death, without being programmed to do so, that might be a clear indication it was conscious and not just intelligent.
On the other hand, if an intelligent machine did NOT fear death/disconnection, that would still fail to rule out that it might be conscious after all. That’s because fear of death is an evolved trait, and computers are not evolved but made. I can well imagine a conscious, self-aware machine not fearing death, simply because it was not programmed to do so and, not having evolved, it has no evolved tendency toward self-preservation.
Bilby raised the philosophical problem of other minds. I don’t think this problem, as originally formulated, was intended to make us seriously question the existence of other minds, but rather to drive home the point of how difficult it is to actually KNOW anything, in an absolute sense. Yet, still, we may question whether others have minds, or even whether we ourselves do — perhaps it’s an elaborate illusion? But then again, if the illusion is indistinguishable from the “real thing,” whatever that might be, then it should follow that the illusion IS the real thing.
Finally, I’d point out that we have another metaphysical option on offer, apart from metaphysical naturalism and metaphysical supernaturalism, and that is metaphysical idealism: the idea that reality consists primarily or exclusively of mental states, and that mind does not depend on brain but the other way around; brain depends on mind. At first glance this idea might seem absurd, but I don’t think it’s absurd at all, and there could be a number of strong reasons to believe it’s true. That in itself would be worthy of discussion.
This is a great thread, one of the best I’ve seen in a long time, and when I get more time I’ll read it more carefully. I’ve only had time to skim it so far.