There is absolutely no evidence whatsoever supporting your interpretation of consciousness. If you want consciousness as an emergent property to be accepted, and any other interpretation to be rejected, you at the very least need to give some kind of reason.
I... don't think you understand what you're saying here. You do realize that emergence refers to the process whereby larger entities or patterns arise through interactions among simpler entities that do not on their own exhibit such properties, right? In other words, you either accept that consciousness is an emergent property of *something*, or you're forced to posit a supernatural mind-body dualistic explanation. If we accept a materialistic universe, then consciousness is by definition going to be an emergent property of *something*, since we cannot seem to reduce consciousness to a single process and still call it consciousness. And we already know that the 'something' in human consciousness is neural activity, because we have over a century of observational data demonstrating that human consciousness cannot exist without a functioning human brain; and we know that changes to the neural processes operating within the brain can cause changes in the functioning of consciousness.
Literally nobody in either philosophy (except some of those who are of the theistic persuasion) or science posits anything other than the notion that consciousness is an emergent property. So... what the hell are you even talking about?
When you claim there's no evidence supporting my interpretation of consciousness, you're demonstrating at best that you simply don't know what the term 'emergent property' means, and at worst that you're actively suggesting a supernatural explanation for consciousness. I prefer the middle road, though, where either or both of us is simply misinterpreting the other's argument.
And no reason to hypothesise that it can.
Nonsense. We have lots of reasons to hypothesize exactly that. By accepting that we live in a materialistic universe, we find ourselves forced to conclude that it is plausible that any process within it can be replicated, since any such process is subject to the same basic natural laws and is not fucking magic. You appear to be confusing my statement that there's no good theoretical reason why consciousness could not be artificially recreated with a non-existent statement where I claim there's no good practical reason why we can't do it.
Which is why the first hurdle is to define what we're trying to prove. Traditionally, attempts to form scientific hypotheses about consciousness have foundered on one of two rocks: either the 'we can't measure this' rock, or the 'we've found something to measure, but no one really thinks it's consciousness' rock. This is why it's called the 'hard problem' of consciousness. Because there are lots of easy problems to solve, just by redefining conscious experience as something that's simple to measure.
Except this is not actually the issue at all if we're talking about creating artificial consciousness. You don't need to define something in order to create it; nor do you explicitly need to understand or measure it first. If we were to simulate all the neurons in the human brain in real-time, the resulting simulation might be conscious. Whether or not we have a working definition or understanding of consciousness is irrelevant to the factual question of whether the simulation is conscious; the problem you're describing is not a problem for actually creating consciousness, it's a problem for identifying it. Artificial consciousness would still be conscious regardless of whether or not we can recognize it.
In order for science to be useful here, we need something we can measure. Or we need to prove that there is no possible difference between A and B. What we can't do is declare we're only interested in measurable things, say that the difference is not measurable, and then claim that because it's not measurable it somehow doesn't exist.
Which is where the simulation comes in. Since we know human consciousness to be a product of the brain (we don't need to understand in exacting detail how consciousness functions to know this, just like you don't need to understand the physical processes by which fire produces smoke to make the connection between the two), we can reasonably conclude that a simulation of said brain, at a high enough resolution, is in fact conscious when it behaves similarly to a real brain. It wasn't programmed, after all, to pretend to be conscious; its consciousness is the result of a simulated version of the exact same processes that appear to produce our own consciousness. At that point we can start to actually experimentally understand consciousness in ways that are not possible at present, by altering bits and pieces of the simulation in order to see what changes.
The argument that consciousness is an emergent property only goes so far. It's in effect claiming that any mechanism that is sufficiently complicated to duplicate the behaviour of a conscious person, develops consciousness.
...no, it's really not.
If I leave the right substances in the right mix under the right circumstances, and give it enough time, ordered, structured crystals will form. These crystals are an emergent structure. Their formation, however, isn't entirely random: you need the right initial conditions. The same obviously applies to consciousness as an emergent property of complex systems. Consciousness *is* an emergent property that forms out of complex systems, but that doesn't imply that every sufficiently complex system will automatically develop consciousness.
that a 'cognitive zombie' would be logically impossible. If it's not logically impossible, then there's still no way of measuring whether consciousness is present or not.
This is not really a good argument, since this kind of logic invalidates any and all measurements, period. It leads to solipsism. It is not logically impossible that you are actually a brain in a vat and that everything you've ever experienced is a lie; therefore it is impossible to measure anything at all. While it's a nice little thought experiment, it's nothing more than a distraction. If you genuinely accepted such logic and followed it, you might as well become catatonic, because really, what's the point? On the other hand, we could just accept that even if it's technically true that we can't know whether the world we experience is a lie, it's not at all helpful to actually behave as if it is. We accept that the reality we experience is at least somewhat objectively true instead of a lie, and this then allows us to observe its nature and draw conclusions about it. And if we can assume that the world we experience is actually real instead of a lie, then we can apply that same standard to an artificial brain's consciousness and assume that, since it operates according to the same basic (if not perfectly understood) mechanisms that give rise to our own consciousness, it must in fact be conscious.