
We could build an artificial brain that believes itself to be conscious. Does that mean we have solved the hard problem?

What we know is that consciousness is an emergent property of neural activity, which is itself really just a complex system of data exchange between individual connections. This implies that any sufficiently complex system of data exchange could conceivably achieve the same effect.

That is a basic logical error. There is no such implication.
 
Of course, but I don't think anybody was ever suggesting that AI arising on its own would come about by simply adding CPU power to existing systems.

It seemed as if you were drifting in that direction; thank you for clarifying.

I've always understood that particular AI genesis path to result from things like increasingly complex network architectures and software agents; for example, having a self-learning data-mining agent (or really any other function) that progressively increases its own complexity, with consciousness being a possible by-product after enough successive generations even though its learning framework wasn't specifically designed to produce consciousness.
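To make that path concrete, here is a toy sketch in Python (purely illustrative; the task, mutation scheme, and numbers are all made up and are not anyone's actual AI design) of a bottom-up learning loop in which program complexity is free to grow across generations even though nothing in the framework explicitly asks for it:

[CODE]
# Hypothetical sketch: a population of tiny "programs" selected only for task
# performance. Growth in program length is allowed but never directly rewarded;
# complexity increases as a by-product of selection.
import random

TARGET = [1, 0, 1, 1, 0, 1, 0, 0]  # toy task: reproduce this bit pattern

def fitness(program):
    # score = how many bits of the target the program's output matches
    output = [rule % 2 for rule in program][:len(TARGET)]
    output += [0] * (len(TARGET) - len(output))
    return sum(1 for a, b in zip(output, TARGET) if a == b)

def mutate(program):
    child = list(program)
    roll = random.random()
    if roll < 0.3:                        # grow: add a new "rule"
        child.append(random.randint(0, 9))
    elif roll < 0.4 and len(child) > 1:   # shrink: drop a rule
        child.pop(random.randrange(len(child)))
    else:                                 # tweak an existing rule
        child[random.randrange(len(child))] = random.randint(0, 9)
    return child

population = [[random.randint(0, 9)] for _ in range(20)]  # start maximally simple
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

best = max(population, key=fitness)
print(f"best fitness {fitness(best)}/{len(TARGET)}, program length {len(best)}")
[/CODE]

The programs end up longer than they started simply because longer programs happen to do better at the task, which is the sense of "complexity as a by-product" I mean.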

I guess it is somewhat plausible, but I still think it is more in the realm of science fiction. Self-modifying code is generally written with the idea of reducing complexity in the code base, so I think any self-modifying programming language that could eventually lead to consciousness would likely still have that consciousness as at least a partial intent of the designer. It may not necessarily have to incorporate all of the principles of consciousness in the beginning, though, so I do see your point.

To be a truly conscious AI, it would have to be able to fool everyone all the time,

Why? Humans may not even be able to do that all the time. Plus, the requirement you are positing here means it must behave like a *human*; but why should that be the requirement? Plenty of bird species have been demonstrated to have intelligence and self-awareness on par with that of human children, but this doesn't mean that these birds can convince anyone and everyone of that fact all the time. Machine consciousness/behavior may simply not be recognizable/understandable for most people in the same way that most people won't understand that the bird's behavior demonstrates advanced problem-solving intelligence, emotion and self-awareness. Most people just see a bird being a bird.

I didn't say that it would need to behave like a human, or be indistinguishable from a human. Just that it would need to fool people into thinking it was conscious all the time, in which case, it would not be fooling anyone, as it would actually be conscious. The AI could, in fact, be conscious of the fact that it is not human, and be able to express this consciousness in human interactions, while still convincing them that it is conscious. Of course, someone with an unrealistic expectation that a consciousness must be human would actually be fooling themselves, so being confronted by such an obstinate human would not refute the consciousness of the AI.
 
It wouldn't be a consciousness. Consciousness has physical properties which are separate from the data that neural nets process. An artificial neuron fires based on 1s and 0s; it experiences no qualia. It's a data processing abstraction, rather than a consciousness.

There is absolutely no evidence whatsoever supporting your interpretation of consciousness. As far as we can tell consciousness is an emergent property of neural activity,

There is absolutely no evidence whatsoever supporting your interpretation of consciousness. If you want consciousness as an emergent property to be accepted, and any other interpretation to be rejected, you at the very least need some kind of reason.

and we have no reason to hypothesize that this emergent property can not be recreated through artificial neurons or other means.

And no reason to hypothesise that it can. Which is why the first hurdle is to define what we're trying to prove. Traditionally, attempts to form scientific hypotheses about consciousness have foundered on one of two rocks: either the 'we can't measure this' rock, or the 'we've found something to measure, but no one really thinks it's consciousness' rock. This is why it's called the 'hard problem' of consciousness: there are lots of easy problems to solve, just by redefining conscious experience as something that's simple to measure.

The hard problem is: why is there subjective experience at all? The classic description of this is the cognitive zombie. Let's assume we have two people, A and B. They behave identically and respond identically. However, A has subjective experience, while B is a silent mechanism with no subjective experience: a cognitive zombie. What's the measurable difference between them?

In order for science to be useful here, we need something we can measure, or we need to prove that there is no possible difference between A and B. What we can't do is declare we're only interested in measurable things, say that the difference is not measurable, and then claim that because it's not measurable it somehow doesn't exist.

The argument that consciousness is an emergent property only goes so far. It's in effect claiming that any mechanism that is sufficiently complicated to duplicate the behaviour of a conscious person develops consciousness. There's no evidence for this, and no reason to suppose it's logically necessary for it to happen, i.e. that a 'cognitive zombie' would be logically impossible. If it's not logically impossible, then there's still no way of measuring whether consciousness is present or not.

So step one remains the same. How are we measuring what is described in the hard problem (i.e. subjective experience)? If we can't do that, then the answer to the OP is no.
 
There is absolutely no evidence whatsoever supporting your interpretation of consciousness. If you want consciousness as an emergent property to be accepted, and any other interpretation to be rejected, you at the very least need some kind of reason.

I... don't think you understand what you're saying here. You do realize that emergence refers to the process whereby larger entities or patterns arise through interactions among simpler entities that do not on their own exhibit such properties, right? In other words, you either accept that consciousness is an emergent property of *something*, or you're forced to posit a supernatural mind-body dualistic explanation. If we accept a materialistic universe, then consciousness is by definition going to be an emergent property of *something*, since we can not seem to reduce consciousness to a single process and still call it consciousness. And we already know that the 'something' in human consciousness is neural activity, because we have over a century of observational data demonstrating that human consciousness can not exist without a functioning human brain; and we know that changes to the neural processes operating within the brain can cause changes in the functioning of consciousness.

Literally nobody in either philosophy (except some of those who are of the theistic persuasion) or science posits anything other than the notion that consciousness is an emergent property. So... what the hell are you even talking about?

When you claim there's no evidence supporting my interpretation of consciousness, you're demonstrating that at best you simply don't know what the term 'emergent property' means and at worst that you're actively suggesting a supernatural explanation for consciousness. I prefer the middle road though, where either or both of us is simply misinterpreting what the other's argument is.



And no reason to hypothesise that it can.

Nonsense. We have lots of reasons to hypothesize exactly that. By accepting that we live in a materialistic universe, we find ourselves forced to conclude that it is plausible that any process within it can be replicated, since any such process will be subject to certain basic natural laws and is not fucking magic. You appear to be confusing my statement that there's no good theoretical reason why consciousness could not be artificially recreated with a non-existent statement where I claim there's no good practical reason why we can't do it.

Which is why the first hurdle is to define what we're trying to prove. Traditionally, attempts to form scientific hypotheses about consciousness have foundered on one of two rocks: either the 'we can't measure this' rock, or the 'we've found something to measure, but no one really thinks it's consciousness' rock. This is why it's called the 'hard problem' of consciousness: there are lots of easy problems to solve, just by redefining conscious experience as something that's simple to measure.

Except this is not actually the issue at all if we're talking about creating artificial consciousness. You don't need to define something in order to create it; nor do you explicitly need to understand or measure it first. If we were to simulate all the neurons in the human brain in real-time, the resulting simulation might be conscious. Whether or not we have a working definition/understanding of consciousness is irrelevant to the factual reality that the simulation is conscious; the problem you're describing is not a problem for actually creating consciousness, it's a problem for identifying it. Artificial consciousness would still be conscious regardless of whether or not we can recognize it.


In order for science to be useful here, we need something we can measure, or we need to prove that there is no possible difference between A and B. What we can't do is declare we're only interested in measurable things, say that the difference is not measurable, and then claim that because it's not measurable it somehow doesn't exist.

Which is where the simulation comes in. Since we know human consciousness to be a product of the brain (we don't need to understand in exacting detail how consciousness functions to know this, just like you don't need to understand the physical processes involved in smoke resulting from a fire to make the connection between the two), we can reasonably conclude that a simulation of said brain, with a high enough resolution, is in fact conscious when it behaves similarly to a real brain. It wasn't programmed, after all, to pretend to be conscious; its consciousness is the result of a simulated version of the exact same processes that appear to produce our own consciousness. So, at that point we can start to actually experimentally understand consciousness in ways that are not possible at present, by altering bits and pieces of the simulation in order to see what changes.
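For what it's worth, here is a deliberately tiny illustration of the "simulate the neurons and watch what the network does" idea. This is my own sketch, not a claim about how an actual whole-brain simulation would be built; every parameter and the connectivity are arbitrary. It uses leaky integrate-and-fire units:

[CODE]
# Hypothetical toy network of leaky integrate-and-fire neurons with noise drive.
import random

N = 50                      # number of simulated neurons
THRESHOLD = 1.0             # membrane potential at which a neuron fires
LEAK = 0.9                  # per-step decay of membrane potential
W = 0.2                     # weight of a spike delivered to a downstream neuron

# random sparse connectivity: each neuron excites a few others
targets = {i: random.sample(range(N), 5) for i in range(N)}
potential = [0.0] * N

for step in range(100):
    spikes = []
    for i in range(N):
        potential[i] = potential[i] * LEAK + random.uniform(0.0, 0.15)  # leak + noisy drive
        if potential[i] >= THRESHOLD:
            spikes.append(i)
            potential[i] = 0.0          # reset after firing
    for i in spikes:                    # deliver spikes to downstream neurons
        for j in targets[i]:
            potential[j] += W
    if step % 20 == 0:
        print(f"step {step:3d}: {len(spikes)} neurons fired")
[/CODE]

The point of the toy is only this: the experimenter can change a weight, a threshold, or a connection and observe what changes in the network's behaviour, which is exactly the kind of manipulation a high-resolution brain simulation would permit.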



The argument that consciousness is an emergent property only goes so far. It's in effect claiming that any mechanism that is sufficiently complicated to duplicate the behaviour of a conscious person, develops consciousness.

...no, it's really not.

If I leave the right substances in the right mix under the right circumstances, and give it enough time... ordered, structured crystals will form. These crystals are an emergent structure. Their formation, however, isn't entirely random. You need the right initial conditions. The same obviously applies to consciousness as an emergent property of complex systems. Consciousness *is* an emergent property that forms out of complex systems, but that doesn't imply that every sufficiently complex system will automatically develop consciousness.
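The same point can be made with the textbook toy example of emergence (my illustration, not part of the original argument): Conway's Game of Life, where a "glider", a structure that moves coherently across the grid, emerges from trivially simple local rules, but only given the right initial configuration:

[CODE]
# Conway's Game of Life on a small wrapping grid; the starting cells form a glider.
SIZE = 12
glider = {(1, 2), (2, 3), (3, 1), (3, 2), (3, 3)}   # the "right initial condition"

def neighbours(cell):
    r, c = cell
    return [((r + dr) % SIZE, (c + dc) % SIZE)
            for dr in (-1, 0, 1) for dc in (-1, 0, 1) if dr or dc]

def step(live):
    counts = {}
    for cell in live:
        for n in neighbours(cell):
            counts[n] = counts.get(n, 0) + 1
    # a cell is alive next tick with exactly 3 live neighbours, or 2 if already alive
    return {cell for cell, k in counts.items() if k == 3 or (k == 2 and cell in live)}

cells = glider
for generation in range(4):
    print("\n".join("".join("#" if (r, c) in cells else "." for c in range(SIZE))
                    for r in range(SIZE)), end="\n\n")
    cells = step(cells)
[/CODE]

Scatter the same five cells randomly and you usually get nothing interesting; arrange them correctly and an ordered, travelling structure emerges. Emergence, but only from the right initial conditions.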

that a 'cognitive zombie' would be logically impossible. If it's not logically impossible, then there's still no way of measuring whether consciousness is present or not.

This is not really a good argument, since this kind of logic invalidates any and all measurements, period. It leads to solipsism. It is not logically impossible that you are actually a brain in a vat and that everything you've ever experienced is a lie; therefore it is impossible to measure anything at all. While it's a nice little thought-experiment, it's nothing more than a distraction. If you genuinely accepted such logic and followed it, you might as well become catatonic, because really, what's the point? On the other hand, we could just accept that even if it's technically true that we can't know the world we experience isn't a lie, it's not at all helpful to actually behave as if it is. We accept that the reality we experience is at least somewhat objectively true instead of a lie, and this then allows us to observe its nature and make conclusions about it. And if we can assume that the world we experience is actually real instead of a lie, then we can apply that same standard to an artificial brain's consciousness and assume that, since it operates according to the same basic (if not perfectly understood) mechanisms that give rise to our own consciousness, it must in fact be conscious.
 
What we know is that consciousness is an emergent property of neural activity, which is itself really just a complex system of data exchange between individual connections. This implies that any sufficiently complex system of data exchange could conceivably achieve the same effect.

That is a basic logical error. There is no such implication.

Only if you assume that I was saying it implies that any sufficiently complex system *would* achieve the same effect. Well, granted, I suppose I should specify that the sufficiently complex system is structured in a way that actually facilitates the means through which consciousness operates; rather than just say any complex system at all. However, since we don't actually know what range of structures can and can not give rise to consciousness, it is perfectly logical to state that any sufficiently complex system could *conceivably* (operative word) give rise to consciousness.
 
I... don't think you understand what you're saying here. You do realize that emergence refers to the process whereby larger entities or patterns arise through interactions among simpler entities that do not on their own exhibit such properties, right? .........................
Yes. A strong AI could be presented with the image of a beautiful sunset and it could analyze the hell out of it, extracting every detail, then describe it in detail, possibly even calculating the time of day and location at which the image was taken. A consciousness could enjoy the image.
 
That is a basic logical error. There is no such implication.

Only if you assume that I was saying it implies that any sufficiently complex system *would* achieve the same effect. Well, granted, I suppose I should specify that the sufficiently complex system is structured in a way that actually facilitates the means through which consciousness operates; rather than just say any complex system at all. However, since we don't actually know what range of structures can and can not give rise to consciousness, it is perfectly logical to state that any sufficiently complex system could *conceivably* (operative word) give rise to consciousness.

Contrary: since we don't know what makes the difference, we cannot draw any inferences at all.
 
Only if you assume that I was saying it implies that any sufficiently complex system *would* achieve the same effect. Well, granted, I suppose I should specify that the sufficiently complex system is structured in a way that actually facilitates the means through which consciousness operates; rather than just say any complex system at all. However, since we don't actually know what range of structures can and can not give rise to consciousness, it is perfectly logical to state that any sufficiently complex system could *conceivably* (operative word) give rise to consciousness.

Contrary: since we don't know what makes the difference, we cannot draw any inferences at all.

Since we do not specifically know which paths give rise to consciousness, all paths remain theoretically open.
 
I... don't think you understand what you're saying here. You do realize that emergence refers to the process whereby larger entities or patterns arise through interactions among simpler entities that do not on their own exhibit such properties, right? .........................
Yes. A strong AI could be presented with the image of a beautiful sunset and it could analyze the hell out of it, extracting every detail, then describe it in detail, possibly even calculating the time of day and location at which the image was taken. A consciousness could enjoy the image.

I don't see what your response has to do with anything I said in the quoted section. It has no bearing on emergent systems.

I also don't see any fundamental reason to assume a strong AI could not assign subjective value judgements like enjoyment to experiences. Indeed, according to some definitions of the term an AI would in fact have to be able to have subjective experiences in order to be referred to as a strong AI; so... I'm not exactly sure what you're trying to argue.
 
. A consciousness could enjoy the image.

And the difference is that it is experienced. Not that it is housed in a consciousness.
Exactly. Consciousness is an emergent property, not the hardware. If it is possible for a computer to become conscious, it will be an emergent property, not the hardware, though the hardware would be necessary. Just as our consciousness is not the neurons in our brain but an emergent property also dependent on those neurons.
 
And the difference is that it is experienced. Not that it is housed in a consciousness.
Exactly. Consciousness is an emergent property, not the hardware. If it is possible for a computer to become conscious, it will be an emergent property, not the hardware, though the hardware would be necessary. Just as our consciousness is not the neurons in our brain but an emergent property also dependent on those neurons.

Emergent properties are fiction. Scientists believe that things are determined. It's in their method. Sciencing this thing out, thanks bilby, has been a part of biological science since almost forever. Some of the best thinking goes along the following lines.

We posit that the minimum requirement for sensory consciousness and qualia is a brain including a forebrain (but not necessarily a developed cerebral cortex/pallium), midbrain, and hindbrain. This brain must also have (1) hierarchical systems of intercommunicating, isomorphically organized, processing nuclei that extensively integrate the different senses into representations that emerge in upper levels of the neural hierarchy; and (2) a widespread reticular formation that integrates the sensory inputs and contributes to attention, awareness, and neural synchronization.

There is a minimum configuration of brain necessary to discriminate and use sensory information. "The evolutionary and genetic origins of consciousness in the Cambrian Period over 500 million years ago" (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3790330/#!po=25.4717) can serve as a primer in this area.

On the other hand, modern neuroscientists have taken a more analytical approach based on what we have found out from the study of consciousness resulting from Descartes' gracious religious justification, so "The claustrum’s proposed role in consciousness is supported by the effect and target localization of Salvia divinorum" (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3935397/) brings us up to speed here.

This article brings together three findings and ideas relevant for the understanding of human consciousness: (I) Crick’s and Koch’s theory that the claustrum is a “conductor of consciousness” crucial for subjective conscious experience. (II) Subjective reports of the consciousness-altering effects of the plant Salvia divinorum, whose primary active ingredient is salvinorin A, a κ-opioid receptor agonist. (III) The high density of κ-opioid receptors in the claustrum. Fact III suggests that the consciousness-altering effects of S. divinorum/salvinorin A (II) are due to a κ-opioid receptor mediated inhibition of primarily the claustrum and, additionally, the deep layers of the cortex, mainly in prefrontal areas. Consistent with Crick and Koch’s theory that the claustrum plays a key role in consciousness (I), the subjective effects of S. divinorum indicate that salvia disrupts certain facets of consciousness much more than the largely serotonergic hallucinogen lysergic acid diethylamide (LSD).

Finally, a video lecture by Christof Koch on the framework for consciousness.

[YOUTUBE]https://www.youtube.com/watch?v=0Fti1IX78Io[/YOUTUBE]



enjoy.
 
I understand the concept of data abstraction layers just fine, thanks. It isn't particularly relevant to the post you were replying to, nor does it particularly apply to the topic at all in the way you're trying to apply it.
The relevance of DALs (Data Abstraction Layers) to AIs is that AIs exist in DALs.

Do you understand that electrons are not necessarily aware of the information they are used to organize and transmit over the internet? Do you understand how one could use humans (if one had enough) in a DAL to calculate various things?

Here is a cool thing that some kids at Stanford did:
http://news.stanford.edu/news/2015/june/computer-water-drops-060815.html
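That droplet computer makes the substrate-independence point nicely. As a sketch of my own (illustrative only, not from the Stanford article): the higher layers of a computation don't care what implements the primitive gate, whether transistors, water droplets, or people raising hands in rooms.

[CODE]
# Everything below is built from a single NAND primitive; swap in any physical
# substrate that behaves like NAND and the arithmetic still works.
def nand(a, b):
    # stand-in for whatever physical process implements the gate
    return 0 if (a and b) else 1

def xor(a, b):
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

def full_adder(a, b, carry_in):
    s1 = xor(a, b)
    total = xor(s1, carry_in)
    carry_out = nand(nand(a, b), nand(s1, carry_in))
    return total, carry_out

def add4(x, y):
    # add two 4-bit numbers using nothing but the NAND primitive
    carry, result = 0, 0
    for i in range(4):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result

print(add4(5, 6))  # 11; the substrate never "knows" it is doing arithmetic
[/CODE]

The electrons, droplets, or humans implementing nand() are not aware of the addition being performed, which is the point about the data abstraction layer.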

You're still proposing a kind of dualism which isn't in evidence; asserting that consciousness has certain (undefined) physical properties that can not be achieved by the organization of data alone.
So you claim to understand DALs, yet you don't understand that the organization of data in a DAL, while supported by consciousnesses, is not necessarily conscious itself. It can be fed back to a consciousness....

There's really no reason why that human powered computer of yours could not theoretically in some sense produce consciousness; assuming such a system could be made complex and efficient enough (which is impossible in that particular example);
Do you think the following produces a consciousness, or a consciousness observes the product of the DAL:

you're simply asserting this is impossible because the calculations that human powered computer is doing are abstract without a direct physical connection between the calculations and the produced result.
I didn't assert that a human powered computer could not produce consciousness, that's silly. I don't know precisely how it would do so (maybe bowing towards Mecca at regular intervals synchronizes brainwaves), but I certainly don't know that it cannot do so.
 
Contrary: since we don't know what makes the difference, we cannot draw any inferences at all.

Since we do not specifically know which paths give rise to consciousness, all paths remain theoretically open.

This is not true. Some paths can be shown to be incorrect without knowing which of the remaining paths is the correct one.
 
I also don't see any fundamental reason to assume a strong AI could not assign subjective value judgements like enjoyment to experiences. Indeed, according to some definitions of the term an AI would in fact have to be able to have subjective experiences in order to be referred to as a strong AI; so... I'm not exactly sure what you're trying to argue.
Certainly it can assign subjective value judgements, but they will be the subjective value judgements of the programmer who assigned value scales in the programming. However, the strong AI can't tell you which of a series of images it likes most. Just as an art critic can't tell me which of a series of paintings I like most.
 
Certainly it can assign subjective value judgements, but they will be the subjective value judgements of the programmer who assigned value scales in the programming.

You're forgetting that the kind of AI we're talking about isn't programmed like that. There's no programmer assigning value scales. The AI would arise from a basic learning framework and would be capable of modifying its own code. It wouldn't be designed in a top-down fashion by a programmer, it would evolve up to the point of consciousness from a bottom-up approach. The subjective value it assigns experiences would be ones that it evolved on its own, not that which has been dictated to it by a programmer.
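As a toy sketch of what I mean (entirely hypothetical; the stimuli, numbers, and update rule are made up purely to illustrate the bottom-up point): nothing below hands the agent a value scale, and its preferences are whatever falls out of its own history of experiences.

[CODE]
# Hypothetical illustration: the agent's "value scale" starts flat and is shaped
# only by its own noisy experience of outcomes, not by a programmer-assigned table.
import random

STIMULI = ["sunset", "static", "birdsong", "alarm"]
# hidden environment: how rewarding each stimulus actually turns out to be
hidden_outcome = {"sunset": 0.9, "static": 0.1, "birdsong": 0.7, "alarm": 0.2}

values = {s: 0.0 for s in STIMULI}   # the agent's learned values, initially flat
LEARNING_RATE = 0.1

for trial in range(500):
    s = random.choice(STIMULI)                           # encounter a stimulus
    outcome = hidden_outcome[s] + random.gauss(0, 0.1)   # noisy experienced outcome
    values[s] += LEARNING_RATE * (outcome - values[s])   # update toward experience

favourite = max(values, key=values.get)
print("learned values:", {k: round(v, 2) for k, v in values.items()})
print("the agent's favourite stimulus:", favourite)
[/CODE]

Obviously a real bottom-up AI would be vastly more complicated, but the structural point stands: the resulting preferences were never written down by anyone.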


However, the strong AI can't tell you which of a series of images it likes most.

Of course it could. If it's capable of assigning subjective values, it would be quite capable of doing exactly that. It could do this even in a top-down model, since it wouldn't be relevant whether or not its values are derived from a programmer or are its own evolved sensibilities; in effect they're its own values either way in the same way that your own tastes are still your own even if you just copied them from a parent.
 
Since we do not specifically know which paths give rise to consciousness, all paths remain theoretically open.

This is not true. Some paths can be shown to be incorrect without knowing which of the remaining paths is the correct one.

Yes, of course. That said, in terms of the specific conditions needed for a complex system of data exchange to produce consciousness, we have not as yet shown any to be incorrect, to my knowledge. Hence my original argument regarding the conceivability of complex systems giving rise to consciousness is still valid.
 
The relevance of DALs (Data Abstraction Layers) to AIs is that AIs exist in DALs.

Which is not, actually, relevant at all. Indeed, human consciousness itself exists in what one could consider to be a data abstraction layer as well; so I fail to see how data abstraction is an argument against machine consciousness.

Do you understand that electrons are not necessarily aware of the information they are used to organize and transmit over the internet?

Naturally.


Do you understand how one could use humans (if one had enough) in a DAL to calculate various things?

Naturally.

Exactly how are either of these things at all relevant? Is it necessary for electrical impulses to be aware of the information they carry between neurons in order for human consciousness to exist? No.

So you claim to understand DALs, yet you don't understand that the organization of data in a DAL, while supported by consciousnesses, is not necessarily conscious itself. It can be fed back to a consciousness....

You're getting things mixed up. Nobody is claiming that the organization of data is itself conscious or necessarily so. Furthermore, it is not at all the case that this organization of data is *supported* by consciousness; rather, it would be the other way around. Human consciousness is the result of data being exchanged and organized in ways we are not consciously aware of; how could the same not apply to an AI?

Do you think the following produces a consciousness, or a consciousness observes the product of the DAL:


Obviously not, as should be self-evident from the very sentence you're responding to, where I explicitly state a human-powered computer (i.e., humans moving about in the physical world in order to create a computational system through their physical actions) could not possibly achieve enough complexity or efficiency to allow for that unless you throw away the laws of physics.

I didn't assert that a human powered computer could not produce consciousness, that's silly. I don't know precisely how it would do so (maybe bowing towards Mecca at regular intervals synchronizes brainwaves), but I certainly don't know that it cannot do so.

If the computations happen through, as in your original example, people moving in and out of rooms, then such a system can't realistically achieve enough bandwidth to be on par with the bandwidth in the human brain itself.

That said, I did not claim your assertion is that a human-powered computer could not produce consciousness; I claimed your assertion of non-consciousness in this example is based on an unsubstantiated claim of needing some direct, non-abstract connection between calculations and consciousness, which appears to be what you're saying.
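To put rough numbers on that bandwidth point (order-of-magnitude figures of my own, not anything the other poster said):

[CODE]
# Back-of-envelope estimate of why a people-walking-between-rooms computer can't
# keep up with a brain in real time. Figures are rough orders of magnitude only.
neurons            = 8.6e10   # ~86 billion neurons in a human brain
avg_firing_rate_hz = 1.0      # a conservative assumed average of 1 spike/neuron/s
brain_events_per_s = neurons * avg_firing_rate_hz   # ~1e11 spike events per second

human_ops_per_s = 1.0         # assume one person can relay about one signal per second
people_needed   = brain_events_per_s / human_ops_per_s

print(f"spike events per second to emulate in real time: {brain_events_per_s:.1e}")
print(f"people needed just to relay them at 1 op/s each: {people_needed:.1e}")
# ~1e11 people, i.e. more than ten times the human population, before you even
# account for the roughly 1e14 synaptic connections each spike fans out across.
[/CODE]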
 
The argument that consciousness is an emergent property only goes so far. It's in effect claiming that any mechanism that is sufficiently complicated to duplicate the behaviour of a conscious person develops consciousness. There's no evidence for this, and no reason to suppose it's logically necessary for it to happen, i.e. that a 'cognitive zombie' would be logically impossible. If it's not logically impossible, then there's still no way of measuring whether consciousness is present or not.
Well, if the behaviors reside entirely within a data abstraction layer in which the behaviors are determined by the rules of the DAL, we could safely say that the behaviors are not directly caused by a consciousness, even if a consciousness participates in the creation of the DAL, observes and molds the DAL, etc.

If we create a video game, and interact with NPCs, we don't think that the actions of the NPCs are caused directly by a consciousness, rather they are programmed into the DAL.
 
.....

Again, what we know is that consciousness is an emergent property of neural activity, which is itself really just a complex system of data exchange between individual connections. This implies that any sufficiently complex system of data exchange could conceivably achieve the same effect. We do not have any reason to suspect there must be some sort of direct "connection" between the mechanism and the produced consciousness.

.....

If just a complex system of data exchange (processing, whatever) could achieve the same effect, there is no need for 'emergence', which conflicts with principles of science. One cannot run an experiment if that experiment generates emergent outcomes, not of that which it is composed. Setting that rubbish aside, you have nailed the paradigm needed. Why bother with the rubbish anyway? Does it sound good to make what humans do other than determined? A has properties. B has properties. A*B has properties that may be unexpected, but they are properties derived from the interaction of the properties of A and B. Anything else is psychological bullshit, or magic if you will. The fool who spouted "The whole is greater than the sum of its parts" had a pfart problem.
 