
We could build an artificial brain that believes itself to be conscious. Does that mean we have solved the hard problem?

The hard problem indeed. Some sort of feedback loop between patterns of neuronal firing and the interpretation of these 'pixel/pattern' formations on the 'screen' of a brain's 'global workspace', forming the subjective experience of a virtual representation of information... as a rough guess.
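As a purely illustrative toy (not a model of any real brain; every name in it is made up), the kind of loop I'm gesturing at might look like a system that posts patterns to a shared 'workspace' and then feeds its own interpretation of those patterns back in as input for the next cycle:

```python
# Toy sketch of a "global workspace" feedback loop (illustrative only).
# A made-up system posts patterns to a shared workspace, then re-reads
# and re-interprets its own postings as part of the next cycle's input.

def interpret(pattern):
    """Crude 'interpretation': reduce a pattern to a short label (made up)."""
    return f"saw:{sum(pattern) % 10}"

def workspace_loop(initial_pattern, cycles=5):
    workspace = list(initial_pattern)  # the shared 'screen' other processes read
    history = []
    for _ in range(cycles):
        label = interpret(workspace)   # read and interpret the current contents...
        history.append(label)
        # ...then feed the interpretation back in as part of the next pattern,
        # so the system is partly reacting to its own previous 'broadcast'.
        workspace = [ord(c) % 7 for c in label] + workspace[:3]
    return history

if __name__ == "__main__":
    print(workspace_loop([1, 2, 3, 4]))
```

Nothing here experiences anything, of course; it's just the shape of the feedback idea.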
 
Consciousness is not the hard problem. It is the first-hand experience that is.
 

That's not what the term 'the hard problem of consciousness' represents.

"The hard problem of consciousness is the problem of explaining how and why we have qualia or phenomenal experiences—how sensations acquire characteristics, such as colors and tastes." - Wikipedia.
 

Eh. Sigh. I forgot your problem understanding simple texts. "Qualia" is another way of expressing "first-hand experience". Neither "qualia" nor "first-hand experience" is IMO really fitting, but "consciousness" covers so much more than just the hard problem.

Creating an agent that says it is conscious is not a solution.
 

Or maybe your writing isn't as clear as you think it is? Consciousness is the 'first-hand experience'.
 
If a brain thinks it's conscious, is it conscious?

Yes.

The word 'duh' comes to mind?

How are we distinguishing between a mechanism that experiences qualia and a mechanism that merely produces statements saying that it does?

Science requires that a subject be measurable. That's the hurdle we need to overcome: to come up with something that both fits the hard problem of consciousness and is measurable in some way.
 

That wasn't the question I responded to. The question was "if it thinks it's conscious, is it conscious?", which is kind of like asking 'if a piece of fruit is an apple, is it an apple?'

As for your question: what exactly is the difference between an AI and a human brain in that regard? I have no mechanism for determining that the things you say about what you experience aren't just the result of a simple "if x then y" diagram.

If we accept that other humans are conscious on the basis of them saying they are... then unless we can demonstrate it's actually because of a programmed "if x then y" routine, why shouldn't we accept the same thing when said by an AI?

We don't really have a good reason to discount a simulated brain telling us it's conscious; and if we accept that, then we can begin to try to create a measurable, scientifically understandable concept of consciousness.
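Just to make the "if x then y" worry concrete, here's a deliberately trivial toy sketch (everything in it is made up for illustration): an agent whose claim to be conscious is nothing but a canned lookup, exactly the kind of mechanism whose self-report carries no evidential weight.

```python
# Deliberately trivial "if x then y" agent (illustrative only).
# Its report of being conscious is just a canned response to a pattern match;
# nothing about the lookup table suggests any experience behind the words.

CANNED_RESPONSES = {
    "are you conscious?": "Yes, I am conscious and I experience things.",
    "what do you feel?": "I feel a warm sense of curiosity.",
}

def scripted_agent(question: str) -> str:
    # Pure if-x-then-y: match the input, emit the stored output.
    return CANNED_RESPONSES.get(question.lower().strip(),
                                "I'm not sure how to answer that.")

if __name__ == "__main__":
    print(scripted_agent("Are you conscious?"))
    print(scripted_agent("What do you feel?"))
```

The point is that, from the outside, we have no principled test that separates this lookup table from whatever humans are doing when they say the same words.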
 
http://aeon.co/magazine/psychology/is-consciousness-an-engineering-problem/

OK, so let's take this 'consciousness' stuff away from the philosophers, and see if we can't, to borrow a line from Andy Weir's The Martian, science the shit out of this thing.

I disagree with the premise laid out early in the article that "People once thought that if you made a computer complicated enough it would just sort of ‘wake up’ on its own."

While that makes for great science fiction, and science fiction writers are often thought of as futurists, I don't think that anyone with even a passing knowledge of programming thought that to be the case. I think it has always been the case that those actually working on AI have known that consciousness would need to be programmed in from the start, and they have been dealing with precisely the issues the article describes. That means it is not as easy as the article represents. It is not currently possible to just make a computer 'believe' it is conscious and fool real people into believing that is the case, or it would already have been done by now. People a lot smarter than the author of this article have been working on precisely what he lays out, and they have not been able to solve the problem.
 
I thought that the latest approach to AI was self-learning computers: they are given sensory input (visual, voice recognition, etc.) and are then taught pretty much like a child. I've heard catch phrases such as "deep learning" and "cognitive computing" used.
 

That is pretty much what the article describes as well. They are providing the computer with programming that allows it to learn. It is one thing to program a computer to respond to specific questions with specific answers and try to make it fool people into thinking that it is conscious, but that is not what I was referring to. In order to really be conscious, one needs the ability to learn. That is quite different from the simplistic notion of just throwing incredible amounts of information at a computer and expecting it to wake up. Like I said, something like that makes for great sci-fi, but it is not the way any serious programmer is going to approach the problem.
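As a rough, hedged illustration of that distinction (a bare-bones toy perceptron made up for this post, nothing like the actual deep-learning systems those catch phrases refer to): instead of storing answers, you store a learning rule, and the behaviour comes from the examples.

```python
# Toy perceptron trained from labeled examples (illustrative only).
# The rule it ends up with is never written by the programmer; it is
# nudged into place by the data, which is the basic sense of "learning" here.

def train_perceptron(samples, epochs=20, lr=0.1):
    # samples: list of ((x1, x2), label) pairs with label 0 or 1
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
            err = label - pred                  # -1, 0, or +1
            w1 += lr * err * x1                 # nudge weights toward fewer errors
            w2 += lr * err * x2
            b  += lr * err
    return w1, w2, b

if __name__ == "__main__":
    # Learn a simple OR-like rule purely from examples.
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
    w1, w2, b = train_perceptron(data)
    for (x1, x2), _ in data:
        print((x1, x2), 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0)
```

Obviously this learns a rule, not consciousness; it's only meant to show what "programming that allows it to learn" means in the smallest possible case.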
 

I disagree with the premise laid out early in the article that "People once thought that if you made a computer complicated enough it would just sort of ‘wake up’ on its own."

While that makes for great science fiction, and science fiction writers are often thought of as futurists, I don't think that anyone with even a passing knowledge of programming thought that to be the case. I think it has always been the case that those actually working on AI have known that consciousness would need to be programmed in from the start, and they have been dealing with precisely the issues the article describes.

That's not really what the idea of it arising on its own entails; nor is it an idea that has disappeared completely (nor should it). Rather, it has always been accepted as a possibility that if you create a complex enough system capable of learning, it might develop consciousness on its own. And in fact, if one had more than just a passing knowledge of programming, one would know there is absolutely no reason to assume that consciousness needs to be programmed in from the start, because one would be familiar with things like self-modifying code and metaprogramming, which are by no means science fiction and which, given a solid enough learning algorithm as a base and sufficiently powerful hardware, could conceivably give rise to consciousness on their own.
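To be clear about what I mean by that (a minimal made-up sketch, which obviously proves nothing about consciousness): a program can generate, compile, and install new code for itself at runtime, so "behaviour that was never programmed in from the start" is routine rather than science fiction.

```python
# Minimal metaprogramming sketch (illustrative only): the program writes,
# compiles, and installs a new function for itself at runtime, based on
# data it has seen rather than on code written in advance.

observed_pairs = [(1, 2), (2, 4), (3, 6)]   # pretend 'experience': x -> 2x

def synthesize_rule(pairs):
    # Infer a multiplier from the data, then generate source code for it.
    factor = pairs[0][1] // pairs[0][0]
    source = f"def learned_rule(x):\n    return x * {factor}\n"
    namespace = {}
    exec(source, namespace)          # compile the generated code at runtime
    return namespace["learned_rule"], source

if __name__ == "__main__":
    rule, source = synthesize_rule(observed_pairs)
    print(source)          # the code the program wrote for itself
    print(rule(10))        # behaviour no programmer typed in beforehand
```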


That means it is not as easy as the article represents. It is not currently possible to just make a computer 'believe' it is conscious and fool real people into believing that is the case, or it would already have been done by now.

And it *has* in fact been done. Or at least, the act of a computer program fooling real people into believing it is conscious (by fooling them into thinking it is a real person) has in fact been accomplished. This is the so-called Turing test, and it has been passed. It's not yet at the level where it can do this consistently, but it *has* been done.
 
I thought that the latest approach to AI was self-learning computers: they are given sensory input (visual, voice recognition, etc.) and are then taught pretty much like a child. I've heard catch phrases such as "deep learning" and "cognitive computing" used.

Very good TED talk about that. Granted, this isn't to develop a conscious machine. The goal so far is just to have it develop visual recognition.

 

This is a nice little video of neurons activating (SHORT... and good):

 
Rather, it has always been accepted as a possibility that if you create a complex enough system capable of learning, it might develop consciousness on its own. And in fact, if one had more than just a passing knowledge of programming, one would know there is absolutely no reason to assume that consciousness needs to be programmed in from the start, because one would be familiar with things like self-modifying code and metaprogramming, which are by no means science fiction and which, given a solid enough learning algorithm as a base and sufficiently powerful hardware, could conceivably give rise to consciousness on their own.

It wouldn't be a consciousness. Consciousness has physical properties which are separate from the data that neural nets process. An artificial neuron fires based on 1s and 0s; it experiences no qualia. It's a data-processing abstraction rather than a consciousness.

However, in the case of human brains, the data processing abstraction layer of the neural network is directly joined to consciousness, so.... how far are we going to distance ourselves from the abstract information layer? So far that we objectify porn?
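To spell out what I mean by a data-processing abstraction (a textbook-style artificial neuron sketched from memory, not any particular library's API): it's a weighted sum pushed through a threshold, and nothing in that arithmetic obviously corresponds to experiencing anything.

```python
# A bare artificial neuron (illustrative sketch): weighted sum + threshold.
# Whatever else a biological neuron does, this abstraction is just arithmetic
# on numbers, which is the contrast being drawn above.

def artificial_neuron(inputs, weights, bias, threshold=0.0):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if activation > threshold else 0   # "fires" or doesn't

if __name__ == "__main__":
    print(artificial_neuron([1, 0, 1], [0.6, -0.4, 0.9], bias=-1.0))  # fires: 0.5 > 0
    print(artificial_neuron([0, 1, 0], [0.6, -0.4, 0.9], bias=-1.0))  # stays silent
```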
 