
We could build an artificial brain that believes itself to be conscious. Does that mean we have solved the hard problem?

I thought that the latest approach to AI was self-learning computers: they are given sensory input (visual, voice recognition, etc.) and then taught pretty much like a child. I've heard catchphrases such as "deep learning" and "cognitive computing" used.

Very good TED talk about that. Granted, this isn't to develop a conscious machine. The goal so far is just to have it develop visual recognition.
Baby steps.... It seems reasonable that the first step to developing strong AI would be having the computer capable of recognizing its environment, just as an infant spends most of its first year trying to make sense of its surroundings, even learning what its own hands and feet are.
 
It wouldn't be a consciousness. Consciousness has physical properties which are separate from the data that neural nets process. An artificial neuron fires based on 1s and 0s; it experiences no qualia. It's a data processing abstraction, rather than a consciousness.
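
To make that concrete, a single artificial neuron is nothing but a weighted sum pushed through a threshold. A minimal sketch (the function name, weights, and AND-gate example are all invented for illustration):

```python
# A single artificial neuron: weighted sum plus threshold.
# It is arithmetic all the way down; nothing here experiences anything.

def artificial_neuron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0  # "fire" or stay silent

# Example: weights chosen so the neuron behaves like an AND gate.
print(artificial_neuron([1, 1], [0.6, 0.6], -1.0))  # 1
print(artificial_neuron([1, 0], [0.6, 0.6], -1.0))  # 0
```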

However, in the case of human brains, the data processing abstraction layer of the neural network is directly joined to consciousness, so.... how far are we going to distance ourselves from the abstract information layer? So far that we objectify porn?

Are you suggesting we won't know whether computers can think until they start jacking off to porn? My, how the Turing test has changed!
 
I thought that the latest approach to AI was self-learning computers: they are given sensory input (visual, voice recognition, etc.) and then taught pretty much like a child. I've heard catchphrases such as "deep learning" and "cognitive computing" used.

This is a nice little video of neurons activating (SHORT... and good):
I like that the programmers have to monitor what the computer is doing and how it is doing it rather than the old programming method of telling the computer step by step what to do and how to do it. Certainly seems like the self-learning approach has promise.
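
Roughly, the shift looks like this: instead of writing the rule, you write a loop that adjusts itself against examples, and then you watch what it learned. A minimal sketch (the AND-gate data and learning rate are made up; this is the classic perceptron update, not any particular system):

```python
# Instead of hand-coding the rule, show the program examples and let it
# adjust its own weights (the classic perceptron update).

examples = [([1, 1], 1), ([1, 0], 0), ([0, 1], 0), ([0, 0], 0)]  # learn AND

weights, bias, lr = [0.0, 0.0], 0.0, 0.1
for epoch in range(20):
    for inputs, target in examples:
        out = 1 if sum(x * w for x, w in zip(inputs, weights)) + bias > 0 else 0
        error = target - out          # the programmer only watches this number
        weights = [w + lr * error * x for w, x in zip(weights, inputs)]
        bias += lr * error

print(weights, bias)  # a learned rule nobody typed in step by step
```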
 
It wouldn't be a consciousness. Consciousness has physical properties which are separate from the data that neural nets process. An artificial neuron fires based on 1s and 0s; it experiences no qualia. It's a data processing abstraction, rather than a consciousness.

There is absolutely no evidence whatsoever supporting your interpretation of consciousness. As far as we can tell, consciousness is an emergent property of neural activity, and we have no reason to hypothesize that this emergent property cannot be recreated through artificial neurons or other means. What you are proposing is mind/body dualism, which is not scientifically supportable.
 
I disagree with the premise laid out early in the article that "People once thought that if you made a computer complicated enough it would just sort of ‘wake up’ on its own."

While that makes for great science fiction, and science fiction writers are often thought of as futurists, I don't think that anyone who has even a passing knowledge of programming thought that to be the case. I think it has always been the case that those who might actually be working on AI have known that consciousness would need to be programmed in from the start, and have been dealing with precisely the issues the article describes.

That's not really what the idea of it arising on its own entails; nor is it an idea that has disappeared completely (nor should it). Rather, it's always been accepted as a possibility that if you create a complex enough system capable of learning, it might develop consciousness on its own. And in fact, if one had more than a passing knowledge of programming, one would know there is absolutely no reason to assume that consciousness needs to be programmed in from the start, because one would be familiar with things like self-modifying code and metaprogramming, which are by no means science fiction and which, given a solid enough learning algorithm as a base, could conceivably give rise to consciousness on their own if operating on sufficiently powerful hardware.
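
For anyone unfamiliar with the term, metaprogramming just means code that writes or rewrites code at runtime. A toy sketch of the mechanism (not an AI; every name and number here is invented for illustration):

```python
# Metaprogramming in miniature: the running program generates new
# source code, compiles it, and installs it in itself.

def make_rule(threshold):
    src = f"def learned_rule(x):\n    return x > {threshold}\n"
    namespace = {}
    exec(src, namespace)  # turn the generated source into a live function
    return namespace["learned_rule"]

rule = make_rule(0.5)        # the program now contains a function
print(rule(0.7), rule(0.2))  # that no programmer wrote directly: True False
```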

While a computer capable of replicating consciousness would likely need to incorporate self-modifying code or metaprogramming, it is science fiction to think that a supercomputer running self-modifying C++ is just going to 'wake up'. You admit it yourself when you say "given a solid enough learning algorithm". The language incorporating the self-modifying code would need to be designed with the principles of developing consciousness from the beginning.

That means that it is not so easy as the article represents. It is not currently possible to just make a computer 'believe' it is conscious, and fool real people into believing that to be the case, or it would already have been done by now.

And it *has* in fact been done. Or at least, the act of a computer program fooling real people into believing it is conscious (by fooling them into thinking it is a real person) has in fact been accomplished. This is the so-called Turing test, and it has been passed. It's not yet at the level where it can do this consistently, but it *has* been done.

Sure, some people can be fooled in specific enough cases. To be a truly conscious AI, it would have to be able to fool everyone all the time, in which case it would actually be conscious. It is kind of hard to word that correctly without being self-referential, so I apologize if it is confusing. My original point, however, is that the article is not telling anyone who is working on AI anything they don't already know, and they already know that a computer is not just going to become conscious on its own.
 
This is a nice little video of neurons activating (SHORT... and good):
I like that the programmers have to monitor what the computer is doing and how it is doing it rather than the old programming method of telling the computer step by step what to do and how to do it. Certainly seems like the self-learning approach has promise.

Yeah. Eventually we'll select neurons that select neurons that respond to what we are currently interested in, given our current and past data inputs. Let that loop run... selecting for more intelligent decision making neurons.
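
Something like this toy loop, scaled up enormously. Here the "neurons" are bare numbers and the fitness test is trivial; everything is invented for illustration:

```python
import random

# Keep the candidates that respond best to the current signal of
# interest, mutate them, and let the loop run.

random.seed(0)
target = 0.75                  # stand-in for "what we're interested in"
population = [random.uniform(-1, 1) for _ in range(20)]

for generation in range(50):
    # Selection: keep the five candidates closest to the target...
    population.sort(key=lambda w: abs(w - target))
    survivors = population[:5]
    # ...then refill the population with mutated copies of the survivors.
    population = [w + random.gauss(0, 0.1) for w in survivors for _ in range(4)]

best = min(population, key=lambda w: abs(w - target))
print(best)  # converges toward 0.75 over the generations
```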
 
It wouldn't be a consciousness. Consciousness has physical properties which are separate from the data that neural nets process. An artificial neuron fires based on 1s and 0s; it experiences no qualia. It's a data processing abstraction, rather than a consciousness.

However, in the case of human brains, the data processing abstraction layer of the neural network is directly joined to consciousness, so.... how far are we going to distance ourselves from the abstract information layer? So far that we objectify porn?

Are you suggesting we won't know whether computers can think until they start jacking off to porn? My, how the Turing test has changed!
I would think that an indication of "thinking" or awareness of self would be if the computer began complaining that it wasn't receiving sensory input when someone disconnected the sensors and the computer hadn't been programmed to notify the operator of sensor problems. Sorta like you would complain if you suddenly went blind or deaf.
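
That's testable in miniature. A hedged sketch of the idea: the program below is never told "report sensor faults"; it only learns what its input normally looks like and objects when reality stops matching its learned expectations (all numbers invented):

```python
import random

# The watcher learns the normal statistics of its "sense" and complains
# when input violates those expectations; no explicit "notify the
# operator of sensor problems" rule was ever programmed in.

def watch(stream, window=50):
    history = []
    for value in stream:
        if len(history) >= window:
            mean = sum(history) / len(history)
            std = (sum((v - mean) ** 2 for v in history) / len(history)) ** 0.5
            if std > 0 and abs(value - mean) > 6 * std:
                print("something is wrong with my senses:", value)
        history = (history + [value])[-window:]

random.seed(1)
normal = [random.gauss(10, 1) for _ in range(200)]  # a healthy, noisy sensor
watch(normal + [0.0] * 5)  # then the sensor is unplugged: input flatlines
```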
 
While a computer capable of replicating consciousness would likely need to incorporate self-modifying code or metaprogramming, it is science fiction to think that a supercomputer running self-modifying C++ is just going to 'wake up'. You admit it yourself when you say "given a solid enough learning algorithm". The language incorporating the self-modifying code would need to be designed with the principles of developing consciousness from the beginning.

Of course, but I don't think anybody was ever suggesting that AI arising on its own would come about by simply adding CPU power to existing systems. I've always understood that particular AI genesis path to result from things like increasingly complex network architectures and software agents; for example, a self-learning data-mining agent (or really any other function) that progressively increases its own complexity, with consciousness as a possible by-product after enough successive generations, even though its learning framework wasn't specifically designed to produce consciousness.

The other scenario I've heard suggested by some, one that is a bit more 'out there' and certainly far less likely imo, is the internet itself achieving consciousness by essentially being structured like a planet-sized brain. While this is unlikely to just 'happen', it's not actually inconceivable. If consciousness is indeed an emergent property of complex organizational systems of data exchange (as suggested by everything we know about our brains and our own consciousness), then the internet may already have enough complexity/capacity to achieve consciousness (although depending on the metric used, this would be a very recent thing... as in this year). The internet's backbones already exceed the maximum theoretical bandwidth of the combined neurons of the human brain. The internet's complexity has surpassed that of the human brain in every metric by now.

Of course, that doesn't necessarily mean it has or will ever develop consciousness; but then again, the same could've been said about our distant ancestors. Why did we develop it in the first place? Lots of animals with large and complex brains lack the kind of consciousness that we have; what specifically was the initial evolutionary trigger for already complex brains/biological systems to develop consciousness, and what makes it so that this couldn't also, hypothetically, happen to a complex system like the internet?
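
For what the comparison is worth, the arithmetic behind that bandwidth claim is back-of-envelope at best; every number below is a rough, commonly cited estimate, not a measurement:

```python
# Crude upper bound on the brain's "bandwidth": treat every neuron as
# a channel firing at its ceiling rate, one bit per spike.
neurons = 86e9        # commonly cited neuron count for a human brain
max_rate = 1000.0     # generous ceiling on firing rate, spikes per second
bits_per_spike = 1.0  # very crude coding assumption

brain_bits_per_s = neurons * max_rate * bits_per_spike
print(f"{brain_bits_per_s / 1e12:.0f} Tbit/s")  # ~86 Tbit/s

# Aggregate internet backbone capacity has been estimated in the
# hundreds of Tbit/s, so the orders of magnitude are at least
# comparable, which is all the argument above needs.
```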

Now to reiterate, I don't think this second scenario is at all likely... but I can't dismiss it as implausible either. It falls within the realm of possibility. The framework appears to be in place, but it's lacking the right catalyst.


To be a truly conscious AI, it would have to be able to fool everyone all the time,

Why? Humans may not even be able to do that all the time. Plus, the requirement you are positing here means it must behave like a *human*; but why should that be the requirement? Plenty of bird species have demonstrated intelligence and self-awareness on par with that of human children, but that doesn't mean these birds can convince anyone and everyone of that fact all the time. Machine consciousness/behavior may simply not be recognizable or understandable to most people, in the same way that most people won't understand that a bird's behavior demonstrates advanced problem-solving intelligence, emotion, and self-awareness. Most people just see a bird being a bird.
 
How do we know that it's conscious? What test could demonstrate that a conscious being is conscious?
 
bleh...
 

Attachment: a_bunch_of_rocks.png
Along with sensory input, wouldn't it have to be inquisitive? Babies are inquisitive all on their own. Not only do I want to look at the tennis ball, I want to reach out and feel it. It's fuzzy. Holy crap! It bounces.
 
It wouldn't be a consciousness. Consciousness has physical properties which are separate from the data that neural nets process. An artificial neuron fires based on 1s and 0s; it experiences no qualia. It's a data processing abstraction, rather than a consciousness.
There is absolutely no evidence whatsoever supporting your interpretation of consciousness. As far as we can tell, consciousness is an emergent property of neural activity, and we have no reason to hypothesize that this emergent property cannot be recreated through artificial neurons or other means. What you are proposing is mind/body dualism, which is not scientifically supportable.
Uhhhh... Do you think a whole bunch of electrons = 1, and the lack of electrons = 0? Think about Babbage's purely mechanical computer. Applying the same principles, we could create human-powered computers by having humans walk into and out of rooms following specific instruction sets.

The humans aren't necessarily going to be consciously aware of the calculation they are completing by following the instructions. The computation itself only involves a consciousness when the organized (non-conscious) data is interpreted by an actual consciousness. Consciousness is present at every step, in the sense that the humans are conscious, but no individual consciousness involved is aware of all the data at all times.

In other words, there are data abstraction layers.

The rule set the humans follow isn't conscious. The data carriers in the abstraction layer (humans) are not necessarily aware of everything going on at once. They aren't conscious of the calculations they do by following the rule set. They are simply conscious of their experiences and maybe the data they carry, and (maybe) the rules they follow, not the continuous whole product of those rules at all times, which an AI would be. Of course, if the highest level neuron in the neural net is fed directly back into a consciousness, it's reconnected.

I'm wondering if you'll be able to grasp the concept of data abstraction layers.

We don't have to be aware of every bit of information, or every single electron individually, for a computer to work. Lots of electrons in a certain place doesn't always = 1.

https://xkcd.com/505/
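
In the spirit of that strip, here's the same point in miniature: each "person" below follows one local rule card and sees only their own two inputs; nobody in the room knows that, together, they are doing binary addition. Entirely illustrative:

```python
# Two rule-followers, each obeying one card, neither aware of the sum.

def person_xor(a, b):  # card: "shout 1 iff exactly one neighbour shouts"
    return a ^ b

def person_and(a, b):  # card: "shout 1 iff both neighbours shout"
    return a & b

def half_adder(a, b):
    # "Addition" exists only for whoever reads both outputs together;
    # it is an abstraction layered over the rule-followers.
    return person_xor(a, b), person_and(a, b)

print(half_adder(1, 1))  # (0, 1): binary 1 + 1 = 10
```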
 
No. Consciousness is much more.

Not without firsthand experience/perception/awareness... which underpins everything else: the various attributes and features of consciousness, including whatever 'you' are conscious of.

No. Anything you are conscious of is not the hard problem. The hard problem is the experience of consciousness itself. Everything you are conscious of can be represented by electronics/software. But the actual firsthand experience? Not yet, anyway...
 
Not without firsthand experience/perception/awareness... which underpins everything else: the various attributes and features of consciousness, including whatever 'you' are conscious of.

No. Anything you are conscious of is not the hard problem. The hard problem is the experience of consciousness itself. Everything you are conscious of can be represented by electronics/software. But the actual firsthand experience? Not yet, anyway...

Try to separate, as a theoretical exercise, the experience of consciousness from consciousness and see what you have left.

If there is no firsthand experience of consciousness, there is no conscious activity, yet all of the related information is present within the brain.
 
No. Anything you are conscious of is not the hard problem. The hard problem is the experience of consciousness itself. Everything you are conscious of can be represented by electronics/software. But the actual firsthand experience? Not yet, anyway...

Try to separate, as a theoretical exercise, the experience of consciousness from consciousness and see what you have left.

A lot. You see, the problem here is how we can simulate a conscious person. There is no reason why we cannot do that, including an 'I' that reports that it is conscious and can hear its thoughts "in the mind's ear", etc. That is not a hard problem, because we really only need to create an agent that behaves as if it were conscious. The problems of vision recognition, rational choices, creativity, intention, etc. are difficult, but they are not really the hard problem of actually lighting up the inner theater.
 
Try to separate, as a theoretical exercise, the experience of consciousness from consciousness and see what you have left.

A lot. You see, the problem here is how we can simulate a conscious person. There is no reason why we cannot do that, including an 'I' that reports that it is conscious and can hear its thoughts "in the mind's ear", etc. That is not a hard problem, because we really only need to create an agent that behaves as if it were conscious. The problems of vision recognition, rational choices, creativity, intention, etc. are difficult, but they are not really the hard problem of actually lighting up the inner theater.

That is the point. That is the conscious/aware aspect: to be conscious/aware of colours, shapes, etc., and of their significance on the basis of recognition. Which may or may not include self-awareness; some animals are perhaps conscious of their environment, but not self-conscious.
 
A lot. You see, the problem here is how we can simulate a conscious person. There is no reason why we cannot do that, including an 'I' that reports that it is conscious and can hear its thoughts "in the mind's ear", etc. That is not a hard problem, because we really only need to create an agent that behaves as if it were conscious. The problems of vision recognition, rational choices, creativity, intention, etc. are difficult, but they are not really the hard problem of actually lighting up the inner theater.

That is the point. That is the conscious/aware aspect: to be conscious/aware of colours, shapes, etc., and of their significance on the basis of recognition. Which may or may not include self-awareness; some animals are perhaps conscious of their environment, but not self-conscious.

No. Awareness has nothing to do with it. It is very simple to build an aware agent (including self-awareness). To be aware is a basic requirement.
 
I'm wondering if you'll be able to grasp the concept of data abstraction layers.

I understand the concept of data abstraction layers just fine, thanks. It isn't particularly relevant to the post you were replying to, nor does it apply to the topic in the way you're trying to apply it. You're still proposing a kind of dualism which isn't in evidence: asserting that consciousness has certain (undefined) physical properties that cannot be achieved by the organization of data alone. You have not presented anything to suggest this is actually the case, simply blabbered on about data abstraction in a way that doesn't really apply, because you've started from certain unsupported assumptions about the nature of consciousness.

Again, what we know is that consciousness is an emergent property of neural activity, which is itself really just a complex system of data exchange between individual connections. This implies that any sufficiently complex system of data exchange could conceivably achieve the same effect. We do not have any reason to suspect there must be some sort of direct "connection" between the mechanism and the produced consciousness. There's really no reason why that human-powered computer of yours could not theoretically, in some sense, produce consciousness, assuming such a system could be made complex and efficient enough (which is impossible in that particular example). You're simply asserting this is impossible because the calculations that human-powered computer is doing are abstract, without a direct physical connection between the calculations and the produced result. This is not, by the way, what a data abstraction layer is, but that's really beside the point. The point is you have done nothing but assert that the 'neurons' must be connected in some way to the consciousness itself, and you have failed to explain why this is necessary. It may seem logical at first glance, but that is not sufficient to assert it as truth.

You've also failed to explain why this wouldn't in fact be the case anyway with an AI (which kind of makes everything you're saying pointless): clearly the artificial neurons in a simulation are directly connected to the simulated consciousness, so the connection issue you've described does not actually apply. And whether or not the artificial neurons are physical is completely irrelevant (and even if a physical existence *were* necessary, nothing says we can't simply create physical artificial neurons).

The rule set the humans follow isn't conscious. The data carriers in the abstraction layer (humans) are not necessarily aware of everything going on at once. They aren't conscious of the calculations they do by following the rule set. They are simply conscious of their experiences and maybe the data they carry, and (maybe) the rules they follow, not the continuous whole product of those rules at all times, which an AI would be.

There is no reason to assume an AI would be conscious of all of that. Nor is there any reason to assume that whether or not it is has any relevance to the question of whether it is conscious.
 
I don't have a clue what consciousness is but I'm sure I could create it in a robot.

Human vanity has no limits.
 