• Welcome to the new Internet Infidels Discussion Board, formerly Talk Freethought.

We could build an artificial brain that believes itself to be conscious. Does that mean we have solved the hard problem?

human consciousness itself exists in what one could consider to be a data abstraction layer as well
I don't see how this could possibly make sense. The kind of consciousness I for one subjectively experience exists by itself and I know it does precisely because I experience it. Thus, it can only exist exactly as I know it, no more, no less. A data abstraction layer is a mere abstraction, i.e. it doesn't exist by itself. Instead, what we know exists is the mental concept we have of it (when we have one). To assume that data abstraction layers exist by themselves, as you seem to be doing, is just an unnecessary inflation of the ontology of the universe.
EB
 
So even if we didn't have evidence for the physical explanation, it'd still be the only credible explanation we have.
It's not even credible. For all we know at present, it's just a non-starter.


But of course, we do actually have evidence; evidence which has already been presented (such as the observable link between brain damage and changes in conscious functioning).
All the evidence there is--of a kind human beings actually had long before modern science took its first baby steps--is that of a correlation between what we experience and what we believe is happening in the material or physical world. Today, science is merely discovering more correlations, and more basic ones. Apparently, most people throughout the ages failed to be convinced that those correlations amounted to any kind of "emergence". The problem really is that we seem unable to show that the nature, or quality, of subjective consciousness is identical, or even similar, to that of what we think of as physical stuff. Maybe it is, but we have no evidence that it is, and we are at a loss to think how we could go about finding any. You just seem to be so unaware of what the crux of the matter is. Again, the evidence we have is evidence of correlation, not evidence that the nature, or quality, of subjective consciousness is the same at all as that of material or physical stuff.
EB
 
They invent emergent macro rules that hold together for a while, finally being overthrown, then have to go back and take another tack while the reductionists are still plodding ahead with uninterrupted advances
Yes.
EB

PS. Hey, it's funny too because in fact it's also true that science generally is quite accurately described by this process. Scientists invent "laws of nature" until Mother Nature kicks them in the groin and they have to go back to the drawing board. The real difference then is that reductionists go straight for the most microscopic law, or rule really, they can think of, while others, like behaviourists for instance :biggrina:, prefer to think in terms of, as you say, "macro rules". Although to be fair, the macro-rule types started investigating the complete human brain really at the dawn of humanity while the micro-rule types are still not investigating anything so big as a brain. The breakthroughs seem to be made by the micro-rule types but I believe that the macro-type ideas are not altogether useless. They're doing what is essentially philosophy plus experimentation except they actually learn nothing from being shown wrong since they will keep at the same method again and again.
 
The AI could, in fact, be conscious of the fact that it is not human, and be able to express this consciousness in human interactions, while still convincing them that it is conscious. Of course, someone with an unrealistic expectation that a consciousness must be human would actually be fooling themselves, so being confronted by such an obstinate human would not refute the consciousness of the AI.
Since Michael Graziano's article is about a robot emulating human consciousness as we subjectively experience it ("brain experiences its own data") it seems you are following a derail.

My statement was in response to Dystopian's post in which he said "Plus, the requirement you are positing here means it must behave like a *human*; but why should that be the requirement?"

I was merely trying to point out that I did not think that a consciousness would need to behave like a human. It may be something of a derail from discussion of the article, but then again, the same can be said of other responses in this thread.

Secondarily, what would there be for us to test if the robot has some kind of consciousness unlike our own? For that matter, ants have something, and who could say it's not some kind of non-human consciousness?
EB

Well, that's the big question now, isn't it? Some people have a bias toward thinking of consciousness as a property possessed only by humans, which is understandable, but it seems to me that some of the other great apes are likely to be conscious, as well as other mammals, like dolphins. But how does one objectively identify that to be the case? Much of this thread has been spent arguing over whether that is even possible.
 
Unfamiliar with Tegmark's mathematical universe hypothesis, are you?
Ohh, I'm familiar with it all right...
Metaphysical-1, the 5 regular polyhedra are the only possible set. Thx Max Tegmark.
The physical universe IS mathematics; the perception that we exist in a 'real' world that is somehow separate from math is entirely subjective.
Yeah. That's a neat way of looking at it, but complete bullshit. You can assign numeric values to blue, but blue isn't a number, it's something numbers apparently did to your and Max's (thx Max Tegmark) minds.
As some physicists would state, it may be less misleading to state that the universe is made out of numbers than to state it is made out of matter.
That's a pretty stupid thing to state.
Even if our own universe is truly physical and its physicality not a mathematical construction, this in no way implies that mathematics cannot give rise to consciousness within its own mathematical existence.
No. The fact that mathematics is an abstraction which describes physical reality, rather than reality itself pretty much implies that mathematics alone cannot give rise to consciousness. In fact, mathematics needs a consciousness to perceive it, so.....

Whether reality is mathematics, or whether mathematics just describes reality but is separate from it, remains an open question.
No it doesn't. That's a stupid dichotomy. Math describes parts of reality and is not separate from reality.
 
That's a pretty stupid thing to state.

Take it up with Brian Greene; he's the one who said it. He hardly strikes me as the sort to state stupid things.


No. The fact that mathematics is an abstraction which describes physical reality, rather than reality itself pretty much implies that mathematics alone cannot give rise to consciousness. In fact, mathematics needs a consciousness to perceive it, so.....

--

No it doesn't. That's a stupid dichotomy. Math describes parts of reality and is not separate from reality.

Oh gee, thanks for just asserting the same thing you asserted before and not actually substantiating it with even a basic argument beyond just asserting the same thing once again. Whatever would I do without you around to assert things? :rolleyes:
 
Take it up with Brian Greene; he's the one who said it. He hardly strikes me as the sort to state stupid things.

Just to clarify, are you suggesting that he is not a human; or are you merely using an unreasonably and unrealistically narrow definition of what the 'sort to state stupid things' includes?
 
Take it up with Brian Greene; he's the one who said it. He hardly strikes me as the sort to state stupid things.


No. The fact that mathematics is an abstraction which describes physical reality, rather than reality itself pretty much implies that mathematics alone cannot give rise to consciousness. In fact, mathematics needs a consciousness to perceive it, so.....

--

No it doesn't. That's a stupid dichotomy. Math describes parts of reality and is not separate from reality.

Oh gee, thanks for just asserting the same thing you asserted before and not actually substantiating it with even a basic argument beyond just asserting the same thing once again. Whatever would I do without you around to assert things? :rolleyes:
What? Was this a comment after you read a couple of your posts?

I have seen nothing but you offering baseless assertions repeatedly. I'm still waiting for the definition of consciousness you are using so I can figure out if your assertions have any bearing on that definition.
 
Take it up with Brian Greene; he's the one who said it. He hardly strikes me as the sort to state stupid things.

Just to clarify, are you suggesting that he is not a human; or are you merely using an unreasonably and unrealistically narrow definition of what the 'sort to state stupid things' includes?

I think he meant that although Brian Greene doesn't strike him very often, when he does, he strikes dystopian as if dystopian were the type to say stupid things.

I hope this conversation doesn't lead to dystopian being beat up by a physicist.
 
I have seen nothing but you offering baseless assertions repeatedly.

The assertions that I have made are indeed unproven assertions. Unlike the poster I was responding to, however, I have at least attempted to support my assertions with arguments instead of just reiterating the same assertion again. You're of course free to disagree with the degree to which my arguments support the assertions I've made; but don't pretend like I haven't at least tried to provide arguments for them.

I'm still waiting for the definition of consciousness you are using so I can figure out if your assertions have any bearing on that definition

As I already explained, it doesn't really matter how you define consciousness since it doesn't alter the fundamental argument; namely that a perfect simulation of a human's physical existence should produce the same level of consciousness. So long as you define consciousness in a way that humans exhibit, then the argument remains the same.
 
Of course sufficiently advanced AIs that interact with their physical substrate could develop consciousness. Of course AIs that develop in hardware that can support consciousness can become conscious...

Equally true is the fact that AIs that develop in substrates that do not support consciousness, or that do not allow the AIs to modify their substrate, will not form consciousnesses.


From the article:
aeon.co said:
As long as scholars think of consciousness as a magic essence floating inside the brain, it won’t be very interesting to engineers. But if it’s a crucial set of information, a kind of map that allows the brain to function correctly, then engineers may want to know about it. And that brings us back to artificial intelligence. Gone are the days of waiting for computers to get so complicated that they spontaneously become conscious. And gone are the days of dismissing consciousness as an airy-fairy essence that would bring no obvious practical benefit to a computer anyway. Suddenly it becomes an incredibly useful tool for the machine.


Building a functioning attention schema is within the range of current technology. It would require a group effort but it is possible. We could build an artificial brain that knows what consciousness is, believes that it has it, attributes it to others, and can engage in complex social interaction with people. This has never been done because nobody knew what path to follow. Now maybe we have a glimpse of the way forward.
 
We would need to know what Sperry, Graziano's mentor, meant by experiencing its own data. As I see it, Sperry was trying to revive dualism by positing an intervening variable, 'experiencing itself', when articulating, or communicating, would work just as well. A less elegant way to express the idea is to consider it reverberation.
 
They invent emergent macro rules that hold together for a while, finally being overthrown, then have to go back and take another tack while the reductionists are still plodding ahead with uninterrupted advances
Yes.
EB

PS. Hey, it's funny too because in fact it's also true that science generally is quite accurately described by this process. Scientists invent "laws of nature" until Mother Nature kicks them in the groin and they have to go back to the drawing board. The real difference then is that reductionists go straight for the most microscopic law, or rule really, they can think of, while others, like behaviourists for instance :biggrina:, prefer to think in terms of, as you say, "macro rules". Although to be fair, the macro-rule types started investigating the complete human brain really at the dawn of humanity while the micro-rule types are still not investigating anything so big as a brain. The breakthroughs seem to be made by the micro-rule types but I believe that the macro-type ideas are not altogether useless. They're doing what is essentially philosophy plus experimentation except they actually learn nothing from being shown wrong since they will keep at the same method again and again.

The only thing you miss is that reductionists have had about 600 years of continued advances, while macro-theorists recycle to the latest reductionist baseline about every 20 years or so. It all boils down to how far we are from input of stimulus to output of the data we desire. If it is one or two deterministic steps, the theory tends to hold up, get modified, and advance along the same trend line, whilst if inputs are five or ten deterministic (measurable) steps from output, several intervening variables need to be propped up, which usually fail in a few years. Very little was changed when Einstein adjusted physical theory from force to energy, taking into account Newton plus advances since Newton, to arrive at the new global theory.
 
As I already explained, it doesn't really matter how you define consciousness since it doesn't alter the fundamental argument; namely that a perfect simulation of a human's physical existence should produce the same level of consciousness. So long as you define consciousness in a way that humans exhibit, then the argument remains the same.

It seems you've consistently been either unable or unwilling to actually understand what's being said to you; arguing against strawmen instead.

The point is that something that exists in a data abstraction layer (in a simulation) cannot become conscious unless it is somehow fed into a physical substrate that allows for consciousness.

Simulations themselves are run in data abstraction layers. One would hardly say that a test of quantum mechanics or gravity is a simulation instead of an experiment. One would hardly say that a breeding program is a simulation, instead of an experiment.

So if one is running something in physical reality, instead of a data abstraction layer, one would call it an experiment, instead of a simulation. While a simulation can be an experiment, not all experiments are simulations. There is a difference.
 
In other words, it's the only logical position because it flows logically from your prior belief in materialism.

No, said prior "belief" is the only observed reality. The fact that, for instance, a nuclear power plant actually works, shows the validity of the nuclear theory behind its operation (which in turn demonstrates materialism).

No, it doesn't. Physical processes work every bit as well under dualism as under materialism (they even work under idealism).

Sure, one can argue that maybe it just *seems* like it works according to the physics we understand to be behind it and it actually works through some non-materialist explanation... but that is a desperate...

...misunderstanding on your part as to what 'materialism' actually means.
If you say so. A dualist would disagree.

Their agreement or disagreement is not relevant. You're proposing that their notions aren't an appeal to magic because they don't think it's magic themselves.

Nope, I'm saying it's not an appeal to magic, because you are unable to substantiate such a claim. It doesn't magically become magic by virtue of you disagreeing with it.

Not even then. Look, imagine you had a conscious mind you could fully control and experiment with. How would you measure its subjective experience?

This is getting into that solipsism territory again.

No, you're just avoiding the question. Say that we all believe solipsism to be false. How do you measure the internal subjective experience of your new mind, or even of existing minds?

you accept that other humans are also conscious. If you can accept that other humans are conscious, you can accept the same thing about an AI simulation of a human.

Of course you can believe what you want. But the question being asked is whether you can, through measurement, determine this to be true. I can believe an AI is conscious. I can also believe a bean-filled puppet is conscious. Many scary movies are written on that very premise. We can measure the internal processes of each. But the question, is whether you can measure whether each of these are conscious.

If you accept that it's conscious, then you measure its subjective experience by simply recording the neural activity and asking it about the subjective experience; which isn't at all different from what we already do with human beings today.
And the acknowledged problem in the scientific literature today is that we don't have a means of measuring or testing for a subject's internal subjective experience. Since you're claiming that this isn't a problem, I'm trying to work out what you've got that the scientists in the profession don't have.

You've misunderstood the argument. The point of the cognitive zombie is to illustrate that we can't in practice measure subjective experience. Not because it might be a hoax, or a deliberate deception, which is an entirely different problem, but because all of the measurable facets of human behaviour could quite happily carry on without subjective experience.

I haven't misunderstood the argument at all. I understand perfectly well that that's what the p-zombie argument is supposed to show. It doesn't actually successfully show this, however. It's a circular argument. It proposes the existence of something that is physically completely identical to a human being (a P-zombie), but which lacks subjective experiences. In doing so, it makes the assumption that the physical makeup of human beings does not cause consciousness,

No, it doesn't.

and then concludes the very same thing.

No it doesn't.

It doesn't assume a darned thing. It simply points out that the physically measurable evidence for the normal person and the zombie are identical. That there is no measurement, even in theory, that can tell them apart. This remains true irrespective of whether physical make-up causes consciousness or not, and forms the basis of the hard problem.

If you think it does depend on that assumption, then prove it. Assume that physical make-up does cause consciousness, and then describe how you would tell the two apart.

Saying that you just want to ignore the problem is not a means of solving it.
 
bilby wrote 'believe it is conscious' (with my addition), say, like we do. The problems of sensing, aggregating, filtering, feeling, representing, and communicating are all already solved. All that remains is to do it like living things do it, as a reactive system, and make it self-centered. I suggest a reactive programming approach.
We should decide if it's going to be a girl or a boy. :love:
EB

- - - Updated - - -

...Building a brain that self-reports isn't the same thing at all.
I don't think some of the people here understand this. They believe that data in microchips is somehow integrated into a consciousness in the same way that our minds are integrated with our brains.
There's only one explanation for this dreadful mishap, they're dead zombies! :sadyes:
EB
 
Clearly, there's nothing like "could conceivably" in the case of a computer doing consciousness, although of course it depends on what you mean by consciousness. In reality, what you mean is a robot doing whatever physical things we do, just as well as or even better than we do them, like, say, translating a novel from Chinese to English, supervising political processes around the world, doing science, even writing books, perhaps even novels, even with some humour in them, painting masterpieces, creating new cities, teaching children, perhaps acting as a substitute for a dead person to ease the pain of the family etc. etc. If you call that "consciousness" you're home safe and dry. Me, I call that objective consciousness because, personally, I also experience consciousness from a subjective perspective, so I call that subjective consciousness, and most people understand what that means because presumably they also experience consciousness from a subjective perspective.

If you can accept that other people experience consciousness from a subjective perspective, then you have no basis to reject the possibility of an AI experiencing subjective consciousness.
I can conceive of a robot giving all the appearance of objective consciousness, on the ground that if nature did it there has at least been a pathway to do it. Also, I'm not overly impressed by what sapiens sapiens actually does. As other people have said, it's the easy problem.

As to subjective consciousness, I don't have to accept or refuse that other people experience consciousness from a subjective perspective. Maybe they do, maybe they don't. The thing is, as far as I know, I haven't the beginning of a clue as to how I could prove they do. For practical purposes, all I need is to take for granted their more or less impressive display of objective consciousness capabilities, like linguistic communication, memory, logic etc. More problematic, you seem to miss the point about subjective consciousness. The fact is, I am the only person I know to have it. The same from your perspective, of course. I actually came to the conclusion that some, possibly many, people don't have it at all. So, I'm not particularly inclined to entertain high hopes for machines. More to the point, I have never ever heard the beginning of a convincing explanation as to how objective consciousness could possibly give rise to subjective consciousness. The paucity of your own arguments and your appeal to the idea of complexity are both terribly telling. Also, I've heard so many idiot claims about this possibility for so long, starting with the Turing Test, that you get to understand it's essentially like the junk e-mails we get every day. Blah-blah-blah.
EB
 
But here you're doing something else again. The two important words here are "could" and "conceivably". If we're not too demanding then, OK, an idiot may want to conceive of some mumbo-jumbo scenario whereby some sooo incredibly complex electronic brain just gives rise to consciousness. The idiot would ignore all the necessary details as to how that could effectively happen. And if an idiot can do it, I guess most people should be able to do it as well. Now, if you understand the words you're using, then you'd understand that "could" really suggests you know it could happen, and "conceivably" suggests you could even explain to us in sufficient detail how it would happen. Human beings could travel to Pluto. It's probably not going to happen any time soon, and maybe never, but it's conceivable. In fact, we would just need the economy to get going again for long enough.

My, that's quite a few leaps you've made there. Of course, I've never actually suggested that, because we don't know the specific mechanism by which consciousness arises and all paths that have not been shown false could conceivably lead to consciousness, it's therefore a simple matter of going from conceivability to reality. What I've done instead is quite different. Perhaps it would help you if I explained it again in different terms?

Imagine you find yourself in a room with an undefined number of extruded squares on the walls with no memory of anything before you got there. You don't know anything at all, in fact. At this point, it is perfectly valid for you to say that it might be the case that the room is the entirety of existence; you after all have no knowledge of anything that lies outside it, having lost your memory of anything and everything. Yet, because you exist in a room, you can understand the concept of such a thing and therefore it is also perfectly valid for you to say that it might be the case there are other rooms beyond the one you are in. It is at that point that you suddenly become aware of a tiny sliver of memory returning to you: those extruded squares on the walls are doors; and doors are things that allow passage from one space to another. However, you have no way of knowing what lies beyond the doors without actually passing through them. So you pass through one, and find yourself in a tunnel that winds and loops back to another door, and you exit back into the same room you came from. At this point, it is perfectly valid for you to say that conceivably all doors, except the two you passed through, lead to an exit; just as it's conceivable that none of the doors lead to an exit. But more than that, if there *is* an exit which leads to another space, it is perfectly reasonable to state that there's an infinite number of possible configurations for that space. Of course it's true that by existing it must actually have a specific configuration... but since you don't know whether it exists or not; and do not know its configuration in the event that it does exist; it is perfectly valid for you to imagine its configuration according to your own whims, including configurations that violate the laws of physics that you don't know about.
Reasonableness can only be assessed in relation to expected costs and benefits. In particular, only idiots would accept a long-shot possibility that would cost them a lot up front. The only thing reasonable here is to infer from your musings that it doesn't cost you a dime to suggest humanity should go on a goose chase.

Doing so does not make you an idiot. It does not require you to be able to explain how that configuration functions or how that other space came to be. It is simply the acceptance of *possibilities* in the face of unknown realities.
There is nothing to accept in possibilities without proper assessment. Acceptance here is such a vacuous word without assessment. Just rhetoric. All we can do is conceive of possibilities, and we certainly do. It is precisely the overabundance of them that disqualifies all but the small number, if any, that we can accept as properly justified.
EB
 
So, I would agree that humans might, in some distant future,

People often like to add the notion of a technological achievement coming only in the "distant future". This is due to an innate human inability to instinctually grasp exponential growth. People are usually proven wrong pretty quickly about such "it won't happen for a long time" claims when it comes to technological developments that are actually possible; this is because technological growth isn't linear; it experiences exponential development phases. Humans can't quite grasp that. We think that because it took us X years to go from simple transistors to modern day computers, that it will therefore take at least as long to go from today's computers to something capable of say hosting artificial consciousness. The flaw in this assumption is self-evident to anyone who looks at historical technological development scales.
Hey, I can still remember the overinflated claims about AI made in the sixties!!! We're not even there yet!!!

What people also can't grasp is that sometimes the past is not indicative of the future. This is most in evidence where greed can give an operational handle on people's behaviour.
EB
 