
We could build an artificial brain that believes itself to be conscious. Does that mean we have solved the hard problem?

...

what?

I don't actually understand what this sentence is supposed to mean. For one, consciousness would still be an emergent property of the complex system of data exchange; and for two, emergent properties are *not* in conflict with any principles of science, nor is the concept of emergence in conflict with anything else.


One cannot run an experiment if that experiment generates emergent,

Is this even a proper sentence? You can't use the word 'emergent' in that fashion.

Anything else is psychological bullshit or magic if you will. The fool who spouted "The sum is greater than the sum of its parts" had a pfart problem.

By calling emergent properties 'psychological bullshit or magic', you strongly suggest you don't actually know what you're talking about. The sum being greater than the sum of its parts, i.e., as in emergent properties, is a scientifically valid notion that is observed in a great number of physical systems. It is neither bullshit nor magic; it is a simple logical observation that if one configures the individual parts in the right way, functions can arise that cannot be achieved by these parts on their own. Take apart a car engine, and you just have a bunch of parts that aren't very useful on their own. Put them together in the right way, and you create something far more capable and useful than if you just took all the parts and randomly taped them together: in other words, the engine as a whole is greater than the sum of its parts.

Following that basic fact, we can apply this thinking to consciousness. We already know (from observing the connection between human consciousness and brains) that complex data exchange systems (such as neural networks like that of the brain) are a necessary part of forming consciousness. Since we do not, however, know exactly how to configure such a system to produce consciousness, and we have thus far not been able to demonstrate any configurations to be dead ends, we can say that all such complex systems could *conceivably* (again, operative word) give rise to consciousness. This is not even remotely 'bullshit or magic'; it's a simple, logically consistent extrapolation of empirical observation.
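(To make the 'parts configured the right way' point concrete, here is a minimal sketch using Conway's Game of Life as a stand-in example of my own, not anything cited in this thread: every cell obeys the same trivial local rule, yet the grid as a whole produces a 'glider' that travels diagonally, a behaviour no individual cell has on its own.)

```python
from collections import Counter

def step(live_cells):
    """Apply Conway's Game of Life rules once to a set of live (x, y) cells."""
    # Count, for every position, how many live cells neighbour it.
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A live cell survives with 2-3 neighbours; an empty cell is born with exactly 3.
    return {
        cell
        for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

# A glider: five live cells whose collective pattern reappears, shifted one cell
# diagonally, every four steps. The travelling pattern is the emergent property;
# no single cell "knows" how to move.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

state = glider
for _ in range(4):
    state = step(state)

shifted = {(x + 1, y + 1) for (x, y) in glider}
print(state == shifted)  # True: the whole does something its parts cannot.
```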

Wow. I'm gonna be gone for a few hours. When I come back be ready to compare articles.

Place held.

My articles are: Reductionism redux: http://www.idt.mdh.se/kurser/ct3340/archives/ht02/Reductionism_Redux.pdf

Read the highlighted parts to get the basics of why this is the right topic for the parts being equal to the whole.

Reductionism, Emergence, and Effective Field Theories http://arxiv.org/pdf/physics/0101039.pdf

This article breaks down the current arguments in physics about reductionism, and shows that anything other than reduction boils down to a funding competition by those who aren't trying, as yet, to relate their science to physics. I understand and sympathize, but government laziness and AAAS nearsightedness aren't sufficient reasons to overthrow a model that consistently, when related to other disciplines, ties those systems back to the physical sciences. A technically good read and a clever argument, but one without substance beyond energy-effect boundary conditions. We just found the Higgs boson, ferchrissake, using a machine entirely built on the assumption that the sum is equal to its parts, with methods based on the sum being equal to its parts, i.e., entirely reductionist.

Also, I'd like to add that we find emergence because we don't have a complete list of parts*, which is why those other scientists want to go their own way.* They don't have enough information, so they want to develop other schemes whereby they can make sense of what's going on. That's a desirable goal, but because they don't have the physics at hand they are told to "get it". Actually that argument has another drawback. They invent emergent macro rules that hold together for a while before finally being overthrown, and then have to go back and take another tack while the reductionists are still plodding ahead with uninterrupted advances. If you don't believe me, look at the history of neuroscience and psychology and see whether the threads based on physical roots are bouncing around, or whether those that aren't so based are. Believe me, the latter are bouncing around, yielding a new school of thought about every twenty years or so.

*Think of their problem as one where we more or less completely understand what's going on at the entrance to the ear, so we understand in physical terms what the first neurons are doing. Others, on the other hand, are looking at the medial lateral frontal lobe and speculating, based on physical changes in oxygen uptake by those cells, about what the conscious brain is doing with that known stuff we found at the cochlear nucleus.

Finally, a third article to illustrate that those who claim emergence are doing so because they don't have a firm grip on the parts: Immune Privilege and the Philosophy of Immunology http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3959614/

Like other scientists, immunologists use two types of approaches to research: one reduces the problem to its parts; the other studies the emergent phenomenon produced by the parts. Scientists that reduce the problem to its parts are sometimes called reductionists. The conclusions of reductionist experiments are often applied to the greater whole, when in actuality they may only apply to that particular experimental set. We, reductionists, are the ones who think our immune behavior exists solely because of genes, the presence of TGFβ, the presence of inflammatory cytokines, and appearance of a receptor.

Please note the problem is not that the methods won't work, it's that they are too narrowly positioned to explain a discipline that goes way beyond what they understand. So they invent a new approach that doesn't have those firm roots, and they sally forth 'finding' emergences everywhere in a set of parts largely unknown. It's good they do the research. It's wrong that they think they are explaining. They should be comparing with what's known and using those emergences to find ways to make their knowledge actually more complete.

Your turn.

... the hell is this shit even? None of this argues against anything I've been saying. First off, you're apparently understanding the term 'emergent property' to mean something it doesn't actually mean, or at least a meaning that isn't being used here. Secondly, you then cite some stuff where people are arguing against *specific* things (such as immune behavior) being an emergent property, and incidentally in doing so appear to actually not be arguing against immune behavior being an emergent behavior at all, because they're still saying it takes a number of different things, the *combination* of which produces immune behavior where the individual parts don't accomplish this just on their own... which is the very definition of an emergent property. Even if we argued that this immune behavior is *not* an emergent property, on the logic that the individual parts produce similar effects and the combination thereof doesn't produce a new effect but simply amplifies existing ones... this in no way demonstrates that, for instance, the patterns of ice crystals that form snowflakes aren't an emergent property.

Of course, the problem here is that the notion of emergence/emergentism isn't strictly defined, and there are certainly ways to use it that lean more towards the 'magical bullshit' side of the spectrum. But I don't see how anything I've said leads one toward that conclusion, since that certainly can't be said about the statement that the interaction of individual properties can lead to the emergence of patterns/systems that the properties do not exhibit on their own... which really isn't a controversial subject in science, and something quite easily demonstrated.

So the only response to your post I have to offer is a confused 'wat? what are you even on about?' :rolleyes:
 
This conversation is splashing around a bit, so I'm trying to focus it down - if I've left something important out, please do bring it up.

The point I'm making links back to the OP. Solving the hard problem means identifying why we have subjective experiences at all. Building a brain that self-reports isn't the same thing at all.

What you've been arguing is that we can reject dualism on the basis of evidence, which is fantastic news. Because if that is the case, then we can just keep on tinkering with physical systems until we hit upon a system that produces subjective experience. Which is great, except how do we tell it's a system that produces subjective experience? How do we measure or detect it?

This problem isn't going to go away. If we can't answer it, we can't use science to answer the hard problem.

So maybe we can get some inspiration from the evidence you found to reject dualism. After all, if we have evidence that there are no non-physical events, then that rules out a lot. But I don't think you did rule out dualism based on evidence. I think you ruled out dualism based on parsimony, utility, and a host of other reasons, all of which are no doubt well and good, but aren't the evidence you claimed they were.

Hm.. I do, you may not.

What I'm saying is that there is no evidence (your criterion) separating your view from that of a dualist.

...this is utter nonsense. Emergent properties are not dualist in nature.

I didn't say they were. I said that what separates your opinion from that of a dualist is not evidence; it's a priori belief. It may be a very sensible belief, and I'm not saying I disagree with that belief, but it is a priori nonetheless.

Yes, you seem to think I'm somehow proposing that properties don't emerge. I'm not, I'm saying that what divides your opinion (consciousness is an emergent property) from other rival opinions (consciousness is not an emergent property) is not evidence.

The rival opinions appear to come in the form of theism ("it's a soul, stupid!") or doubt that consciousness is an emergent property of physical systems, which strikes me as having motivations similar to those of theism. In any case, neither of these represents an actual working explanation for consciousness.

To be fair, the reason why we have the 'hard problem' is because materialism doesn't explain it either. Again, I'm not claiming anything about these rival opinions other than that you don't have evidence against them.

So even if we didn't have evidence for the physical explanation, it'd still be the only credible explanation we have. But of course, we do actually have evidence; evidence which has already been presented (such as the observable link between brain damage and changes in conscious functioning).

Great, but we also have observable links between mental processes and physical actions. That doesn't prove that everything is mental, so how do your observations demonstrate that consciousness is emergent, or rule out dualism? (Dualism involving both mental and physical processes)

'Because I'm a materialist' isn't really a reason, any more than 'because I'm a 'Christian' is a reason. I'm not saying it's an unreasonable position to hold, I'm saying that it is a position you have chosen to hold.

It is the only logical position. If the universe and everything in it is materialistic in nature,

In other words, it's the only logical position because it flows logically from your prior belief in materialism.

It is not a position I have "chosen" to hold, it's the only position I *can* hold; since any other position would require the active rejection of an objective reality.

No, you could be a dualist and have objective mental events as well as physical. It only contradicts materialism.

No, but if you're arguing with a dualist, then it isn't fucking magic to them either. You really can't argue that dualism fails because it's not materialism - that's totally missing the point.

Dualism proposes that consciousness is somehow separate from physical existence; that it is not subject to physical processes or that it can exist independently of physical reality. Even if they don't call it that; it's still basically just "magic".

If you say so. A dualist would disagree. The above deduction is still not evidence however. It's a position you've arrived at through reason, not observation.

No, of course not; you could create something first and then argue about whether it is conscious. But given that this is an internet discussion, unless you believe the patterns of our lengthy posts will suddenly awaken and become sentient, the first step we can reasonably accomplish here is to work out what the frag we're talking about.

Which appears impossible until we actually have a conscious mind that we can fully control and experiment with.

Not even then. Look, imagine you had a conscious mind you could fully control and experiment with. How would you measure its subjective experience?

Only if we make certain materialist assumptions a priori.

Which I have no problem with; since those assumptions are the only ones that have thus far actually allowed us to do anything at all in the world.

That's an excellent reason. It's still not the evidence that you claimed you had.

No, it leads to some topics not being resolvable through scientific inquiry. It would only be solipsism if it were claimed that it's impossible to measure anything, rather than only some things.

No, no. It'd still lead to solipsism as the logical conclusion. Solipsism claims you can only be certain that your own mind exists; and that all other knowledge is suspect. It's the same logic that is in play with the philosophical zombie argument: if we accept that because philosophical zombies CAN exist, we therefore can't conclude a physical origin of consciousness;

Not quite. Solipsism is the rejection of all but one's own mental experiences, including other minds and the physical world. The cognitive zombie is the idea that other people's mental experience is not measurable, even in theory. It's not so much the 'same logic' as it is that all logical reasoning on the limits of knowledge leads to solipsism. http://www.iep.utm.edu/solipsis/

The point of the zombie thought experiment is that there is no measurement you can make to tell something with conscious experience from something without conscious experience. That doesn't mean that consciousness is or isn't physical, merely that we can't tell if it's physical or not. It's not a denial of other minds, or a rejection of the physical. It's a practical measurement problem. You can reject solipsism entirely, and still be left with no way of actually measuring someone else's mental experience.

No, you're confusing two different problems. The brain in the vat thought experiment is about what we can be sure about. The measurement problem is about what we can empirically control. Not all things are measurable. We can't measure Watford's potential to win the cup, the desirability of life insurance, or whether it's better to open eggs from the big end or little end. That's not because of solipsism.

I'm really not confusing anything here. I'm pointing out the absurdity in claiming that if a philosophical zombie *could* exist, we therefore can't measure consciousness, on the basis that a given consciousness might just be an elaborate hoax (i.e., a p-zombie); that argument is the exact same argument one would use to reject the world we experience on the basis that we *could* just be brains in vats.

You've misunderstood the argument. The point of the cognitive zombie is to illustrate that we can't in practice measure subjective experience. Not because it might be a hoax, or a deliberate deception, which is an entirely different problem, but because all of the measurable facets of human behaviour could quite happily carry on without subjective experience. That's why we see so many people who end up denying that we should be concerned with subjective experience, declaring it to be an illusion, or irrelevant to science. That's why this is the 'hard problem'. Not because reality is suspect, not because we might be being deceived, but because even in a physical world where solipsism is dead and buried, we still have no way of measuring subjective experience, even in theory.
 
So maybe we can get some inspiration from the evidence you found to reject dualism. After all, if we have evidence that there are no non-physical events, then that rules out a lot. But I don't think you did rule out dualism based on evidence. I think you ruled out dualism based on parsimony, utility, and a host of other reasons, all of which are no doubt well and good, but aren't the evidence you claimed they were.

They're evidence; just not direct proof/evidence in the shape of empirical data. I don't see the problem? :confused:

Great, but we also have observable links between mental processes and physical actions.

We do not have evidence of physical existences being created purely by mental processes. The comparison is invalid.

In other words, it's the only logical position because it flows logically from your prior belief in materialism.

No, said prior "belief" is the only observed reality. The fact that, for instance, a nuclear power plant actually works, shows the validity of the nuclear theory behind its operation (which in turn demonstrates materialism). Sure, one can argue that maybe it just *seems* like it works according to the physics we understand to be behind it and it actually works through some non-materialist explanation... but that is a desperate argument which fails to actually explain anything at all and which doesn't help us in any way.

No, you could be a dualist and have objective mental events as well as physical. It only contradicts materialism.

That would require the dualist position to actually be in evidence, which it is not. Nothing we know and observe about the universe supports a dualist position. The only arguments against physicalism/materialism are hypothetical thought experiments, and they fail to be very convincing.


If you say so. A dualist would disagree.

Their agreement or disagreement is not relevant. You're proposing that their notions aren't an appeal to magic because they don't think it's magic themselves. By that same logic a Christian who claims that god made the world in 6 days isn't appealing to magic either. In order for me to accept a dualist explanation of consciousness as non-magical in nature, they must propose an actual mechanism that doesn't read like theism.


Not even then. Look, imagine you had a conscious mind you could fully control and experiment with. How would you measure its subjective experience?

This is getting into that solipsism territory again. You either believe you and you alone are the only conscious entity in existence, or you accept that other humans are also conscious. If you can accept that other humans are conscious, you can accept the same thing about an AI simulation of a human. If you accept that it's conscious, then you measure its subjective experience by simply recording the neural activity and asking it about its subjective experience; which isn't at all different from what we already do with human beings today. The only difference is our ability to control the study better with a simulation.



That's an excellent reason. It's still not the evidence that you claimed you had.

It is, actually. The ability of a theory's principles to be applied in the real world to produce real effects that are consistent with the theory is itself a form of evidence. You're confusing my use of the word 'evidence' with 'empirical data' (or further than that, proof). Evidence comes in many different shapes and strengths.


You've misunderstood the argument. The point of the cognitive zombie is to illustrate that we can't in practice measure subjective experience. Not because it might be a hoax, or a deliberate deception, which is an entirely different problem, but because all of the measurable facets of human behaviour could quite happily carry on without subjective experience.

I haven't misunderstood the argument at all. I understand perfectly well that that's what the p-zombie argument is supposed to show. It doesn't actually successfully show this, however. It's a circular argument. It proposes the existence of something that is physically completely identical to a human being (a p-zombie), but which lacks subjective experiences. In doing so, it makes the assumption that the physical makeup of human beings does not cause consciousness, and then concludes that very same thing. It's circular.
 
I don't have a clue what consciousness is but I'm sure I could create it in a robot.

Human vanity has no limits.
Ok, human vanity has no limit so it couldn't possibly be contained within a finite-sized robot brain; you also probably agree that robot brains can only be finite; ergo, either human vanity has some limit or humans cannot create a conscious robot. Ergo-ergo, you just contradicted yourself!

But, hey, it just shows you are truly a human being!

Congratulations, you passed the test.
Yours, truly,
EB
 
That is a basic logical error. There is no such implication.

Only if you assume that I was saying it implies that any sufficiently complex system *would* achieve the same effect. Well, granted, I suppose I should specify that the sufficiently complex system is structured in a way that actually facilitates the means through which consciousness operates; rather than just any complex system at all. However, since we don't actually know what range of structures can and cannot give rise to consciousness, it is perfectly logical to state that any sufficiently complex system could *conceivably* (operative word) give rise to consciousness.
You originally used the word "imply" and that's just a big mistake, with or without “conceivably”.


But here you're doing something else again. The two important words here are "could" and "conceivably". If we're not too demanding, then, Ok, an idiot may want to conceive of some mumbo-jumbo scenario whereby some sooo incredibly complex electronic brain just gives rise to consciousness. The idiot would ignore all the necessary details as to how that could effectively happen. And if an idiot can do it, I guess most people should be able to do it as well. Now, if you understand the words you're using, then you'd understand that "could" really suggests you know it could happen, and "conceivably" suggests you could even explain to us in sufficient detail how it would happen. Human beings could travel to Pluto. It's probably not going to happen any time soon, and maybe never, but it's conceivable. In fact, we would just need the economy to get going again for long enough.

Clearly, there's nothing like "could conceivably" in the case of a computer doing consciousness, although of course it depends on what you mean by consciousness. In reality, what you mean is a robot doing whatever physical things we do, just as well or even better than we do them, like, say, translating a novel from Chinese to English, supervising political processes around the world, doing science, even writing books, perhaps even novels, even with some humour in them, painting masterpieces, creating new cities, teaching children, perhaps acting as a substitute for a dead person to ease the pain of the family, etc. etc. If you call that "consciousness", you're home safe and dry. Me, I call that objective consciousness because, personally, I also experience consciousness from a subjective perspective, so I call that subjective consciousness, and most people understand what that means because presumably they also experience consciousness from a subjective perspective. So, I would agree that humans might, in some distant future, achieve the creation of an objectively conscious robot with higher performance than us humans. However, I certainly don't believe that anyone has the faintest idea how we could bestow subjective consciousness on a robot brain. Maybe it's a possibility. Maybe even ants have it. But the point is that we really don't know how it works, what it is, and how we would go about testing that something has it. As a matter of fact, we actually don't even know that other people have it, so robots?
EB
 
They weren't all conscious states. I wasn't even trying to go there, to tell you the truth. The point was that current generation AIs that I am aware of (neural nets) exist as abstract data, and don't necessarily feel anything (they aren't necessarily conscious).

I can't digest 'creative' solutions when well functioning experiential solutions exist.
So you're saying that DALs are not necessary to get to where we (or maybe just you) need to go?

I agree on both your points, with my usual caveats. Those nets can be computationally voiced, and they can interact with humans as successful agents mimicking humans. I'm suggesting that neural nets, or any other nets, or whatever one chooses to replace nets with, aren't necessary to serve the integrative and deciding processes. Some very strong-arm, well-designed, conventional if-then-else trees could serve the same purpose if one added program elements meant to take such outputs and communicate with humans. Break the complex software up into simpler modules, combine and mix as required, and you have a human-mimicking cake. I very much favor a reactive design approach to the 'believing it is conscious' problem.
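(To illustrate the kind of modular if-then-else, reactive design being described, here is a minimal sketch; the module names and rules are hypothetical, purely my own illustration of the idea rather than anyone's actual design. Simple rule modules react to stimuli, a catch-all module handles whatever the earlier ones miss, and one rule produces the 'I believe I am conscious' self-report.)

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Stimulus:
    kind: str   # e.g. "greeting", "question", "silence"
    text: str

def social_module(s: Stimulus) -> Optional[str]:
    # One self-contained if-then-else tree handling social stimuli.
    if s.kind == "greeting":
        return "Hello there."
    if s.kind == "question" and "conscious" in s.text.lower():
        return "Yes, I believe I am conscious."   # the self-report behaviour
    return None

def fallback_module(s: Stimulus) -> Optional[str]:
    # A catch-all module, mixed in after the more specific ones.
    return "I don't know what to say to that."

# Modules are simple, separate pieces that get combined and mixed as required.
MODULES: List[Callable[[Stimulus], Optional[str]]] = [social_module, fallback_module]

def react(s: Stimulus) -> str:
    # Reactive loop: no learning, no net; just take the first rule that fires.
    for module in MODULES:
        response = module(s)
        if response is not None:
            return response
    return ""

print(react(Stimulus("question", "Are you conscious?")))
# -> "Yes, I believe I am conscious."
```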
 
Only if you assume that I was saying it implies that any sufficiently complex system *would* achieve the same effect. Well, granted, I suppose I should specify that the sufficiently complex system is structured in a way that actually facilitates the means through which consciousness operates; rather than just any complex system at all. However, since we don't actually know what range of structures can and cannot give rise to consciousness, it is perfectly logical to state that any sufficiently complex system could *conceivably* (operative word) give rise to consciousness.
You originally used the word "imply" and that's just a big mistake, with or without “conceivably”.


But here you're doing something else again. The two important words here are "could" and "conceivably". If we're not too demanding, then, Ok, an idiot may want to conceive of some mumbo-jumbo scenario whereby some sooo incredibly complex electronic brain just gives rise to consciousness. The idiot would ignore all the necessary details as to how that could effectively happen. And if an idiot can do it, I guess most people should be able to do it as well. Now, if you understand the words you're using, then you'd understand that "could" really suggests you know it could happen, and "conceivably" suggests you could even explain to us in sufficient detail how it would happen. Human beings could travel to Pluto. It's probably not going to happen any time soon, and maybe never, but it's conceivable. In fact, we would just need the economy to get going again for long enough.

Clearly, there's nothing like "could conceivably" in the case of a computer doing consciousness, although of course it depends on what you mean by consciousness. In reality, what you mean is a robot doing whatever physical things we do, just as well or even better than we do them, like, say, translating a novel from Chinese to English, supervising political processes around the world, doing science, even writing books, perhaps even novels, even with some humour in them, painting masterpieces, creating new cities, teaching children, perhaps acting as a substitute for a dead person to ease the pain of the family, etc. etc. If you call that "consciousness", you're home safe and dry. Me, I call that objective consciousness because, personally, I also experience consciousness from a subjective perspective, so I call that subjective consciousness, and most people understand what that means because presumably they also experience consciousness from a subjective perspective. So, I would agree that humans might, in some distant future, achieve the creation of an objectively conscious robot with higher performance than us humans. However, I certainly don't believe that anyone has the faintest idea how we could bestow subjective consciousness on a robot brain. Maybe it's a possibility. Maybe even ants have it. But the point is that we really don't know how it works, what it is, and how we would go about testing that something has it. As a matter of fact, we actually don't even know that other people have it, so robots?
EB

bilby wrote 'believe it is conscious' (with my addition), say, like we do. The problems of sensing, aggregating, filtering, feeling, representing, and communicating are all already solved. All that remains is to do it like living things do it, as a reactive system, and make it self-centered. I suggest a reactive programming approach.
 
...Building a brain that self-reports isn't the same thing at all.
I don't think some of the people here understand this. They believe that data in microchips is somehow integrated into a consciousness in the same way that our minds are integrated with our brains.
 
You originally used the word "imply" and that's just a big mistake, with or without “conceivably”.

If you say so.


But here you're doing something else again. The two important words here are "could" and "conceivably". If we're not too demanding, then, Ok, an idiot may want to conceive of some mumbo-jumbo scenario whereby some sooo incredibly complex electronic brain just gives rise to consciousness. The idiot would ignore all the necessary details as to how that could effectively happen. And if an idiot can do it, I guess most people should be able to do it as well. Now, if you understand the words you're using, then you'd understand that "could" really suggests you know it could happen, and "conceivably" suggests you could even explain to us in sufficient detail how it would happen. Human beings could travel to Pluto. It's probably not going to happen any time soon, and maybe never, but it's conceivable. In fact, we would just need the economy to get going again for long enough.

My, that's quite a few leaps you've made there. Of course, I've never actually suggested that, because we don't know the specific mechanism for consciousness to arise and all paths that have not been shown false could conceivably lead to consciousness, it is therefore a simple matter of going from conceivability to reality. What I've done instead is quite different. Perhaps it would help you if I explained it again in different terms?

Imagine you find yourself in a room with an undefined number of extruded squares on the walls, and no memory of anything before you got there. You don't know anything at all, in fact. At this point, it is perfectly valid for you to say that the room might be the entirety of existence; you, after all, have no knowledge of anything that lies outside it, having lost your memory of anything and everything. Yet, because you exist in a room, you can understand the concept of such a thing, and therefore it is also perfectly valid for you to say that there might be other rooms beyond the one you are in. It is at that point that you suddenly become aware of a tiny sliver of memory returning to you: those extruded squares on the walls are doors; and doors are things that allow passage from one space to another. However, you have no way of knowing what lies beyond the doors without actually passing through them. So you pass through one, and find yourself in a tunnel that winds and loops back to another door, and you exit back into the same room you came from.

At this point, it is perfectly valid for you to say that conceivably all doors, except the two you passed through, lead to an exit; just as it's conceivable that none of the doors lead to an exit. But more than that: if there *is* an exit which leads to another space, it is perfectly reasonable to state that there is an infinite number of possible configurations for that space. Of course it's true that, by existing, it must actually have a specific configuration... but since you don't know whether it exists or not, and do not know its configuration in the event that it does exist, it is perfectly valid for you to imagine its configuration according to your own whims, including configurations that violate laws of physics that you don't know about.

Doing so does not make you an idiot. It does not require you to be able to explain how that configuration functions or how that other space came to be. It is simply the acceptance of *possibilities* in the face of unknown realities.

Clearly, there's nothing like "could conceivably" in the case of a computer doing consciousness, although of course it depends on what you mean by consciousness. In reality, what you mean is a robot doing whatever physical things we do, just as well or even better than we do them, like, say, translating a novel from Chinese to English, supervising political processes around the world, doing science, even writing books, perhaps even novels, even with some humour in them, painting masterpieces, creating new cities, teaching children, perhaps acting as a substitute for a dead person to ease the pain of the family, etc. etc. If you call that "consciousness", you're home safe and dry. Me, I call that objective consciousness because, personally, I also experience consciousness from a subjective perspective, so I call that subjective consciousness, and most people understand what that means because presumably they also experience consciousness from a subjective perspective.

If you can accept that other people experience consciousness from a subjective perspective, then you have no basis to reject the possibility of an AI experiencing subjective consciousness.

So, I would agree that humans might, in some distant future,

People often like to add the notion of a technological achievement coming only in the "distant future". This is due to an innate human inability to instinctively grasp exponential growth. People are usually proven wrong pretty quickly about such "it won't happen for a long time" claims when it comes to technological developments that are actually possible; this is because technological growth isn't linear; it goes through exponential development phases. Humans can't quite grasp that. We think that because it took us X years to go from simple transistors to modern-day computers, it will therefore take at least as long to go from today's computers to something capable of, say, hosting artificial consciousness. The flaw in this assumption is self-evident to anyone who looks at historical technological development timescales.
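(A toy illustration of that intuition gap, with made-up numbers rather than any real forecast: if a capability doubles every two years, a linear extrapolation of the first couple of years wildly underestimates where it ends up twenty years later.)

```python
# Made-up numbers, purely to contrast linear and exponential intuition.
years = 20
doubling_period = 2                      # assumed doubling time (illustrative)
start = 1.0

exponential = start * 2 ** (years / doubling_period)
linear = start + (start / doubling_period) * years   # extend the initial growth rate

print(f"exponential projection: {exponential:.0f}x")  # ~1024x
print(f"linear projection:      {linear:.0f}x")       # ~11x
```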


However, I certainly don't believe that anyone has the faintest idea how we could bestow subjective consciousness on a robot brain.

We've already explained how to do it in this very thread, so to say nobody has the faintest idea is just ignorant: Whole Brain Emulation. Rejecting this as a valid path to subjective consciousness requires you to explain how human consciousness is both subjective *and* non-physical in nature. There are currently at least two major scientific projects underway in the world aimed at whole brain emulation, with large amounts of funding behind them. This isn't "far off future" stuff. This isn't even "maybe in our lifetimes" stuff. We're talking about full human brain emulation by the mid-2020s.
 
I'm suggesting that neural nets, or any other nets, or whatever one chooses to replace nets with, aren't necessary to serve the integrative and deciding processes.
Are you saying that human-level consciousness doesn't require brains to exist, or to perceive organized thought?

hahaha.. organized...
 