
We could build an artificial brain that believes itself to be conscious. Does that mean we have solved the hard problem?

Which is not, actually, relevant at all. Indeed, human consciousness itself exists in what one could consider to be a data abstraction layer as well; so I fail to see how data abstraction is an argument against machine consciousness.
In your computer, do you think that there is a single consciousness computing 1s and 0s, or just electrons moving around according to rules in a DAL?

Until the information in the DAL is reintegrated with your or another person's consciousness, visually or otherwise, it's just a bunch of electrons moving around QEDally. It's just electrons in various positions, following rules, until a consciousness observes the product.
Do you think the following produces a consciousness, or a consciousness observes the product of the DAL:


Obviously not, as should be self-evident from the very sentence you're responding to, where I explicitly state that a human-powered computer (i.e., humans moving about in the physical world in order to create a computational system through their physical actions) could not possibly achieve enough complexity or efficiency to allow for that unless you throw away the laws of physics.

Of course not. However, if this does not produce a tiny bit of consciousness, then a lot more of it isn't going to produce any consciousness either. It affects your consciousness when the product of the DAL is funneled into your consciousness.
 

Oh, by consciousness you mean articulated.......
 
No old man, I mean something like you, with a sense of humor.
 

Thanks?

I actually meant to clarify your "the product of the DAL is funneled into your consciousness"

I'll accept the product of DAL as consciousness. Funneling seems redundant unless you mean something like saying something or otherwise demonstrating your DAL product, which, as I said, is consciousness as far as I'm concerned. Demonstrations are only necessary in science and horseshoes.
 

While a consciousness can be a Data Abstraction Layer, not all DALs are consciousnesses. Say you build a computer out of humans following certain rule sets. The humans just move around according to rules; they don't know the data they convey by their movements, or necessarily the final product of the rule sets unless they are shown it. For example:

Imagine you could pass around information to humans in the "human LCD stadium".

Say certain people are instructed to pass out individual instruction manuals to specific recipients. The information these distributors have is which person is to be given each particular instruction set. This is one DAL.

A second DAL consists of specific instruction booklets that are passed to specific individuals. These are physical booklets, with instructions written in them. The written instructions are another DAL: the ink itself is not the data, although it is part of the DAL.

Another DAL is the finished product: the humans behaving according to the instructions to create images in the stadium bleachers. This DAL is not necessarily a consciousness itself; however, when one of us views it, it is "fed" or "funneled" into our consciousness. Which might not be the greatest of terminology.
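If it helps, the layering can be sketched in a few lines of Python. This is only a toy illustration of the stadium example; the seat count, the colours, and all the names below are invented, not part of the original description. The point is just that each layer follows local rules without "knowing" the final image:

[code]
# Toy sketch of the "human LCD stadium" as layered data abstraction.
# Each layer only follows its local rules; no single layer "knows"
# the final image.

# DAL 1: who hands which booklet to which seat (seat -> booklet id)
assignments = {seat: seat % 2 for seat in range(16)}

# DAL 2: the booklets themselves (booklet id -> rule for each frame);
# the ink/lambda is not the data, it just carries it
booklets = {
    0: lambda frame: "RED" if frame % 2 == 0 else "BLUE",
    1: lambda frame: "BLUE",
}

# DAL 3: the finished product; each person applies their rule, and the
# pattern only means anything to an observer who views the whole stand
def render(frame):
    cards = [booklets[assignments[seat]](frame) for seat in range(16)]
    return "".join("#" if c == "RED" else "." for c in cards)

for frame in range(3):
    print(render(frame))  # only here does an observer see "the image"
[/code]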


Hey.. umm... emotion? Here?
 
That is the point. That is the conscious/aware aspect: to be conscious/aware of colours, shapes, etc., and their significance on the basis of recognition, an aspect of conscious awareness. Which may or may not include self-awareness. Some animals are perhaps conscious of their environment, but not self-conscious.

No. Awareness has nothing to do with it. It is very simple to do an aware agent (including self awarenes). To be aware is a basic requirement.

You see, to me, your wording appears to be quite problematic.

When you say ''It is very simple to do an aware agent (including self awarenes)'' - I have no idea what you mean by 'to do an aware agent (including self awareness)'.

Does 'to do' mean that we can easily design and manufacture an 'aware' or 'self aware' agent? Or did you mean something else?
 
.....

Again, what we know is that consciousness is an emergent property of neural activity, which is itself really just a complex system of data exchange between individual connections. This implies that any sufficiently complex system of data exchange could conceivably achieve the same effect. We do not have any reason to suspect there must be some sort of direct "connection" between the mechanism and the produced consciousness.

.....

If just a complex system of data exchange (processing, whatever) could achieve the same effect there is no need for emergent which conflicts with principles of science.

...

what?

I don't actually understand what this sentence is supposed to mean. For one, the consciousness would still be an emergent property of the complex system of data exchange; and two, emergent properties are *not* in conflict with any principles of science; nor is the concept of emergence by itself.


One cannot run an experiment if that experiment generates emergent,

Is this even a proper sentence? You can't use the word 'emergent' in that fashion.

Anything else is psychological bullshit or magic if you will. The fool who spouted "The sum is greater than the sum of its parts" had a pfart problem.

By calling emergent properties 'psychological bullshit or magic', you strongly suggest you don't actually know what you're talking about. The sum being greater than the sum of its parts, i.e., emergent properties, is a scientifically valid notion that is observed in a great number of physical systems. It is neither bullshit nor magic; it is a simple logical observation that if one configures the individual parts in the right way, functions can arise that cannot be achieved by these parts on their own.

Take apart a car engine, and you just have a bunch of parts that aren't very useful on their own. Put them together in the right way, and you create something far more capable/useful than if you just took all the parts and randomly taped them together: in other words, the sum of the engine is greater than the sum of its parts.

Following that basic fact, we can apply this thinking to consciousness. We already know (from observing the connection between human consciousness and brains) that complex data exchange systems (such as neural networks like that of the brain) are a necessary part of forming consciousness. Since we do not, however, know exactly how to configure such a system to produce consciousness, and we have thus far not been able to demonstrate any configurations to be dead ends, we can say that all such complex systems could *conceivably* (again, operative word) give rise to consciousness. This is not even remotely 'bullshit or magic'; it's a simple, logically consistent extrapolation of empirical observation.
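For what it's worth, emergence is easy to demonstrate in a few lines of code. Below is a minimal Python sketch (an illustration, not anything from the posts above) using Conway's Game of Life: every cell obeys one trivial local rule, yet a "glider" pattern travels across the grid, a behaviour that no individual cell has:

[code]
# Emergence in miniature: Conway's Game of Life. Each cell obeys one
# local rule; a "glider" that travels across the grid is a property of
# the configuration, not of any individual cell.
from collections import Counter

def step(live):
    # count the live neighbours of every cell adjacent to a live cell
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # birth on 3 neighbours, survival on 2 or 3
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)

# after 4 steps the whole pattern has moved one cell down-right
assert cells == {(x + 1, y + 1) for (x, y) in glider}
print("glider moved:", sorted(cells))
[/code]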
 
Does 'to do' mean that we can easily design and manufacture an 'aware' or 'self aware' agent? Or did you mean something else?

Of course. What is the problem?
 
Togo said:
There is absolutely no evidence whatsoever supporting your interpretation of consciousness. If you want consciousness as an emergent property to be accepted, and any other interpretation to be rejected, you need at the very least some kind of reason.
I... don't think you understand what you're saying here.

Hm.. I do, you may not.

What I'm saying is that there is no evidence (your criterion) separating your view from that of a dualist.

You do realize that emergence refers to the process whereby larger entities or patterns arise through interactions among simpler entities that do not on their own exhibit such properties, right?

Yes, although it really doesn't matter for the point I was making, which is that the difference between the two views isn't a matter of evidence.

Literally nobody in either philosophy (except some of those who are of the theistic persuasion) or science posits anything other than the notion that consciousness is an emergent property. So... what the hell are you even talking about?

Chalmers, and other dualists. You appear to think they don't exist.

When you claim there's no evidence supporting my interpretation of consciousness, you're demonstrating that at best you simply don't know what the term 'emergent property' means and at worst that you're actively suggesting a supernatural explanation for consciousness. I prefer the middle road though, where either or both of us is simply misinterpreting what the other's argument is.

Yes, you seem to think I'm somehow proposing that properties don't emerge. I'm not, I'm saying that what divides your opinion (consciousness is an emergent property) from other rival opinions (consciousness is not an emergent property) is not evidence.

And no reason to hypothesise that it can.

Nonsense. We have lots of reasons to hypothesize exactly that. By accepting that we live in a materialistic universe, we find ourselves forced to conclude

'Because I'm a materialist' isn't really a reason, any more than 'because I'm a Christian' is a reason. I'm not saying it's an unreasonable position to hold, I'm saying that it is a position you have chosen to hold.

that it is plausible that any process within it can be replicated since any such processes will be subject to certain basic natural laws and are not fucking magic.

No, but then if you're arguing with a dualist, then it isn't fucking magic to them either. You really can't argue that dualism fails because it's not materialism - that's totally missing the point.

Which is why the first hurdle is to define what we're trying to prove. Traditionally, attempts to form scientific hypotheses about consciousness have foundered on one of two rocks: either the 'we can't measure this' rock, or the 'we've found something to measure, but no one really thinks it's consciousness' rock. This is why it's called the 'hard problem' of consciousness. Because there are lots of easy problems to solve, just by redefining conscious experience as something that's simple to measure.

Except this is not actually the issue at all if we're talking about creating artificial consciousness. You don't need to define something in order to create it; nor do you explicitly need to understand or measure it first.... ...Artificial consciousness would still be conscious regardless of whether or not we can recognize it.

No, of course not, you can create something first and then argue over whether it is conscious. But given that this is an internet discussion, unless you believe the patterns of our lengthy posts will suddenly awaken and become sentient, then the first step we can reasonably accomplish here is to work out what the frag we're talking about.

In order for science to be useful here, we need something we can measure. Or we need to prove that there is no possible difference between A and B. What we can't do is declare we're only interested in measurable things, say that the difference is not measurable, and then claim that because it's not measurable it somehow doesn't exist.

Which is where the simulation comes in. Since we know human consciousness to be a product of the brain (we don't need to understand in exacting detail how consciousness functions to know this, just like you don't need to understand the physical processes involved with smoke resulting from a fire to make the connection between the two) then we can reasonably conclude that a simulation of said brain; with a high enough resolution; is in fact conscious when it behaves similarly to a real brain.

Only if we make certain materialist assumptions a priori. Again, I'm not arguing they're unreasonable assumptions, merely that making them isn't really a refutation of dualism.

It wasn't programmed, after all, to pretend to be conscious; its consciousness is the result of a simulated version of the exact same processes that appear to produce our own consciousness. So, at that point we can start to actually experimentally understand consciousness in ways that are not possible at present, by altering bits and pieces of the simulation in order to see what changes.

Yes, this is something I've actually done. Of course, unless you have a definition of consciousness, you have to guess at what to measure (or measure everything, and then publish whatever is fashionable at the time.)

It's in effect claiming that any mechanism that is sufficiently complicated to duplicate the behaviour of a conscious person, develops consciousness.

...no, it's really not.

Hm.. Fair point.

that a 'cognitive zombie' would be logically impossible. If it's not logically impossible, then there's still no way of measuring whether consciousness is present or not.

This is not really a good argument, since this kind of logic invalidates any and all measurements, period. It leads to solipsism.

No, it leads to some topics not being resolvable through scientific inquiry. It would only be solipsism if it were claimed that it's impossible to measure anything, rather than only some things.

It is not logically impossible that you are actually a brain in a vat and that everything you've ever experienced is a lie;

No, you're confusing two different problems. The brain in the vat thought experiment is about what we can be sure about. The measurement problem is about what we can empirically control. Not all things are measurable. We can't measure Watford's potential to win the cup, the desirability of life insurance, or whether it's better to open eggs from the big end or little end. That's not because of solipsism.

The basic problem is that consciousness, as ordinarily conceived, is not conducive to measurement. If we're going to use science to solve the hard problem, we need to measure something. What should we measure?
 
Of course. What is the problem?
I suspect you have designed a self-conscious robot already, good for you.
 

Consciousness is one of those things that comes in individual and group plans. Consciousness in humans is very individual, in that whatever genetics exist for that individual lead to any of a multitude of conscious outcomes which may be similar to, but different from, any other individual's. Given that, all conscious states you describe may or may not be consciousness. The ones where rules are in place are usually called instinctive or reflexes, but the individual is aware that they are being exercised. Rather than an LCD model I'd suggest an LSD model, since it is chemically motivated, as are the human neural systems. Obviously this could be very emotional.

Your first two models I see as being examples of individual personal and group considered consciousnesses.

Yeah, I know I didn't follow Sutherland's rules and keep out that emotional, chancy, chemical intervention stuff. So what? It's all consciousness. More or less mechanical activity of which the participant is aware is consciousness. Dealing with individuals via program is particularized consciousness with respect to, say, kin. Collective consciousness is what we call the free will arena, where one schemes about what one has done to leave indications that one is doing what he hopes is not a threat to others.

None of this seems to be a difficult problem for programmers except that, well, programmers are programmers.

Let me give you an instance in which I was pitted against a well respected AI design person at Boeing. Our task was to come up with a system that aided pilots in making decisions in critical situations where things were going bad. His plan included modules that evaluated conditions and ended in decision makers that provided prioritized instructions to pilots. My plan consisted of menus which pilots relied upon to implement instructions to the aircraft control systems. Both plans did about the same thing. His plan cost billions; my plan used existing technology and research to provide information to decision makers in selected, already existing organized lists, costing merely tens of millions, with an already proven track record.
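For illustration only (no detail of the actual Boeing systems is implied; every name and checklist item below is invented), the menu-style approach amounts to something this simple in Python: a detected condition selects an existing, prioritized checklist, rather than an inference engine synthesizing advice from scratch:

[code]
# Toy sketch of a menu-style decision aid (everything here is invented).
# A detected condition looks up an existing, prioritized checklist
# instead of an "AI" module generating novel instructions.

CHECKLISTS = {
    "engine_fire": ["Throttle: IDLE", "Fuel: CUTOFF", "Extinguisher: FIRE"],
    "hydraulic_loss": ["Reduce airspeed", "Plan flaps-up landing", "Brief crew"],
}

def menu_for(condition):
    """Return the prioritized checklist for a detected condition."""
    return CHECKLISTS.get(condition, ["No checklist: pilot judgment required"])

for i, step in enumerate(menu_for("engine_fire"), 1):
    print(f"{i}. {step}")
[/code]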

That seems to be a problem with programmers in general. They'd rather scrap what exists and come up with a pure solution. As one who managed those people, and performed as one before that, I chose to use what existed rather than reinvent the wheel.

Two additional bits of information: both approaches have been tried. What is essentially my approach is in place and is contributing to A/C safety. His approach is now guiding ES and AI and consciousness-crats to treat pure systems as if either emotion or rationalism is basically excluded, and it is still in what is called the concept stage (DSARC I).

Sorry if I don't bring a smile. Just a problem of mine. I can't digest 'creative' solutions when well functioning experiential solutions exist. That is not to say that I'd rather fly by the seat of my pants and look out the window to see what armament I had remaining. I'd rather depend on proven assisted systems and displays to keep my head and mind on focus when in combat which is demonstrated to increase wins and save lives.
 
Wow. I'm gonna be gone for a few hours. When I come back be ready to compare articles.
 
AI is a branch of computer science, not neuroscience. It tells us little about how the human brain thinks and does what it does. Does our ability to build a robot dog that barks tell us why and how dogs bark? No. It tells us that computer programmers can write code that mimics the externally observable actions of a dog.

There are infinite paths between any stimulus-response pair. Just because you can build a program that mimics S-R pairs similar to humans doesn't mean that it does so via the same pathway as humans. In fact, the odds are 1/infinity that it does.
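That point is easy to make concrete in code. Here is a minimal Python sketch (purely illustrative; the stimuli and responses are invented): two functions with identical stimulus-response behavior built on entirely different internal pathways, so observing the S-R pairs alone cannot tell you which mechanism is inside:

[code]
# Two implementations with identical stimulus-response behavior but
# entirely different internal pathways.

def respond_lookup(stimulus):
    # pathway 1: a memorized lookup table
    return {"pain": "withdraw", "food": "approach"}.get(stimulus, "ignore")

def respond_rules(stimulus):
    # pathway 2: explicit branching "reasoning" over the stimulus
    if stimulus == "pain":
        return "withdraw"
    if stimulus == "food":
        return "approach"
    return "ignore"

# Externally indistinguishable: same S-R pairs, different mechanisms.
for s in ["pain", "food", "noise"]:
    assert respond_lookup(s) == respond_rules(s)
print("identical behavior, different internals")
[/code]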

The thread question is "We could build an artificial brain that believes itself to be conscious. Does that mean that we have solved the hard problem?"
Presuming the first statement is actually true, the answer would still be "No". It is quite possible that we could figure out a system that gives rise to real consciousness without actually knowing how it does it. Neuroscience (which, unlike AI, studies human consciousness) is advancing in identifying what it calls "neural correlates of consciousness". Notice the reference to correlation rather than causation. Technically, these neural systems could be argued to be necessary and sufficient to give rise to consciousness. But there is still nothing close to a viable theory as to how or why these collections of systems could do this. It is analogous to an engineer who could investigate a physical disc brake system in a car and determine all the necessary and sufficient arrangements of physical components that allow the system to work and allow stepping on the pedal to slow the car down. Yet they can do this without understanding anything about the physics of how or why the system produces that outcome.
The analogy isn't perfect because we do know the physics behind this outcome, but that knowledge of the physical forces was not required nor a product of us being able to deconstruct and reconstruct the observable parts of a brake system and get it to produce a particular outcome given a particular input.

Even if we can actually figure out the arrangement of physical parts that produce human consciousness, we will still be far from knowing how this system actually accomplishes this outcome. And AI isn't even trying to figure out the systems that produce human consciousness. It is figuring out how to mimic the outward appearance of consciousness. That is not the same thing as building a system that produces actual consciousness. First they must show that the AI actually "believes it is conscious" in the qualitatively same way that humans do. I haven't seen anything showing they are even approaching a demonstration of that. Then, even if they show that, they are unlikely to have produced this consciousness via the same causal pathway via which human consciousness is produced, unless they start the whole process based upon components that mimic how the organic brain operates and transmits info.


Can we mimic the outward behaviors associated with consciousness? Yes, that is what AI is approaching.
Can we figure out the arrangement of neural parts that wind up producing human consciousness? Yes. That is what neuroscience is approaching.

But these positive achievements are not the same as and may not get us close at all to the real "hard problems", which are:

Exactly how does the arrangement of parts that produce human consciousness accomplish this?
Is any AI system actually conscious? And if so, is it conscious via the same causal pathway as human consciousness?
 
I... don't think you understand what you're saying here.

Hm.. I do, you may not.

What I'm saying is that there is no evidence (your criterion) separating your view from that of a dualist.

...this is utter nonsense. Emergent properties are not dualist in nature.


Chalmers, and other dualists. You appear to think they don't exist.

I'm not sure what point you're trying to make; Chalmers doesn't deny consciousness as being an emergent property; he even acknowledges that consciousness is caused by physical processes; he just believes there's something else going on in addition to the physical.

Yes, you seem to think I'm somehow proposing that properties don't emerge. I'm not, I'm saying that what divides your opinion (consciousness is an emergent property) from other rival opinions (consciousness is not an emergent property) is not evidence.

The rival opinions appear to come in the form of theism "it's a soul, stupid!" or doubt of consciousness as an emergent property of physical systems; which strikes me as having motivations similar to that of theism. In any case, neither of these represent an actual working explanation for consciousness: the former is just superstition, and the latter is simply expressing doubt without providing credible mechanisms to replace it. So even if we didn't have evidence for the physical explanation, it'd still be the only credible explanation we have. But of course, we do actually have evidence; evidence which has already been presented (such as the observable link between brain damage and changes in conscious functioning).

'Because I'm a materialist' isn't really a reason, any more than 'because I'm a Christian' is a reason. I'm not saying it's an unreasonable position to hold, I'm saying that it is a position you have chosen to hold.

It is the only logical position. If the universe and everything in it is materialistic in nature, then it should theoretically be possible to reproduce consciousness because it would be subject to physical laws and rules which exist in the same framework as we do. It's no different than saying that it is theoretically possible for me to replicate your exact omelette recipe; it's simply a matter of gathering the right ingredients and working out the steps. It is not a position I have "chosen" to hold, it's the only position I *can* hold; since any other position would require the active rejection of an objective reality.

No, but then if you're arguing with a dualist, then it isn't fucking magic to them either. You really can't argue that dualism fails because it's not materialism - that's totally missing the point.

Dualism proposes that consciousness is somehow separate from physical existence; that it is not subject to physical processes or that it can exist independently of physical reality. Even if they don't call it that, it's still basically just "magic".
No, of course not, you can create something first and then argue over whether it is conscious. But given that this is an internet discussion, unless you believe the patterns of our lengthy posts will suddenly awaken and become sentient, then the first step we can reasonably accomplish here is to work out what the frag we're talking about.

Which appears impossible until we actually have a conscious mind that we can fully control and experiment with. If one has no knowledge whatsoever of human biology, then one can make some basic inferences about a human's internal structure by observing it in action... but if you want to actually understand its anatomy, you must take a look inside. One would expect the same thing to be true with something like consciousness. We can infer things about its nature from observing entities we agree are conscious (humans); but we're going to need to dissect a functioning consciousness if we want to go beyond that. We'd need to break some pretty big ethical rules in order to experiment with a human's brain/consciousness to the extent that we can systematically begin to understand consciousness... so it'd probably be easier to do it on a simulation. At least until we decide that it's unethical to experiment on digital consciousnesses.
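As a toy illustration of that "dissect the simulation" idea (everything below is invented for the sketch; a real brain simulation would be nothing this trivial), one can "lesion" parts of a simulated network and watch which behaviors change, mirroring how lesion studies link brain structures to functions:

[code]
# "Lesion" experiment on a toy network (all weights invented). Silencing
# one unit and watching the behavior change is the simulated analogue of
# inferring function from brain damage.

def run(inputs, lesioned=None):
    w_hidden = [[0.9, 0.1], [0.2, 0.8]]      # two hidden units
    hidden = []
    for i, weights in enumerate(w_hidden):
        if i == lesioned:
            hidden.append(0.0)               # the silenced unit does nothing
        else:
            s = sum(w * x for w, x in zip(weights, inputs))
            hidden.append(1.0 if s > 0.5 else 0.0)
    return sum(hidden)                       # crude "behavioral" readout

stimulus = [1.0, 0.0]
print(run(stimulus))                # intact: 1.0
print(run(stimulus, lesioned=0))    # 0.0 -> unit 0 was doing the work here
print(run(stimulus, lesioned=1))    # 1.0 -> unit 1 wasn't involved
[/code]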



Only if we make certain materialist assumptions a priori.

Which I have no problem with, since those assumptions are the only ones that have thus far actually allowed us to do anything at all in the world. It'd really be quite a bother if we started assuming the world doesn't run according to materialist principles; it'd make empirical testing practically impossible.

No, it leads to some topics not being resolvable through scientific inquiry. it would only be solipsism if it were claimed that it's impossible to measure anything, rather than only some things.

No, no. It'd still lead to solipsism as the logical conclusion. Solipsism claims you can only be certain that your own mind exists, and that all other knowledge is suspect. It's the same logic that is in play with the philosophical zombie argument: if we accept that because philosophical zombies CAN exist, we therefore can't conclude a physical origin of consciousness, then we must also accept that because it's possible that we're just a brain in a vat, all of our experiences may be lies and we might be the only thing that exists... ie solipsism. That is the logical outcome, even if one doesn't like that outcome and so doesn't commit all the way.


No, you're confusing two different problems. The brain in the vat thought experiment is about what we can be sure about. The measurement problem is about what we can empirically control. Not all things are measureable. We can't measure Watford's potential to win the cup, the desirability of life insurance, or whether it's better to open eggs from the big end or little end. That's not because of solipsism.

I'm really not confusing anything here. I'm pointing out the absurdity in claiming that if a philosophical zombie *could* exist, we therefore can't measure consciousness on the basis that a given consciousness might just be an elaborate hoax (ie, a p-zombie); that argument is the exact same argument one would use to reject the world we experience on the basis that we *could* just be a brain in a vat.
 
We could build an artificial brain that believes itself to be conscious. Does...

"Self-aware" only means that the robot/agent has a representation of it self.. That aint harder than representing other agents.
In that case a book having a picture of a book inside is self-aware.

Yes, but a book is not an autonomous agent and doesn't have much behavior to talk about. I thought that being autonomous was a requirement?
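A minimal Python sketch of that "representation of itself" notion (illustrative only; all names invented): an agent that models other things in its world and keeps an entry for itself in the same format, one it can consult and update, unlike a book with a picture of a book in it:

[code]
# An agent whose "self-awareness" is just a self-model: it represents
# other things and itself in the same format, and (unlike the book with
# a picture of a book) it can consult and update that model.

class Agent:
    def __init__(self, name):
        self.name = name
        self.models = {}                    # beliefs, indexed by name

    def observe(self, thing, fact):
        self.models.setdefault(thing, set()).add(fact)

    def reflect(self):
        # the self-representation: its model of the entity that is itself
        return self.models.get(self.name, set())

a = Agent("robbie")
a.observe("door", "is closed")
a.observe("robbie", "battery is low")       # a fact about itself
print(a.reflect())                          # {'battery is low'}
[/code]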
 

Place held.

My articles are: Reductionism redux: http://www.idt.mdh.se/kurser/ct3340/archives/ht02/Reductionism_Redux.pdf

Read the highlighted parts to get the basics for why this is the right topic for parts equal to the whole.

Reductionism, Emergence,and Effective Field Theories http://arxiv.org/pdf/physics/0101039.pdf

This article breaks down the current arguments in physics about reductionism. Other than reduction, it boils down to a funds competition by those who aren't trying, as yet, to relate their science to physics. I understand and sympathize, but government laziness and AAAS nearsightedness aren't sufficient reasons to overthrow a model that consistently, when related to other disciplines, relates those systems to the physical science. A technically good read and a clever argument, but one without substance beyond energy effect boundary conditions. We just found the Higgs boson, ferchrissake, using a machine entirely built depending on sum-is-equal-to-parts, with methods based on sum-equal-to-parts, ie: entirely reductionist.

Also I'd like to add that we find emergence because we don't have a complete list of parts,* which is why those other scientists want to go their own way. They don't have enough information, so they want to develop other schemes whereby they can make sense of what's going on, a desirable goal, but because they don't have the physics at hand they are told "get it". Actually that argument has another drawback. They invent emergent macro rules that hold together for a while, finally being overthrown, then have to go back and take another tack, while the reductionists are still plodding ahead with uninterrupted advances. If you don't believe it, look at the history of neuroscience and psychology and see whether threads based on physical roots are bouncing around, or whether those that aren't so based are. Believe me, the latter are bouncing around, yielding a new thread of schools about every twenty years or so.

*Think of their problem as one where we more or less completely understand what's going on at the entrance to the ear, so we understand in physical terms what the first neurons are doing. Whereas, on the other hand, others are looking at the medial lateral frontal lobe and speculating, based on physical change in oxygen uptake by those cells, about what the conscious brain is doing with that known stuff we found at the cochlear nucleus.

Finally a third article to illustrate that those who claim emergence are doing so because they don't have a firm grip on the parts. Immune Privilege and the Philosophy of Immunology http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3959614/

Like other scientists, immunologists use two types of approaches to research: one reduces the problem to its parts; the other studies the emergent phenomenon produced by the parts. Scientists that reduce the problem to its parts are sometimes called reductionists. The conclusions of reductionist experiments are often applied to the greater whole, when in actuality they may only apply to that particular experimental set. We, reductionists, are the ones who think our immune behavior exists solely because of genes, the presence of TGFβ, the presence of inflammatory cytokines, and appearance of a receptor.

Please note the problem is not that the methods won't work; it's that they are too narrowly positioned to explain a discipline that goes way beyond what they understand. So they invent a new approach that doesn't have those firm roots, and they sally forth 'finding' emergences everywhere in a set of parts largely unknown. It's good they do the research. It's wrong that they think they are explaining. They should be comparing with what's known and using those emergences to find ways to make their knowledge actually more complete.

Your turn.
 
Given that, all conscious states you describe may or may not be consciousness. The ones where rules are in place are usually called instinctive or reflexes, but the individual is aware that they are being exercised.
They weren't all conscious states. I wasn't even trying to go there, to tell you the truth. The point was that current generation AIs that I am aware of (neural nets) exist as abstract data, and don't necessarily feel anything (they aren't necessarily conscious).

I can't digest 'creative' solutions when well functioning experiential solutions exist.
So you're saying that DALs are not necessary to get to where we (or maybe just you) need to go?
 