
Consciousness -- What is it? How does it work?

The human brain is made from only a handful of well-understood elements.

By "elements", do you mean carbon, hydrogen, oxygen, calcium, etc?
Amongst others, yes.
Given those understandings, who has claimed to understand how memory functions?
So far? Nobody. Are you suggesting that this means it cannot be understood? That it involves some kind of additional 'magic' we don't and can never understand?
Then, in the manner of thesis, antithesis, synthesis, how does the brain meld seemingly unrelated memories and form what has not previously existed?

Again, good luck.

You are neither smart enough nor well informed enough to dictate the form of debate on this topic.

Never mind thesis, antithesis, synthesis; these are not magical components of the mind. All is atoms and their patterns of interaction; there is not one shred of evidence that a human brain cannot be artificially emulated.
 
Just a subroutine.

What? A subroutine? How does "a subroutine" explain the inner experience?

That's what they're trying to find out.

Science works this way, with ignorance, not absolute truths set in stone beforehand.

____________________

And this has officially degenerated into another consciousness thread. Good fukking grief.
 
Just a subroutine.

What? A subroutine? How does "a subroutine" explain the inner experience?

This is hackish but it demonstrates the point:

Qbo has several stored answers and behaviors in an internal knowledge base, which we upgrade as the project evolves, to handle questions or orders to Qbo such as “What is this?” or “Do this”. Qbo interprets the object “Myself” as an ordinary object, for which it has special answers in its internal knowledge base such as “Woah. I’m learning myself” or “Oh. This is me. Nice”. Qbo selects its reflection in the mirror in the image that he sees using stereoscopic vision, and one of our engineers interacts with (speaks to) him so that Qbo can learn to recognize himself as another object.

http://spectrum.ieee.org/automaton/...qbo-passes-mirror-test-is-therefore-selfaware
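For flavor, here is a minimal Python sketch of the kind of lookup the article describes; the table, labels, and function names are my assumptions, not Qbo's actual code:

```python
import random

# Hypothetical stand-in for Qbo's internal knowledge base: canned
# responses keyed by whatever label the vision system assigns.
KNOWLEDGE_BASE = {
    "ball":   ["This is a ball."],
    "mirror": ["This is a mirror."],
    # "Myself" is stored like any other object; it just happens to
    # carry self-referential answers.
    "Myself": ["Woah. I'm learning myself.", "Oh. This is me. Nice."],
}

def answer_what_is_this(recognized_label: str) -> str:
    """Return a stored answer for a recognized object ('What is this?')."""
    answers = KNOWLEDGE_BASE.get(recognized_label)
    if answers is None:
        return "I don't know this object yet."
    return random.choice(answers)

# After the vision system labels the mirror reflection "Myself":
print(answer_what_is_this("Myself"))
```

The point of the sketch is that "self-recognition" here is just a dictionary hit on a specially named key, which is exactly why it is hackish.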


And you might find this interesting:

Using the robot, Justin seeks to “emulate forms of self-awareness developed during human infancy. In particular, we are interested in the ability to reason about the robot’s embodiment and physical capabilities, with the robot building a model of itself through its experiences.” Programmed to observe its own body as it moves through space, Nico learns the relationship of its end-effectors (grippers, for example) and sensors (stereoscopic cameras) to each other and the environment. It combines models of its perceptual and motor capabilities, to learn where its body parts exist with respect to each other and will soon learn how those body parts are able to cause changes by interacting with objects in the environment.


One of his papers on this topic, “Mirror Perspective-Taking with a Humanoid Robot,” was recently accepted for presentation at the 26th Annual Conference on Artificial Intelligence, to be held in July in Toronto, Canada. “Mirror Perspective-Taking” describes how the robot is able to use self-knowledge regarding its body and senses to interpret what it sees when it interacts with a mirror, “allowing the interpretation of reflections in the mirror.” Nico, using knowledge that it has learned about itself, is able to use a mirror as an instrument for spatial reasoning, allowing it to accurately determine where objects are located in space based on their reflections, rather than naively believing them to exist behind the mirror.
http://www.yale.edu/graduateschool/...5/computer-science-robots-self-awareness.html
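The geometric core of that mirror use is easy to sketch. Assuming the robot already knows the mirror's plane (a point on it plus its normal), an object's apparent position behind the mirror reflects across the plane to its real position. This toy NumPy example illustrates the idea only; it is not the paper's algorithm:

```python
import numpy as np

def reflect_across_mirror(apparent_pos, mirror_point, mirror_normal):
    """Map an object's apparent (behind-the-mirror) position to its
    real position by reflecting across the mirror plane."""
    p = np.asarray(apparent_pos, dtype=float)
    q = np.asarray(mirror_point, dtype=float)
    n = np.asarray(mirror_normal, dtype=float)
    n = n / np.linalg.norm(n)   # unit normal of the mirror plane
    d = np.dot(p - q, n)        # signed distance of apparent point from plane
    return p - 2.0 * d * n      # reflect: remove twice the normal component

# Mirror plane through the origin facing along +x; the cameras see an
# object apparently 1 m behind the mirror and 0.5 m up:
print(reflect_across_mirror([-1.0, 0.5, 0.0], [0.0, 0.0, 0.0], [1.0, 0.0, 0.0]))
# -> [1.  0.5 0. ]  (the object is really in front of the mirror)
```

A robot that naively believed its eyes would plan motions toward the point behind the glass; the reflection step is what lets it use the mirror as an instrument.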
 
Yes. But that doesn't answer the objection: how does it create the inner experience of it? We are not only action; we also have this inner experience.

I have an inner experience. I assume that you do too, but that is just an assumption. If a machine were to behave in a similarly complex way to a human - perhaps including telling me that it has inner experience - then why should I not make the same assumption for the machine?

There is no way to verify 'inner experience' except for oneself; asking that we do so for machines that appear to be self-aware before we can consider them 'genuinely' self-aware is unjustified.

First: self-awareness is just the agent taking itself into consideration. That has nothing to do with inner experience. It is easy to imagine a creature that has inner experience but is not self-aware.

Second: even if there is no way to verify "inner experience" from the outer behaviour of machines, it is justified to assume that they do not have it.

Computers can mimic human behaviour pretty well without any part of the system being designed to create an inner experience. Judging that an inner experience is present from the behaviour of a system designed to mimic the behaviour of a system with inner experience is circular reasoning.
 
I have an inner experience. I assume that you do too, but that is just an assumption. If a machine were to behave in a similarly complex way to a human - perhaps including telling me that it has inner experience - then why should I not make the same assumption for the machine?

There is no way to verify 'inner experience' except for oneself; asking that we do so for machines that appear to be self-aware before we can consider them 'genuinely' self-aware is unjustified.

First: self-awareness is just the agent taking itself into consideration. That has nothing to do with inner experience. It is easy to imagine a creature that has inner experience but is not self-aware.

Second: even if there is no way to verify "inner experience" from the outer behaviour of machines, it is justified to assume that they do not have it.

Computers can mimic human behaviour pretty well without any part of the system being designed to create an inner experience. Judging that an inner experience is present from the behaviour of a system designed to mimic the behaviour of a system with inner experience is circular reasoning.

Define precisely what this "inner experience" is.
 
All is atoms and their patterns of interaction; there is not one shred of evidence that a human brain cannot be artificially emulated.

I felt no need to use the word "magic" in this discussion. I therefore did not use it.
Given the following:
1) the seemingly unlimited faith of Big Bang cosmologists in their ex nihilo conclusion,
2) their seemingly unlimited supply of hypotheses,
3) their decades of failure or refusal to submit their hypotheses to falsification, and
4) the cautions implied by Heisenberg's uncertainty principle and by Gödel's incompleteness theorems,
I do have some skepticism.
I admit to limits on the horizons I see; in choosing what to read I prefer non-fiction to science fiction.
There might also be a shortage of money for the work.
 
First: self-awareness is just the agent taking itself into consideration. That has nothing to do with inner experience. It is easy to imagine a creature that has inner experience but is not self-aware.

Second: even if there is no way to verify "inner experience" from the outer behaviour of machines, it is justified to assume that they do not have it.

Computers can mimic human behaviour pretty well without any part of the system being designed to create an inner experience. Judging that an inner experience is present from the behaviour of a system designed to mimic the behaviour of a system with inner experience is circular reasoning.

Define precisely what this "inner experience" is.
That is notoriously hard to do.
You see: most discussions on this matter confuse the "inner experience" with the information handling of the nervous system. Self-awareness, intention, perception, consciousness, etc. are among these confused terms. You do not need inner experience to be self-aware or to understand or show intention. Inner experience is what seems to be an "inner theatre": that you hear your own thoughts, that you see the (heavily filtered and adjusted) visual input.
It is the fact that you "do experience". Not what you experience, not the data you experience, but the experience in itself.
 
Define precisely what this "inner experience" is.
That is notoriously hard to do.
You see: most discussions on this matter confuse the "inner experience" with the information handling of the nervous system. Self-awareness, intention, perception, consciousness, etc. are among these confused terms. You do not need inner experience to be self-aware or to understand or show intention. Inner experience is what seems to be an "inner theatre": that you hear your own thoughts, that you see the (heavily filtered and adjusted) visual input.
It is the fact that you "do experience". Not what you experience, not the data you experience, but the experience in itself.

This seems fairly simple as well. Inner experience, then, is not real-time sensory input, but rather a routine that reviews stored real-time input. The reason to do this could be very simple as well: to discard a bunch of worthless sensory input from temporary storage. Run the review routine through some of the same circuits used for real-time input, and there you have your "inner theater". It wouldn't hurt to toss in some fuzzy logic and an order of magnitude more complexity.
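To make that hand-waving concrete, here is a toy Python sketch of such a review routine; the buffer, the salience score, and all names are invented for illustration:

```python
from collections import deque

def process(percept):
    """Stand-in for the circuits used for real-time input; the review
    routine deliberately reuses this same function on stored input."""
    salience = 1.0 if "name" in percept else 0.1   # toy relevance judgment
    return {"content": percept, "salience": salience}

sensory_buffer = deque(maxlen=100)   # temporary storage for recent input

def sense(percept):
    """Real-time pass: process the percept and buffer it for later review."""
    sensory_buffer.append(percept)
    return process(percept)

def review(threshold=0.5):
    """'Inner theater': replay buffered input through the same circuits
    and discard whatever is judged worthless."""
    kept = [p for p in sensory_buffer if process(p)["salience"] >= threshold]
    sensory_buffer.clear()
    sensory_buffer.extend(kept)
    return kept

for p in ["hum of the fridge", "door slam", "someone said my name"]:
    sense(p)
print(review())   # -> ['someone said my name']
```

Whether anything like this yields experience, rather than merely reporting on stored data, is of course the very point in dispute.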
 
That is notoriously hard to do.
You see: most discussions on this matter confuse the "inner experience" with the information handling of the nervous system. Self-awareness, intention, perception, consciousness, etc. are among these confused terms. You do not need inner experience to be self-aware or to understand or show intention. Inner experience is what seems to be an "inner theatre": that you hear your own thoughts, that you see the (heavily filtered and adjusted) visual input.
It is the fact that you "do experience". Not what you experience, not the data you experience, but the experience in itself.

This seems fairly simple as well. Inner experience, then, is not real-time sensory input, but rather a routine that reviews stored real-time input. The reason to do this could be very simple as well: to discard a bunch of worthless sensory input from temporary storage. Run the review routine through some of the same circuits used for real-time input, and there you have your "inner theater". It wouldn't hurt to toss in some fuzzy logic and an order of magnitude more complexity.

If it's so simple, then design an experiment to prove it and publish your findings.
 
This seems fairly simple as well. Inner experience, then, is not real-time sensory input, but rather a routine that reviews stored real-time input. The reason to do this could be very simple as well: to discard a bunch of worthless sensory input from temporary storage. Run the review routine through some of the same circuits used for real-time input, and there you have your "inner theater". It wouldn't hurt to toss in some fuzzy logic and an order of magnitude more complexity.

If it's so simple, then design an experiment to prove it and publish your findings.

How would you falsify it? To me it just seems like humans thinking they are special. The definitions are eerily similar to someone describing the soul.
 
That is notoriously hard to do.
You see: most discussions on this matter confuse the "inner experience" with the information handling of the nervous system. Self-awareness, intention, perception, consciousness, etc. are among these confused terms. You do not need inner experience to be self-aware or to understand or show intention. Inner experience is what seems to be an "inner theatre": that you hear your own thoughts, that you see the (heavily filtered and adjusted) visual input.
It is the fact that you "do experience". Not what you experience, not the data you experience, but the experience in itself.

This seems fairly simple as well. Inner experience, then, is not real-time sensory input, but rather a routine that reviews stored real-time input. The reason to do this could be very simple as well: to discard a bunch of worthless sensory input from temporary storage. Run the review routine through some of the same circuits used for real-time input, and there you have your "inner theater". It wouldn't hurt to toss in some fuzzy logic and an order of magnitude more complexity.

What? No, I am not talking about any "review" that filters input. I am talking about the direct inner experience that we, naively, see as colors and sounds, etc.: the canvas on which all our experience is played out. It is not our personality or soul or anything woo-like.
 
If it's so simple, then design an experiment to prove it and publish your findings.

How would you falsify it? To me it just seems like humans thinking they are special. The definitions are eerily similar to someone describing the soul.

Of course they are; it is this experience that has made people believe there is a soul. I don't say that there is a soul, but that there is a weird meta-level experience that is extremely hard to grasp.
 
The flaws in humans come from the humans that made them too.
Sure, but as you astutely pointed out yourself, human flaws are not made at all like machine flaws are made. Humans are evolved while machines are designed.

There is nothing special about humans; there is nothing special about human flaws.
There is, to the extent that humans are evolved while machines are designed.

And no, safety critical software is not flawless.
It depends whether you take software to be purely logical or as a material compound. Even if software cannot be regarded as absolutely safe at the logical level, it can be made so safe that no one will ever find a fault. However, once it runs on a machine, all sorts of flaws may appear, and we can even use probabilities to predict that some will occur.

It is generally better than the cheaper stuff. But no matter how much money is thrown at it, it is never flawless.
At the logical level you may not be able to prove that a piece of software is flawed, and there is no general principle saying it has to be. But I agree that the machine running the software will always have flaws, at least machines based on current technology.

The kinds of flaws humans have are the result of the development path that leads to humans - the absence of a purposeful designer. What you are identifying is not a difference between humans and machines per se, but a difference between evolved systems and designed ones.
Yes.

Given the sheer complexity of designing truly flexible software, many researchers are using evolutionary algorithms these days, with great success. The kinds of flaws you see as definitive in separating humans from machines are now being introduced into software.
I don't see them as definitive in principle, but I don't believe we'll be able in practice to emulate human beings or replicate a similar evolutionary process to produce them.
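For concreteness, here is a minimal, self-contained Python example of the evolutionary approach mentioned two posts up; the bit-string genome and its fitness target are toy assumptions:

```python
import random

# Toy evolutionary algorithm: evolve a bit string toward a target,
# standing in for evolving a program's parameters.
TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]

def fitness(genome):
    """Count positions matching the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    """Flip each bit with a small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    survivors = population[:10]                       # selection
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(10)]     # reproduction with variation

print(generation, population[0])
```

Nothing in the loop designs the result; like an evolved organism, the winning genome is simply whatever survived selection, quirks included, which is how evolved flaws can end up inside designed software.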

A human is a machine, for any useful definition of the two words. There is no supernatural component; any aspect of a human could be emulated by a similarly complex machine.
I agree that humans are natural and that there is therefore no supernatural component in human beings, at least none relevant to any objective characteristic of humans, as flaws are. However, it is definitely misleading and unhelpful to characterise human beings as machines. There's no good reason to doubt that nature made human beings, and there is no good reason to believe that we should be able one day to design machines essentially similar to human beings. Hence, it remains to be seen whether humans are machines. And if so, it will be a sticking point making the acceptance of machines problematic.
EB
 
And this has officially degenerated into another consciousness thread. Good fukking grief.
Yeah, and the reason is always the same: the self-styled scientists here keep insisting that it is good science to claim that subjective experience either doesn't exist or is reducible to physics. You should keep instead to things like how the brain works, how human beings behave, how we are able to remember, detect, speak, and understand each other speaking, etc. Maybe subjective experience can be reduced to physics, but it's not true that you know that. If scientists limited their claims to current science, that would improve communication.
EB
 
Humans are rearranged food.

Awareness cannot possibly be purely material; and there is no woo; so awareness must be a dynamic arrangement of unaware matter. It's all just patterns, so it is necessarily reducible to math.
Nah, math is a model, a description, and "Awareness" is not. Thus awareness cannot be reduced to math.
 
I'm mostly bored by the "consciousness", "awareness", and "inner mirror" discussions. If you have some hard science, give me the links. Every time I've looked into it in the past, I found a bunch of philosophical stuff. I'd be happy if my AI bots never develop consciousness. Undoubtedly, someone is gonna do it just to do it. Before we figure out what consciousness is, we will probably have a trial to determine if it's ok to pull the plug on a sophisticated AI. You know someone is gonna claim that this thing has rights, and 1000 attorneys will jump at the chance to be a part of legal history.
 
Humans are rearranged food.

Awareness cannot possibly be purely material; and there is no woo; so awareness must be a dynamic arrangement of unaware matter. It's all just patterns, so it is necessarily reducible to math.
Nah, math is a model, a description, and "Awareness" is not. Thus awareness cannot be reduced to math.

hmmmm.

Awareness is equal to a story read from cohered* attends. Consciousness is deciding that what I'm doing is safe and continuing,
or deciding that what I'm doing is not safe and changing the story to a safer one by rearranging attends to match sensed threats.

The math is just a few decision algorithms and a few threat-search-tree routines based on collected data about those around one (present, remembered, or woo woo).

*attends sorted to match the situation's game outcome, based on input from others in threat scenarios
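As best I can parse that, it amounts to a decision loop over candidate "stories" scored against sensed threats. A loose Python sketch, with every structure invented for illustration:

```python
def threat_level(story, threats):
    """Score a candidate course of action ('story') against sensed threats."""
    return sum(1 for t in threats if t in story.get("exposes", set()))

def choose_story(current, alternatives, threats, tolerance=0):
    """Continue the current story if it is safe enough; otherwise search
    the alternatives for the least-threatening replacement."""
    if threat_level(current, threats) <= tolerance:
        return current
    return min(alternatives, key=lambda s: threat_level(s, threats))

current = {"name": "cross road", "exposes": {"traffic"}}
alternatives = [
    {"name": "wait at curb", "exposes": set()},
    {"name": "run across",  "exposes": {"traffic", "falling"}},
]
print(choose_story(current, alternatives, {"traffic"})["name"])  # -> wait at curb
```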
 
I'm mostly bored by the "consciousness", "awareness", and "inner mirror" discussions. If you have some hard science give me the links.


Abstract

''This introductory chapter attempts to clarify the philosophical, empirical, and theoretical bases on which a cognitive neuroscience approach to consciousness can be founded. We isolate three major empirical observations that any theory of consciousness should incorporate, namely (1) a considerable amount of processing is possible without consciousness, (2) attention is a prerequisite of consciousness, and (3) consciousness is required for some specific cognitive tasks, including those that require durable information maintenance, novel combinations of operations, or the spontaneous generation of intentional behavior. We then propose a theoretical framework that synthesizes those facts: the hypothesis of a global neuronal workspace. This framework postulates that, at any given time, many modular cerebral networks are active in parallel and process information in an unconscious manner. An information becomes conscious, however, if the neural population that represents it is mobilized by top-down attentional amplification into a brain-scale state of coherent activity that involves many neurons distributed throughout the brain. The long-distance connectivity of these ‘workspace neurons’ can, when they are active for a minimal duration, make the information available to a variety of processes including perceptual categorization, long-term memorization, evaluation, and intentional action. We postulate that this global availability of information through the workspace is what we subjectively experience as a conscious state. A complete theory of consciousness should explain why some cognitive and cerebral representations can be permanently or temporarily inaccessible to consciousness, what is the range of possible conscious contents, how they map onto specific cerebral circuits, and whether a generic neuronal mechanism underlies all of them. We confront the workspace model with those issues and identify novel experimental predictions. Neurophysiological, anatomical, and brain-imaging data strongly argue for a major role of prefrontal cortex, anterior cingulate, and the areas that connect to them, in creating the postulated brain-scale workspace.''
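As a cartoon of that workspace hypothesis (emphatically a sketch of the control flow, not the authors' model), with all module names and scores invented:

```python
# Global-workspace cartoon: modules process in parallel and unconsciously;
# top-down attention amplifies one representation, which is then broadcast
# to every module ("conscious access").

modules = {
    "vision":  lambda stimulus: ("edge detected", 0.4),
    "hearing": lambda stimulus: ("own name heard", 0.9),
    "touch":   lambda stimulus: ("nothing new", 0.1),
}

def global_workspace_step(stimulus, attention_bias):
    # 1. Parallel, unconscious processing in modular networks.
    results = {name: fn(stimulus) for name, fn in modules.items()}
    # 2. Top-down attentional amplification selects one representation.
    winner = max(results, key=lambda n: results[n][1] * attention_bias.get(n, 1.0))
    # 3. The winning content is made globally available: the "conscious" state.
    return winner, results[winner][0]

print(global_workspace_step("cocktail party", {"hearing": 1.5}))
# -> ('hearing', 'own name heard')
```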



Conscious experience:
''Somewhere between 300 to 400 milliseconds after a sensory input - looking at a shelf full of salad dressing bottles in our example - you become consciously aware of what your brain has been working on for almost half a second. This conscious thought may come to you as a solution ("there's the one I want"), as an obstacle to achieving your goal ("where's the Newman's Own Caesar?"), as a choice that requires further deliberation ("do I go with the tasty hi-cal dressing or the lo-cal substitute?"), or as any of a million other thoughts ("I really need to pick up some shampoo").

What is important is that, unbeknownst to you, your brain has already narrowed down 11 million bits of informational input to 40 bits of consciousness in your working memory, and you are on your way to making a decision and a purchase. Activity now shifts to your frontal cortex, where conscious thought, deliberation, planning, and "that voice in your head" all take place.

The indisputable fact is this: even our most "rational" conscious processes are significantly influenced by forces we do not consciously perceive. To understand why we do what we do as decision-makers, consumers, even as citizens, we need to understand how our brains are processing stimuli below the threshold of consciousness.''
 