
Consciousness -- What is it? How does it work?

NobleSavage

Consciousness -- What is it? How does it work?

I never thought that this was something special. We humans tend to think of it as something magical. The way I see it: take real-time input, add memory, and some kind of feedback loop. Some would argue that there is plant consciousness; I'd say there is insect consciousness, moving up the line to fish consciousness, to doggie consciousness, etc.

Right now most computer input is human input. Start giving them sensors. This seems to be coming with the "internet of things". Allow them to make decisions based on that input, buffered by a memory feedback loop. From there it's just more complexity and algorithms.
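
Something like this toy loop (Python; read_sensor and every number here are made-up placeholders, not any real API) is all I'm picturing:

Code:
# Hypothetical sketch of "real-time input + memory + feedback loop".
import random
from collections import deque

def read_sensor():
    return random.random()   # placeholder for a camera, microphone, IoT device...

memory = deque(maxlen=100)   # bounded memory of recent inputs
last_action = 0.0            # previous output, fed back into the next decision

for _ in range(10):
    reading = read_sensor()
    memory.append(reading)
    baseline = sum(memory) / len(memory)   # what memory says is "normal"
    # The decision blends the fresh input, the remembered baseline,
    # and the previous action -- the feedback loop.
    last_action = 0.5 * (reading - baseline) + 0.5 * last_action
    print(f"input={reading:.2f} baseline={baseline:.2f} action={last_action:.2f}")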

Split from this thread by request: http://talkfreethought.org/showthread.php?3199-Cool-things-science-hasn-t-figured-out-yet
 
Consciousness -- What is it? How does it work?

I never thought that this was something special. We humans tend to think of it as something magical. The way I see it: take real-time input, add memory, and some kind of feedback loop. Some would argue that there is plant consciousness; I'd say there is insect consciousness, moving up the line to fish consciousness, to doggie consciousness, etc.

Right now most computer input is human input. Start giving them sensors. This seems to be coming with the "internet of things". Allow them to make decisions based on that input, buffered by a memory feedback loop. From there it's just more complexity and algorithms.

Really? But the "inside view"? What about that?
 
You can build machines with sensors, feedback loops and memory; machines that mimic the actions of various life forms, but are these machines actually conscious?
I realize there may be a spectrum of consciousness, but I think most people would be dubious about an ant or turnip being conscious in the way we usually conceive it, and didn't Descartes famously consider even dogs and cats mindless automatons?

When I think of consciousness I'm thinking sentience -- self-awareness, intentionality, anticipation of futurity, &c.
 
I never thought that this was something special. We humans tend to think of it as something magical. The way I see it: take real-time input, add memory, and some kind of feedback loop. Some would argue that there is plant consciousness; I'd say there is insect consciousness, moving up the line to fish consciousness, to doggie consciousness, etc.

Right now most computer input is human input. Start giving them sensors. This seems to be coming with the "internet of things". Allow them to make decisions based on that input, buffered by a memory feedback loop. From there it's just more complexity and algorithms.

Really? But the "inside view"? What about that?

Just a subroutine.

- - - Updated - - -

You can build machines with sensors, feedback loops and memory; machines that mimic the actions of various life forms, but are these machines actually conscious?
I realize there may be a spectrum of consciousness, but I think most people would be dubious about an ant or turnip being conscious in the way we usually conceive it, and didn't Descartes famously consider even dogs and cats mindless automatons?

When I think of consciousness I'm thinking sentience -- self-awareness, intentionality, anticipation of futurity, &c.

What's so special about that?
 
You can build machines with sensors, feedback loops and memory; machines that mimic the actions of various life forms, but are these machines actually conscious?
I realize there may be a spectrum of consciousness, but I think most people would be dubious about an ant or turnip being conscious in the way we usually conceive it, and didn't Descartes famously consider even dogs and cats mindless automatons?

When I think of consciousness I'm thinking sentience -- self-awareness, intentionality, anticipation of futurity, &c.

If you have enough memory to remember past events, enough sensors and self-awareness to be conscious of the concept of a desired state, and enough processing power to plan/want to direct the world towards your goals, I think we have to call that rudimentary consciousness.

I suspect it is entirely chemical cause and effect in plants and probably ants, but cats and dogs are as capable of affecting their simple world as I am mine. If you only allow the sort of planning and anticipation that lets people be 20 moves ahead in chess, then most humans don't qualify (and some machines do, for that scenario).
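
Something like this toy planner (Python; the one-number "world", the goal, and the action names are all invented for illustration) is what I mean by wanting and planning towards a desired state:

Code:
# Toy "desired state + planning" agent: it looks a few moves ahead and
# picks whichever action moves its (one-number) world closest to the goal.
GOAL = 10
ACTIONS = {"increase": +1, "decrease": -1, "wait": 0}

def plan(state, goal, depth=3):
    """Return (first_action, remaining_distance) for the best short plan."""
    if depth == 0 or state == goal:
        return None, abs(goal - state)
    best_action, best_cost = None, float("inf")
    for name, delta in ACTIONS.items():
        _, cost = plan(state + delta, goal, depth - 1)
        if cost < best_cost:
            best_action, best_cost = name, cost
    return best_action, best_cost

state = 4
while state != GOAL:
    action, _ = plan(state, GOAL)
    state += ACTIONS[action]
    print(f"chose {action!r}, state is now {state}")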
 
You can build machines with sensors, feedback loops and memory; machines that mimic the actions of various life forms, but are these machines actually conscious?
I realize there may be a spectrum of consciousness, but I think most people would be dubious about an ant or turnip being conscious in the way we usually conceive it, and didn't Descartes famously consider even dogs and cats mindless automatons?

When I think of consciousness I'm thinking sentience -- self-awareness, intentionality, anticipation of futurity, &c.

What's so special about that?
It's the sine qua non of sentience, of personhood.
 
You can build machines with sensors, feedback loops and memory; machines that mimic the actions of various life forms, but are these machines actually conscious?
I realize there may be a spectrum of consciousness, but I think most people would be dubious about an ant or turnip being conscious in the way we usually conceive it, and didn't Descartes famously consider even dogs and cats mindless automatons?

When I think of consciousness I'm thinking sentience -- self-awareness, intentionality, anticipation of futurity, &c.

I consider your comment the most insightful on this thread, seyorni.
As I read it I realized that a machine that mimics the actions of various life forms would have to recognize a threat to its continued existence--its survival--and would have to be able to escape or remove that threat.
Then I asked, "Can any human being write the instructions that would enable the machine to do both of those?"
My answer is, "I very much doubt that any human could write those instructions."
 
You can build machines with sensors, feedback loops and memory; machines that mimic the actions of various life forms, but are these machines actually conscious?
I realize there may be a spectrum of consciousness, but I think most people would be dubious about an ant or turnip being conscious in the way we usually conceive it, and didn't Descartes famously consider even dogs and cats mindless automatons?

When I think of consciousness I'm thinking sentience -- self-awareness, intentionality, anticipation of futurity, &c.

I consider your comment the most insightful on this thread, seyorni.
As I read it I realized that a machine that mimics the actions of various life forms would have to recognize a threat to its continued existence--its survival--and would have to be able to escape or remove that threat.
Then I asked, "Can any human being write the instructions that would enable the machine to do both of those?"
My answer is, "I very much doubt that any human could write those instructions."

Sure they could; set a variable called 'adrenalin' that is incremented by certain types of sensory input (proximity to a deadfall and the height of the drop; stuff that looks similar to a database of snake- or spider-shaped objects, or that moves in the way those objects are recorded as moving; people who are looking at you but are not in your database of 'friends'; etc.) and that declines slowly in the absence of any such inputs. Then have a short list of responses, e.g. 'attack', 'run away', etc., with some simple criteria to select one of them rapidly based on some easily determined characteristics. So:

Code:
IF adrenalin > 95% THEN
    IF stimulus = mobile AND stimulus < mySize / 10 THEN
        CALL subHitWithShoe(Target=stimulus)
    ELSE
        CALL subRunLikeFuck(Direction=Away)
    END IF
END IF

There is a little more to how most humans recognise and escape from threats, but not a lot. You can even establish costly additional sensor scans or power up weapon and defensive systems incrementally as the value of 'adrenalin' increases, to preserve resources while maintaining an appropriate level of safety.

I threw that together in a few seconds; given a day and an expense account for lunch, I reckon you could get any competent programmer (that's me out) to put together a bit of software that could get a life-form-mimicking robot to do at least as well as the average human or other animal at self-preservation. Real animals rely on external information input (dad says don't go near the edge in case you fall), built-in default information (Arrgh! A spider! Kill it!), and learned behaviour (I saw one of those yesterday and it inflicted damage on me. Run away!!).

There is no reason why a machine couldn't be programmed to do all of those things.
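
For instance, the 'adrenalin' bookkeeping could look something like this sketch (Python; the threat names, boost numbers, and thresholds are all invented for illustration):

Code:
# Known threats raise 'adrenalin', it declines slowly in calm moments,
# and responses escalate with the level to conserve resources.
INNATE_THREATS = {"spider": 40, "snake": 70, "cliff_edge": 25}  # in-built defaults
learned_threats = {}                                            # from experience
adrenalin = 0.0

def observe(stimulus):
    """Decay adrenalin a little each tick, then boost it for known threats."""
    global adrenalin
    adrenalin = max(0.0, adrenalin - 5.0)
    boost = INNATE_THREATS.get(stimulus, 0) + learned_threats.get(stimulus, 0)
    adrenalin = min(100.0, adrenalin + boost)

def learn_from_damage(stimulus):
    """'It inflicted damage on me yesterday' -- strengthen a learned threat."""
    learned_threats[stimulus] = learned_threats.get(stimulus, 0) + 30

def respond():
    """Escalate incrementally as the level rises."""
    if adrenalin > 95:
        return "attack or run like fuck"
    if adrenalin > 60:
        return "power up defences, full sensor scan"
    if adrenalin > 30:
        return "extra sensor scan"
    return "carry on"

learn_from_damage("dog")
for s in ["grass", "spider", "dog", "snake"]:
    observe(s)
    print(f"saw {s!r}: adrenalin={adrenalin:.0f} -> {respond()}")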
 
I consider your comment the most insightful on this thread, seyorni.
As I read it I realized that a machine that mimics the actions of various life forms would have to recognize a threat to its continued existence--its survival--and would have to be able to escape or remove that threat.
Then I asked, "Can any human being write the instructions that would enable the machine to do both of those?"
My answer is, "I very much doubt that any human could write those instructions."

Sure they could; set a variable called 'adrenalin' that is incremented by certain types of sensory input (proximity to a deadfall and the height of the drop; stuff that looks similar to a database of snake- or spider-shaped objects, or that moves in the way those objects are recorded as moving; people who are looking at you but are not in your database of 'friends'; etc.) and that declines slowly in the absence of any such inputs. Then have a short list of responses, e.g. 'attack', 'run away', etc., with some simple criteria to select one of them rapidly based on some easily determined characteristics. So:

Code:
IF adrenalin > 95% THEN
    IF stimulus = mobile AND stimulus < mySize / 10 THEN
        CALL subHitWithShoe(Target=stimulus)
    ELSE
        CALL subRunLikeFuck(Direction=Away)
    END IF
END IF

There is a little more to how most humans recognise and escape from threats, but not a lot. You can even establish costly additional sensor scans or power up weapon and defensive systems incrementally as the value of 'adrenalin' increases, to preserve resources while maintaining an appropriate level of safety.

I threw that together in a few seconds; given a day and an expense account for lunch, I reckon you could get any competent programmer (that's me out) to put together a bit of software that could get a life-form-mimicking robot to do at least as well as the average human or other animal at self-preservation. Real animals rely on external information input (dad says don't go near the edge in case you fall), built-in default information (Arrgh! A spider! Kill it!), and learned behaviour (I saw one of those yesterday and it inflicted damage on me. Run away!!).

There is no reason why a machine couldn't be programmed to do all of those things.

Yes. But that doesn't answer the objection: how does it create the inner experience of it? We are not only action; we also have this inner experience.
 
Sure they could; set a variable called 'adrenalin' that is incremented by certain types of sensory input (proximity to a deadfall and the height of the drop; stuff that looks similar to a database of snake- or spider-shaped objects, or that moves in the way those objects are recorded as moving; people who are looking at you but are not in your database of 'friends'; etc.) and that declines slowly in the absence of any such inputs. Then have a short list of responses, e.g. 'attack', 'run away', etc., with some simple criteria to select one of them rapidly based on some easily determined characteristics. So:

Code:
IF adrenalin > 95% THEN
    IF stimulus = mobile AND stimulus < mySize / 10 THEN
        CALL subHitWithShoe(Target=stimulus)
    ELSE
        CALL subRunLikeFuck(Direction=Away)
    END IF
END IF

There is a little more to how most humans recognise and escape from threats, but not a lot. You can even establish costly additional sensor scans or power up weapon and defensive systems incrementally as the value of 'adrenalin' increases, to preserve resources while maintaining an appropriate level of safety.

I threw that together in a few seconds; given a day and an expense account for lunch, I reckon you could get any competent programmer (that's me out) to put together a bit of software that could get a life-form-mimicking robot to do at least as well as the average human or other animal at self-preservation. Real animals rely on external information input (dad says don't go near the edge in case you fall), built-in default information (Arrgh! A spider! Kill it!), and learned behaviour (I saw one of those yesterday and it inflicted damage on me. Run away!!).

There is no reason why a machine couldn't be programmed to do all of those things.

Yes. But that doesn't answer the objection: how does it create the inner experience of it? We are not only action; we also have this inner experience.

I have an inner experience. I assume that you do too, but that is just an assumption. If a machine were to behave in a similarly complex way to a human - perhaps including telling me that it has inner experience - then why should I not make the same assumption for the machine?

There is no way to verify 'inner experience' except for oneself; asking that we do so for machines that appear to be self-aware before we can consider them 'genuinely' self-aware is unjustified.
 
seyorni said:
You can build machines with sensors, feedback loops and memory; machines that mimic the actions of various life forms, but are these machines actually conscious?
I realize there may be a spectrum of consciousness, but I think most people would be dubious about an ant or turnip being conscious in the way we usually conceive it, and didn't Descartes famously consider even dogs and cats mindless automatons?

When I think of consciousness I'm thinking sentience -- self-awareness, intentionality, anticipation of futurity, &c.
What's so special about that?
It's the sine qua non of sentience, of personhood.
A machine would have machinehood.

Machines might one day become more sentient than us, even if they're still a long way from that: more memory storage, faster thinking, more intelligence, more efficient perception organs, etc. They could also learn how to become more acceptable to us in every which way necessary.

I doubt that a machine will ever be accepted by all human beings as one of us but that's not much of an argument since we are already pretty good at excluding each other from humanhood.

The Turing test is also nonsense. I think the only serious criterion is whether a human society can get to trust sentient machines on a routine basis to perform crucial tasks, such as caring for medical patients, old folks and children, advising on policies at all levels, acting as counsel on law, love and lifestyle, that sort of thing. If at all possible, that would be a very long way off, but not inconceivable.

Something else would be subjective experience. I don't see any good reason to believe a machine could have the kind of subjective experience I have, but I also don't see any good reason to reject the idea that it might have its own brand of it, one that I can't even conceive of since I'm not a machine. But we might one day be able to build a human being from basic materials (not necessarily clay, though) like carbon, oxygen, hydrogen, nitrogen, etc. Such a thing could be regarded as both a machine and a human being, and there would be no reason to reject the idea of it having subjective experience. It might still not be accepted as one of us, but that says more about us than anything about sentient machines.

We may not ever know for sure, but since this thread is about the science of it, I think that society will broadly accept that if it walks and quacks like a duck, then it's a duck.

Maybe one item will be difficult for machines. All human beings are flawed. Could machines emulate that?
EB
 
seyorni said:
You can build machines with sensors, feedback loops and memory; machines that mimic the actions of various life forms, but are these machines actually conscious?
I realize there may be a spectrum of consciousness, but I think most people would be dubious about an ant or turnip being conscious in the way we usually conceive it, and didn't Descartes famously consider even dogs and cats mindless automatons?

When I think of consciousness I'm thinking sentience -- self-awareness, intentionality, anticipation of futurity, &c.
What's so special about that?
It's the sine qua non of sentience, of personhood.
A machine would have machinehood.

Machines might one day become more sentient than us, even if they're still a long way from that: more memory storage, faster thinking, more intelligence, more efficient perception organs, etc. They could also learn how to become more acceptable to us in every which way necessary.

I doubt that a machine will ever be accepted by all human beings as one of us but that's not much of an argument since we are already pretty good at excluding each other from humanhood.

The Turing test is also nonsense. I think the only serious criterion is whether a human society can get to trust sentient machines on a routine basis to perform crucial tasks, such as caring for medical patients, old folks and children, advising on policies at all levels, acting as counsel on law, love and lifestyle, that sort of thing. If at all possible, that would be a very long way off, but not inconceivable.

Something else would be subjective experience. I don't see any good reason to believe a machine could have the kind of subjective experience I have, but I also don't see any good reason to reject the idea that it might have its own brand of it, one that I can't even conceive of since I'm not a machine. But we might one day be able to build a human being from basic materials (not necessarily clay, though) like carbon, oxygen, hydrogen, nitrogen, etc. Such a thing could be regarded as both a machine and a human being, and there would be no reason to reject the idea of it having subjective experience. It might still not be accepted as one of us, but that says more about us than anything about sentient machines.

We may not ever know for sure, but since this thread is about the science of it, I think that society will broadly accept that if it walks and quacks like a duck, then it's a duck.

Maybe one item will be difficult for machines. All human beings are flawed. Could machines emulate that?
EB

Only the very simplest and most banal of machines have ever been flawless. If computer software could be made flawless I wouldn't have a job. I'm not concerned about losing my job any time soon due to the flawlessness of machines.

The idea that machines don't have failings and weaknesses is just another human conceit - like the claim that only humans can be sentient.
 
I impute sentience to others, and have been doing so so regularly that it's become pretty much a default assumption, but that's just a convenience. It doesn't really address Descartes' epistemological conundrum.

Perhaps in the future, when even toasters routinely pass Turing tests, people will interact with machines as just another category of sentient things. An assumption of sentience would naturally follow.
If it looks, walks and quacks like a duck, treating it as such would be the most convenient approach, it seems to me.

Tom in Napa said:
I consider your comment the most insightful on this thread, seyorni.
As I read it I realized that a machine that mimics the actions of various life forms would have to recognize a threat to its continued existence--its survival--and would have to be able to escape or remove that threat.
Then I asked, "Can any human being write the instructions that would enable the machine to do both of those?"
My answer is, "I very much doubt that any human could write those instructions."
This reminds me of Asimov's Third Law of Robotics -- conceived of in 1942. ;)

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
 
Only the very simplest and most banal of machines have ever been flawless. If computer software could be made flawless I wouldn't have a job. I'm not concerned about losing my job any time soon due to the flawlessness of machines.
The idea that machines don't have failings and weaknesses is just another human conceit - like the claim that only humans can be sentient.
I take your point, but flaws in software mostly come from economic choices. Put safety criteria on them and people will find solutions that prevent most flaws. So you could say that most flaws in machines come from the flaws in human beings...

I guess that what I was trying to suggest was that our flaws play a major role in the way society is organised, in our conception of life, and in how we relate to each other. For the moment, the kinds of flaws machines have identify them as machines, not as humans.
EB
 
I consider your comment the most insightful on this thread, seyorni.
As I read it I realized that a machine that mimics the actions of various life forms would have to recognize a threat to its continued existence--its survival--and would have to be able to escape or remove that threat.
Then I asked, "Can any human being write the instructions that would enable the machine to do both of those?"
My answer is, "I very much doubt that any human could write those instructions."

Sure they could; set a variable called 'adrenalin' that is incremented by certain types of sensory input (proximity to a deadfall and the height of the drop; stuff that looks similar to a database of snake- or spider-shaped objects, or that moves in the way those objects are recorded as moving; people who are looking at you but are not in your database of 'friends'; etc.) and that declines slowly in the absence of any such inputs. Then have a short list of responses, e.g. 'attack', 'run away', etc., with some simple criteria to select one of them rapidly based on some easily determined characteristics. So:

Code:
IF adrenalin > 95% THEN
    IF stimulus = mobile AND stimulus < mySize / 10 THEN
        CALL subHitWithShoe(Target=stimulus)
    ELSE
        CALL subRunLikeFuck(Direction=Away)
    END IF
END IF

There is a little more to how most humans recognise and escape from threats, but not a lot. You can even establish costly additional sensor scans or power up weapon and defensive systems incrementally as the value of 'adrenalin' increases, to preserve resources while maintaining an appropriate level of safety.

I threw that together in a few seconds; given a day and an expense account for lunch, I reckon you could get any competent programmer (that's me out) to put together a bit of software that could get a life-form-mimicking robot to do at least as well as the average human or other animal at self-preservation. Real animals rely on external information input (dad says don't go near the edge in case you fall), built-in default information (Arrgh! A spider! Kill it!), and learned behaviour (I saw one of those yesterday and it inflicted damage on me. Run away!!).

There is no reason why a machine couldn't be programmed to do all of those things.

I used to write code, so I ask "Does anyone want to bet?"

The first IF in the above code requires a signal from one of a series of subroutines attached to the visual sensors, each of which recognizes a specific danger and, with few false positives and NO false negatives, generates that signal for a subroutine that increases the adrenaline level.

Then, if the adrenaline level does exceed 95%, the above code requires another series of subroutines that will generate signals telling the robot what to do, how to do it, when to do it, and, importantly, when to stop doing it, with few false positives and no false negatives. Good luck.
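
To see why I say good luck: with noisy detector scores this is the standard threshold tradeoff, and you cannot push false negatives to zero without buying false alarms. A toy example (Python; the scores are made up):

Code:
# Scores a hypothetical danger detector assigns to stimuli (made-up data).
threats     = [0.9, 0.8, 0.7, 0.55]   # real dangers
non_threats = [0.6, 0.4, 0.3, 0.1]    # harmless stimuli

for threshold in (0.75, 0.5, 0.05):
    false_negatives = sum(s < threshold for s in threats)      # missed dangers
    false_positives = sum(s >= threshold for s in non_threats) # false alarms
    print(f"threshold={threshold}: {false_negatives} missed threats, "
          f"{false_positives} false alarms")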
 
Only the very simplest and most banal of machines have ever been flawless. If computer software could be made flawless I wouldn't have a job. I'm not concerned about losing my job any time soon due to the flawlessness of machines.
The idea that machines don't have failings and weaknesses is just another human conceit - like the claim that only humans can be sentient.
I take your point, but flaws in software mostly come from economic choices. Put safety criteria on them and people will find solutions that prevent most flaws. So you could say that most flaws in machines come from the flaws in human beings...

I guess that what I was trying to suggest was that our flaws play a major role in the way society is organised, in our conception of life, and in how we relate to each other. For the moment, the kinds of flaws machines have identify them as machines, not as humans.
EB
The flaws in humans come from the humans that made them too.

There is nothing special about humans; there is nothing special about human flaws.

And no, safety-critical software is not flawless.

It is generally better than the cheaper stuff. But no matter how much money is thrown at it, it is never flawless.

The kinds of flaws humans have are the result of the development path that leads to humans - the absence of a purposeful designer. What you are identifying is not a difference between humans and machines per se, but a difference between evolved systems and designed ones.

Given the sheer complexity of designing truly flexible software, many researchers are using evolutionary algorithms these days, with great success. The kinds of flaws you see as definitive in separating humans from machines are now being introduced into software.
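
A minimal mutate-and-select loop (Python; the target string and the parameters are arbitrary illustrations, not any particular research system) shows the flavour of the approach:

Code:
# Evolve a string toward a target by random mutation plus selection.
import random

TARGET = "machine"
LETTERS = "abcdefghijklmnopqrstuvwxyz"

def fitness(candidate):
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate):
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(LETTERS) + candidate[i + 1:]

parent = "".join(random.choice(LETTERS) for _ in TARGET)
generation = 0
while fitness(parent) < len(TARGET):
    children = [mutate(parent) for _ in range(50)]
    parent = max(children + [parent], key=fitness)  # keep the fittest
    generation += 1
print(f"reached {parent!r} after {generation} generations")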

A human is a machine, for any useful definition of the two words. There is no supernatural component; any aspect of a human could be emulated by a similarly complex machine.
 
Sure they could; set a variable called 'adrenalin' that is incremented by certain types of sensory input (proximity to a deadfall and the height of the drop; stuff that looks similar to a database of snake- or spider-shaped objects, or that moves in the way those objects are recorded as moving; people who are looking at you but are not in your database of 'friends'; etc.) and that declines slowly in the absence of any such inputs. Then have a short list of responses, e.g. 'attack', 'run away', etc., with some simple criteria to select one of them rapidly based on some easily determined characteristics. So:

Code:
IF adrenalin > 95% THEN
    IF stimulus = mobile AND stimulus < mySize / 10 THEN
        CALL subHitWithShoe(Target=stimulus)
    ELSE
        CALL subRunLikeFuck(Direction=Away)
    END IF
END IF

There is a little more to how most humans recognise and escape from threats, but not a lot. You can even establish costly additional sensor scans or power up weapon and defensive systems incrementally as the value of 'adrenalin' increases, to preserve resources while maintaining an appropriate level of safety.

I threw that together in a few seconds; given a day and an expense account for lunch, I reckon you could get any competent programmer (that's me out) to put together a bit of software that could get a life-form-mimicking robot to do at least as well as the average human or other animal at self-preservation. Real animals rely on external information input (dad says don't go near the edge in case you fall), built-in default information (Arrgh! A spider! Kill it!), and learned behaviour (I saw one of those yesterday and it inflicted damage on me. Run away!!).

There is no reason why a machine couldn't be programmed to do all of those things.

I used to write code, so I ask "Does anyone want to bet?"

The first IF in the above code requires a signal from one of a series of subroutines attached to the visual sensors, each of which recognizes a specific danger and, with few false positives and NO false negatives, generates that signal for a subroutine that increases the adrenaline level.

Then, if the adrenaline level does exceed 95%, the above code requires another series of subroutines that will generate signals telling the robot what to do, how to do it, when to do it, and, importantly, when to stop doing it, with few false positives and no false negatives. Good luck.
I tell you what: you find me a human that can meet your standard, and then we can start discussing whether machines that don't meet the standard are falling short of true sentience.

If you think humans are perfect at identifying mortal dangers with no false negatives, and at taking the necessary action to escape, without ever making their situation worse, then you clearly have never read about the Darwin Awards.

I reckon software can be made as good as, perhaps better than, a human at this task; I am certain that neither can ever be perfect at it.

Your argument from ignorance is not improved by setting a double standard for success.

Humans are not special. There is no magic, immaterial essence in the brain that we can call 'the mind'. If it exists, we can emulate it. If it is emulated with a sufficient degree of fidelity, it will be indistinguishable from the sentient original.

There is no class of problem that can, as a matter of physical law, only be solved by a human brain. The human brain is made only from a handful of well understood elements. The arrangement and interactions of those elements are hugely complex, but contain no element or configuration that is in principle impossible to reproduce artificially.
 
The human brain is made only from a handful of well understood elements.

By "elements", do you mean carbon, hydrogen, oxygen, calcium, etc?

Given that understanding of the elements, who has claimed to understand how memory functions?

Or how, in the manner of thesis, antithesis, synthesis, the brain melds seemingly unrelated memories and forms what has not previously existed?

Again, good luck.
 