
Subjective experience v. Self-awareness

Conscious States: Where Are They In The Brain And What Are Their Necessary Ingredients?
(2013: William Hirstein, PhD, Department of Philosophy, Elmhurst College, Elmhurst, Illinois, USA)

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3653223/

Author makes the distinction between 'bare consciousness' and a more sophisticated 'self consciousness' and suggests that candidates for the former might include, "states of coma, states in which subjects are absorbed in a perceptual task, states in brains with damaged prefrontal lobes, states of meditation and conscious states of some infants and animals".
 
Putting objective and subjective in proper perspective: what a meter reads of the light entering its receptacle is an objective bit of information. It matters not that the human who created the meter used information gained from human experience to do so. The fact is that experience is materially related to light parameters, and those parameters are the only relevant data. So when I say that humans' subjective experience of light is in line with the attributes of light sources and the reflective attributes of objects, I'm making an objective statement about the relation of light to human color qualia.

Forget the 'subjective is the basis for objective' straw man dragged in to confuse things. Even granting that subjective experience provides the basis for objective experience, the objective observation that light perception is related to the proportional reception of light by red-, green-, and blue-sensitive cones stands on its own merits as a material fact. It is true that these cone receptors respond to specific bands of light frequencies.
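A rough sketch of that proportional-reception point in code, with the caveat that the Gaussian sensitivity curves and the shared bandwidth below are illustrative assumptions of mine, not measured cone fundamentals:

import numpy as np

# Illustrative sketch: approximate relative L/M/S ("red/green/blue")
# cone responses to a single wavelength with Gaussian curves. The peak
# wavelengths (~564, ~534, ~420 nm) are commonly cited values; the
# shared bandwidth is assumed purely for illustration.
CONE_PEAKS_NM = {"L": 564.0, "M": 534.0, "S": 420.0}
CONE_WIDTH_NM = 60.0  # assumed, illustrative only

def cone_responses(wavelength_nm):
    """Return the normalized relative responses of the three cone types."""
    raw = {
        cone: np.exp(-((wavelength_nm - peak) / CONE_WIDTH_NM) ** 2)
        for cone, peak in CONE_PEAKS_NM.items()
    }
    total = sum(raw.values())
    return {cone: r / total for cone, r in raw.items()}

# Any two observers running this on the same stimulus get the same
# proportions: the objective relation described above.
print(cone_responses(600.0))  # long-wavelength light: L dominates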

Individual variation from these general standards is minimal. So it is safe to say that my experience of red is essentially the same as your experience of red. I submit that I can, and do, know what your experience of red is.

If we presume that because the subjective was the source, the experience must remain merely subjective, we are forgetting that we communicate, cooperate, elaborate, and experiment. So for the modern educated person, the distance between communicated objective experience and subjective experience approaches zero.

That's one. Rational reasoning should lead us to conclude not only that subjective experience is pretty much objective, but that qualia are actually pretty much the same across all of us who see normally. Further, through education, the qualia of even the blind and the partially color-deficient come to correspond to those of the general population.
 
Conscious States: Where Are They In The Brain And What Are Their Necessary Ingredients?
(2013: William Hirstein, PhD, Department of Philosophy, Elmhurst College, Elmhurst, Illinois, USA)

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3653223/

Author makes the distinction between 'bare consciousness' and a more sophisticated 'self consciousness' and suggests that candidates for the former might include, "states of coma, states in which subjects are absorbed in a perceptual task, states in brains with damaged prefrontal lobes, states of meditation and conscious states of some infants and animals".

Why is the most recent Koch and Crick article referenced from 1990?

I just popped these up, from 2003:

Consciousness and Neuroscience https://authors.library.caltech.edu/40355/1/feature_article.pdf

and

A framework for consciousness http://www.wisebrain.org/media/Papers/AFrameworkforConsciousness.pdf
 
Conscious States: Where Are They In The Brain And What Are Their Necessary Ingredients?
(2013: William Hirstein, PhD, Department of Philosophy, Elmhurst College, Elmhurst, Illinois, USA)

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3653223/

Author makes the distinction between 'bare consciousness' and a more sophisticated 'self consciousness' and suggests that candidates for the former might include, "states of coma, states in which subjects are absorbed in a perceptual task, states in brains with damaged prefrontal lobes, states of meditation and conscious states of some infants and animals".

And I would add potentially both the short interval between fainting and becoming effectively unconscious and the short interval between waking up and becoming fully conscious.
EB
 
Putting objective and subjective in proper perspective: what a meter reads of the light entering its receptacle is an objective bit of information. It matters not that the human who created the meter used information gained from human experience to do so. The fact is that experience is materially related to light parameters, and those parameters are the only relevant data. So when I say that humans' subjective experience of light is in line with the attributes of light sources and the reflective attributes of objects, I'm making an objective statement about the relation of light to human color qualia.

Forget the 'subjective is the basis for objective' straw man dragged in to confuse things. Even granting that subjective experience provides the basis for objective experience, the objective observation that light perception is related to the proportional reception of light by red-, green-, and blue-sensitive cones stands on its own merits as a material fact. It is true that these cone receptors respond to specific bands of light frequencies.

Individual variation from these general standards is minimal. So it is safe to say that my experience of red is essentially the same as your experience of red. I submit that I can, and do, know what your experience of red is.

If we presume that because the subjective was the source, the experience must remain merely subjective, we are forgetting that we communicate, cooperate, elaborate, and experiment. So for the modern educated person, the distance between communicated objective experience and subjective experience approaches zero.

That's one. Rational reasoning should lead us to conclude not only that subjective experience is pretty much objective, but that qualia are actually pretty much the same across all of us who see normally. Further, through education, the qualia of even the blind and the partially color-deficient come to correspond to those of the general population.

I don't have any issue with that as far as practical human tasks are concerned. That's basically our default belief system you're describing here, and it seems to have worked out well enough for us so far. I myself trust my senses when I endeavour to go buy some food out there in the wild world, and I'm still alive, apparently.

For a more sophisticated but less lenient perspective, I've already explained all you need to know about my views and what I said seems enough of a reply.
EB
 
I subscribe to the notion that, in vertebrates, we have located and demonstrated multilevel systems underlying somnolence, wakefulness, arousal, attention, awareness, and self states, and that these states are elicited while almost never demonstrated to be emitted. I find it quaint that one would suppose one has some directive or assertive control over these conditions.

Rather, it is in the selection of that which is received for use and articulation that one finds some sliver of possibility for central motivation. Central control, if it were to exist, seems a poor choice for a behavioral model underlying any basis for fitness-based consequent actions, beyond the false-image portrayal I described above.
 
I just popped these up, from 2003:

Consciousness and Neuroscience https://authors.library.caltech.edu/40355/1/feature_article.pdf

and

A framework for consciousness http://www.wisebrain.org/media/Papers/AFrameworkforConsciousness.pdf


Thanks. I have only read the first of those (so far) and I admit my brain hurt about halfway through. But still great reading. Outside my area of expertise, obviously, but fascinating nonetheless. From my pov, interesting that they consider 'self consciousness' to be a sophisticated version or subset of consciousness and are seeking, in the first instance, to explore and understand 'bare' consciousness (for which they also think language is unnecessary). Which makes sense (to me) as an approach, in reductionist/scientific terms (and philosophically, imo). Start at the bottom, or the origins, and so on. Do the rest later, the more complicated and difficult stuff, after establishing a base.

One thing which drew my attention was their question: 'why are we conscious (at all, and not just philosophical zombies)?'

They appear to take it that parts of our brains, some sub-systems, are zombie-like, so sometimes we see stuff (they have chosen vision as a way of exploring the subject) and act on the information without ever experiencing it consciously, but that overall, a brain made up of such 'zombie subsystems' would, they think, be inefficient.

From the first of the two papers:

"We suggest that such an arrangement [a brain made up solely of zombie-like subsystems] is inefficient when very many such systems are required. Better to produce a single but complex representation and make it available for a sufficient time to the parts of the brain that make a choice among many different but possible plans for action. This, in our view, is what seeing is about. As pointed out to us by Ramachandran and Hirstein (1997), it is sensible to have a single conscious interpretation of the visual scene, in order to eliminate hesitation."

So I'm not really following the reasoning here. Why would even 'bare' consciousness (and I do not mean self-consciousness and nor do they) be required even for efficiency of that more complex function?

I do not expect there is a categorical, established or detailed answer, obviously.
 
I just popped these up, from 2003:

Consciousness and Neuroscience https://authors.library.caltech.edu/40355/1/feature_article.pdf

and

A framework for consciousness http://www.wisebrain.org/media/Papers/AFrameworkforConsciousness.pdf


Thanks. I have only read the first of those (so far) and I admit my brain hurt about halfway through. But still great reading. Outside my area of expertise, obviously, but fascinating nonetheless. From my pov, interesting that they consider 'self consciousness' to be a sophisticated version or subset of consciousness and are seeking, in the first instance, to explore and understand 'bare' consciousness (for which they also think language is unnecessary). Which makes sense (to me) as an approach, in reductionist/scientific terms (and philosophically, imo). Start at the bottom, or the origins, and so on. Do the rest later, the more complicated and difficult stuff, after establishing a base.

One thing which drew my attention was their question: 'why are we conscious (at all, and not just philosophical zombies)?'

They appear to take it that parts of our brains, some sub-systems, are zombie-like, so sometimes we see stuff (they have chosen vision as a way of exploring the subject) and act on the information without ever experiencing it consciously, but that overall, a brain made up of such 'zombie subsystems' would, they think, be inefficient.

From the first of the two papers:

"We suggest that such an arrangement [a brain made up solely of zombie-like subsystems] is inefficient when very many such systems are required. Better to produce a single but complex representation and make it available for a sufficient time to the parts of the brain that make a choice among many different but possible plans for action. This, in our view, is what seeing is about. As pointed out to us by Ramachandran and Hirstein (1997), it is sensible to have a single conscious interpretation of the visual scene, in order to eliminate hesitation."

So I'm not really following the reasoning here. Why would even 'bare' consciousness (and I do not mean self-consciousness and nor do they) be required even for efficiency of that more complex function?

I do not expect there is a categorical, established or detailed answer, obviously.

It's Ramachandran who's doing the heavy lifting here, and it's simply that that is what promulgating the information across the brain feels like. I think the state of the art has moved on substantially, and an error-correction version of that psychoactives paper you posted the other week is closer to the nearest thing we have to a consensus right now. We know that we end up with chords or even, for want of a better word, choirs of agonistic and antagonistic (in the chemical sense) activation.

The cutting edge is trying to work out how these bind. It's much like the old vision problem of needing to know rather a lot about the visual field (size and distance, shadow and boundary, for example) before it is processed, in order to be able to process it. Here, the problem is how the hell the brain knows what to unify when you have multimodal inputs coming in with different delays and processing burdens, yet with no obvious timestamps having been spotted yet. One tempting direction is firing rates, but it's all a bit up in the air at the moment. If we crack that and work out how to code it, I think we might be looking at a bit of a breakthrough in both psychology and AI.
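For what it's worth, a minimal sketch of one version of the firing-rate idea: if two modality traces share structure, cross-correlating them recovers the unknown relative delay without any explicit timestamp. The traces and the 30-step delay are invented for illustration:

import numpy as np

# Invented example: two modalities carry the same event with an unknown
# relative delay; cross-correlating their rate traces recovers the lag
# at which they should be bound together.
rng = np.random.default_rng(0)
event = np.exp(-((np.arange(200) - 100) / 10.0) ** 2)  # shared event

visual = event + 0.05 * rng.standard_normal(200)
audio = np.roll(event, 30) + 0.05 * rng.standard_normal(200)  # lags by 30

lags = np.arange(-60, 61)
corr = [np.dot(visual, np.roll(audio, -lag)) for lag in lags]
print(lags[int(np.argmax(corr))])  # ~30: the lag that best aligns them

None of which says the brain actually does it this way; it only shows the alignment problem is solvable in principle when the traces share structure.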
 
So I'm not really following the reasoning here. Why would even 'bare' consciousness (and I do not mean self-consciousness and nor do they) be required even for efficiency of that more complex function?

I find it pretty reasonable to presume the continued existence of 'other' awareness, given that the awareness/consciousness venture began back near the time manta rays appeared on the scene. I favor layered models, building on or adapting existing capabilities wherever possible, as an evolutionarily economical move. Why go back and rebuild the entire system every time there is an advance in capability? We still have general responses to stimuli using ancient pathways, similar to frogs' peripheral-vision attention-directing mechanisms, for instance. Check out your reaction to a bus pulling up on your right while you are at a red light.
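A toy sketch of the layering idea, with invented stimulus and action names; the only point is that the new capability stacks on top of the old pathway instead of replacing it:

# Invented example of layered control: an ancient reflex pathway is
# kept as-is and a newer attention layer is stacked on top of it.
def reflex_layer(stimulus):
    # old pathway: fast, automatic orienting to peripheral motion,
    # like the frog-style mechanism mentioned above
    if stimulus.get("peripheral_motion"):
        return "orient_toward_motion"
    return None

def attention_layer(stimulus):
    # newer capability layered on top: deliberate task focus
    if stimulus.get("task"):
        return "attend_to_" + stimulus["task"]
    return None

def act(stimulus):
    # the old layer still pre-empts the new one (the bus at the red light)
    return reflex_layer(stimulus) or attention_layer(stimulus) or "idle"

print(act({"task": "red_light", "peripheral_motion": True}))
# 'orient_toward_motion': the ancient pathway fires first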
 
It's Ramachandran who's doing the heavy lifting here, and it's simply that that is what promulgating the information across the brain feels like. I think the state of the art has moved on substantially, and an error-correction version of that psychoactives paper you posted the other week is closer to the nearest thing we have to a consensus right now. We know that we end up with chords or even, for want of a better word, choirs of agonistic and antagonistic (in the chemical sense) activation.

The cutting edge is trying to work out how these bind. It's much like the old vision problem of needing to know rather a lot about the visual field (size and distance, shadow and boundary, for example) before it is processed, in order to be able to process it. Here, the problem is how the hell the brain knows what to unify when you have multimodal inputs coming in with different delays and processing burdens, yet with no obvious timestamps having been spotted yet. One tempting direction is firing rates, but it's all a bit up in the air at the moment. If we crack that and work out how to code it, I think we might be looking at a bit of a breakthrough in both psychology and AI.

Ah. I wish I could remember where I posted that psychoactives paper. :(

Anyhows, I was particularly interested in the authors' respectful-ish sideswipe at philosophers in the above paper, an (unsurprising) emphasis by the authors on doing experiments and the reminder that 'an analogy is just an analogy' (in turn reminding me that I have Lakoff's 'Metaphors We Live By' waiting in my pile of unread books) and the general issue of the limitations of fallible introspection and other non-empirical approaches.

I also enjoyed his comparison (I think it was essentially an analogy but we'll let him off given that he's a philosopher, even if of the applied sort) between the 'hard problem' of how something at least seemingly non-physical can arise from the physical (brain) and the generally agreed to be less difficult issue of accepting that another apparent 'oddity' (life) can readily emerge from 'dead' molecules. His (optimistic) suggestion was that when we understand more (mainly via science, in his view) about consciousness, the way we do about life, the emergence 'problem' will not seem so odd.

From the paper:

"Some philosophers (Searle, 1984; Dennett, 1996) are rather fond of this analogy between ‘livingness’ and ‘consciousness’, and so are we; but, as Chalmers (1995) has emphasized, an analogy is only an analogy."

https://authors.library.caltech.edu/40355/1/feature_article.pdf

I suppose he might be being a tad harsh on Dennett, who after all, is very keen to naturalise 'oddities'.
 
So I'm not really following the reasoning here. Why would even 'bare' consciousness (and I do not mean self-consciousness and nor do they) be required even for efficiency of that more complex function?

I find it pretty reasonable to presume the continued existence of 'other' awareness, given that the awareness/consciousness venture began back near the time manta rays appeared on the scene. I favor layered models, building on or adapting existing capabilities wherever possible, as an evolutionarily economical move. Why go back and rebuild the entire system every time there is an advance in capability? We still have general responses to stimuli using ancient pathways, similar to frogs' peripheral-vision attention-directing mechanisms, for instance. Check out your reaction to a bus pulling up on your right while you are at a red light.

Sure.

So at the 'bottom' end (of a spectrum) you can have the biological equivalent of a motion-sensor light which detects movement and switches the light on. Flowers orienting themselves towards the sun for example. This would presumably all be automatic, with no consciousness as we conceive of it.

At the 'top' end you have (apparently) us, complete, at least when the system is running in its most sophisticated mode, with a robust sense of self.

Somewhere in between is, presumably, the threshold where 'it feels like something' to the entity, even if just a pain sensation, so the entity can scuttle or swim away, in a manner we might also think of as 'automatic'.

I'm still pondering why it would be more efficient for the entity/system to actually feel the pain in a conscious way, compared to just being like a mechanical machine that we might design today to scuttle away from, say, heat, but I guess there may well be a survival efficiency.

It's these nether regions of proto-consciousness that interest me most, for some reason. My guess, and it is just an uninformed guess, is that consciousness exists on a continuum which includes, at the 'lower' end, things that we would have trouble calling consciousness at all, they being so minuscule and faint. I have no idea which organism I should speculatively cite for this. But in a way that's beside the point. What is the extra efficiency in the organism feeling anything versus just acting/reacting without feeling anything?
 
It's Ramachandran who's doing the heavy lifting here, and it's simply that that is what promulgating the information across the brain feels like. I think the state of the art has moved on substantially, and an error-correction version of that psychoactives paper you posted the other week is closer to the nearest thing we have to a consensus right now. We know that we end up with chords or even, for want of a better word, choirs of agonistic and antagonistic (in the chemical sense) activation.

The cutting edge is trying to work out how these bind. It's much like the old vision problem of needing to know rather a lot about the visual field (size and distance, shadow and boundary, for example) before it is processed, in order to be able to process it. Here, the problem is how the hell the brain knows what to unify when you have multimodal inputs coming in with different delays and processing burdens, yet with no obvious timestamps having been spotted yet. One tempting direction is firing rates, but it's all a bit up in the air at the moment. If we crack that and work out how to code it, I think we might be looking at a bit of a breakthrough in both psychology and AI.

Ah. I wish I could remember where I posted that psychoactives paper. :(

Anyhows, I was particularly interested in the authors' respectful-ish sideswipe at philosophers in the above paper, an (unsurprising) emphasis by the authors on doing experiments and the reminder that 'an analogy is just an analogy' (in turn reminding me that I have Lakoff's 'Metaphors We Live By' waiting in my pile of unread books) and the general issue of the limitations of fallible introspection and other non-empirical approaches.

I also enjoyed his comparison (I think it was essentially an analogy but we'll let him off given that he's a philosopher, even if of the applied sort) between the 'hard problem' of how something at least seemingly non-physical can arise from the physical (brain) and the generally agreed to be less difficult issue of accepting that another apparent 'oddity' (life) can readily emerge from 'dead' molecules. His (optimistic) suggestion was that when we understand more (mainly via science, in his view) about consciousness, the way we do about life, the emergence 'problem' will not seem so odd.

From the paper:

"Some philosophers (Searle, 1984; Dennett, 1996) are rather fond of this analogy between ‘livingness’ and ‘consciousness’, and so are we; but, as Chalmers (1995) has emphasized, an analogy is only an analogy."

https://authors.library.caltech.edu/40355/1/feature_article.pdf

I suppose he might be being a tad harsh on Dennett, who after all, is very keen to naturalise 'oddities'.

I don't know; is it an analogy? I see it as more an example of something where there's some sort of, what shall we call it? odd emergence? constitution addition (in the Rudder Baker sense)? I don't know. However here's this thing 'life' that we don't feel is too weird (compared to consciousness) but is still in the same ball park.

As for Dennett, I completely agree, his program, since 'Real Patterns' and his slide away from instrumentalism at least, is always focussed on deflation and naturalisation.
 
I don't know; is it an analogy? I see it as more an example of something where there's some sort of, what shall we call it? odd emergence? constitution addition (in the Rudder Baker sense)? I don't know. However here's this thing 'life' that we don't feel is too weird (compared to consciousness) but is still in the same ball park.

As for Dennett, I completely agree, his program, since 'Real Patterns' and his slide away from instrumentalism at least, is always focussed on deflation and naturalisation.

We might even, might we (?) add 'gravity' to the list of 'odd things' that do not seem to exercise us so much as consciousness. We might even add lots of other things to that list.

This, if anything, points to the suggestion that we don't so much need a complete explanation (for anything perhaps) so much as one that satisfies (our ape curiosity), one where we feel we can be comfortable with the explanation, and just get on with using it. That the hard problem is only hard because it puzzles us unduly. :)

Hence I suspect that Dennett is essentially right to say that all that needs to be done is to naturalise/deflate. We are unlikely ever to get beyond 'useful explanations'.

This of course is the complete opposite of untermenche's suggestion (that we need to know before we can study or understand) and more like the way we ever get to understand almost anything we have ever tried to understand.
 
I don't know; is it an analogy? I see it as more an example of something where there's some sort of, what shall we call it? odd emergence? constitution addition (in the Rudder Baker sense)? I don't know. However here's this thing 'life' that we don't feel is too weird (compared to consciousness) but is still in the same ball park.

As for Dennett, I completely agree, his program, since 'Real Patterns' and his slide away from instrumentalism at least, is always focussed on deflation and naturalisation.

We might even, might we (?) add 'gravity' to the list of 'odd things' that do not seem to exercise us so much as consciousness.

This, if anything, points to the suggestion that we don't so much need a complete explanation (for anything perhaps) so much as one that satisfies, one where we feel we can be comfortable with the explanation, and just get on with using it. :)

Hence I suspect that Dennett is essentially right to say that all that needs to be done is to naturalise/deflate. We are unlikely ever to get beyond 'useful explanations'.

One can hope... My preferred one is the wetness of water. Beautifully odd and yet absolutely everyday. I used to like gravity more, but since the confirmation of the Higgs boson it's closer to magnetism for me.
 
I subscribe to the notion that, in vertebrates, we have located and demonstrated multilevel systems underlying somnolence, wakefulness, arousal, attention, awareness, and self states, and that these states are elicited while almost never demonstrated to be emitted. I find it quaint that one would suppose one has some directive or assertive control over these conditions.

Rather, it is in the selection of that which is received for use and articulation that one finds some sliver of possibility for central motivation. Central control, if it were to exist, seems a poor choice for a behavioral model underlying any basis for fitness-based consequent actions, beyond the false-image portrayal I described above.

Sorry, I'm lost. I don't see from where this "central control" could possibly come.

Tell me when you're awake.
EB
 
Conscious States: Where Are They In The Brain And What Are Their Necessary Ingredients?
(2013: William Hirstein, PhD, Department of Philosophy, Elmhurst College, Elmhurst, Illinois, USA)

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3653223/

Author makes the distinction between 'bare consciousness' and a more sophisticated 'self consciousness' and suggests that candidates for the former might include, "states of coma, states in which subjects are absorbed in a perceptual task, states in brains with damaged prefrontal lobes, states of meditation and conscious states of some infants and animals".

And I would add potentially both the short interval between fainting and becoming effectively unconscious and the short interval between waking up and becoming fully conscious.
EB

And perhaps dreaming, too, may be a case of bare consciousness. While dreaming, we're definitely not self-aware, we don't have any perceptions or sensations, and we don't seem to have access to our memory store.

And come to think of it, as with my experience of fainting, we can sometimes remember some of our dream once we've regained consciousness.
EB
 
And perhaps dreaming, too, may be a case of bare consciousness. While dreaming, we're definitely not self-aware, we don't have any perceptions or sensations, and we don't seem to have access to our memory store.

And come to think of it, as with my experience of fainting, we can sometimes remember some of our dream once we've regained consciousness.
EB

Dreams are interesting. From the aspect of the topic, most dreams are experienced from the 1st person perspective, visually, or at least this is how they are generally remembered afterwards (when 'self' is present and awake). It is as if, in the memory of the dream at least, there was still a viewer looking at the dream scenes from roughly behind or through a viewer's eyes. When we recall dreams, we tend to ascribe that viewer to our self, but it's not clear if that is actually the experience at the time of dreaming.

As to recollecting dreams, there are techniques one can practice so as to remember much more and one can end up remembering a lot of dream material after every sleep.
 
I don't know; is it an analogy? I see it as more an example of something where there's some sort of, what shall we call it? odd emergence? constitution addition (in the Rudder Baker sense)? I don't know. However here's this thing 'life' that we don't feel is too weird (compared to consciousness) but is still in the same ball park.

As for Dennett, I completely agree, his program, since 'Real Patterns' and his slide away from instrumentalism at least, is always focussed on deflation and naturalisation.

We might even, might we (?) add 'gravity' to the list of 'odd things' that do not seem to exercise us so much as consciousness. We might even add lots of other things to that list.

This, if anything, points to the suggestion that we don't so much need a complete explanation (for anything perhaps) so much as one that satisfies (our ape curiosity), one where we feel we can be comfortable with the explanation, and just get on with using it. That the hard problem is only hard because it puzzles us unduly. :)

Hence I suspect that Dennett is essentially right to say that all that needs to be done is to naturalise/deflate. We are unlikely ever to get beyond 'useful explanations'.

This of course is the complete opposite of untermenche's suggestion (that we need to know before we can study or understand) and more like the way we ever get to understand almost anything we have ever tried to understand.

I don't expect that we'll ever understand consciousness but we might at least get a better notion of the relationship between consciousness and the physical world.

I also suspect that this will likely involve a completely different perspective on the material world, where we still have progress to make, whereas I don't think there's anything fundamental about consciousness that we could get to know beyond what we already know.
EB
 
... I don't think there's anything fundamental about consciousness that we could get to know beyond what we already know.
EB
Dunno. There is the prospect of understanding the neural correlates better.

And of course the neural mechanisms.

I'd say we are right in the middle of a revolution in the way we think about the direction(s) of travel in the head. The idea of afferent, afferent, afferent, mystery, efferent, efferent, efferent is dead as a dodo. I'm not convinced the obituary has been taken on board yet.
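A minimal sketch of the two-way traffic picture, with invented numbers: a top-down prediction is compared with the incoming signal, and it is the mismatch, not the raw signal, that travels back up and updates the estimate.

# Invented toy loop: prediction flows "down", prediction error flows
# back "up", and the internal estimate is nudged by the error.
sensory_input = 0.8   # what actually arrives
estimate = 0.0        # the current internal estimate
learning_rate = 0.3

for step in range(10):
    prediction = estimate                 # top-down prediction
    error = sensory_input - prediction    # bottom-up prediction error
    estimate += learning_rate * error     # update driven by the error
    print(f"step {step}: estimate={estimate:.3f}, error={error:.3f}")
# the estimate converges on the input; traffic runs in both directions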

Then there's the possibility, raised by the Churchlands, among others, that the very conceptualisation of conscious experience and pretty well every other part of our mental life is simply wrong.

That's quite a lot of quite fundamental going on right there.
 