
“The relativity of consciousness”

pood (Contributor; joined Oct 25, 2021; 6,772 messages; basic beliefs: agnostic)
This article appeared a couple of years ago in a neuroscience magazine; the full paper to which the article refers is here. I have read the former but not the latter.

The article describes a paper that claims to have solved the hard problem of consciousness, which is that while we can describe brain states in neural terms, nothing about that description explains the subjective experience of qualia and self-awareness.

Although the work in question is a science paper by scientists, it seems more properly a topic of philosophy. (I know, some people think philosophy is useless; they are wrong. There is no science without philosophy.)

The paper basically argues that consciousness is a relative phenomenon, like space and time in Einstein’s theory of relativity. The actual paper (which, again, I have not read) purports to have a mathematical description of its claims.

In Einstein’s relativity, an observer on a train platform claims a train is moving past him at constant velocity and that he himself is at rest. The train passenger, however, is perfectly entitled to claim that she is at rest and that the platform and the man on it are in constant uniform motion past her. Einstein showed there is no objective fact of the matter about who is right; both are actually correct, but only in their own frames of reference.

In the relativity of consciousness concept, my own frame of reference must always be subjective, experiential, and involve qualia; but from my frame, the consciousness of others must always be a functionalist neural representation only. Observers in both frames are correct, each in his own frame, and the two observations are of a single underlying reality, so the “hard problem” is claimed to dissolve.

I have mixed feelings about this. For one thing, I don’t see how it amounts to much more than the rather obvious truism that I can only experience what I experience and not what you experience, because I am me and not you. However, I suppose the difference is that the authors claim that, from your frame, you cannot, even in principle, give an experiential account of someone else’s consciousness, as opposed to a functionalist one; a cognitive frame different from yours can only yield a functionalist account.

But I also have one strong objection: I think what is being contended here is not really analogous to Einstein’s relativity, for one rather obvious reason. Still, I thought I’d put the paper and the article about it out there to see what others think.

I hope an interesting discussion might ensue (at Talk Rational, where I found the article, the discussion ran to 62 pages), but I would hope to avoid arguments that only “shut up and calculate” is valid, that philosophy is useless, and that this claim is “too philosophical.” In any event, as mentioned, the authors do claim to have a mathematical model of their claims, but then again maths is not the same as science either.
 
I don't see how this addresses the problem.

Einstein's thought experiment ignores the Hard Problem, and just accepts as a fact that the observers are aware, within their own reference frames.

We already knew that. The question is how/why this is the case. Saying "we only experience our own consciousness, and not that of others, because we are in a different reference frame" is just a different way to say "...because we are ourselves, and are not other(s)", which is a re-statement of the problem, not a solution to it.
 
If you had a ringworld type construction and set it spinning to give its inhabitants a feeling of gravity, does that rescind their right to claim that the ringworld is the resting body and the rest of the universe is spinning around it?
I don’t think changing the rest frame perspective is going to eliminate centrifugal force.
 
I don't see how this addresses the problem.

Einstein's thought experiment ignores the Hard Problem, and just accepts as a fact that the observers are aware, within their own reference frames.

We already knew that. The question is how/why this is the case. Saying "we only experience our own consciousness, and not that of others, because we are in a different reference frame" is just a different way to say "...because we are ourselves, and are not other(s)", which is a re-statement of the problem, not a solution to it.

I was inclined to think that too, but I think the difference here is that, according to the authors, we are constrained by our particular “cognitive frame” to describe how others think only with a functionalist account. As I read it, they are saying that because of this, trying to account for qualia, subjectivity, etc. in others can’t be done.
 
So, they are answering the question by declaring it to be unanswerable? I suspect that they are right, but it's not a very satisfying conclusion.

And it still leaves open the Hard Problem for the observer himself (or should we say "myself"). Even if it is impossible for me ever to account for qualia in others, I still want to know how to account for them in myself.

On the other hand, I have always held that unsolvable problems are the easiest kind of problem - because we can achieve the optimum result with zero effort.
 

I guess what they are saying, or reported to be saying, is that Bob, in his cognitive frame, measures qualia, self-awareness, etc., for himself, but does not experience the underlying neural map behind these experiences. At the same time, he views the underlying neural map of Alice in her separate cognitive frame, but never her subjective experience. And, everything is exactly vice versa, just like in the relativistic train experiment where there is no test you can devise to say who is *really* in motion and who is *really* at rest.

That said, before commenting further, I believe I should read the actual paper. Reportage on such papers, even by science journalists, is often subtly wrong and sometimes importantly wrong. Besides, there is supposed to be some mathematical description of the claims not given in the article, so we’ll see what that’s all about.
 
I mean, yes. This is kind of how I have already described consciousness, but I placed it in different, albeit interchangeable, terms:

You can look at it in terms of the physical inputs and outputs, or you can look at it in terms of memory. Physical output would be someone saying "I like bread". In terms of memory, this would express as a construction of contingent action that biases the system towards "bread" selections. In humans there is also a part actually reporting the bias to the selection model and having the selection model evaluate its own biases, so it can respond to "what are your biases?" with "I like bread".
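
A minimal sketch of that picture, in Python, with an invented "selection model" (the class and its weights are my own illustration, not anything from the paper): the same stored bias both drives the choice on the output side and gets read back out when the model is asked about itself.

# Toy illustration only: a selection model whose learned bias both shapes its
# choices and can be reported back when queried.
import random

class SelectionModel:
    def __init__(self):
        # Biases over options; "bread" has been weighted up by prior reinforcement.
        self.bias = {"bread": 3.0, "rice": 1.0, "pasta": 1.0}

    def choose(self):
        # Output side: the bias shapes which option gets selected.
        options, weights = zip(*self.bias.items())
        return random.choices(options, weights=weights, k=1)[0]

    def report_biases(self):
        # Self-report side: the same stored bias is read out and verbalized,
        # i.e. the "I like bread" answer to "what are your biases?".
        favourite = max(self.bias, key=self.bias.get)
        return f"I like {favourite}"

model = SelectionModel()
print(model.choose())          # "bread" more often than not
print(model.report_biases())   # "I like bread"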

In a more contentious example, which in fact leans heavily on the OP paper...

I can open up a 787 and find the part of its "brain" that is specifically thinking about the state of the pitot tube, for instance.

Its report about the clogging of the pitot tube does not relate most of these thoughts. Prior to the light and the HUD message/warnings, it is thinking about how much pressure it is detecting, whether it is cold, whether the warmer is working, and what that temperature means with respect to the expected pressure and the variability of that pressure. It is all happening smoothly and seamlessly. Its awareness of the pitot tube is even separate from its awareness of the altitude, just as our hearing is separate from our awareness of our ears, since it goes through a secondary node (an avionics calculation process) to convert apparent pressure and temperature. We feel our ears buzz and we feel pressure on them, but the "hearing" comes from a subtly different direction inside our heads, just as the pitot tube puts out pressure and temperature but something else is responsible for interpreting that internally.
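
To make the separation concrete, here is a rough sketch with made-up numbers and logic that are nothing like real 787 avionics: one part is "aware" only of raw pressure and temperature, a different part interprets those readings, and only downstream of that does anything resembling a warning appear.

# Illustrative sketch only: invented thresholds, not real avionics logic.
def read_pitot():
    # The sensing part: aware only of raw pressure (Pa), probe temperature (C),
    # and whether the probe warmer is on.
    return {"pressure": 2800.0, "temperature": -4.0, "warmer_on": True}

def interpret(raw, expected_pressure=3400.0, tolerance=400.0):
    # The interpreting part: aware of what the readings mean, not of the probe itself.
    deviation = expected_pressure - raw["pressure"]
    icing_risk = raw["temperature"] < 0 and not raw["warmer_on"]
    return {"deviation": deviation,
            "suspect_clog": deviation > tolerance or icing_risk}

assessment = interpret(read_pitot())
if assessment["suspect_clog"]:
    print("AIRSPEED UNRELIABLE")   # only this final step resembles the HUD warning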

I can see it as "having those thoughts" and identify the thoughts themselves and details about those thoughts only because I have over 20 years of experience and education in translating thoughts back and forth from mechanical behavior to verbal description, learning how to relate to a non-human machine. I'm not sure how much I can expect from a neuroscientist in terms of understanding that, to be honest, or even most software engineers; most software engineers are taught, out of hand, to come to wrong conclusions on these matters.

I can say most machines have the same experience of red only because they experience it mostly as r:255, g:0, b:0. Sometimes they experience it in CMYK, sometimes in a different bit depth, and sometimes in entirely different ways with even more colors. Each of these is "red", but expressed a different way, and it reifies to the actual color red through the chemicals of the monitor or capture device connected to the originator of that definition of red, which actually interact with the photons that were differentiated as such.

I can say they appear to have different, but translatable, experiences of it on that basis.
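
As a small sketch of "the same red, expressed different ways": 8-bit RGB, normalized floats, and the usual naive textbook RGB-to-CMYK conversion, used only to show that the representations translate into one another.

# Illustration: one "red", several encodings, all inter-translatable.
red_rgb8 = (255, 0, 0)                          # r:255, g:0, b:0
red_float = tuple(c / 255 for c in red_rgb8)    # (1.0, 0.0, 0.0) -- a different "bit depth"

def rgb_to_cmyk(r, g, b):
    # Standard naive conversion, enough to show the translation exists.
    r, g, b = r / 255, g / 255, b / 255
    k = 1 - max(r, g, b)
    if k == 1:
        return (0.0, 0.0, 0.0, 1.0)
    return ((1 - r - k) / (1 - k),
            (1 - g - k) / (1 - k),
            (1 - b - k) / (1 - k),
            k)

print(red_float)                # (1.0, 0.0, 0.0)
print(rgb_to_cmyk(*red_rgb8))   # (0.0, 1.0, 1.0, 0.0) -- the same red in CMYK terms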

If it could report some experience of the experience, meta-consciousness, it could report how it experiences the color... And often they do. You can right-click and ask the computer "what color is this" and it will report the color as an experience in both terms, depending on the interface.

The ways that the computer normally reports things are just with its monitors or lights. It can't speak its thoughts... There's no assembly within it to normally allow this through its normal channels. You can't walk up to it and ask it, beyond what it tells you, and the part that commands this from the inside has no mouth with which to speak these experiences otherwise, whereas humans do have a mouth and linkage with which to communicate our thoughts after the fact, so we can talk about what we feel rather than being forced to infer it, as we do with a computer, from literal mind-reading.

Of course all this language embeds an uncomfortable assumption: that computers are thinking and that we can know their thoughts.

Fortunately, nowhere in that mess is any thought of the sort that would oppose our leverage over the system*.

And there is sense in this, in that there is value in a system telling itself what its thoughts were, so that if the results of the thought process are unsuccessful, it can use this to identify a correct thought process through slow analysis and back-propagate until the thought process conforms to the identified correct thought process.
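
A very rough sketch of that correction loop, with a single made-up number standing in for the recorded "thought process" (nothing here is from the paper): the system compares the recorded value against what slow analysis decided it should have been, and nudges it until it conforms.

# Toy illustration of "back-propagate until the thought process conforms".
weight = 0.2            # the recorded "thought" behind an unsuccessful outcome
target = 0.8            # what slow analysis identified as the correct thought
learning_rate = 0.25

for step in range(20):
    error = target - weight          # distance from the corrected thought process
    weight += learning_rate * error  # nudge the recorded thought toward conformity

print(round(weight, 3))              # ~0.8: the thought process now conforms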

But that isn't fundamental to consciousness... Rather the thing fundamental to consciousness appears to be the same thing fundamental to free will: the switch, given connection to some state, gives consciousness of the state. Logic gates of switches give consciousness of logically derived facts about collections of states, and this is consciousness of the state of the collection.

It doesn't really matter what the state is or where that is being measured by the switch. It could be environmental, and then it would be consciousness of that environmental state at whatever location. This consciousness happens at a specific location in the system.

Some locations may be conscious of many things, given the system's ability to balance logic on that many variables at the same time. Some locations may be conscious of only one small thing. Some locations may be conscious of things others are ignorant of and which they have no ability to understand in the first place.
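
Here is a toy rendering of that switch-and-gate picture in Python (the states and the derived fact are invented for illustration): each check below is "aware" of exactly one state, and the gate over them is aware of a fact about the collection that no single switch sees on its own.

# Illustration only: switches wired to single states, and a gate over the collection.
environment = {"door_open": False, "motion": True, "daylight": False}

door_switch   = environment["door_open"]   # aware of exactly one state
motion_switch = environment["motion"]      # aware of a different single state
light_switch  = environment["daylight"]

# A logic gate over the switches: "something is moving in the dark behind a closed door".
derived_fact = motion_switch and not light_switch and not door_switch

print(derived_fact)   # True -- a fact about the collection, located at this gate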

One part is conscious of many of the things it itself recently was conscious of, and this part is the "inner you", the ego, which, like most people with most computers, lacks the ability to look inside and read the inner thoughts of the other parts... Though it does not necessarily lack the ability to learn the language of the other parts and understand what they say... They just lack any channel to receive that input absent certain chemical changes which mutate the network connectivity (re: drugs).

Sure, this doesn't address many or even most of the things people would expect from consciousness: it doesn't address learning in any way, nor ethics, nor philosophical worth. It would in many ways be panpsychist: in more fundamental terms, this can be used as a paradigm for describing force and action in general. And I think the fact that it can describe force and action in terms of awareness as much as in terms of freedom and will means, at least to me, that there is a solidity there in the foundational intuitions, and that we were wrong in assuming that it is mere metaphor or analogy to use the term "awareness" to describe what a computer has of its own states at times and places where such a report is made and consumed.

And most importantly, this allows building things which think "thoughts", and even control of what those "thoughts" will look like.

The whole point of formalizing terms in language is so that you can use them to define requirements in engineering and meet them with those terms. We discuss this not to twiddle our butts with how "smart" we are but rather so we can just shut up and STOP twiddling our butts over things and build it.

This kind of use of language is what you get when you reject anthropocentric thinking and instead enforce mechanocentric views of thought, and instead of thinking of computers in terms of humans, thinking of humans in terms of computers.

*With LLMs your mileage on this may vary.
 
I can say most machines have the same experience of red only because they experience it mostly as r:255, g:0, b:0. Sometimes they experience it in CMYK, sometimes in a different bit depth, and sometimes in entirely different ways with even more colors. Each of these is "red", but expressed a different way, and it reifies to the actual color red through the chemicals of the monitor or capture device connected to the originator of that definition of red, which actually interact with the photons that were differentiated as such.

B-b-but all this rumination about the machine experiencing “red” in different ways does not translate to the human “experience of red”. At least not until you can get the machine to tell you what its FAVORITE “red” is, and why.
 