
The "me-ness" of being me

PyramidHead · Contributor · Joined Aug 14, 2005 · Messages 5,080 · Location RI · Basic Beliefs Marxist-Leninist
The philosopher Thomas Nagel had this to say about being himself (1965):

[By the way, "token-reflexive" expressions are just those that can only be understood in the context of the person expressing them: this, here, now, mine, today, yesterday, etc. are all token-reflexive.]

Thomas Nagel said:
The problem can be shown to be general in the following way: consider everything that can be said about the world without employing any token-reflexive expressions. This will include a description of all its physical contents and their states. . . It will also include a description of all persons in the world and their histories, memories, thoughts, sensations, perceptions, intentions, and so forth. I can describe without token-reflexives the entire world and everything in it—and this will include a description of Thomas Nagel and what he is thinking and feeling. But there seems to remain one thing I cannot say in this fashion—namely, which of the various persons in the world I am. And when everything that can be said in the specified manner has been said, and the world in a sense has been completely described, there seems to remain one fact that has not been expressed, and that is the fact that I am Thomas Nagel. This is the fact that I am the subject of these experiences; this body is my body; the subject or the center of my world is this person, Thomas Nagel.

For the past couple of years now, this additional fact has fascinated me and I'm still not sure what to make of it. Is it purely an invention of language, or is it something that can be coherently referred to as an object of inquiry? Joe Kern explores this issue in a book he's still working on:

Now imagine [an] alternate present in which you don’t exist but a lot of other people who are not you do exist. Imagine that one of those other people who are not you is a lot like you. In fact, essentially exactly like you. Let’s say that this person is like an identical twin, with the same DNA as you, though we’ll add the one stipulation that they weren’t produced from the splitting of the zygote that produced you. We’ll say that the zygote that produced you never existed in this situation we’re imagining. But this person just happens to exist who has the same DNA as you in this situation of a present in which you don’t exist. You don’t need to imagine the technical details of how this could come about. Just erase yourself from the world, and put someone else with your same DNA into it.

This should not be controversial, but for some I think it will be. But really, it should not be. It is easy to imagine yourself not existing. And it is easy to imagine people who are not you existing. You don’t even have to imagine for the latter; they are already all around you. And these people who are not you could have all manner of DNA. And they could have DNA that is quite close to yours, and still not be you any more than those whose DNA is quite distant. And they could have DNA that is exactly like yours and still not be you any more than those whose DNA is quite distant. None of this should be controversial.

And if you can imagine all this, then you should be able to take these final steps. Imagine someone that is not you but that is exactly like you in every possible way, not just in DNA, but in every physical structure. And imagine this person in a present in which you don’t exist. Finally, imagine this person occupying the exact same location in space and time as you do now, doing exactly what you are doing now, and thinking exactly the thoughts you are now, including having all of the same memories you have now. You don’t exist, but this person does, exactly like you in every possible way, doing and thinking just what you are doing and thinking now. Just erase yourself from existence, and put this other person who is not you in your place.

Have you done this? Good. I call this person who just took your place your perfect doppelgänger. This person performs a very important function for you: he or she clarifies what you are actually referring to when you say “I exist”. The thing you are referring to, the thing you are pointing to, is the sole difference throughout the universe between actual reality and the alternate possibility in which your perfect doppelgänger exists in your stead. That thing, that sole difference, is your existence, what you are referring to when you say “I exist”.

Is this an actual difference between the two hypothetical universes? Of course, later in the book Kern concludes it is not, because you exist in any universe that includes consciousness, as all of the conscious beings simultaneously and across spacetime. This is one way of solving the problem, but are there others? Arnold Zuboff (1990) arrives at the same conclusion through statistical reasoning:

Arnold Zuboff said:
Imagine that you and a duplicate of yourself are lying unconscious, next to each other, about to undergo a complete step-by-step exchange of bits of your bodies. It certainly seems that at no stage in this exchange of bits will you have thereby switched places with your duplicate. Yet it also seems that the end-result, with all the bits exchanged, will be essentially that of the two of you having switched places. Where will you awaken? I claim that one and the same person possesses both bodies, occupies both places and will experience both awakenings, just as a person whose brain has been bisected must at once experience both of the unconnected fields of awareness, even though each of these will falsely appear to him as the entirety of his experience. I also claim that the more usual apparent boundaries of persons are as illusory as those in brain bisection; personal identity remains unchanged through any variation or multiplication of body or mind. In all conscious life there is only one person - I - whose existence depends merely on the presence of a quality that is inherent in all experience - its quality of being mine, the simple immediacy of it for whatever is having experience. One powerful argument for this is statistical: on the ordinary view of personhood it is an incredible coincidence for you (though not for others) that out of 200,000,000 sperm cells the very one required on each occasion for your future existence was first to the egg in each of the begettings of yourself and all your ancestors. The only view that does not make your existence incredible, and that is not therefore (from your perspective) an incredible view, is that any conscious being would necessarily have been you anyway.

It is this immediacy of subjective experience that he speaks of, which I am treating the same way as Kern's "personal existence" and Nagel's "which one is me", that baffles and excites me. I am drawn to the idea that there is only one such immediacy, one "me-ness", that is not tied to the specific conditions of individual iterations thereof (in much the same way that the existence of the novel Moby Dick is not dependent on the existence of any particular copy of the novel in any particular language). But I concede that it seems outlandish, even though the alternative may be statistically unlikely as Zuboff claims. If it weren't for this ineffable quality, there would be no problem to solve, but it's there--it's here--and cannot be ignored or left out of any account of the way things are.

I would also like to offer the suggestion that it is substrate-independent, in that it doesn't matter whether it is wholly a product of brain activity, some kind of immaterial soul, or an attribute of all matter in the universe. These are all descriptions of the problem, not solutions to it. As such, this isn't a task suited for neuroscience, or even for philosophy of mind, but a metaphysical one, perhaps the only real metaphysical question that can't be reduced to semantic disputes.
 
That's very interesting... :)

So, I'm not going to rush any comment.

Still, I think they're asking the right questions, which is certainly a good start.

I suspect any answer to these questions could only be flawed, but that suspicion requires a rational justification I'm unable to provide. Not yet, at least.

This is certainly a fuck-all problem and I'm not too optimistic we can say anything really significant about it, though we certainly try and we certainly should keep trying as long as we have the leisure.
EB
 

The Nagel is fine, but Kern's position relies, as so many do, not on you imagining something but on you imagining imagining something. There's a very simple diagnostic question here: you are imagining this person, but are you imagining them from the third person or the first person? If you are imagining them from the third person, then it's no different to imagining a twin; if you are imagining them from the first person, then you are imagining having privileged access to their mental life, and that is something you can only have if you are them. If you were experiencing the mental life of this doppelganger, you'd think it was you. If you were not, it would be the same as anyone else.

Zuboff is simply rehearsing the lottery fallacy. It's not an incredible coincidence; it's the boring old mistake of judging, after something has happened, the odds it had before it happened. After it happened, the odds were precisely one, because that's what happened. Before it happened, the odds were whatever they were, but equivocating (deliberately or accidentally) between the two is the lottery fallacy. It also looks like he was setting up the sorites paradox, but the quote ended before the slippery slope began.

If you want to study personal identity, I'd strongly recommend section three of Derek Parfit's 'Reasons and Persons'.
 
The Nagel is fine, but Kern's position relies, as so many do, not on you imagining something but on you imagining imagining something. There's a very simple diagnostic question here: you are imagining this person, but are you imagining them from the third person or the first person? If you are imagining them from the third person, then it's no different to imagining a twin; if you are imagining them from the first person, then you are imagining having privileged access to their mental life, and that is something you can only have if you are them. If you were experiencing the mental life of this doppelganger, you'd think it was you. If you were not, it would be the same as anyone else.

That's the point. What people mean by "I exist" is something akin to "I have a first-person perspective". It doesn't mean "the body that I experience as mine has certain DNA, certain ancestry, certain history," as many would claim it does without further reflection. Kern is specifically decoupling the notion of personal existence from content, the particulars of an individual human being, to isolate the object of discussion.

Zuboff is simply rehearsing the lottery fallacy. It's not an incredible coincidence; it's the boring old mistake of judging, after something has happened, the odds it had before it happened. After it happened, the odds were precisely one, because that's what happened. Before it happened, the odds were whatever they were, but equivocating (deliberately or accidentally) between the two is the lottery fallacy. It also looks like he was setting up the sorites paradox, but the quote ended before the slippery slope began.

You can rest assured that Dr. Zuboff is familiar with the lottery fallacy and does not commit it in his exhaustive book that examines the matter from a statistical perspective.

One analogy he employs is something like this: suppose you are a participant in a game with a trillion other people, each confined to their own room in a hotel (so, a very large hotel). Tonight, you will all be put to sleep. Tomorrow, one of two things will happen: either ONE person out of the trillion will be awakened, or EVERY person will be awakened. Imagine, then, that you find yourself awakened in the morning. It should be obvious that the second eventuality, that everyone participating in the game was awakened along with you, is a trillion times more likely than the first. For, if the first eventuality were true, it would be an incredible fact that out of the trillion participants in the game, you were the one selected to be awakened. In other words, what would be improbable is not that someone was awakened (to regard that as an incredible fact would indeed be the lottery fallacy), but that this someone was you.

Keeping with the same thought experiment, suppose you knew for a fact that only one person out of the trillion was awakened. You have two rival hypotheses: (1) you were awakened by sheer luck, and had this not occurred you would still be asleep, which is the conventional view of personal existence, or (2) whoever was awakened would have been you anyway, so no matter which of the trillion it was, you would experience waking up. Given these parameters, (2) is much more likely to be the case.
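Both versions of the hotel setup come down to the same Bayes-factor arithmetic. Here is a quick sketch of that reasoning (my own illustration, not Zuboff's code; the trillion participants and the even prior are just the thought experiment's stipulations):

```python
from fractions import Fraction

N = 10**12  # a trillion participants, as stipulated

# Prior: treat the two hypotheses as equally likely before the evidence.
prior_one = Fraction(1, 2)   # only ONE person is awakened
prior_all = Fraction(1, 2)   # EVERY person is awakened

# Likelihood of the evidence "I find myself awake", on the
# conventional view of personal identity:
p_wake_given_one = Fraction(1, N)  # I'd have to be the lucky one
p_wake_given_all = Fraction(1)     # everyone wakes, so I do too

# Bayes: posterior odds = prior odds * likelihood ratio
bayes_factor = (prior_all * p_wake_given_all) / (prior_one * p_wake_given_one)
print(bayes_factor)  # 1000000000000: "everyone wakes" is a trillion times likelier
```

The same ratio reappears in the second version: swap "everyone wakes" for "whoever woke would have been you anyway", and the likelihood of the evidence is again 1 versus 1/N.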

There are many background assumptions at play here that Zuboff rigorously defends throughout his text, which I highly recommend you read at least in part.

If you want to study personal identity, I'd strongly recommend section three of Derek Parfit's 'Reasons and Persons'.

I hope it's clear by now that I know of Parfit, as do all of the philosophers I mention here. Much of the stuff I'm bringing up here is a continuation of Parfit's work.
 
That's the point. What people mean by "I exist" is something akin to "I have a first-person perspective". It doesn't mean "the body that I experience as mine has certain DNA, certain ancestry, certain history," as many would claim it does without further reflection. Kern is specifically decoupling the notion of personal existence from content, the particulars of an individual human being, to isolate the object of discussion.

Hang on, you are making two claims here that seem incompatible:

1) What people mean by "I exist" is something akin to "I have a first-person perspective"

2) "the body that I experience as mine has certain DNA, certain ancestry, certain history," as many would claim it does without further reflection.

I don't think it makes any difference, but...

I see what he's trying to do, but as I pointed out, it doesn't work; and even if it did, I can't see why anyone would want to attempt that decoupling unless they were trying to pimp a dualist position. As a property dualist position would care about content, it has to be a substance dualist position, and I'm not sure anyone wants to pimp that.

You can rest assured that Dr. Zuboff is familiar with the lottery fallacy and does not commit it in his exhaustive book that examines the matter from a statistical perspective.

I'm sure you are right, but in that quote it's hard to see how he didn't commit it.

One analogy he employs is something like this: suppose you are a participant in a game with a trillion other people, each confined to their own room in a hotel (so, a very large hotel). Tonight, you will all be put to sleep. Tomorrow, one of two things will happen: either ONE person out of the trillion will be awakened, or EVERY person will be awakened. Imagine, then, that you find yourself awakened in the morning. It should be obvious that the second eventuality, that everyone participating in the game was awakened along with you, is a trillion times more likely than the first. For, if the first eventuality were true, it would be an incredible fact that out of the trillion participants in the game, you were the one selected to be awakened. In other words, what would be improbable is not that someone was awakened (to regard that as an incredible fact would indeed be the lottery fallacy), but that this someone was you.

Sure, because in this thought experiment you preserve the two sets of statistics. He's still alive but that could be due to either situation. In the thought experiment above there was no such caveat.

Keeping with the same thought experiment, suppose you knew for a fact that only one person out of the trillion was awakened. You have two rival hypotheses: (1) you were awakened by sheer luck, and had this not occurred you would still be asleep, which is the conventional view of personal existence, or (2) whoever was awakened would have been you anyway, so no matter which of the trillion it was, you would experience waking up. Given these parameters, (2) is much more likely to be the case.

I think that is playing extremely fast and loose with the traditional notions of identity, and it looks far less impressive if the pronoun is replaced with a proper noun. The fact is that functionalism is dead in the water, has been for decades, and we are now acutely aware that, actually, the meat is at least as important as the motion. GOFAI spent fifty years on the functionalist pot and nothing happened. Biologically inspired connectionist accounts of cognition and the self are the only game in town.

There are many background assumptions at play here that Zuboff rigorously defends throughout his text, which I highly recommend you read at least in part.

It's a forum, the game is that you explain them if they are relevant. I'm sure I'll get around to it, but not for a while.

I hope it's clear by now that I know of Parfit, as do all of the philosophers I mention here. Much of the stuff I'm bringing up here is a continuation of Parfit's work.

Not that I recognise.
 

Alright, after due consideration, I can tell I disagree with how Nagel interprets this issue, specifically the phrase "This is the fact that I am the subject of these experiences".

To see this, we can imagine a computer running advanced AI software, an AI as intelligent as we are but without the kind of subjective experience that we have. We have to further assume this AI would have the means to perceive its environment, in a way comparable to that of our own sense organs. Assume also that the hardware includes the electronics necessary to provide the AI with a status report on its health (the health of the hardware), through a channel separate from the ones used to perceive the world outside the computer. We should expect that such an AI could progressively evolve a model of the world. It would also evolve a model of its hardware through the health status reports. Finally, assume we let a large number of such AIs live their lives together, with the means of interacting, and communicating, with each other.

Now, at this point, all I can do is conjecture that these AIs would be able to evolve by themselves, without any contribution from us, the same kind of token-reflexive vocabulary as we did. In other words, token-reflexive vocabulary comes from collecting and analysing data about the world, including the hardware on which the AI is running, but with an inherent distinction between the data about the hardware and the data about the rest of the world. Assuming the AIs were subject to the same kind of general language-processing constraints that humans have to contend with, they would arrive at the same solution in terms of linguistic features as we did, including token-reflexive vocabulary. In other words, subjective experience isn't necessary to get there. All you would need is the ability, and the data, for the AI to make a distinction between itself and the rest of the world. The AI wouldn't be the subject of any experiences, and yet it would evolve token-reflexive language.

So, token-reflexive language has nothing to do with subjective experience and everything to do with the availability of data about 'oneself'.
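To make the two-channel idea concrete, here is a toy sketch (my own crude illustration, not a claim about real AI): an agent that receives world data and hardware-health data on separate channels, and produces "my"/"that" labels purely from which channel a datum arrived on.

```python
class Agent:
    def __init__(self, name):
        self.name = name
        self.self_model = {}   # built from the health-report channel
        self.world_model = {}  # built from the perception channel

    def perceive(self, key, value):       # external channel
        self.world_model[key] = value

    def health_report(self, key, value):  # internal channel
        self.self_model[key] = value

    def describe(self, key):
        # The "my"/"that" distinction tracks which channel the datum
        # arrived on, and nothing more.
        if key in self.self_model:
            return f"my {key} is {self.self_model[key]}"
        return f"that {key} is {self.world_model[key]}"

a = Agent("unit-1")
a.health_report("temperature", 41)
a.perceive("sky", "blue")
print(a.describe("temperature"))  # my temperature is 41
print(a.describe("sky"))          # that sky is blue
```

Nothing in this sketch experiences anything; the token-reflexive labels just track the channel distinction, which is all my conjecture requires.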

Obviously, I won't be able to prove this, but at least this is what I believe and I hope this makes it clear and comprehensible what it is I believe.
EB
 

What makes you think the AI wouldn't have subjective experiences under the conditions you described? Why would it evolve our vocabulary but not that to which it refers?
 
What makes you think the AI wouldn't have subjective experiences under the conditions you described?
I don't have any good reason to think that they would, but I could agree that it's at least conceivable that they would.

I think that either subjective experience is always there, everywhere, and then the AI would have it too, or subjective experience requires some specific feature in systems to appear in them and then it seems there's nothing specific in AI software that could conceivably 'produce' subjective experience. Nothing we know of at least. My default belief is that subjective experience requires some specific feature in systems for them to have it, but I may be wrong.

Why would it evolve our vocabulary but not that to which it refers?
The whole point of my interpretation is that, contrary to what Nagel seems to believe, token-reflexive vocabulary does not refer to subjective experience. As I see it, it refers to our 'self', i.e. to a set of memorised biographical data (replaced in my AI story by the hardware health status data) just as, for other people, the name 'Obama' refers, at least in somebody who knows of Obama, to a bunch of memorised biographical data obtained through perception.

And then the two tasks, evolving vocabulary and producing subjective experience, certainly don't look at all like they are in any way analogous or similar. It seems reasonable to me that we will one day have software capable of evolving a language by itself. But I don't have even the beginning of an idea how software, or anything else for that matter, could get to produce subjective experience. Subjective experience seems to me to be a fundamental property rather than a byproduct, or the end product, of some complex process. I may be wrong but that's definitely my default position.
EB
 
What makes you think the AI wouldn't have subjective experiences under the conditions you described? Why would it evolve our vocabulary but not that to which it refers?

DALs: data abstraction layers. They exist within your brain as well: places in which abstract calculation and organization of data occur without any conscious being being directly aware of it.

This doesn't mean that there aren't aware entities participating at every level, but I'm not sure that an electron reacting to a photon is paying attention to your whole experience at all times. It might be focused on something else.


I've used the example of a human computer in the past. Use a bunch of people who follow specific instructions to create a new piece of data- human transistors that pass on a bit of information (a box or nothing, timed relay of box or nothing, so a clock tick of nothing has meaning) if someone is touching their shoulder, and don't if someone is not.

These people won't know the whole answer they are creating, and can only know whether they passed on a bit of information or did not. The end product might be an image, might be a financial calculation, an email, or whatever, but they do not consciously know the end product if they aren't connected to some sort of feedback loop.
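The human-computer setup above can be sketched in a few lines of code. The gate assignments and bit encoding here are my own illustrative choices, not anything from the post; the point is just that each "worker" sees only its own inputs and output, never the overall result:

```python
# A toy "human computer": each worker implements one gate and sees only
# its local bits, yet the network as a whole computes a binary addition.

def worker_and(a, b):
    # one person: passes a bit on only if both shoulders are touched
    return a & b

def worker_xor(a, b):
    # one person: passes a bit on if exactly one shoulder is touched
    return a ^ b

def ripple_add(x_bits, y_bits):
    """Add two little-endian bit lists with a chain of one-bit workers."""
    carry, out = 0, []
    for a, b in zip(x_bits, y_bits):
        s = worker_xor(worker_xor(a, b), carry)                          # sum bit
        carry = worker_and(a, b) | worker_and(carry, worker_xor(a, b))  # carry bit
        out.append(s)
    out.append(carry)
    return out

# 6 + 3 = 9, though no individual worker ever "knows" this.
bits = ripple_add([0, 1, 1], [1, 1, 0])
print(bits)  # [1, 0, 0, 1] -> 9 in little-endian
```

The end product (the number 9) exists only at the level of the whole arrangement; each `worker_and` or `worker_xor` call is, by itself, just a bit being passed or not passed.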


If an AI is built in an abstract data layer (like a CPU), not connected to any form of consciousness other than mass/energy distribution consciousness (spacetime) and EM field consciousness (quantum), why would the data have awareness?


Keep in mind that the data is in the form of bits, that not all of the consciousnesses are aware of. When the bits are added up at the end of the calculation, following specific rules to recombine qualia, they are the finished product.

While they are just bits (boxes being passed (human computer), electrons being passed (EM computer), water being passed (hydrocomputer), mechanical energy being passed (mechanical computer)), there is no product composed of qualia that anything can feel (well, something might sense mass/energy distribution fluctuating, but it wouldn't necessarily know what the end arrangement of qualia was going to be based on that).
 
Alright, after due consideration, I can tell I disagree with how Nagel interprets this issue, specifically the phrase "This is the fact that I am the subject of these experiences".

To see this, we can imagine a computer running an advanced AI, one as intelligent as we are but without the kind of subjective experience that we have. We have to further assume this AI would have the means to perceive its environment in a way comparable to that of our own perception organs. Finally, assume also that the hardware includes the electronics necessary to provide the AI with a status report on its health (the health of the hardware), through a channel separate from the ones used for perceiving the world outside the computer. As an AI, we should expect that it could progressively evolve a model of the world. It would also evolve a model of its hardware through the health status reports.

Now assume we let a large number of such AIs live their lives together, with the means of interacting, including communicating, with each other. At this point, all I can do is conjecture that these AIs would be able to evolve by themselves, without any contribution from us, the same kind of token-reflexive vocabulary as we did. In other words, token-reflexive vocabulary comes from collecting and analysing data about the world, including the hardware on which the AI is running, but with an inherent distinction between the data relative to the hardware and the data relative to the rest of the world. Assuming the AIs were subjected to the same kind of general language-processing constraints that humans have to contend with, they would arrive at the same solution in terms of linguistic features as we did, including token-reflexive vocabulary.

In other words, subjective experience isn't necessary to get there. All you would need is the ability, and the data, for the AI to make a distinction between itself and the rest of the world. The AI wouldn't be the subject of any experiences and yet it would get to evolve token-reflexive language.
So, token-reflexive language has nothing to do with subjective experience and everything to do with the availability of data about 'oneself'.
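A minimal sketch of the two-channel setup might look like this. All class and attribute names here are my own invention, purely illustrative; the point is that the self/world distinction is nothing more than which channel a datum arrived on:

```python
# An agent that builds two separate models from two separate input
# channels: a 'health' channel about its own hardware, and everything
# else about the outside world. (Names are illustrative assumptions.)

class Agent:
    def __init__(self):
        self.world_model = {}  # data about everything outside the hardware
        self.self_model = {}   # data arriving on the dedicated health channel

    def perceive(self, channel, key, value):
        # The self/world distinction is inherent in the wiring:
        # it is just a matter of which channel delivered the data.
        model = self.self_model if channel == "health" else self.world_model
        model[key] = value

    def refers_to_self(self, key):
        """A primitive 'token-reflexive' test: is this datum about me?"""
        return key in self.self_model

a = Agent()
a.perceive("health", "cpu_temp", 71)
a.perceive("vision", "sky_colour", "blue")
print(a.refers_to_self("cpu_temp"))    # True: 'my' temperature
print(a.refers_to_self("sky_colour"))  # False: a fact about the world
```

Nothing in this sketch involves subjective experience; the agent can sort 'data about itself' from 'data about the world' purely on the basis of channel of origin, which is all the argument above requires.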

Obviously, I won't be able to prove this, but at least I hope this makes clear and comprehensible what it is I believe.
EB

What makes you think the AI wouldn't have subjective experiences under the conditions you described? Why would it evolve our vocabulary but not that to which it refers?

Wittgenstein nailed this one. None of us are able to share our inner states, which is why there is a problem of other minds. As such, the way language functions would work just as well about 'a nothing as it would about a something about which nothing could be said'. An AI would only need to know how to apply the words appropriately, and both it and we would lack the criteria to judge it to be wrong. Wittgenstein was quite clear that this was the error that gave behaviourism traction.
 
I want to go a little more into Arnold Zuboff's thesis, as I've spent a lot of time this weekend going through it and watching some videos of his lectures (he has a YouTube channel).

As I now understand it, what he is saying is this: under the ordinary view, for an experience to be mine, it must belong to a certain thing that is me. The thing is specified objectively by its features, including (perhaps) genetic makeup, physical composition, history, ancestry, and so on. This view raises all of the problems that the philosophy of personal identity has been dealing with since Parfit and before (brain fission, fusion, teleportation, gradual replacement, etc.), and makes the existence of any individual conscious being highly improbable from its own perspective.

In the first case, it leaves us no answer to what kind of experience I would have if, for example, the communication between the two halves of my brain were temporarily severed and each half were fed a different set of stimuli. This is one of Parfit's thought experiments that Zuboff expounds upon. Suppose I wanted to listen to a radio broadcast of a concert tonight, but I also had to study for an exam. If I had a switch that could temporarily deactivate the connection between the right and left hemispheres of my brain, I could listen to the concert in one ear and listen to study materials in the other, with each hemisphere only receiving sensory data from the corresponding ear. Afterward, I could reactivate the connection and remember both events. The question that the ordinary view is as yet unable to answer is whether, during the period in which the halves are working independently, I will experience the concert, the studying, both, or neither.

In the second case, Zuboff appeals to the low probability that the sperm that was necessary to fertilize the egg that gave rise to the thing that is you under the ordinary view actually made it to the egg, or further, that it even existed at all. For, if your parents had been slightly different genetically, an entirely different set of sperm would be involved in your conception. And so on through the lineage back to the first sexually reproducing ancestor of yours. If one of those factors had been different, someone would exist in your stead and not you; you would experience nothing and be 'blank' for all of eternity. Of course, from the perspective of someone else, your existence wasn't improbable at all--obviously, some sperm cell has to make it to the egg, and whatever one does, it will give rise to a person with those genetic features. But unless an absurdly narrow set of conditions were satisfied, the very sperm that resulted in YOU, and not an arbitrary other, would not have conceived you.
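Just to make the compounding vividly concrete, here is a back-of-envelope version of the argument with entirely made-up numbers; the odds per conception and the number of generations are my own illustrative assumptions, not Zuboff's figures:

```python
# Compounding improbability across a lineage (illustrative numbers only).
# Suppose each conception selects one sperm out of ~10**8 candidates,
# and trace that requirement through just 10 generations of ancestors.

sperm_odds = 1 / 10**8     # hypothetical odds of the exact sperm per conception
generations = 10           # a tiny slice of an actual lineage

p_exact_lineage = sperm_odds ** generations
print(p_exact_lineage)     # on the order of 1e-80: vanishingly small
```

Even over ten generations, and ignoring every other contingency (the parents meeting, the egg involved, and so on), the probability of the exact lineage required by the ordinary view is already astronomically small; extended back to the first sexually reproducing ancestor, it only gets worse.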

Universalism is a hypothesis that answers these and other concerns by simply inverting the priority of experience and experiencer. Rather than an experience being mine because it belongs to a particular thing that is me, a thing is me if it has an experience that is mine. Then, rather than going down the rabbit hole of Ship of Theseus paradoxes and probability calculations, the task is simply to determine how I know an experience is mine. Zuboff suggests that what makes an experience mine is simply its character of 'immediacy', of being presented to me from the inside, in the first-person, tangibly and eminently. All experience everywhere has this characteristic; no matter where it occurs, or which brain generates it, it always contains this quality of immediacy, of being "this, mine, now". Thus all experience is equally mine, and by extension, all organisms with the capacity to have experience have everything it takes to be me.

This hypothesis answers the thought experiment of the two brain hemispheres nicely. During the split, I was fully present in both hemispheres, experiencing both the concert and the studying, although the lack of integration between the two experiences falsely made it appear as though each were the entirety of my experience (this is an important point). When the hemispheres were reunited, my memory of both events clearly indicated to me that they were both experiences of mine--not because of anything to do with the physical constitution of the hemispheres themselves, a particular genetic signature, or any of the other objective features one might stipulate are necessary for a thing to "be me"--but because the experiences had the immediacy and first-person character inherent in all experience. I would clearly remember experiencing the concert from the first-person and suffering through the studying material in the first-person, with no way of sorting which one I experienced first. Both experiences would be mine even if I had replaced one hemisphere with a molecular duplicate composed of different atoms, or a cybernetic version that mimicked the inputs and outputs of the biological version. All it takes for an experience to be mine is for it to be experienced.

In the case of probability, it turns out your existence was easy and unproblematic from any perspective. If the sperm that had actually conceived you took a detour and was out-competed by another, you would simply be the result of that conception. If your parents had different colored eyes, making their genetic contribution to their gametes different in that respect, you would still exist as whatever was born from their combination, regardless of eye color. It should be clear, then, that if we can vary every physical parameter of your history without negating your existence, then you didn't even have to be born from your current parents to exist, and in fact, you have existed for as long as conscious experience has existed in the universe. Every time something has an experience with the basic property of being immediate and first-person, it is you. We should infer that this hypothesis is trillions upon trillions of times more likely, given that it fully explains something--your awakening in the universe--that would have otherwise been the outcome of incalculably slim odds.

I have been alternating between "I" and "you" to show how they are equivalent under universalism about personal identity. I exist and experience the universe through multiple perspectives, each one bounded in such a way that they all appear to me to be the whole of my experience. According to Zuboff, this is just a trick of perspective, an incidental consequence of the lack of integration among the neurological machinery that each organism possesses. Just as in the case of the split brain, it would be obvious to me that all experience is mine if it were somehow possible to integrate multiple instances of neurological machinery.

This hypothesis has nothing to say about the mind-body problem as such, and is compatible with whatever you wish to think about that topic. What it does, however, is render Thomas Nagel's original question moot: a complete description of the entire world does not need token-reflexive statements at all, because there is no "further fact" that Thomas Nagel is me. In the case of Joe Kern's original thought experiment of the perfect doppelgänger, the tug of the ordinary view is to regard him as someone else, an impostor that is enjoying your life while you are an unrealized nothingness for infinity. But this must be mistaken if universalism is true. And I believe that it is.
 
Universalism is a hypothesis that answers these and other concerns by simply inverting the priority of experience and experiencer. Rather than an experience being mine because it belongs to a particular thing that is me, a thing is me if it has an experience that is mine.

Personally, I think the 'me' word only makes sense in relation to a self, i.e. autobiographical memories.

If we accept the idea of bare consciousness, then we also accept that no self is necessary for experience, and that, basically, the 'me' word is inappropriate for talking about experience. Instead, the 'me' word is best understood as referring to a set of autobiographical data, whatever our situation at the moment. Sometimes, the 'me' word will mean nothing to us because we happen not to have any sense of self at that point, but I suspect that these situations are also those where we won't be in any position to effectively speak the word 'me', or even speak at all (waking up, dream, coma, etc.).

Language generally is only available whenever we're able to use our memory, because words can only mean anything if our memory is operational enough to tell us what it is that is meant by the particular words we need to use.

Still, I accept that we need to find the proper language to describe subjective experience in some way. The qualification of 'subjective' is a bit less misleading than the word 'me' as it doesn't seem to require a sense of self. Yet, it suggests a subject, and I don't think that the notion of subject applies to bare consciousness. I would guess that the term 'subjective' derives in fact from our notion of self and the fact that we do have a self available most of the time whenever memory and language are operational.

The expression 'bare consciousness' does not have this drawback and therefore seems better than 'subjective experience'. One way perhaps to solve the problem is that we all know what it means to experience. All we have to do to make our understanding apparently perfect is to realise that the self is not necessary to subjective experience, which is best expressed by the term 'bare consciousness'.

Sorry, time for dinner.
EB.
 

But if 'me' is tied to the autobiography of a particular thing, then changes in that autobiography should be able to affect how much it is 'me'. Yet, I know that if I had eaten a bowl of cereal this morning rather than headed straight to my desk, I would have been the person who experienced eating the cereal, and this change would not have resulted in a different person existing now instead of me. If I had been kidnapped as a child and raised on the high seas with a band of pirates, my life would be very different, but it seems I must nonetheless concede that I would simply be the person living that different life; it would not be the same as me being eliminated from the universe and replaced by a pirate with the same DNA as me. This, of course, assumes the ordinary view.

The word 'me' is an apt descriptor not just for these hypothetical scenarios, but because it points to the subject of experience and the object of self-interest. What Zuboff is saying is not that we should treat everything that has 'bare consciousness' with sympathy, driven by our recognition that we, too have this bare consciousness, but out of self-interest driven by recognition that everything that is conscious is me. Not like me, not part of me, but actually me, in the same way the different histories of myself eating cereal or living as a pirate would be me. Even further, actually, since under universalism I must drop even the requirement that my existence is tied to a single object traceable through time and space.

Perhaps a better analogy is to think of 'me' as referring to an abstract property like the word 'fiction'. If there were a society that only had one work of fiction, bounded in a single book with no copies, they might mistake the word 'fiction' as referring just to that particular book. It could be counterintuitive for them to consider that other books might exist, not just as copies of the one they were familiar with, but completely different in length and content, even in different languages, yet somehow they would all count as fiction if they had the simple property of being an imaginative story. Zuboff's suggestion is to treat the concept of 'me' in the same way, as an indicator of type and not token.
 
Ok, I finished those radishes and those sardines. Let's go back to your theory...

But if 'me' is tied to the autobiography of a particular thing, then changes in that autobiography should be able to affect how much it is 'me'. Yet, I know that if I had eaten a bowl of cereal this morning rather than headed straight to my desk, I would have been the person who experienced eating the cereal, and this change would not have resulted in a different person existing now instead of me. If I had been kidnapped as a child and raised on the high seas with a band of pirates, my life would be very different, but it seems I must nonetheless concede that I would simply be the person living that different life; it would not be the same as me being eliminated from the universe and replaced by a pirate with the same DNA as me. This, of course, assumes the ordinary view.

And yet it seems to me that alternative, counterfactual stories lead us to think of ourselves in those alternative stories as different persons from us because, fundamentally, the person we are is identified and recognised, subjectively and socially, by the remembered or recorded story of our lives.

Suppose I have two short episodes of bare consciousness within an hour. I believe that, because memory and sense of self are not operational in this case, during the second episode I wouldn't be able to relate my current experience to my experience during the first episode, even though we have to assume that the two episodes would have to be not only very similar but very nearly identical. Bare consciousness can't possibly feel like being 'me'.

I don't dispute the way you articulate your argument. What I dispute is what you infer from the way you phrase it. "I would simply be the person living that different life". Yes, and so you would be a different person. And the same word 'me' would then refer to a different person, a person with a different identity, a different story, a different self. And it would be this different person using the word 'me' to refer to itself.

The word 'me' is an apt descriptor not just for these hypothetical scenarios, but because it points to the subject of experience and the object of self-interest. What Zuboff is saying is not that we should treat everything that has 'bare consciousness' with sympathy, driven by our recognition that we, too have this bare consciousness, but out of self-interest driven by recognition that everything that is conscious is me. Not like me, not part of me, but actually me, in the same way the different histories of myself eating cereal or living as a pirate would be me. Even further, actually, since under universalism I must drop even the requirement that my existence is tied to a single object traceable through time and space.

I would agree that bare consciousness has to be either the same thing or identical things from one person to the next. So I get the idea. Yet, I don't think it could validate the use of the word 'me' to refer to bare consciousness. Essentially, it's contrary to usage. We use the word 'me' to refer to what we think of as our self, or to the something that we think we can identify through our sense of self.


I believe that the sense of self gives rise to a delusion. The usefulness of words like 'me', 'you', etc. is that they refer to the biographical data of individual human beings. Biographical data are very useful in the context of social relations between human beings, so it's reasonable to assume that the use of words like 'me', 'him', etc. is essentially motivated by the usefulness of tracing biographical data (articulated with the use of proper names, surnames and things like physical appearance).

And once we get used to seeing ourselves as 'me', we may have this delusion that 'me' refers to our subjective experience. However, this is not possible. We can't share our subjective experience. What people can know of us is entirely what they perceive of our physical body. They will infer personality traits and similar things, but even those will all have a biographical value. Subjective experience will remain a silent subject. So, the 'me' and 'her' we all use can only essentially refer to the biographical data we are able to record or memorise, about ourselves or about others.

So the delusion is to think that words like 'me', 'them' etc. refer to our subjective experience irrespective of our life story. I think it's a delusion because I think we really are essentially bare consciousness, and so words like 'me', 'you' etc. never refer to what we really are. Instead, they refer, inevitably, to the only socially operational handle there is for other people and for ourselves, i.e. our biographical data, which are not and couldn't be bare consciousness and therefore never really us at all. But the delusion is minor and essentially irrelevant to the main fact that only our self is socially operational. I can only love you for things like your money.
EB
 
But if 'me' is tied to the autobiography of a particular thing, then changes in that autobiography should be able to affect how much it is 'me'. Yet, I know that if I had eaten a bowl of cereal this morning rather than headed straight to my desk, I would have been the person who experienced eating the cereal, and this change would not have resulted in a different person existing now instead of me. If I had been kidnapped as a child and raised on the high seas with a band of pirates, my life would be very different, but it seems I must nonetheless concede that I would simply be the person living that different life; it would not be the same as me being eliminated from the universe and replaced by a pirate with the same DNA as me. This, of course, assumes the ordinary view.

And yet it seems the case to me that alternative, counterfactual stories, lead us to think of ourselves in those alternative stories as different persons from us because, fundamentally, the person we are is identified and recognised, subjectively and socially, by the remembered or recorded story of our lives.

Suppose I have two short episodes of bare consciousness within an hour. I believe that because memory and sense of self are not operational in this case, that during the second episode, I wouldn't be able to relate my current experience to my experience during the first episode, even though we have to assume that the two episodes would have to be not only very similar, but very nearly identical. Bare consciousness can't possibly feel like being 'me'.

I don't dispute the way you articulate your argument. What I dispute is what you infer from the way you phrase it. "I would simply be the person living that different life". Yes, and so you would be a different person. And the same word 'me' would then refer to a different person, a person with a different identity, a different story, a different self. And it would be this different person using the word 'me' to refer to itself.

I think that is kind of confused. When I say 'me', I just mean 'the subject of experiences that are mine and the object of self-interested concern'. I don't think my life story has anything to do with it beyond a coincidental connection, for if the story were different, I would nonetheless still be the subject of those different experiences and would still have the same self-interest. I can make this point more obvious if I use the example of neurodegenerative disease. It could be that I develop severe dementia in my old age. There may even come a point where every last vestige of my personality and memories up until that point will be gone. It will be as if I am a stranger to myself with no connection to who I previously was. My point is that it is still rational for me to anticipate that outcome with dread, and to do everything in my power to prevent it from occurring, even if for all intents and purposes I will feel like a different person if it ever does. It would not be rational for me to say "when that happens, it will happen to someone who is not me, since all of my life story and accumulated memories will be different, so I have nothing to worry about." The reality is that I will be the subject of the ensuing experiences of disorientation, fear, panic, and all the negative symptoms of dementia, even without any of the features that would have otherwise anchored my social identity.

Universalism says this is the relationship among all conscious beings; my self-interest should extend to anything that is capable of suffering because that thing is me, for the same reason that my potential future self with late-stage dementia is me. Not because it has my personality or the residue of my life history, but because its experience is immediate and first-person, presented to it as here, mine, now. You will likely object that the biological identity of the being is preserved in the case of dementia but not for other conscious beings, but by now it should be obvious that such details are incidental and irrelevant to whether or not something is me. You could easily imagine a type of disease that would alter one's biological identity along with one's memory and personality traits, and easily imagine anticipating the symptoms of this disease with dread associated with YOUR suffering, not the suffering of a distinct other person who would come into existence when a sufficient number of biological and personal traits have been altered.

I believe that the sense of self gives rise to a delusion. The usefulness of words like 'me', 'you', etc. is that they refer to the biographical data of individual human beings. Biographical data are very useful in the context of social relations between human beings, so it's reasonable to assume that the use of words like 'me', 'him', etc. is essentially motivated by the usefulness of tracing biographical data (articulated with the use of proper names, surnames and things like physical appearance). And once we get used to seeing ourselves as 'me', we may have this delusion that 'me' refers to our subjective experience. However, this is not possible. We can't share our subjective experience.

But there's nobody to share it with. You are having it all; nothing is unexperienced. You have access to all subjective experience because all of it is happening to you. Each instance of it is bounded by the limitations of individual nervous systems, so it falsely appears as though each is the entirety of your experience, but this is an illusion for the reasons we've been talking about. I agree that it makes sense to use common terms to refer to different individual beings. Let's not get too caught up in language, because this isn't a linguistic point. You can say "there is only one person," or you can say "there are many people, and they are all me", and both will be accurate.
 
I think that is kind of confused. When I say 'me', I just mean 'the subject of experiences that are mine and the object of self-interested concern'. I don't think my life story has anything to do with it beyond a coincidental connection

By 'life story', I didn't mean your actual life, I meant whatever you think your life has been up to now, which may be partly invented, partly remembered. It usually feels like the best you have to identify yourself. That's what you yourself think you are, i.e. the person with that story. The same can be said about other people. You essentially identify them with what you think their life story has been.

The self as such is then properly the object of your interest. It's not a subject at all, which perhaps becomes clearer if we remember that we may come to forget everything about it and still feel like a subject.

Again, our sense of being a subject is a delusion. And I would also agree that the connection with our consciousness is probably coincidental. However, this delusion remains what we mean by being a subject. It can only be misleading and confusing to use the word 'me' differently from what most people do.

, for if the story were different, I would nonetheless still be the subject of those different experiences and would still have the same self-interest.

No, because we think of the person in a counterfactual story as a different person.

I can make this point more obvious if I use the example of neurodegenerative disease. It could be that I develop severe dementia in my old age. There may even come a point where every last vestige of my personality and memories up until that point will be gone. It will be as if I am a stranger to myself with no connection to who I previously was. My point is that it is still rational for me to anticipate that outcome with dread, and to do everything in my power to prevent it from occurring, even if for all intents and purposes I will feel like a different person if it ever does. It would not be rational for me to say "when that happens, it will happen to someone who is not me, since all of my life story and accumulated memories will be different, so I have nothing to worry about." The reality is that I will be the subject of the ensuing experiences of disorientation, fear, panic, and all the negative symptoms of dementia, even without any of the features that would have otherwise anchored my social identity.

That's a different situation. Developing a mental disease doesn't interrupt the continuity of the person. So if you think of yourself as losing your mind in some more or less distant future, you're still going to think of the person you would become as your older self, and then dreading that prospect makes sense. But I'm not going to dread any counterfactual scenario. I can well imagine myself in a counterfactual scenario where I would be in terrible pain for the rest of my life. Yet, clearly, I can't get myself to feel concerned about it. But I certainly do feel concerned about me in the years to come as I grow older. Not about the alternative scenarios, but about what my life will actually be.

Once I am senile and forgetful of my previous life, all my self will be whatever it is I can feel about myself in the moment. I might still have a sense of being 'me', even if it's reduced to a very narrow window of time. Memory may be gone, but I'll still feel something; I'll still have sensations. Sensations are only sensations if it still feels like I am having them, and thus they give a sense of 'me-ness' to my experience.

But if I were to be left only with bare consciousness, as might well become the case, I don't think 'me' will mean anything at all. I don't see how it could.

Universalism says this is the relationship among all conscious beings; my self-interest should extend to anything that is capable of suffering because that thing is me, for the same reason that my potential future self with late-stage dementia is me. Not because it has my personality or the residue of my life history, but because its experience is immediate and first-person, presented to it as here, mine, now. You will likely object that the biological identity of the being is preserved in the case of dementia but not for other conscious beings, but by now it should be obvious that such details are incidental and irrelevant to whether or not something is me. You could easily imagine a type of disease that would alter one's biological identity along with one's memory and personality traits, and easily imagine anticipating the symptoms of this disease with dread associated with YOUR suffering, not the suffering of a distinct other person who would come into existence when a sufficient number of biological and personal traits have been altered.

I think we empathise with ourselves in the future, and we can feel sorry for our future self, because we have this delusion of being a particular person, the very person that we will become. And, obviously, we can also understand how evolution would lead to this particular mechanism, in that it very likely improves our chances of survival.

I don't believe for a moment that we feel anywhere near as sorry for other people, except sometimes for people very close to us. We can certainly feel empathy for other human beings but not the kind of dread we can feel about our own future life. And I think the reason is that there's no continuity between ourselves now and other real people, just as there's no continuity with counterfactual versions of ourselves.


I believe that the sense of self gives rise to a delusion. The usefulness of words like 'me', 'you', etc. is that they refer to the biographical data of individual human beings. Biographical data are very useful in the context of social relations between human beings, so it's reasonable to assume that the use of words like 'me', 'him', etc. is essentially motivated by the usefulness of tracing biographical data (articulated with the use of proper names, surnames and things like physical appearance). And once we get used to seeing ourselves as 'me', we may have this delusion that 'me' refers to our subjective experience. However, this is not possible. We can't share our subjective experience.

But there's nobody to share it with. You are having it all; nothing is unexperienced. You have access to all subjective experience because all of it is happening to you. Each instance of it is bounded by the limitations of an individual nervous system, so each falsely appears to be the entirety of your experience, but this is an illusion for the reasons we've been talking about. I agree that it makes sense to use common terms to refer to different individual beings. Let's not get too caught up in language, because this isn't a linguistic point. You can say "there is only one person," or you can say "there are many people, and they are all me", and both will be accurate.

To begin with, I really don't think it makes sense to say that subjective experience is something happening to me. The 'me' I'm aware of doesn't experience anything. It's just a bunch of biographical data. Instead, I am aware of it, or at least I will routinely be aware of it. And I certainly have this illusion of being this 'me'. And you'll have noted that, even to explain this, I say, I can only say, "I" will have this illusion of being this me, even though the "I" is of course just as illusory as the "me" is.

Again, I can agree with the idea that my experience might be strictly identical to that of other people, or even that maybe it is in fact the same thing, i.e. just one thing somehow having distinct experiences, but I think it still doesn't make sense to use words like 'me' to refer to that. While we may want to understand consciousness, we just can't throw the baby of the physical world out with the bath water. And with the physical world comes evolution and language. And we understand language best by assuming there is a whole population of human beings who have evolved to use language. Using 'me' in the way you advocate can't make sense in this context because self is best understood as a useful feature of our linguistic and social interactions and words like 'me' are best understood as a means to refer to it, even if, again, we may be deluded that 'me' also encompasses experience as such.

I don't think, unfortunately, that we could really settle the issue, though. I think I understand the perspective you support. I just don't think it's a good idea and I fail to see the usefulness of it.
EB
 
I think we are veering off into semantic territory here, so there's a few things I want to clarify in attempting to salvage the discussion. I will just quote the bits that I have specific responses to:

It can only be misleading and confusing to use the word 'me' differently from what most people do.

There seems to be a disconnect here. What Zuboff is saying, what I am saying, is not that we should change the way we use words in order to achieve some useful goal. This is not a policy proposal. What is being said is that, in reality, there is no way to maintain the belief that I am identical to any particular organism whose existence is contingent upon any objective fact about it. It is impossible to rescue that belief under any definition of 'me' that narrowly excludes all conscious beings except one. So, before going any further, it must be clarified that universalism is not a strategy about language but a metaphysical claim about reality. That must come first, and the language can be tweaked later.

I offer this definition to distinguish what I mean by 'me' from what you may mean, without begging the question of universalism.

Me = the subject of first-person experience and the object of self-interested concern.

Even if thinking of it as something like a substance is an illusion, it doesn't change the definition; it is still the case that when there is experience, something has the impression of being the one that experiences it; when there is self-interest, something has the impression of being the object of that self-interest.

You can use the same word to talk about personal biography and such, but that's not what I'm talking about. It's fine; words can have more than one definition. My usage is appropriate for matters of subjective experience and survival in the metaphysical sense, while yours is a matter of psychology, self-image, and so on.

That's a different situation. Developing a mental disease doesn't interrupt the continuity of the person.

Continuity, whatever you might mean by it here, has nothing to do with whether something is me. If the continuity of my experience were interrupted, placed on pause for a billion years, and then resumed exactly as it would have been without being interrupted, I would not experience any discontinuity, nor would it make the person at the other end of a billion years someone other than me. And in practice, continuity is interrupted at least in the conscious sense every time you go to sleep, and could in principle be radically altered in a counterfactual scenario of your choosing (abruptly replacing all the atoms in your brain with new ones, for example), but curiously you say this:

No, because we think of the person in a counterfactual story as a different person.

I don't think you could possibly mean that the way it sounds. If you were narrowly missed by a speeding bus, wouldn't you imagine what it would be like for you had you stepped off the curb a second sooner? In the sense of being an object of imagination, the person in your mind is not literally you, but the point is you would be thinking about what being him would be like, not what watching him get smacked by a bus from across the street would be like.

Moreover, there is no difficulty in distinguishing counterfactual scenarios about me from those about someone else: if my brother didn't eat those clams, he wouldn't be sick this morning VERSUS if I had eaten the same clams as my brother, I would be sick too. You can't tell me that you regard both imaginary beings in the same way, with the same self-interest and anticipation of experience (unless you are already a universalist)!

I don't believe for a moment that we feel anywhere near as sorry for other people, except sometimes for people very close to us. We can certainly feel empathy for other human beings but not the kind of dread we can feel about our own future life. And I think the reason is that there's no continuity between ourselves now and other real people, just as there's no continuity with counterfactual versions of ourselves.

And that, precisely, is the mistake that universalism corrects. It's not that universalism is motivated by correcting it, as an invention that would make people nicer to each other if they pretend it's true, but it actually is true so I should behave accordingly. There is no continuity between the hemispheres of an epileptic patient's brain after the corpus callosum is severed to treat the occurrence of seizures. Each hemisphere will answer questions differently and make different decisions based on whatever input it is receiving. There is no reason to doubt that re-integrating the hemispheres would produce a clear impression in the patient that both sets of experiences were his, not because he checked to make sure they were continuous, but because they both had the necessary quality of immediacy, first-person sensation, and so on.

But the re-integration does not MAKE them both his, it simply REVEALS what was the case all along.

Dropping the attachment to tokens, individual instantiations, and particularities about objects is the only way to make sense of this, and the only way to make your existence actually inevitable, rather than virtually impossible under the ordinary view.

Using 'me' in the way you advocate can't make sense in this context because self is best understood as a useful feature of our linguistic and social interactions and words like 'me' are best understood as a means to refer to it, even if, again, we may be deluded that 'me' also encompasses experience as such.

I understand the concern about word usage, but it shouldn't be too difficult for people to start talking in this way. If everyone is me and all experience is mine, there might no longer be as much need for first- or second-person pronouns at all. I could just refer to organisms by name, including the one I used to think was the entirety of 'myself'. In this way, PyramidHead could convey the same information to EB that would have been conveyed had he said "I" and "you" in this sentence. But in matters of personal existence, subjective experience, survival over time, self-interest, etc. it might be more appropriate to talk directly about me as a whole, to remind myself that these concepts are not tied to any specific entity with a name.

One consequence of this that Zuboff offers is that retributive justice would no longer make any sense; I would just be hurting myself, hurting both the perpetrator and the victim of whatever transgression has occurred, just causing more pain. Consequentialist punishment would still make sense as long as individual organisms act independently of one another, purely as a deterrent, but there would be no justification that X "deserves to feel pain" for what X has done to Y.
 
And that, precisely, is the mistake that universalism corrects. It's not that universalism is motivated by correcting it, as an invention that would make people nicer to each other if they pretend it's true, but it actually is true so I should behave accordingly. There is no continuity between the hemispheres of an epileptic patient's brain after the corpus callosum is severed to treat the occurrence of seizures. Each hemisphere will answer questions differently and make different decisions based on whatever input it is receiving. There is no reason to doubt that re-integrating the hemispheres would produce a clear impression in the patient that both sets of experiences were his, not because he checked to make sure they were continuous, but because they both had the necessary quality of immediacy, first-person sensation, and so on.

But the re-integration does not MAKE them both his, it simply REVEALS what was the case all along.

I just disagree with that.

The reason that an epileptic patient with the corpus callosum severed gives two different sets of answers is that each half of his brain produces its own answers, independently of the other half, and this in turn is due to the fact that the two halves don't have the same set of memories, sensations, perceptions etc.

I'm also pretty sure that, assuming rejoining the two halves were somehow made possible, the result would be extremely messy, at least at first. The reunited brain would now have, at least initially, two different, non-integrated memories (assuming sensations and perceptions could at least be set right), and therefore two sets of recollections and autobiographical data. My guess is that the result would be too messy to be properly predicted in advance. It might be possible that over time the brain would sort out the mess and produce a reunited memory and sense of self, but it would take time, and meanwhile it would probably feel seriously baffling to the individual affected.

Anyway, again, I don't think we can decide who is right given what we do know.

Dropping the attachment to tokens, individual instantiations, and particularities about objects is the only way to make sense of this, and the only way to make your existence actually inevitable, rather than virtually impossible under the ordinary view.

I left this aspect of your position without reply because I have too little practice with probabilities. Intuitively, I think your position is wrong but there's nothing I could do to articulate any decent argument to that effect.

So, I will also encourage other people to address this point if they have the necessary knowledge.

So, I guess we'll have to agree to disagree.
EB
 
The reason that an epileptic patient with the corpus callosum severed gives two different sets of answers is that each half of his brain produces its own answers, independently of the other half, and this in turn is due to the fact that the two halves don't have the same set of memories, sensations, perceptions etc.

No disagreement here. But there is still just one person, one patient, who experiences the giving of answers both from the perspective of the right hemisphere and from the perspective of the left. These experiences are not integrated with each other, but both are his (who else's would they be?). That's all I meant to show: that connectedness with my other experiences is not necessary for an experience to be mine.

I'm also pretty sure that, assuming rejoining the two halves were somehow made possible, the result would be extremely messy, at least at first. The reunited brain would now have, at least initially, two different, non-integrated memories (assuming sensations and perceptions could at least be set right), and therefore two sets of recollections and autobiographical data. My guess is that the result would be too messy to be properly predicted in advance. It might be possible that over time the brain would sort out the mess and produce a reunited memory and sense of self, but it would take time, and meanwhile it would probably feel seriously baffling to the individual affected.

Well, of course it wouldn't be a cakewalk in an actual clinical scenario. But regardless of how baffling it felt, it would feel baffling to a single person, the patient, whose brain would be struggling to sort and reconstruct what happened to him, no? And if this experience belongs to just one person, the experiences of the individual hemispheres could not have belonged to more than one person; who "died" when the number was reduced from two to one? Is the current person the previous occupant of the left hemisphere or the right? These are nonsense questions that all lead us down Cartesian paths to substance dualism, and the only way to snuff them out is to acknowledge there was only one person all along. And it is this insight that makes continuity/connectedness a non-starter for a criterion of what qualifies as me.

Dropping the attachment to tokens, individual instantiations, and particularities about objects is the only way to make sense of this, and the only way to make your existence actually inevitable, rather than virtually impossible under the ordinary view.

I left this aspect of your position without reply because I have too little practice with probabilities. Intuitively, I think your position is wrong but there's nothing I could do to articulate any decent argument to that effect.

So, I will also encourage other people to address this point if they have the necessary knowledge.

So, I guess we'll have to agree to disagree.
EB

In essence, the situation is like being a winner of a lottery with more possible numbers than particles in the known universe; not only is it improbable that you should be a winner, but ultimately there is no explanation for why you hold a ticket in the first place. Why, in other words, was the specific configuration of matter that would bring you into being even something that was naturally occurring in the universe? There are a great many more possible configurations that just wouldn't work, but yours happened to be compatible with actual existence, granting you a ticket in the lottery of sperm and eggs that eventually created the zygote with your signature. Universalism just says: you have all the tickets, so it is no surprise that you exist.

By the way, universalism is also required to make sense of the anthropic principle. It seems like our universe is fine-tuned for life, but it has long been a go-to rebuttal that there might be infinitely many universes, only a small fraction of which can harbor life of any kind, and so this is just one of those few. But something is wrong with this reasoning. The universe being hospitable for life could be represented as winning a game of Russian Roulette 10 billion times in a row. If you were faced with that task, the fact that multiple others were also playing the same game of Russian Roulette in parallel would not help your odds! It would help the odds that SOME player would win the game, but it wouldn't make YOUR winning the game any more likely. In the same way, the existence of SOME universe that is habitable to life may have been inevitable given a multiverse, but the fact that it happened to be the one YOU were born in is extremely improbable from your point of view... unless you would have been there in any universe that had the right conditions for conscious life.
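The asymmetry in the Russian Roulette analogy can be sketched with a toy calculation. This is purely illustrative: the round count and player count are arbitrary small numbers chosen so the probabilities aren't vanishingly tiny, not a model of actual cosmology.

```python
# Toy model: each "player" (universe) independently survives one round
# of Russian Roulette with probability 5/6; surviving k rounds in a row
# therefore has probability (5/6)**k.
k = 20                            # rounds survived in a row (far fewer
                                  # than the 10 billion in the text)
p_you = (5 / 6) ** k              # chance that YOU survive all k rounds

n = 10_000                        # number of parallel players
p_someone = 1 - (1 - p_you) ** n  # chance that at least ONE player survives

print(f"P(you win)      = {p_you:.6f}")
print(f"P(someone wins) = {p_someone:.6f}")
# Increasing n drives P(someone wins) toward 1, while P(you win) stays
# exactly (5/6)**k -- which is the asymmetry the argument turns on.
```

With these numbers, your own chance of surviving is about 2.6%, while the chance that some player among the ten thousand survives is effectively certain; adding more players never changes the first figure.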
 
Speakpigeon said:
The reason that an epileptic patient with the corpus callosum severed gives two different sets of answers is that each half of his brain produces its own answers, independently of the other half, and this in turn is due to the fact that the two halves don't have the same set of memories, sensations, perceptions etc.

No disagreement here. But there is still just one person, one patient, who experiences the giving of answers both from the perspective of the right hemisphere and from the perspective of the left. These experiences are not integrated with each other, but both are his (who else's would they be?). That's all I meant to show: that connectedness with my other experiences is not necessary for an experience to be mine.

No, it's not just one person.

The one person you're talking about is only one in the eyes of the law, if that. The doctor and most people would recognise not a full person but a diminished one, with a non-standard, probably dysfunctional, personhood. The sort of thing that leaves one baffled and unable to decide what to make of it. Something more like two persons in one body.

Well, of course it wouldn't be a cakewalk in an actual clinical scenario. But regardless of how baffling it felt, it would feel baffling to a single person, the patient, whose brain would be struggling to sort and reconstruct what happened to him, no? And if this experience belongs to just one person, the experiences of the individual hemispheres could not have belonged to more than one person; who "died" when the number was reduced from two to one? Is the current person the previous occupant of the left hemisphere or the right? These are nonsense questions that all lead us down Cartesian paths to substance dualism, and the only way to snuff them out is to acknowledge there was only one person all along. And it is this insight that makes continuity/connectedness a non-starter for a criterion of what qualifies as me.

Each half of the brain does its own thing independently of the other. How could that be conceived as a single person? It might be an acceptable perspective from the point of view of the law, possibly, but it seems definitely incompatible with our intuitive notion of personhood. The patient will still be treated as one person but only for the practical reason that there would still be only one body. Close relatives will try to live the charade as best as possible, preferring to focus on the one body and ignoring as much as possible the two minds, so as to reduce the emotional impact of the situation for themselves.

Also, continuity is lost for both halves of the brain, since each half loses the other half, so to speak. And if the two could be put back together, I don't see how continuity could be restored starting from two different minds with two different memories. The one person that might emerge at the end of such a messy process would also come out of a period of confusion as to identity. This person may also find it impossible to reconnect with the personhood that existed prior to the splitting of the brain.


In essence, the situation is like being a winner of a lottery with more possible numbers than particles in the known universe; not only is it improbable that you should be a winner, but ultimately there is no explanation for why you hold a ticket in the first place. Why, in other words, was the specific configuration of matter that would bring you into being even something that was naturally occurring in the universe? There are a great many more possible configurations that just wouldn't work, but yours happened to be compatible with actual existence, granting you a ticket in the lottery of sperm and eggs that eventually created the zygote with your signature. Universalism just says: you have all the tickets, so it is no surprise that you exist.

For me, as for most people, the perspective is simple: I exist because my parents decided to make a baby, or in my case just because they had one, and I just happen to be that baby. I never asked myself how likely it was that this baby should be me. I believe most people are like me in this respect. I think the reason for that is fairly simple. We're just completely ignorant as to how the universe and nature work. And to be honest, even though I now know a lot more about the scientific view of the universe than when I was young, I still don't take it to be anything more than a theory when it comes to the real fundamentals of reality, however well supported this theory may look to specialists. The implication is that I personally have no view as to the prior likelihood of my own existence.

Today, the situation is a little bit different because I think of consciousness as possibly being essentially bare consciousness, i.e. something undifferentiated. I see who I am as just a given, an accident, just as a particular stone has to find itself in one particular location. I used to see the fact of being conscious as one person in particular as extremely puzzling. Why this particular person? Clearly, whatever is going on inside the mind of one person is necessarily 'localised', i.e. it necessarily has a local perspective on the universe. What was puzzling to me was to imagine the consciousness of other people. So many of them. Why was my consciousness not that of another person rather than mine? So, I guess, we arrived at broadly the same kind of puzzle. Yet, if I take normal consciousness to be essentially bare consciousness, and therefore undifferentiated consciousness, with somehow an access to a particular brain and to its information store to give it content and a sense of self, then there's no longer any puzzle. Being me is just an accident. I could have been anybody else, perhaps not even a human being, perhaps not even a sentient being. Perhaps not even a thing. Just a location in space-time. And that would be fairly inevitable. In a way, our perspectives are not so different. Except, I don't see the use of conceiving of bare consciousness as a 'me'. On the contrary. The me-ness belongs to my body. Consciousness, as essentially bare consciousness, is devoid of personhood.

That still doesn't explain much of the situation we're in, but it definitely contradicts your position, at least concerning "me-ness".

And again, I still don't see how to decide who may be right, if any one of us.
EB
 