
The Teletransporter Problem

That's exactly the same thing as saying the timing of killing the button-pusher, and nothing else, decides what will happen after he dies. Any gap, no matter how small, will contain information not captured by the duplicate. A millisecond delay between duplication and disintegration is the difference between the original person reaping the benefits of $1 million USD and ceasing to exist forever. I don't really see how that concept holds up to scrutiny. It's much more parsimonious to say that, regardless of when the original is terminated, he can never be aware of what is going on in the duplicate's sensory environment because they are two different beings.

So if I go out on a pub-crawl, and get out of my tree, then come home and pass out; and when I wake up the following afternoon, I have no recollection whatsoever of anything that happened after 10pm, despite having pictures, witness evidence, a road cone in my bed and some unexplained bruises as proof that I did stuff between buying another round at 10pm and collapsing into bed at 5am, then you are saying that the discontinuity in my consciousness means that I am not the same person any more?

No; it is not SOLELY the discontinuity of consciousness that determines whether you are the same, and I never said it was. The post you are responding to was a criticism of Tom's position, which as I understand it boils down to: if no information is lost between duplication and destruction, there will be a singular consciousness that does not end upon the disintegration of the button-pusher, but if the button-pusher experiences anything on Earth before being destroyed, his duplicate will not remember it, so in that case there are two streams of consciousness and one of them has died. Personally, I don't think the information gap is relevant at all. Whether the duplicate is created simultaneously or 10 years later, the original person's consciousness depends upon having a functional nervous system. If and when that nervous system is destroyed, the original consciousness is no more.

Because that scenario leads to exactly the same result. bilby A gets in the machine and presses the button; then bilby A goes for a cup of coffee, chats up the pretty transporter technician, and then gets disintegrated. bilby B steps out of the transporter station on Mars with no recollection of the coffee or the transporter technician.

Perhaps they have a Skype chat. bilby A spends a few minutes cursing and swearing at bilby B for murdering him and stealing his money, wife and bank account; and then dies. bilby B takes a trip to Mars, gets an abusive Skype call from someone who looks familiar and apparently has a seriously bad attitude, and then gets on with his life.

That bilby B has no recollection of anything bilby A does is not particularly important; bilbys A and B have huge gaps in their knowledge of what bilby did before he ever set foot in the teleportation corporation building.

The teleporter takes one person, splits him into two indistinguishable parts whose only difference is their location, and then kills one of them. The survivor is the same person who walked into the transporter to begin with - and so is the deceased.

I'm with you so far. I will repeat that I am not claiming either one has more authority to say they are "more" bilby than the other. However, to stop the discussion at that point omits the salient fact that two identical people are still two, not one. They are tokens of the same type. I'm arguing that consciousness resides at the token level, not the type level.

Discontinuity of consciousness and loss of memory are already things we have experience of. Nothing that the machine does is distinguishable from these familiar phenomena.

I disagree on the second point. During normal sleep, brain activity does not shut down completely, nor does the brain decompose to a fine powder. The potential for the brain's consciousness (described by the arrangement of its neuronal architecture) to re-activate in the same brain is preserved.

My current model assumes that an individual consciousness is "tethered" to the brain in which it developed. The subjective experience of eating a mango is only experienced by the consciousness whose brain is connected to the taste buds of the same body that contains the brain. I do not directly experience anything that happens to your body the way that I experience what happens to mine. So far as we know, this is an inevitable consequence of whatever organizational complexity gives rise to self-awareness. It is possible to imagine this not being the case, so it is more than a semantic problem.

The variation scenario amply demonstrates that two people who can each legitimately say "I am bilby" will exist at a point in time between the duplication event and the disintegration event. However, since they are not only experiencing different things simultaneously, but those things are mutually exclusive, it must be true that it is like one thing to be the bilby on Earth and like something else to be the bilby on Mars. To bridge this fact to my conclusion, recall that we can only actually experience something by knowing firsthand what it is like to do it. Thus the bilby on Earth will never experience what it is like to have an extra $1 million USD, since his consciousness remains tethered to his brain, not to physically indistinguishable brains that may exist elsewhere in the universe.

And as there is no material difference between the variation scenario and the synchronized one, this must be true of the synchronized scenario too. Even if you are unconscious while being scanned, replicated, and destroyed, your consciousness will no more be able to "wake up" in the body of your double than it is able to wake up in another body tomorrow morning.*

*Please note that you are completely correct to say that the conscious experience of the duplicate will feel exactly as if he has gone to sleep on Earth and woke up on Mars. But that experience is not accessible to the brain of the person who was destroyed, even if their brains were perfectly identical right up to the moment of destruction.
 
The main point is that if you could assume with certainty that the machine will work properly, you wouldn't have any rational reason for not pressing the button, since there would be no difference from actual life except the money.

What does 'work properly' mean in this context? If the machine works as described, the person who pushes the button will die from being dematerialized. The duplicate will get to spend the money. If he doesn't press the button, the duplicate is never created, so nobody gets the money. Either way, from the perspective of the guy pushing the button, there is no possible way he (the button-pusher) can get the money.
Sure, but again, rationally, there's no difference from actual life. If you see a difference between Joe on Earth and an identical Joe on Mars, then you should also see a difference between Joe now and the "same" Joe a wee Planck-time interval later, and an even bigger difference a few seconds, minutes or months later. We entertain this fiction that we are the same person through the passage of time, but we are not. Unless you could accept that our clone in a different place would also be the same person as us; but you just dismissed that.

Again, that's the reasoning on a rational level; but you cannot really describe human beings as rational animals, so for most people, I guess, the reaction in the situation set out in the OP would be not to press the button.
EB
 
No, not at all. It has to do with whether or not any information is lost between the original and the copy. If there's not, then there's no difference between the original continuing as the original and the original continuing as the copy. The guy is resurrected exactly as he was and just continues. If the original sticks around, then the copy is a former version of himself and not a continuation of himself.

That's exactly the same thing as saying the timing of killing the button-pusher, and nothing else, decides what will happen after he dies. Any gap, no matter how small, will contain information not captured by the duplicate. A millisecond delay between duplication and disintegration is the difference between the original person reaping the benefits of $1 million USD and ceasing to exist forever. I don't really see how that concept holds up to scrutiny.
You are moving the goalposts. The OP said the "brain of your Replica is exactly like yours".

It's much more parsimonious to say that, regardless of when the original is terminated, he can never be aware of what is going on in the duplicate's sensory environment because they are two different beings.
Yes, and the two of them will also be different from the guy who pressed the button. Both the place and the time will be different for both of them relative to the guy who pressed the button, since even the place where the button will have been pressed will have changed, however slightly. You never step twice into the same river, and all that.
EB
 
If by "different" you mean "exactly the same" then I agree with you.

As I say in my OP, I do not dispute that they are both the same person, but there is a separate concern of whether they share the same consciousness. Clearly, since it is possible for one being to experience life on Mars while the other waits to be disintegrated, they are capable of experiencing different things, and this would be true even if there were no time delay. Since they are capable of experiencing mutually exclusive things at the same time, they cannot possibly be the same consciousness, so there is no mechanism to establish continuity between them.
That's also true of the "same" person at two different times: they will have different conscious experiences.

You just want to keep up the fiction that you are the same person from yesterday, day after day after day. Enjoy your trip.
EB
 
*Please note that you are completely correct to say that the conscious experience of the duplicate will feel exactly as if he has gone to sleep on Earth and woke up on Mars. But that experience is not accessible to the brain of the person who was destroyed, even if their brains were perfectly identical right up to the moment of destruction.
Yes, but the conscious experience you have today was not (and also is not) accessible to the person you were yesterday.

In fact, the conscious experience of the person you were yesterday is also not accessible to you today. Only what we call "a memory" of it is accessible, and that's definitely not the same thing, as all old people I'm sure know.
EB
 
You might have put all of those into one post, EB, and saved a page of the thread, especially as each reply is essentially a repetition of the same point over and over. It creates the impression that you have a lot to say, when you actually aren't saying much (though you do make a point worth considering). Try keeping it concise next time.

What I can glean from your replies is that you disagree that there is such a thing as a psychologically continuous first-person perspective, which emerges from the physical interactions taking place within a single person's nervous system. To boil it down systematically, I claim that we can coherently speak of a consciousness, and describe it in terms of what it is like to be a certain entity and not another entity. What you are pointing out is that, while I have been thinking in terms of spatially separate entities at a given time, it is also like something to be a certain entity at time \(t_1\) versus being the same entity at time \(t_2\). This is an interesting criticism of my position, and I will have to consider what it means. It reminds me of Derek Parfit's conclusion that there is really no fully fleshed-out explanation for personal identity that preserves the singularity of each consciousness. While tempting, I can't reconcile that explanation with the impression that I am only one consciousness.

Certainly, people are not ordinarily inclined to thinking this way when they talk about moral responsibility; if I poison you today and it takes 48 hours to finally kill you, no jury would be convinced by my lawyer saying the person who was poisoned is not the same as the person who died of the poison 48 hours later. More germanely, it would not be comforting to tell whoever is administered this slow-acting poison: "Don't worry. The consciousness of the person who will die in a couple of days is not the same consciousness as yours. It just seems that way because of the illusion created by memories."

So, perhaps it doesn't make a difference if a consciousness located in a particular brain changes over time in matters of life and death, which is what the teletransporter problem is all about. If it cannot be coherently said that the "me" of this instant is the same consciousness as the "me" of 1 Planck-time ago, as you suggest, then there is no such thing as psychological continuity anyway. Not only should a rational person have no problem entering the cubicle and pushing the button, she should also have no problem drinking a gallon of bleach or jumping off a bridge. Thus confirming the conclusion of my argument: pushing the button is no different from suicide. All you have added to my observation is the strange notion that moment-to-moment existence is no different from suicide either, which is not a position I am inclined to take seriously enough to discuss one way or another.
 
Your interpretation of what I posted is slightly off. I have no difficulty with "continuity", for example. Probably you mean something other than how the word should be used here. I mean continuity as in mathematics, so I don't see why the fact that a function, for example, takes different values would make it discontinuous. I'm perfectly comfortable with the idea that our mind, while being different at different points in time, is perfectly continuous over a period of time. I think that's at least a rational assumption to make. The point of my argument, though, is that there would also be continuity between the mind of the person pressing the button and the mind of the perfect double created on Mars. If this is the case, then who is to say which is the real mind of the person?
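For reference, the mathematical notion being invoked here is the standard (epsilon-delta) sense of continuity: a function takes different values at different points and is nonetheless continuous, so long as nearby inputs give nearby outputs:

\[
f \text{ is continuous at } t_0 \iff \forall \varepsilon > 0 \; \exists \delta > 0 : |t - t_0| < \delta \implies |f(t) - f(t_0)| < \varepsilon
\]

On this reading of the analogy, the state of the mind at time \(t\) plays the role of \(f(t)\): taking different values at \(t_1\) and \(t_2\) does not by itself break continuity.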

Second, when I said that there's no difference from actual life, I wasn't talking about "psychological" continuity but about physical continuity. You seem to have missed my point about the distinction between rationality and what people actually do, which is not rational most of the time and certainly wouldn't be in the scenario considered. At the moment when the person has the possibility of pressing the button, all sorts of considerations will be processed by his brain, not just the rational elements relative to the situation. For example, we project ourselves, psychologically, into the future, and on these occasions we often experience various fears and worries which are based largely on unconscious processes. Because these processes are unconscious, there is no way we can assess them in a fully rational way. The consideration of our death certainly comes into that category, even though none of us has ever experienced death. So how do we know death is such a bad thing? And if we don't know, is it really rational to worry about it? Yet we are going to worry (or in fact not, but that would be a different discussion).

So, you aren't prepared to argue your case properly because you insist on a "psychological" rather than a rational solution to the OP, even though you never actually said so in the OP. And if there is no possible rational discussion about it, then your OP is biased. You just want to parade your beliefs and opinions about the ontology of the world and baulk at engaging in a rational discussion.

What I can glean from your replies is that you disagree that there is such a thing as a psychologically continuous first-person perspective, which emerges from the physical interactions taking place within a single person's nervous system. To boil it down systematically, I claim that we can coherently speak of a consciousness, and describe it in terms of what it is like to be a certain entity and not another entity.
I'm fine with the idea of a "psychologically continuous first-person perspective". Said like this, however, I don't see how this would invalidate the idea of continuity between the person pressing the button and his double created on Mars. If you want to fall back on the argument that there's not a certain kind of physical continuity between them, then you should have explained this in the OP, and that would have saved virtual space and our actual time. For example, if you want to insist that "a single person's nervous system" is crucial to a "psychologically continuous first-person perspective", then you shouldn't have bothered asking our views, because it answers your question straight away.

Another aspect is that the first-person perspective is fine by me, and the idea of the physical interactions taking place within a single person's nervous system is fine too, but you don't know how the two relate to each other. For example, if the machine creates an exact double of a person, there is no reason to assume that the original and the double wouldn't have exactly the same first-person perspective, since their brains would be identical at least at the point of creation. And if we assume such a machine could exist, we can also assume that it could keep the original and the double strictly identical to each other for, say, a lifetime. These people would have the same first-person perspective.
EB
 
if we assume such a machine could exist, we can also assume that it could keep the original and the double strictly identical to each other for, say, a lifetime.
Eh, no we cannot. The reason they are identical is that they are copies. As soon as the copying is finished (or even during copying), the two instances will become more and more different.
 
Your interpretation of what I posted is slightly off. I have no difficulty with "continuity", for example. Probably you mean something other than how the word should be used here. I mean continuity as in mathematics, so I don't see why the fact that a function, for example, takes different values would make it discontinuous. I'm perfectly comfortable with the idea that our mind, while being different at different points in time, is perfectly continuous over a period of time. I think that's at least a rational assumption to make. The point of my argument, though, is that there would also be continuity between the mind of the person pressing the button and the mind of the perfect double created on Mars. If this is the case, then who is to say which is the real mind of the person?

How is psychological continuity between two spatially separate beings possible without telepathy? Or do you mean something else by psychological continuity?

Another aspect is that the first-person perspective is fine by me, and the idea of the physical interactions taking place within a single person's nervous system is fine too, but you don't know how the two relate to each other. For example, if the machine creates an exact double of a person, there is no reason to assume that the original and the double wouldn't have exactly the same first-person perspective, since their brains would be identical at least at the point of creation.

"Same" is a word that can mean two things in this context:

1. Identical but separate (e.g. two identical reproductions of the Mona Lisa that are indistinguishable, but still two separate objects)
2. Literally one entity described in two ways (e.g. my mother is identical to my female biological parent, but there is just one being that satisfies both predicates)

If the original and the double have identical first-person perspectives by the first definition, my point still stands. It is still the case that one cannot directly experience what the other is experiencing; it just so happens that they are both experiencing the same things due to having the same brain states. Even if this relationship is preserved indefinitely, killing one of them will still reduce the number of first-person perspectives by 1, and it would be true to say one of them has died.

If you are saying they are actually the same consciousness, in the manner that my mother is the same entity as my female biological parent, I would like to know how you came to that conclusion. The teletransporter seems a lot more analogous to making reproductions of the Mona Lisa, where the relationship of the duplicate to the original is "identical but separate" rather than "one being with multiple descriptions".
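As an aside, the "identical but separate" sense maps onto a distinction familiar from programming: value equality versus object identity. A minimal Python sketch, with a hypothetical Painting class standing in for the Mona Lisa example:

from dataclasses import dataclass

@dataclass
class Painting:
    title: str
    description: str  # stand-in for a complete physical description

original = Painting("Mona Lisa", "oil on poplar")
duplicate = Painting("Mona Lisa", "oil on poplar")

# Sense 1: indistinguishable attribute for attribute, yet two objects.
print(original == duplicate)  # True
print(original is duplicate)  # False

# Destroying one token leaves the other intact: the count of tokens
# drops by 1 even though the type survives.
del duplicate
print(original)  # Painting(title='Mona Lisa', description='oil on poplar')

Here == tests whether two separate objects have identical contents (sense 1), while "is" tests whether two names refer to literally one object (sense 2); the teletransporter, on this view, produces the first relation but never the second.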
 
if we assume such a machine could exist, we can also assume that it could keep the original and the double strictly identical to each other for, say, a lifetime.
Eh, no we cannot. The reason they are identical is that they are copies. As soon as the copying is finished (or even during copying), the two instances will become more and more different.
Are you saying that a machine could make exact copies of a whole human being or not?

Anyway, as I said and as you didn't understand, if we assume such a machine could exist, we can also assume that it could keep the original and the double strictly identical to each other for, say, a lifetime.
EB
 
Even if it could, I still maintain that two identical people who experience identical things are still two people, each individually vulnerable to being killed (which would, by definition, break the symmetry between them). The one who is killed wouldn't experience anything anymore.
 
How is psychological continuity between two spatially separate beings possible without telepathy? Or do you mean something else by psychological continuity?
I mean that psychological continuity is realised by the machine producing an exact copy of a mind. The original mind up to the point of pressing the button, and the copy from that point onwards, make up a psychologically continuous entity. There is no point in time at which you could find a psychological discontinuity, even though there's a difference in location. That's what the machine does. I'm not sure what you mean, but we are not talking about spatial continuity.

Another aspect is that the first-person perspective is fine by me, and the idea of the physical interactions taking place within a single person's nervous system is fine too, but you don't know how the two relate to each other. For example, if the machine creates an exact double of a person, there is no reason to assume that the original and the double wouldn't have exactly the same first-person perspective, since their brains would be identical at least at the point of creation.

"Same" is a word that can mean two things in this context:

1. Identical but separate (e.g. two identical reproductions of the Mona Lisa that are indistinguishable, but still two separate objects)
2. Literally one entity described in two ways (e.g. my mother is identical to my female biological parent, but there is just one being that satisfies both predicates)

If the original and the double have identical first-person perspectives by the first definition, my point still stands. It is still the case that one cannot directly experience what the other is experiencing; it just so happens that they are both experiencing the same things due to having the same brain states. Even if this relationship is preserved indefinitely, killing one of them will still reduce the number of first-person perspectives by 1, and it would be true to say one of them has died.

If you are saying they are actually the same consciousness, in the manner that my mother is the same entity as my female biological parent, I would like to know how you came to that conclusion. The teletransporter seems a lot more analogous to making reproductions of the Mona Lisa, where the relationship of the duplicate to the original is "identical but separate" rather than "one being with multiple descriptions".
I was talking about the first case as far as naive materialism is concerned. There are two bodies, one on the Earth, one on Mars.
So, in this case, being able to tell what the other is experiencing is irrelevant. If the machine does keep the two minds identical, then they are experiencing exactly the same thing; whether they would know it is irrelevant. I guess it's a different scenario altogether. There would be psychological continuity as I defined it, and no rational reason to prefer being one guy rather than the other. Even the money is not an issue here.

However, I can also support the second option, because I don't actually know that two strictly identical objects are not somehow ipso facto the same object, if necessary in two different places as it were. Again, I think you made assumptions without telling us in the OP that you did. So, I'm making a different assumption, because I'm replying to the OP, not to what you had in mind at the time without telling us. If you had, we would have asked you to justify your assumptions and you would have been unable to do so, which may explain why you didn't specify them in the OP to begin with.
EB
 
Even if it could, I still maintain that two identical people who experience identical things are still two people, each individually vulnerable to being killed (which would, by definition, break the symmetry between them). The one who is killed wouldn't experience anything anymore.
But then, for the one still alive, there would no longer be a choice between being one or the other.

And if you break the symmetry, then of course one of them is probably the one you should be, but that's not necessarily the original.

And neither of them could possibly guess whether he or the other would die first at some point. As long as they don't know of any actual difference, there's no rational way for them to prefer being one rather than the other.

In fact, the symmetry would also immediately be broken if one of them, or both, knew which one was going to die first. But I assumed they would be kept identical by the machine, so this scenario is excluded.
EB
 
Eh, no we cannot. The reason they are identical is that they are copies. As soon as the copying is finished (or even during copying), the two instances will become more and more different.
Are you saying that a machine could make exact copies of a whole human being or not?

Anyway, as I said and as you didn't understand, if we assume such a machine could exist, we can also assume that it could keep the original and the double strictly identical to each other for, say, a lifetime.
EB

The machine is a hypothetical machine.
The machine makes perfect copies, still by hypothesis.
Why on earth would such a machine be able to keep the two instances identical??
Yes, we can posit such a hypothetical machine, but that is a different function.
 
But then, for the one still alive, there would no longer be a choice between being one or the other.
There was never a choice. Each one is itself.

But, like electrons, they may differ only in where they are located, with all else having been determined as identical by a defining event. Self is a construct that isn't relevant, since it isn't operable when each is identical. You seem to be saying that location is self.
 
There was never a choice. Each one is itself.

But, like electrons, they may differ only in where they are located, with all else having been determined as identical by a defining event. Self is a construct that isn't relevant, since it isn't operable when each is identical. You seem to be saying that location is self.

What? There are two separate bodies. Agree?
 
Are you saying that a machine could make exact copies of a whole human being or not?

Anyway, as I said and as you didn't understand, if we assume such a machine could exist, we can also assume that it could keep the original and the double strictly identical to each other for, say, a lifetime.
EB

The machine is a hypothetical machine.
The machine makes perfect copies, still by hypothesis.
Why on earth would such a machine be able to keep the two instances identical??
I don't know. I didn't say that it should or would.

Yes, we can posit such a hypothetical machine, but that is a different function.
Sure, but all I said was that we can assume the machine could also do it. You can assume otherwise too; that's fine.
EB
 
The machine is a hypothetical machine.
The machine makes perfect copies, still by hypothesis.
Why on earth would such a machine be able to keep the two instances identical??
I don't know. I didn't say that it should or would.

Yes, we can posit such a hypothetical machine, but that is a different function.
Sure, but all I said was that we can assume the machine could also do it. You can assume otherwise too; that's fine.
EB

No, you cannot, since that would violate the specified thought experiment.
 