
Reasons to disbelieve the Axiom of Choice

Swammerdami

Most readers of this subforum are familiar with the Axiom of Choice, for which a minimal statement might be
∅∉S ⇒ ∃f : ∀c∈S, f(c)∈c
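
Spelled out a little more fully (the same statement as above, with the quantifier over S and the domain of f made explicit):

∀S [ ∅ ∉ S ⇒ ∃f : S → ⋃S such that ∀c ∈ S, f(c) ∈ c ]

In words: for every collection S of nonempty sets, there is a function f that picks one element f(c) out of each member c of S.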

There is a famous puzzle about 100 prisoners each trying to guess his own hat color. Prisoner #19, for example, can see the hat colors of #1, #2, ... #18, and can hear the guesses of prisoners #100, #99, ..., #20 as they try to survive.

The puzzle has nothing to do with the Axiom of Choice but try to solve it before reading on. Acting together, the prisoners can devise a plan such that at least 99 of them will live. (Best is to assume that if 99 or more guess correctly, then they ALL live — with that rule change we need not depend on altruism.)
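
(Spoiler for the two-color warm-up, for anyone who would rather check a solution than find one: below is a minimal Python sketch of the standard parity strategy; the function name is made up. Prisoner #100 guesses first and simply announces the parity of the 99 hats he can see, and every later prisoner can then deduce his own hat, so at most one prisoner, #100 himself, can be wrong.)

import random

def run_parity_strategy(hats):
    # hats[i] is the color (0 or 1) of prisoner i+1's hat.
    n = len(hats)
    guesses = [None] * n
    # Prisoner n guesses first: he announces the parity (XOR) of the n-1 hats he sees.
    guesses[n - 1] = sum(hats[:n - 1]) % 2
    # Prisoner k (for k = n-1 down to 1) XORs that announcement with the guesses he has
    # heard from prisoners k+1..n-1 (all of which are correct) and with the hats of
    # prisoners 1..k-1 that he can see; what remains is exactly his own hat color.
    for k in range(n - 1, 0, -1):
        parity = guesses[n - 1]
        for j in range(k, n - 1):            # guesses of prisoners k+1 .. n-1
            parity ^= guesses[j]
        parity ^= sum(hats[:k - 1]) % 2      # hats of prisoners 1 .. k-1
        guesses[k - 1] = parity
    return guesses

hats = [random.randint(0, 1) for _ in range(100)]
guesses = run_parity_strategy(hats)
print(sum(g == h for g, h in zip(guesses, hats)))   # prints 99 or 100 every time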

Now let's make the problem much more difficult. Instead of two hat colors, there will be 1000 colors. (We could make it countably infinite instead.) Instead of just 100 prisoners, there will be infinitely many: a prisoner #N for each natural number N. AND each prisoner will NOT be able to hear the guesses or gunshots before his guess: When it is one prisoner's turn to guess he will have zero information to go on except the (usually infinitely many) hat colors he can see, along with any policy he pre-agreed with his mates. He has only 1 chance in 1000 to survive, right? For every billion prisoners, about 999 million will die, right? Wrong! IF you accept the Axiom of Choice.

Postulating the Axiom of Choice, can the prisoners devise a policy such that at most a finite number of them misguess? Most of the prisoners will be looking at an infinite number of hats. Play along please; treat this as a thought experiment where each prisoner has a magic supercomputer that copes with infinite and even uncountable sets.

The answer is, Yes! — an answer so mind-boggling that it might make you assume the Axiom of Choice is clearly false! I will post the very simple solution in a few days if nobody beats me to it.

There are other paradoxical conclusions that can be derived from the Axiom of Choice.

(Note that the Axiom of Choice is never needed for FINITE sets. If I write "If there are articulable mythologism models, THEN you should be able to choose one to actually, well, articulate" I am NOT dependent on the Axiom of Choice as long as I insist that the articulation of the model require no more words than are in all the volumes of the Library of Congress.)
 
So, my understanding is that for every set of nonempty partitions of a set, some nonempty subset may be selected from each, such that they retain features of the partition?

So if I have

A,B,C < D,E,F < J,K,L, then for any selection of, e.g., A, E, L, I know that the relational property stipulated holds, even if A, B, and C aren't ordinal between each other but only ordinal in relation to D,E,F. Ya?

I'm pulling this from memory, and trying to make sure I understand it right.
 
I already assume the axiom of choice is clearly false. If it were true then you could well-order the reals.
 
So, my understanding is that for every set of nonempty partitions of a set, some nonempty subset may be selected from each, such that they retain features of the partition?

So if I have

A,B,C < D,E,F < J,K,L, then for any selection of, e.g., A, E, L, I know that the relational property stipulated holds, even if A, B, and C aren't ordinal between each other but only ordinal in relation to D,E,F. Ya?

I'm pulling this from memory, and trying to make sure I understand it right.

This is very unclear to me, too unclear for me to address. Anyway I'm sure that I cannot improve on the discussions of the Axiom found at Wikipedia, Stackexchange, or via Google.

I already assume the axiom of choice is clearly false. If it were true then you could well-order the reals.

One Professor of Mathematics at the University of Illinois, Jerry L. Bona, takes a related but more extreme view:
Jerry Bona said:
The Axiom of Choice is obviously true, the Well-ordering theorem is obviously false; and who can tell about Zorn's Lemma?
The punchline is that all three statements — AC, well-ordering principle, and Zorn's Lemma — have been proven to be equivalent to each other!
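
For anyone who wants the three statements side by side, rough paraphrases (not the precise ZF formulations) are:

Axiom of Choice: for every set S of nonempty sets there is a function f with f(c) ∈ c for every c ∈ S.
Well-ordering theorem: every set can be given a total order in which every nonempty subset has a least element.
Zorn's Lemma: if every chain in a nonempty partially ordered set has an upper bound in that set, then the set contains at least one maximal element.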
 
So, if "red, green, and blue" have a shared property and "pineapple, gourd, rug" have a shared property, then the property relationship is reflected by red:pineapple as much as it is green:rug.

I'm just doing my best to discuss the Axiom of Choice, and validate it against someone who CAN speak rather than something that cannot.
 
So, if "red, green, and blue" have a shared property and "pineapple, gourd, rug" have a shared property, then the property relationship is reflected by red:pineapple as much as it is green:rug.

I'm just doing my best to discuss the Axiom of Choice, and validate it against someone who CAN speak rather than something that cannot.

I'm sorry I'm no help, but this is very unclear to me. For starters, I don't get what "shared property" has to do with anything. Is your example aimed at ordering or at choice?
 
So, if "red, green, and blue" have a shared property and "pineapple, gourd, rug" have a shared property, then the property relationship is reflected by red:pineapple as much as it is green:rug.

I'm just doing my best to discuss the Axiom of Choice, and validate it against someone who CAN speak rather than something that cannot.

I'm sorry I'm no help, but this is very unclear to me. For starters, I don't get what "shared property" has to do with anything. Is your example aimed at ordering or at choice?
More about choice, but my understanding is that ordering is part of it. Fill 10 buckets. Put them in a row. Take something out of each bucket, and put those items in their own row, in the same order as the buckets they came from.

No matter what you take out of the bucket, the list of things you chose to take out will have all the same relational properties as between the buckets they came from.
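
For what it's worth, in the finite-bucket picture no axiom is needed at all, because the choice can be written down explicitly. A minimal Python sketch (bucket contents invented for illustration; this shows only the choosing, not the ordering question raised above):

# Finitely many nonempty buckets: an explicit rule (say, take the minimum
# element of each) already gives a choice function, so no Axiom of Choice is needed.
buckets = [
    {"red", "green", "blue"},
    {"pineapple", "gourd", "rug"},
    {3, 1, 4},
]
choice = [min(bucket) for bucket in buckets]   # one explicit pick per bucket
print(choice)

The Axiom only earns its keep when there are infinitely many buckets and no describable rule for picking from them.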
 
Does it have to do with the fact that I can know that because I am a prisoner with a hat, that the set that has the infinitesimally different probability caused by my selection that this is the color of my hat?

Or to say "this is how many hats I see" and then the next prisoner has a different answer and between them, they see the infinitessimal difference?

This even if you have infinites, if you have a selection made out of those infinites you can look at the difference? Just a naive swing at it.

...As long as you have a computer that can calculate on infinity.
 
The puzzle has nothing to do with the Axiom of Choice but try to solve it before reading on. Acting together, the prisoners can devise a plan such that at least 99 of them will live. (Best is to assume that if 99 or more guess correctly, then they ALL live — with that rule change we need not depend on altruism.)

Now let's make the problem much more difficult. Instead of two hat colors, there will be 1000 colors. (We could make it countably infinite instead.) Instead of just 100 prisoners, there will be infinitely many: a prisoner #N for each natural number N. AND each prisoner will NOT be able to hear the guesses or gunshots before his guess: When it is one prisoner's turn to guess he will have zero information to go on except the (usually infinitely many) hat colors he can see, along with any policy he pre-agreed with his mates. He has only 1 chance in 1000 to survive, right? For every billion prisoners, about 999 million will die, right? Wrong! IF you accept the Axiom of Choice.

Postulating the Axiom of Choice, can the prisoners devise a policy such that at most a finite number of them misguess? Most of the prisoners will be looking at an infinite number of hats. Play along please; treat this as a thought experiment where each prisoner has a magic supercomputer that copes with infinite and even uncountable sets.
So I solved the original two-color problem; but I don't understand your presentation of the enlarged problem. If the infinitely many prisoners are lined up analogously to the 100 prisoners, then either all of them see infinitely many hats or else all of them see finitely many, depending on which direction the line runs. What pattern of who can see whom are you assuming that leaves some of them seeing infinitely many hats and some of them seeing finitely many?
 
That’s what I was wondering. A rolling landscape, perhaps? Why make it simple … and how does it keep wrong guesses finite in any event?
 
The language in the OP seems wrong. "Believe" isn't the right word for axioms. I mean, I believe the axiom of choice has some utility. But then, I believe most axioms have some utility. They are just rules for producing mathematical structures. You can use quite useless axioms to generate interesting but useless structures.
 
You've heard that old saw about mathematicians being Platonists on weekdays and formalists on weekends?

If you want a philosophically acceptable account of "believe", how about this? To "believe" an axiom is to regard some model of the theory that satisfies the axiom as adequately matching what one intuitively means by the words that appear in the theory. So when I say I don't believe the axiom of choice, I'm not denying that Gödel exhibited a model of ZF set theory in which AC is true; I'm just saying that in the model he exhibited, the term "powerset" is used in a funny way that doesn't match my intuition about what it means to be a powerset, and I anticipate that something similar will be the case with any other model of ZFC anyone comes up with.
 
Have you looked into the Langlands Program yet? I've not seen whether that particular system of constructions and translations acknowledges AC or not, but many of its conjectures have been proven, and it allows translations between broadly acknowledged swaths of math.
 
So I solved the original two-color problem; but I don't understand your presentation of the enlarged problem. If the infinitely many prisoners are lined up analogously to the 100 prisoners, then either all of them see infinitely many hats or else all of them see finitely many, depending on which direction the line runs. What pattern of who can see whom are you assuming that leaves some of them seeing infinitely many hats and some of them seeing finitely many?

Ooops; you're right! I numbered the prisoners backwards. Let prisoner #1 be first to guess. Prisoner #N can see the hats of #(N+1), #(N+2), ...

"Obviously" 999 million of the first billion will guess wrong, 999 million of the second billion will guess wrong, and so on. But all but a finite number will guess right?!?!?!?!?!?!
 
So, each prisoner will be entirely ignorant in their choice? They see hats. There are a couple important colors here. There is the most common and the least common.

I am unsure how policy affects this at all, as there's no way to communicate information between prisoners. Each prisoner sees a distribution that is perfectly random, and if one wishes to commit the gambler's fallacy, they can assume that the distribution will be off by one: their hat color. Thus if they guess the least common hat color, they are the "selection", the choice and subset which would impact the whole.

The issue is that the gambler's fallacy should be recognized in that the roll on... (Backwards from infinity to 1)... is still going to be random.
 
Ooops; you're right! I numbered the prisoners backwards. Let prisoner #1 be first to guess. Prisoner #N can see the hats of #(N+1), #(N+2), ...

"Obviously" 999 million of the first billion will guess wrong, 999 million of the second billion will guess wrong, and so on. But all but a finite number will guess right?!?!?!?!?!?!
So they all see infinitely many hats, and none of them hear the guesses or gunshots before his guess, and each has zero information to go on except the infinitely many hat colors he can see, along with any policy he pre-agreed with his mates. Who lines up the prisoners? Is the pre-agreed policy allowed to include who will go where in the line, or will the guards make the prisoners stand in an arbitrary order? If the guards assign the positions then it would appear that no prisoner knows his own number, since that would be nonzero additional information.
 
Whether the prisoner knows his own number or not turns out not to matter to the solution.
Similarly, the solution works whether the hat assignments are random or contrived.

(This curious paradox is "well-known." I've not posted any link, but Google will find lots.)

The solution DOES assume that each prisoner is able to contemplate the countably infinite sequence of hats — already impossible in the real world — as well as the uncountable set of such sequences. But just replace each "prisoner" with a mathematical abstraction: Doesn't that legitimize the pure-mathematical question?
 
Well, if that's the case, I stand by my answer: if every prisoner guesses the least common hat color that they see, it will be their hat color, assuming a perfect infinite normal distribution: they will always be the "odd man out".

Information about commonness and hat distribution is the only thing any prisoner can possibly see and the axiom of choice.

As it is, I recall a long time ago seeing what I think was a VSauce episode on the axiom of choice discussing sizes of infinite and uncountable sets, and if a computer could cope with that and still declare a quantity, having made the choice would define that infinity as a different size even if the difference is infinitesimally small, if I'm understanding the thrust of the argument.

This obviously doesn't work if the pure distribution is not actually perfectly random.
 
I want to give others the chance to say "Aha!" and post the solution here. But I'll make some general remarks.

First, I agree with Bomb#20: The Axiom of Choice is clearly false! :cool: Anybody who still had doubts about that should be convinced by learning the impossible result that all but a finite number of prisoners can guess correctly, even though each clearly has a 99.9% chance of failure.

Another ridiculous result achievable by assuming the Axiom of Choice is the Banach–Tarski paradox. But the derivation of that is rather complicated, with the key step that assumes AC almost lost amid the other grotesquenesses. On the other hand, the proof that all but a finite number of prisoners can win is rather trivial! Just a few simple sentences followed by "QED." Wow! What happened?

I'll offer a small starting hint, inside Spoiler tags.
Let a = a₁a₂a₃a₄ … be the actual correct sequence (aⱼ is the hat color of prisoner #j).
Let b = b₁b₂b₃b₄ … be the sequence of guesses (Prisoner #j guesses bⱼ given the hats he sees, along with the strategy he and his fellows have agreed upon). In other words, b₁b₂b₃b₄ … is the guessing sequence that the prisoners did CHOOSE.

Given specific a,b, what is the victory condition?
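
(A further spoiler, for anyone who would rather check than discover: what follows is the standard well-known argument, which may or may not be exactly the solution Swammerdami has in mind.)

The victory condition is just that {j : bⱼ ≠ aⱼ} is finite, i.e. that b agrees with a from some point on. Call two sequences equivalent if they differ in at most finitely many places. Every prisoner sees the whole tail of a, so every prisoner knows which equivalence class a lies in. Before the game, the prisoners use the Axiom of Choice to fix one representative sequence from each equivalence class; during the game, prisoner #j guesses the j-th term of the representative of a's class. That representative differs from a in only finitely many places, so all but finitely many prisoners guess correctly.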
 