• Welcome to the new Internet Infidels Discussion Board, formerly Talk Freethought.

Regarding Cantor's Diagonal Argument

Yes, yes, I am a cruel and petty man who is desperate to fill the hole in my heart that was created when someone was mean to me.

Well, there must be a reason why you treat people the way you do.

I've already agreed that I'm a big fat meanie. There's no need to rub it in.

Now that we've established that, have you put any thought into the actual *substance* of my post?

I took first year calculus in university. We didn't cover this. Do you seriously expect me to spend more years and a lot more money just to understand this question? I love math passionately, but I have other things that I also love, and I have to balance them.

Years? Money? When have I ever said anything like that? All I've been talking about is taking a few hours or days to bone up on the underlying concepts. I really don't think that is a ridiculous request, nor do I think that saying it is attacking you personally. I'm not saying you should give up math - I think everyone should learn more math! But the topics come in an order for a reason.

I've successfully taught this topic to large groups of people, multiple times, most of whom had already taken calculus. Before I even start talking about Cantor's work, I devote a significant number of class-hours to the techniques of proof and to the basic properties of number systems, sets, equivalence relations, functions, bijections, and cardinalities of finite sets. I don't do it that way for my health, but because otherwise the students get confused and have a lot of trouble. Basically, I try to avoid exactly what has happened here, and for good reason.
 
Because it's trivially true. It is trivially true that the nth natural number starting from 1 is the value of that natural number. Besides, all I am doing is setting a rule that seems to be allowed and seeing where it takes us.
But ryan, that isn't what you are stating. You are stating that n, where n is the cardinality of some set, must be a natural number, then saying that is a contradiction, and using that contradiction to show... what, exactly?

That is what I have been trying to say for the last few pages.

Also, not to belabor the point, but what do you think that contradiction would show anyway? Why do you think you have shown anything other than the fact that the cardinality of infinite sets cannot be represented by natural numbers? And if that indeed was your purpose, to what end?

I am trying to show that the number of elements in the set of all natural numbers must be a natural number.
 
Why have you refused to lay out your argument as you see it, like beero or Angra Mainyu have done? That is, start with your assumptions, make your inferences explicit, and end with your conclusion.

That's what I have been trying to do.
 
You are still just trying to force your argument to be true.

Then show where I am wrong!

I say that the set of natural numbers is not infinite, and then I give my reasons why.

Your argument just says that the set is infinite, and the only reason I see in your post is "the set of all naturals is something very different ...".

I gave my reason for why the set is not infinite; now you have to explain why my reason is invalid.
 
Then I don't know what I don't know about this topic. Right now I feel like I understand the arguments against my claim.
 
I think I've made my position clear on why I think you're having difficulty with this topic, so here's a last-ditch attempt to help you. Others have pointed out where your argument fails and if you don't understand the problems they have shown with your claim, then you need to go back and think about it. I won't repeat the same work. Instead, I will take a different tack - I will prove the opposite claim, one that most people just intuitively accept, that there are infinitely many natural numbers. Only one of us can be right, at least if you accept that the axioms underlying all of mathematics are consistent.

Here is a proof that there are infinitely many natural numbers. Fundamentally, it is a property of the principle of mathematical induction, a property so intrinsic to the concept of number that it is one of the fundamental axioms used to define them. I will actually use an equivalent principle that is more intuitive and easier to state - the pigeonhole principle. The pigeonhole principle is "if we place n + 1 objects into n categories then at least one category contains at least 2 objects". Restated in terms of functions, it is "If A and B are finite sets where A has more elements than B, then there is no one-to-one function from A to B." I want to emphasize that this principle is how we DEFINE the natural numbers, so it is essentially inviolate. It should also be really intuitively obvious, like good axioms are, but we don't accept it simply because it is intuitive.
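To make the pigeonhole principle concrete, here is a brute-force check for small sets (a sketch in Python; the helper name `has_injection` is mine, not standard terminology):

```python
from itertools import product

def has_injection(m, n):
    """Brute-force check: does ANY function from {1..m} to {1..n}
    manage to be one-to-one (injective)?"""
    codomain = range(1, n + 1)
    # Enumerate every possible function as its tuple of images (n**m of them).
    for images in product(codomain, repeat=m):
        if len(set(images)) == m:   # all m images distinct => one-to-one
            return True
    return False

# Pigeonhole: no one-to-one function from a larger finite set to a smaller one,
# while equal-sized sets always admit one (e.g. the identity).
for n in range(1, 5):
    assert not has_injection(n + 1, n)
    assert has_injection(n, n)
```

Of course, exhaustive search only verifies individual cases; the principle itself is taken as axiomatic, as described above.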

First, the definition: A non-empty set S is finite if and only if there is a bijection between S and {1,2,3,...,n} for some number n in N. In that case we say that S has n elements. For the edge case, we'll call the empty set finite as well. An infinite set is a set that is not finite.
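The definition can be restated as a mechanical check; a minimal Python sketch under that definition (the function name is my own):

```python
def has_n_elements(f, s, n):
    """Per the definition: s has n elements iff f is a bijection
    from s onto the initial segment {1, 2, ..., n}."""
    target = set(range(1, n + 1))
    if set(f) != s:                 # f must be defined on exactly s
        return False
    images = list(f.values())
    one_to_one = len(set(images)) == len(images)   # no repeated images
    onto = set(images) == target                   # every target value is hit
    return one_to_one and onto

s = {'a', 'b', 'c'}
f = {'a': 2, 'b': 1, 'c': 3}
print(has_n_elements(f, s, 3))   # True: s is finite with 3 elements
```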

Now, the proof. Please pay attention to the structure and syntax of the proof and contrast it with your posts. I don't assume existence or properties for the objects I define. Every step is explained, using definitions (I left some unstated, but can fill those in if necessary). In the future, you should strive to have your mathematical arguments follow the same pattern.

This is a proof by contradiction - I will assume that the set of natural numbers is finite, that is, that there is a bijection f : N -> {1,2,3,...,n} for some number n (since 1 is a natural number, I can discount the empty case). Now, I want to use that assumption to show a contradiction, thereby proving that the set of natural numbers is infinite.

Specifically, look at the bijection f : N -> {1,2,3,...,n}. Since it is a bijection, it is one-to-one and onto. Now, I will define a new function g : {1,2,...,n+1} -> {1,2,3,...,n} by taking g(x) = f(x) for every x in {1,2,...,n+1}. g is called the restriction of f to {1,2,...,n+1}.

First off, g is well-defined - for every number 1,2,...,n+1, f is defined and thus g is defined. Furthermore, g is one-to-one because f is one-to-one, that is, if g(x) = g(y) then f(x) = f(y) and so x = y. Now, g is not necessarily onto because we have restricted the domain of the onto function f, but that's OK, we only need one-to-one-ness.

So we have a function g : {1,2,...,n+1} -> {1,2,...,n} that we have proved to be one-to-one. But the pigeonhole principle states that since {1,2,...,n+1} has more elements than {1,2,...,n}, g can't be one-to-one. This is a contradiction. Therefore, the assumption that N was finite is false. That is, the set N of natural numbers is infinite.
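To see the collision concretely, take any candidate map from N onto {1,...,n} and restrict it to {1,...,n+1}, as the proof does with g; a small Python sketch (the candidate f below is my own example, not from the proof):

```python
def find_collision(f, m):
    """Restrict f to {1,...,m} (the g of the proof) and look for two
    distinct inputs with the same image."""
    seen = {}
    for x in range(1, m + 1):
        y = f(x)
        if y in seen:
            return seen[y], x    # g(seen[y]) == g(x), so g is not one-to-one
        seen[y] = x
    return None

n = 5
f = lambda x: ((x - 1) % n) + 1   # a candidate map from N onto {1,...,5}
x1, x2 = find_collision(f, n + 1)
print(x1, x2, f(x1) == f(x2))     # 1 6 True: the restriction g collides
```

Any other candidate f would fail the same way; the pigeonhole principle guarantees the collision exists regardless of how f is chosen.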
 
The reason why I think that the set of natural numbers is infinite:

for each natural number k there is a bigger number k+1. Thus for each finite set of natural numbers you can always find at least one number that is not included in that set (the number following the largest number in the set). Thus it is impossible to create a finite set containing all natural numbers.

Now show where I am wrong!
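The successor argument above can be checked mechanically for any concrete finite set; a minimal Python sketch (the helper name is mine):

```python
def missing_natural(finite_set):
    """For any finite set of natural numbers, exhibit a natural number
    it does not contain: the successor of its largest element."""
    k = max(finite_set, default=0)   # for the empty set, 1 is already missing
    return k + 1

for s in [set(), {1, 2, 3}, {7}, set(range(1, 1000))]:
    assert missing_natural(s) not in s
```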
 
I see that you smuggled this definition of "finite" in here. Are you sure about this?

My argument is not necessarily about the set N being finite. It's about it not having an infinite number of elements. Maybe there needs to be a third option, like an undefined set or an unending finite set.

Yes, I certainly know this is true in today's mathematics. In one of my calculus courses we had to study a proof similar to this one, except it was that no real number can be an upper bound for N.

Anyway, the whole point of this discussion is to investigate the internal logic of one of mathematics' theories, namely Cantor's diagonal argument.
 
Any k is finite, and so is k + 1.
 
And think about the interval notation for the natural numbers: we use [1, infinity), not [1, infinity]. We don't include infinity in the interval.
 
Yes. Of course, they are natural numbers. In what way do you think that is an argument against my post? Why would k and k+1 have to be infinite? The property that makes a set infinite is not the finiteness/infiniteness of each of its elements but that its cardinality (=how many elements it contains) is infinite.
 
Yes, and in this case k + 1 is also how many elements there are in the set.
 
In what set?
 
So? Please write out your complete argument, not just one-liners! Explain why you think this destroys my argument.

You said that you can always add 1 to any k, then I said that is also a natural number, which conveniently is also how many elements there are.

Yes, and in what way do you think that this contradicts my argument?
 
How, exactly, were you planning on talking about infinite sets if you can't even recognize the definition of an infinite set? This is the kind of shit I've been talking about. You're just woefully under-prepared to think about this topic but insist on doing it anyway. It's exasperating as all hell...
 