
What does it mean for something to be "logically possible"?

Ack. You're going to make me dig out some sources, which may have to wait until I have access at my current uni. I'm working from memory here, but I recall distinctly that, while Cauchy still uses terms like "infinitely small", they are mere shorthand for a statement about a value that can be made arbitrarily small, and he gives a formal definition based on epsilon. The infinitesimal is banished because Cauchy doesn't consider it to be an actual object of real analysis. It's not a real number; it's part of a circumlocution that he can unfold if needed.

The only thing missing from Cauchy was a proof of the least upper bound property, which had to wait for an actual construction of the reals, first provided by Dedekind.
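For reference, the least upper bound (completeness) property can be stated in modern notation as follows; it is exactly this property that a construction like Dedekind's cuts was needed to prove:

```latex
% Least upper bound (completeness) property of the reals:
% every nonempty subset of R that is bounded above has a supremum in R.
\emptyset \neq S \subseteq \mathbb{R}
\;\wedge\;
\exists b \in \mathbb{R}\ \forall s \in S\ (s \le b)
\;\Longrightarrow\;
\sup S \text{ exists in } \mathbb{R}.
```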

I'm annoying that way sometimes. :D

After looking into it myself, it seems like Cauchy's actual stance might still be a bit of a contentious issue in the history of mathematics - https://arxiv.org/pdf/1108.2885.pdf - so either might be a reasonable interpretation. I guess that means we need to have the bitterest of arguments about it. To the death!
Hmm...while that does cover both sides, it says that my claim above - that Cauchy had an epsilon definition underpinning his idea of infinitesimal - is false. I know the source book I was reading, so I'm now going to have to buy another copy and figure out how I got here.
 
So, what are infinitesimals? Well, mathematicians themselves seem to have wrestled with this notion for a very long time. Prior to Newton and Leibniz, notions of calculus can be found in many traditions (Ancient Egypt, Ancient China, Ancient Greece, medieval Europe and India), and we all know Zeno's paradoxes. Yet European mathematicians still struggled to find a rigorous definition of infinitesimals. I'm proud to say that the French mathematician Augustin-Louis Cauchy got there first, but it was already 1821, and it came after much debate among mathematicians. Let's note that Cauchy was also a physicist.
Cauchy got to a rigorous foundation of calculus by rigorously defining infinitesimals out of existence! His infinitesimal-free foundation for Calculus spearheaded a new rigour in mathematics, and was so triumphant that, a century later, Russell announced that infinitesimals had been an absurd notion all along. To this day, infinitesimals play absolutely no formal role in standard real or complex analysis. Instead, we talk exclusively in terms of limits, an idea heavily promoted by Euler, but only formally defined by Cauchy, and only in the most elementary terms of real numbers. Infinitesimals make no appearance at all (I'm happy to elaborate).

Russell's declarations of absurdity were completely wrong, however, and infinitesimals were shown to be not only logically coherent but a fairly natural logical notion, discovered first by Robinson with the hyperreals and given an alternative formulation by Conway with the surreal numbers. You can do analysis in the hyperreals, and my old boss did his doctoral research more or less successfully vindicating Newton's calculus as rigorous if you interpret him as working in the hyperreals.

However, pretty much all maths students today do the standard analysis that was laid down by Cauchy (with final gaps filled in by Dedekind and Cantor), and never see an infinitesimal anywhere. This makes sense, since the construction of the hyperreals ends up building on top of the standard reals anyway.

Thank you for your comment, which brings a bit of sanity to this thread.

And I take your point.


I did two years of maths and physics a very, very long time ago in Paris and then had to drop out, there and then, without finishing the course. So, I'm not going to pretend I'm the expert around here on this issue.

Still, I am in effect discovering only now that this has been, historically, something of a controversial issue. To me it never was and still isn't. I take the term 'infinitesimal' to have a precise meaning, i.e. to refer to a clear concept, one that I thought most mathematicians were comfortable with. So, I accept that it's a controversial issue, but not through and through, which I now have to explain.

Bear in mind that you may be more interested in the technically mathematical aspect of this issue while my interest is more in the conceptual landscape behind it.

Personally, I make a clear distinction between concepts of the mathematical world and how we mathematically define the properties we associate with these concepts. For example, we have a concept of infinitesimal built around the concept of a continuous curve around a particular point called the 'limit'. Broadly, this concept is known and understood through a simple diagram showing, say, two axes and a curve representing the function, a curve which is continuous near one specific point. We don't really need any formal definition to have clear enough ideas about the concept of infinitesimal: given that the function is continuous, we can choose a point on the curve as close as we wish to the point we call the limit. If someone selects one point, somebody else will always be able to find another point even closer to the limit. Intuitively, there's no other limiting point than the pre-defined limit itself. So that's one thing.

The second thing is how we're going to express those lovely intuitive ideas in a formal and rigorous way, so that we can make progress in discovering the properties of our concepts that may not be so obvious, and about which we should expect recurring disagreements if we were to discuss them only informally.

According to this distinction, I don't believe for a moment that any of the mathematicians you've discussed here with beero had any issue with this notion of infinitesimal. I take the historical debate between them, and probably many others, to have been about the second aspect: how do we formally specify our mathematical concepts in such a way as to preclude any impasse and disagreement? Formal definitions are notoriously difficult to articulate straight away. It's an iterative process, and we always need several people to bring a fresh perspective to the discussions.

No doubt, some more metaphysical points would have clouded the discussions. The idea of infinity is definitely difficult to formalise. Still, I am convinced that all these people mostly had the same basic conception in mind. It was therefore only a matter of agreeing on the language to use to talk about it to reduce any unnecessary friction.

Accordingly, the particular language agreed on by mathematicians after some huffing and puffing should be understood as just the formal way adopted for talking about the original conception, which won't have changed in the process because it was unproblematic from the start. I would even go so far as to say that the concept of infinitesimal is not only easy for mathematicians but also intuitive for most people, even though few people would be able to articulate any formal definition of it.

Consequently, I don't see the language eventually adopted by mathematicians for expressing the properties associated with the continuity of a curve in the neighbourhood of a point as primary. What is primary is our intuitive concept of a curve being continuous near a limit.

In other words, while Cauchy's definition doesn't mention infinitesimals, you only really understand what this definition means if you understand the concept of infinitesimal. You need the formal definition to go beyond this and make progress.

I wish I could have understood this at the time.
EB
 
Actually, it was Bolzano who first gave the formal epsilon-delta definition of limit and then it was Weierstrass who finished the formalization of calculus. Cauchy was an intermediary who had the right notions, but whose work still utilized the infinitesimal.

(ε, δ)-definition of limit said:
https://en.wikipedia.org/wiki/(ε,_δ)-definition_of_limit

In calculus, the (ε, δ)-definition of limit ("epsilon–delta definition of limit") is a formalization of the notion of limit. The concept is due to Augustin-Louis Cauchy, who never gave an (ε, δ) definition of limit in his Cours d'Analyse, but occasionally used ε, δ arguments in proofs. It was first given as a formal definition by Bernard Bolzano in 1817, and the definitive modern statement was ultimately provided by Karl Weierstrass.[1][2] It makes rigorous the following informal notion: the dependent expression f(x) approaches the value L as the variable x approaches the value c if f(x) can be made as close as desired to L by taking x sufficiently close to c.

I had to check my old textbook. It doesn't include any discussion on infinitesimals! Only this (ε, δ)-definition around the notion of limit.

Of course, to me, ε and δ are basic to the notion of infinitesimals.
EB
 
The position of infinity in mathematics is contestable, and there is a lot of rigorous mathematical logic that has been devoted to the subtleties of reasoning with the infinite.

I would make the distinction between infinity as the idea that there is no end to something and infinity as the thing having no end. The first doesn't seem controversial to me. The second seems fraught with misconceptions and difficulties.

Every real number is an infinite object in its own right. Every infinite decimal expansion (save those that end up as just 9s) corresponds to a unique real.

And here we have an example of these difficulties: Is "0.9999..." the same thing as "1"?

Let's ask our in-house expert on infinite things, our great leader King Jong UM, to straightforwardly explain this very simple issue.
EB
 
Suppose you want to change the least amount possible. How much change is that? Is it possible to do that?
There isn't a least amount of change. In 10^-100 seconds, the universe will have changed. In 10^-1,000,000 seconds (when the Earth has completed ~10^-1,000,008 orbits), the universe will have changed.
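Rough arithmetic behind that orbit figure (assuming Earth's orbital period of about 3.2 × 10^7 seconds):

```latex
% Orbits completed in 10^{-1,000,000} seconds, with period P ≈ 3.2 × 10^7 s:
\frac{10^{-1{,}000{,}000}\ \mathrm{s}}{3.2 \times 10^{7}\ \mathrm{s/orbit}}
\;\approx\; 3 \times 10^{-1{,}000{,}008}\ \text{orbits}.
```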
 
EB said:
I had to check my old textbook. It doesn't include any discussion on infinitesimals! Only this (ε, δ)-definition around the notion of limit.

Of course, to me, ε and δ are basic to the notion of infinitesimals.
ε and δ are used to formally define limits.
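Spelled out, the standard modern statement (the one Bolzano and Weierstrass arrived at, per the quote above) reads:

```latex
% The (epsilon, delta)-definition of the limit of f at c:
\lim_{x \to c} f(x) = L
\quad\Longleftrightarrow\quad
\forall \varepsilon > 0\ \exists \delta > 0\ \forall x:\
0 < |x - c| < \delta \implies |f(x) - L| < \varepsilon.
```

Note that every quantifier ranges over ordinary real numbers; no infinitesimal appears anywhere.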

As for infinitesimals, the basic idea is trivial: an infinitesimal is a positive number x such that x + x + ... + x is always less than 1, no matter how many terms you add. The idea of such a quantity appears in ancient Greek texts, where it is stated axiomatically that such quantities do not exist.

Instead, when the Greeks wanted to do integration proofs, they effectively used limits. The proof method was called "the method of exhaustion", and Archimedes is responsible for a number of proofs which follow this method, perhaps most famously in his calculation of π.

Modern real analysis is also based entirely on limits, and we sometimes pay homage to our intellectual ancestors by saying that the real numbers are Archimedean. The method of exhaustion in real analysis is replaced by the far more powerful calculus, but even then, if you look at how some of the core proofs work, you see something very similar to the method of exhaustion at work.
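Here is a minimal Python sketch of the idea (a standard modern reconstruction of Archimedes' polygon-doubling argument, not his actual presentation): the perimeters of circumscribed and inscribed regular polygons squeeze π between two ordinary numbers, with no infinitesimals in sight.

```python
import math

# Sketch of Archimedes' method of exhaustion for pi (a standard modern
# reconstruction): for a circle of diameter 1, take the perimeters of the
# circumscribed (a) and inscribed (b) regular hexagons, then repeatedly
# double the number of sides. The two perimeters pin pi between them.
a = 2 * math.sqrt(3)  # circumscribed hexagon perimeter
b = 3.0               # inscribed hexagon perimeter

for sides in (12, 24, 48, 96):
    a = 2 * a * b / (a + b)  # new circumscribed perimeter (harmonic mean)
    b = math.sqrt(a * b)     # new inscribed perimeter (geometric mean)
    print(f"{sides}-gon: {b:.6f} < pi < {a:.6f}")

# The 96-gon is the polygon Archimedes used for his famous bounds
# 3 10/71 < pi < 3 1/7.
```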

The hyperreal numbers, by contrast, really do have infinitesimals. In the hyperreals, there are positive values x such that nx < 1 for all natural numbers n. You can do calculus with the hyperreals, but then you don't use ε and δ. You use the infinitesimals directly.
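To make the contrast concrete, here is an illustrative Python toy (the helper name is made up for this example): the Archimedean property guarantees that for any real x > 0 some finite multiple of x reaches 1, whereas a hyperreal infinitesimal is precisely a positive x for which no such multiple exists.

```python
import math

# The Archimedean property of the reals: for every x > 0 there is a natural
# number n with n * x >= 1. A hyperreal infinitesimal is precisely a positive
# x for which NO such n exists, which is why no real number can be one.
# (archimedean_witness is a made-up name for this illustration.)
def archimedean_witness(x: float) -> int:
    """Return a natural number n with n * x >= 1, namely n = ceil(1/x)."""
    if x <= 0:
        raise ValueError("x must be positive")
    return math.ceil(1 / x)

print(archimedean_witness(0.3))     # 4, since 4 * 0.3 = 1.2 >= 1
print(archimedean_witness(2**-40))  # 1099511627776, i.e. 2**40
```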
 
Every real number is an infinite object in its own right. Every infinite decimal expansion (save those that end up as just 9s) corresponds to a unique real.

And here we have an example of these difficulties: Is "0.9999..." the same thing as "1"?
Actually not. That 0.999... = 1 is completely uncontroversial among all mathematicians who think the question is well formed in the first place.
 
And here we have an example of these difficulties: Is "0.9999..." the same thing as "1"?
Actually not. That 0.999... = 1 is completely uncontroversial among all mathematicians who think the question is well formed in the first place.

I'm sure it is uncontroversial among mathematicians. And yet, the layman will look at the apparent and very visible formal difference in how "1" and "0.9999..." are written and squirm at the idea that there's no difference in value.

Can I tempt you into explaining why there's indeed no difference?

So, here is a reason for controversy:

0.9999... straightforwardly is 0.9 + 0.09 + 0.009 + 0.0009 + etc. You keep adding more terms, each one a tenth of the last term you just added. No big deal, except there's no end to it: it's an infinite sum. So, to get both the whole sum and the whole number written down, the number which is said to be equal to 1, you have to assume that you actually finish writing this infinitely long sum. Wherein lies the problem. It would be one thing to say that you get closer and closer to 1 by adding more and more terms like 0.00...0009, and you can even say you could get as close as you might want to. But it's a very different thing to say that you've effectively finished adding a sum which is supposed to have infinitely many terms: you would have finished adding all the terms of the sum even though this sum has no end. And this would be the equivalent of adding infinitely many 1s, the sum being ∞, which would make ∞ a number, an ordinary number like any finite, real number. Another way to say it is to observe that the limit of the summation is 1, but here 1 is not only the limit, it is also supposed to be the actual result of the summation.

Isn't that problematic?
EB
 
EB said:
0.9999... straightforwardly is 0.9 + 0.09 + 0.009 + 0.0009 + etc. ... Isn't that problematic?
Do you have the same problem with 1/3 = 0.333...? Or 0 = 0.000...?
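For what it's worth, the familiar grade-school manipulation behind that question (informal as stated, but it can be made rigorous with limit arithmetic) goes:

```latex
% The informal argument, made honest by limit arithmetic:
\tfrac{1}{3} = 0.333\ldots
\quad\Longrightarrow\quad
3 \cdot \tfrac{1}{3} = 3 \cdot 0.333\ldots = 0.999\ldots
\quad\Longrightarrow\quad
1 = 0.999\ldots
```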
 
EB said:
0.9999... straightforwardly is 0.9 + 0.09 + 0.009 + 0.0009 + etc. ... Isn't that problematic?
It's not. All mathematicians will make it clear what they mean by the notation "0.999..." if the question comes up, and they'll stipulate unambiguously that "0.999..." must denote 1.

There are subtleties when it comes to reasoning about the infinite, but this isn't one of them.
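Concretely, the stipulation is that the notation names a limit, and that limit is 1:

```latex
% What the notation "0.999..." denotes, by definition, and its value:
0.999\ldots \;:=\; \lim_{n \to \infty} \sum_{k=1}^{n} \frac{9}{10^{k}}
\;=\; \lim_{n \to \infty} \bigl( 1 - 10^{-n} \bigr) \;=\; 1.
```

No step here requires "finishing" an infinite process; the limit is defined entirely in terms of finite partial sums.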
 
EB said:
0.9999... straightforwardly is 0.9 + 0.09 + 0.009 + 0.0009 + etc. ... Isn't that problematic?

For the same reasons, this argument would also prohibit the sum of 1/2^n (from n = 1 to ∞) from equaling 1.
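Both series are instances of the same geometric series fact:

```latex
% Partial sums of a geometric series with ratio 0 < r < 1:
\sum_{n=1}^{N} r^{n} = \frac{r\,(1 - r^{N})}{1 - r}
\;\longrightarrow\;
\frac{r}{1 - r}
\quad (N \to \infty),
\qquad\text{so}\qquad
\sum_{n=1}^{\infty} \frac{1}{2^{n}} = \frac{1/2}{1/2} = 1
\quad\text{and}\quad
\sum_{n=1}^{\infty} \frac{9}{10^{n}} = 9 \cdot \frac{1/10}{9/10} = 1.
```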
 
If the real number line is complete and constructed from rationals and irrationals, then infinitesimals exist as the number they represent on the real number line. It doesn't make sense to think that the infinitesimal segments that create the real number line change or cease to exist once constructed.

The real (R) value of an infinitesimal is 0. That is what should be expected since any other value wouldn't be infinitesimal.

The definition of a limit using delta and epsilon works without infinitesimals, but I don't think that necessarily means they don't exist.
 
What I said, and what is true, is that if movement can be divided infinitely there is no smallest movement possible. You can always have a smaller move, without end.

The idea of a smallest movement possible makes no sense if movement can be divided infinitely.

For any object to move, though, it must first move the smallest movement possible. That is a truism and indisputable.

But if we claim that movement can be divided infinitely there is no smallest movement possible for an object to move first.

The situation instantly dissolves into absurdity. It is absurd to claim movement can be divided infinitely.

Any way you try to apply a real infinity to the real universe the situation instantly becomes an absurdity.

Some get this hint.

Weren't you the very one arguing strenuously against the concept of infinity? Should I quote you on the subject? It would prove quite embarrassing for you if I did.

No.

I very much want you to quote me.

This thread is about somebody claiming that it is "logically possible" that the universe is infinite.

I am arguing against the notion that there is any logic in applying imaginary qualities to the real universe.

You can't divide the "smallest possible movement" because, by its own definition, it has already been reduced to its final state: the smallest possible movement.

If you have any movement, AND movement can be infinitely divided, you could have had a smaller movement.

No matter how far you move you could have moved a shorter distance.

There is NO shortest movement.

In a universe where movement can be divided infinitely there is no such thing as the shortest movement possible. It is a fiction.

But in the real world when an object moves the very first movement it makes is the shortest movement possible.

In a real universe there IS such a thing as the shortest movement possible. Everything that moves makes that move first.

So clearly in the real universe movement is not divided infinitely.
 
Hmm...while that does cover both sides, it says that my claim above - that Cauchy had an epsilon definition underpinning his idea of infinitesimal - is false. I know the source book I was reading, so I'm now going to have to buy another copy and figure out how I got here.

No worries, let me know what you find!

EB said:
In other words, while Cauchy's definition doesn't mention infinitesimals, you only really understand what this definition means if you understand the concept of infinitesimal. You need the formal definition to go beyond this and make progress.

Of course, to me, ε and δ are basic to the notion of infinitesimals.

That's one of the things I find absolutely fascinating about the history of math - that once the 'right' definitions are found, they seep into the way we learn and the concepts become completely obvious. I'm pretty sure Newton (for example) didn't think of infinitesimals in that way - yet nowadays we can't really imagine anyone thinking about them in any other way. Zero is another - I mean, OF COURSE there's a number zero. How could anyone think differently?

EB said:
0.9999... straightforwardly is 0.9 + 0.09 + 0.009 + 0.0009 + etc. ... Isn't that problematic?

It's not. All mathematicians will make it clear what they mean by the notation "0.999..." if the question comes up, and they'll stipulate unambiguously that "0.999..." must denote 1.

There are subtleties when it comes to reasoning about the infinite, but this isn't one of them.

I'm with Phil on this one (and I'm not sure there would be (m)any mathematicians who wouldn't be). The issues people have with 0.999... = 1 arise because they don't have a well-formed understanding of what those symbols mean. It's not much different than 1/2 = 2/4.
 
What does it mean for something to be logically possible? What are the objective criteria to determine such a thing?

That it follows the rules for logic as laid out by Aristotle.

https://en.wikipedia.org/wiki/Term_logic

An example of broken logic.

1) All religions in the world are equally true.
2) The Abrahamic God created the world.
3) Brahma created the world.

Those are rules about statements or propositions or premises.

They are not rules about possibilities in the universe.

When you talk about possibilities in the universe, any kind of possibility, you first are limited to things that could possibly happen in the universe. If something can't happen in the universe we call that an impossibility.

So logical possibility might mean "a possibility according to the 'logic' of the universe". Whatever that might mean.

I'm not really so sure that the word "logical" can even be applied to the word "possibility".

It seems to me there are possibilities and impossibilities. Logic has nothing to do with any of them.

A road makes it possible for you to walk on it. But you're still doing the walking. The road doesn't walk you over it. Same with logic making things possible.

Logic deals with human statements and claims and propositions.

It has nothing to do with possibilities. Or impossibilities.

That is more the purview of physics.

You don't understand how logic works. Logic doesn't care about the real world.

True.

But possibilities care very much.
 
Suppose you want to change the least amount possible. How much change is that? Is it possible to do that?
There isn't a least amount of change. In 10^-100 seconds, the universe will have changed. In 10^-1,000,000 seconds (when the Earth has completed ~10^-1,000,008 orbits), the universe will have changed.

When something changes it has to change the least amount possible first.

So clearly it does exist.
 
Weren't you the very one arguing strenuously against the concept of infinity? Should I quote you on the subject? It would prove quite embarrassing for you if I did.

No.

I very much want you to quote me.

This thread is about somebody claiming that it is "logically possible" that the universe is infinite.

I am arguing against the notion that there is any logic in applying imaginary qualities to the real universe.


So to get this straight, you are still arguing that infinity is an imaginary quality, so according to you infinity cannot be applied to the real Universe.


But in the real world when an object moves the very first movement it makes is the shortest movement possible.

In a real universe there IS such a thing as the shortest movement possible. Everything that moves makes that move first.

So clearly in the real universe movement is not divided infinitely.


So why even give the slightest appearance of arguing for infinite division of movement by making the remark you made?

What is the point of your objection if you don't support the very concept you used as an objection?

Are you doing it just to be contrary to anything anyone says?
 
So to get this straight, you are still arguing that infinity is an imaginary quality, so according to you infinity cannot be applied to the real Universe.

You tell me.

Is it rational to say the universe has imaginary qualities?

So why even give the slightest appearance of arguing for infinite division of movement by making the remark you made?

What is the point of your objection if you don't support the very concept you used as an objection?

Are you doing it just to be contrary to anything anyone says?

I am arguing AGAINST the irrational idea that movement can be divided infinitely. Or that anything real can be divided infinitely.

It is an idea that instantly becomes an absurdity.
 