
What does it mean for something to be "logically possible"?

Umm, weren't merchants using zero and negatives way before geometers?
I believe the Chinese are often credited with inventing the convention of red and black for negative and positive respectively, and I think it was before geometry. But I'm not going to compare the tallying of merchants and debt collectors with the pioneers of pure mathematics. It's not clear why Greek mathematicians and geometers would have found zero and negatives useful.

Geometers didn't have ideas about positions of objects, such as "this object is below the beginning (zero point) of this object"?
 
Orientation and relative position in Greek geometry are handled more than adequately using angles and directed line segments.

The benefits of zero and negatives only kick in when you start doing algebraic geometry, which is a good deal more advanced.
 
Geometers didn't have ideas about positions of objects, such as "this object is below the beginning (zero point) of this object"?

Sure they did. For example,  Apollonius of Perga defined coordinate systems (axes and positions along them) for analyzing his conics, and essentially derived the standard equations for them, but everything was in terms of distances and directions, not variables that could be positive or negative. They were SO CLOSE to analytic geometry (calculus too), but I suppose that's just hindsight...

To consider the power of the modern algebraic approach look at Euclid's statements:
II.12 In obtuse-angled triangles the square on the side opposite the obtuse angle is greater than the sum of the squares on the sides containing the obtuse angle by twice the rectangle contained by one of the sides about the obtuse angle, namely that on which the perpendicular falls, and the straight line cut off outside by the perpendicular towards the obtuse angle.

II.13 In acute-angled triangles the square on the side opposite the acute angle is less than the sum of the squares on the sides containing the acute angle by twice the rectangle contained by one of the sides about the acute angle, namely that on which the perpendicular falls, and the straight line cut off within by the perpendicular towards the acute angle.

Note that it's split into two propositions (one for adding and one for subtracting) and that they're fairly lengthy, complicated statements, whereas modern math just says c^2 = a^2 + b^2 - 2ab cos C.
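To make the comparison explicit (a modern gloss, obviously not anything Euclid wrote): once the cosine is allowed to carry a sign, the two propositions become the two cases of a single identity.

\[
c^2 = a^2 + b^2 - 2ab\cos C
\]
\[
\text{obtuse } C:\ \cos C < 0 \;\Rightarrow\; c^2 > a^2 + b^2 \quad \text{(II.12)}
\]
\[
\text{acute } C:\ \cos C > 0 \;\Rightarrow\; c^2 < a^2 + b^2 \quad \text{(II.13)}
\]

Euclid's "twice the rectangle contained by one of the sides ... and the straight line cut off by the perpendicular" is the unsigned \(2a \cdot b\,|\cos C|\); the sign of the cosine is what lets one formula absorb both cases.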
 
The opening paragraph of On Conics was so dense that it alone might justify calling it the most difficult maths text written to this day.

I haven't tried reading it further, or read much more about it, and I'd definitely love to hear a story on how it was close to analytic geometry and calculus. I mentioned above the method of exhaustion, but that wasn't calculus, whose power derives from the combination of algebra, differentiation and the fundamental theorem of calculus. The way I'd tell the story is that analytic geometry only becomes relevant once you've laid down enough algebraic techniques for solving geometric problems and you're ready to move over, and, on my take on the history, that was mostly developed during the Islamic period.
 

Yeah, it's a real doozy. I spent a good amount of time on his Conics for a class I taught on classical geometry a few years ago. In my opinion the only thing the Greeks were really missing was algebra, and even though that's a huge and difficult component, I think even the basics would have pushed them over the edge of discovery. Doing everything geometrically meant that they could only consider the simplest curves that had easy geometric definitions (lines, circles, conics, etc), and that it took some of the best mathematicians of the time to handle even those.

They already had the basic idea of integration (Eudoxus' method of exhaustion) and the basic idea of differentiation (Apollonius constructed tangent lines to conics and determined their slopes), and they were interested in maximization/minimization problems (Zenodorus on the beginnings of the isoperimetric inequality). No fundamental theorem of calculus though, because they were so restricted in the curves they could analyze that everything was done a posteriori - i.e. you take the curve and THEN apply your coordinate system and analysis, instead of finding a coordinate system and analysis that works and then placing the curve on that.

It was just a matter of putting everything together, which they couldn't do without having a general technique like algebra for describing and analyzing shapes. So we had to wait a couple thousand years...
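To give one concrete instance of the "basic idea of differentiation" point (stated in modern coordinates that Apollonius never used, so read it as a gloss rather than his argument): his subtangent result for the parabola amounts to a slope computation.

\[
y^2 = 4ax, \qquad \text{tangent at } (x_0, y_0):\quad y\,y_0 = 2a(x + x_0)
\]

Setting \(y = 0\) shows the tangent meets the axis at \(x = -x_0\), i.e. the subtangent is bisected by the vertex (roughly how Apollonius states it), and the slope of that tangent is \(2a/y_0\), exactly what \(dy/dx\) gives at \((x_0, y_0)\).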
 
Interesting stuff, thanks.

On the a posteriori thing, does this relate to the analysis/synthesis distinction with the method of exhaustion? The case study we read on this was Archimedes' proof that the cone is a third of the cylinder. To figure out what his exhaustion proof needs to be, he starts with a hypothetical physical argument where he imagines the two objects on a scale that he claims balances. Only then does he go in with the rigorous exhaustion proof. The claim made was that this is how exhaustion proofs were typically done.
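For contrast (this is just the modern calculation, shown to make clear what coordinates plus the fundamental theorem of calculus buy you): the cone-is-a-third-of-the-cylinder result is a one-line integral today.

\[
V_{\text{cone}} = \int_0^h \pi\left(\frac{r x}{h}\right)^2 dx
= \frac{\pi r^2}{h^2}\cdot\frac{h^3}{3}
= \frac{1}{3}\pi r^2 h
= \frac{1}{3}\,V_{\text{cylinder}}
\]

Here the cross-section at distance \(x\) below the apex is a disc of radius \(rx/h\). Archimedes had to reach the same result with none of this machinery.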
 
If I go to New York, the idea is that at some point I will be in New York. So, "n goes to infinity" suggests that n will somehow be infinite in value. Not so if I express things using the concept of limit. So, n may well tend towards a limit without ever reaching it. So, the two expressions are not equivalent. There's no n in N which is infinite, so, contrary to your claim here, saying "n goes to infinity" does not suggest "all n of the naturals".
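For reference, the standard epsilon-N definition makes "n goes to infinity" precise without ever treating infinity as a value of n (textbook notation, not specific to anyone's post here):

\[
\lim_{n\to\infty} a_n = L
\iff
\forall \varepsilon > 0\ \exists N \in \mathbb{N}\ \forall n \ge N:\ |a_n - L| < \varepsilon
\]

For example, \(a_n = 1 - \tfrac{1}{n}\) tends to \(1\), although no term ever equals \(1\) and every \(n\) involved is finite.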

Look, I think everyone goes through this; I certainly did. To this day the logic of it bothers me. How can there be an infinite number of natural numbers without n finally equalling infinity?

Apparently you can't even distinguish between the elements of N and the cardinality of N. The cardinality of N, an expression of the number of elements of the set N, can well be said to be 'infinite' by convention, even though no one natural number is equal to infinity.

You're discussing things you don't understand.

And then what? How would that prove that Newton's notion of infinitesimals was somehow incoherent?

I certainly fail to see how the idea that infinitesimals are all null could possibly be coherent. Why even talk of infinitesimals if they were null? Talk of zero instead and you will save time.
And that just may be why 0 is used instead.

How did an invention like 0 evade great thinkers and Greek mathematicians? It's all obvious now, but...

Never mind.
EB
 
On the a posteriori thing, does this relate to the analysis/synthesis distinction with the method of exhaustion? The case study we read on this was Archimedes' proof that the cone is a third of the cylinder. To figure out what his exhaustion proof needs to be, he starts with a hypothetical physical argument where he imagines the two objects on a scale that he claims balances. Only then does he go in with the rigorous exhaustion proof. The claim made was that this is how exhaustion proofs were typically done.

I'm not sure, but I suppose it's possible. My guess is that this was happening at such an unconscious level that they never really considered it. When they talked about analysis vs synthesis it was about the specific types of arguments and approach used, whereas I think this was more the underlying implicit biases that affected how the arguments themselves were generated.
 
You're discussing things you don't understand.
You don't have to understand something to know it.

My issue is that any n chosen from the set of N is finite. Therefore, each n comes from a finite number of elements leading up to it. The logic of it doesn't make sense to me, but I wouldn't dispute it on an exam.
 
Then continuous "reality" is a fantasy. Because in the real world when any thing moves it moves the shortest possible distance first.

There is no shortest possible distance in the continuum. It's all connected man.... whoa.


That is why the idea is an imaginary fantasy.

Nothing like it could ever exist in this world.

Because in the real world when something moves it always moves the least amount possible first.

Every time with every thing.
 
There isn't a least amount of change. In 10^-100 seconds, the universe will have changed. In 10^-1,000,000 seconds (when the Earth has completed ~3×10^-1,000,008 orbits), the universe will have changed.
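Slightly more formally (my gloss on the point being made, using nothing beyond ordinary real-number arithmetic): the orbit figure is just the elapsed time divided by the orbital period, and "no least amount" follows because any positive amount can be halved.

\[
\frac{10^{-1{,}000{,}000}\ \mathrm{s}}{3.16\times 10^{7}\ \mathrm{s/orbit}} \approx 3\times 10^{-1{,}000{,}008}\ \text{orbits},
\qquad
\forall \varepsilon > 0:\ 0 < \frac{\varepsilon}{2} < \varepsilon
\]

So every candidate "smallest change" has a smaller positive amount below it; the infimum is \(0\), and \(0\) is no change at all.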

When something changes it has to change the least amount possible first.

So clearly it does exist.

The least amount possible is zero.

So first something doesn't change; and then it does.

Can you explain why that's a problem for you?
 
Does anybody think there's an actual argument for the 'it has to move/change the least amount possible first' thing that isn't begging the question? Or is it one of those untermensche classics?
 
The least amount possible is zero.

So first something doesn't change; and then it does.

Can you explain why that's a problem for you?

Zero means NO CHANGE. It is not an amount of change.

The first amount of change HAS to be the smallest amount possible.

This is not disputable.
 
Does anybody think there's an actual argument for the 'it has to move/change the least amount possible first' thing that isn't begging the question? Or is it one of those untermensche classics?

I am asking simple questions you apparently can't handle.

Not begging anything.

They show the absurdity of the notion of infinite division in the real world.

Infinities only exist in the imagination.
 

Kind of like your arguments, huh?

All I see are assertions and incredulous 'I can't imagine it any other way, so it must be true' questions. Can you produce an actual argument?
 

Then you can't read.

Because the argument in terms of motion is more than that.

1. If motion can be divided infinitely that means there is no smallest motion. Any motion could be cut in half no matter how small.

2. For any object to move in the real world it MUST move the smallest amount first. The very first move it makes MUST be the smallest possible move.

3. Since a universe where motion can be divided infinitely has NO smallest possible amount to move it has no relation to the real world where all things that move ALWAYS move the smallest possible amount first.
 
My issue is that any n chosen from the set of N is finite. Therefore, each n comes from a finite number of elements leading up to it. The logic of it doesn't make sense to me, but I wouldn't dispute it on an exam.

And yet that is not an issue: what you say does NOT require that a member of an infinite set ITSELF must be infinite.
The requirement for a set of positive whole numbers to be infinite is that for each element you select there is at least one that is greater. But the set consists entirely of finite numbers.

You are conflating the size of the members of the set with the cardinality of the set.

Think of a set of elements whose size is measured differently.
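Written out symbolically, the criterion in that post is just unboundedness (standard notation, nothing beyond what was said):

\[
\emptyset \ne S \subseteq \mathbb{N} \text{ is infinite} \iff \forall n \in S\ \exists m \in S:\ m > n
\]

In particular \(|\mathbb{N}| = \aleph_0\) while every element of \(\mathbb{N}\) is a finite number; "infinite" describes the set, not any of its members.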
 
1. If motion can be divided infinitely that means there is no smallest motion. Any motion could be cut in half no matter how small.

2. For any object to move in the real world it MUST move the smallest amount first. The very first move it makes MUST be the smallest possible move.

3. Since a universe where motion can be divided infinitely has NO smallest possible amount to move it has no relation to the real world where all things that move ALWAYS move the smallest possible amount first.

You're just asserting your claim again. Here is my post:

Does anybody think there's an actual argument for the 'it has to move/change the least amount possible first' thing that isn't begging the question? Or is it one of those untermensche classics?

Care to try again?
 

No claims.

Your claim that these clear arguments are mere claims is nonsense.

That is a dodge to enable you to run away from them.
 