
Euclid-style constructions

But there are plenty of special cases of higher-degree equations that can be solved by Euclidean techniques.

Some of them occur in constructions of regular n-gons. The angle at the center subtended by each side is (360d)/n, and its cosine x satisfies
T(n,x) = 1
where T(n,x) is the Chebyshev polynomial of the first kind, defined by T(n,cos(a)) = cos(n*a).
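
To make that concrete, here is a minimal Python sketch (my own illustration, not part of any standard construction) that evaluates T(n,x) by the recurrence T(0,x) = 1, T(1,x) = x, T(n+1,x) = 2x*T(n,x) - T(n-1,x) and checks numerically that T(n, cos((360d)/n)) = 1:

# Minimal numerical check (Python); the recurrence below is the standard one
# for Chebyshev polynomials of the first kind.
import math

def chebyshev_T(n, x):
    """Evaluate T_n(x) by the recurrence T0 = 1, T1 = x, T(k+1) = 2*x*Tk - T(k-1)."""
    t_prev, t_curr = 1.0, x
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t_curr = t_curr, 2.0 * x * t_curr - t_prev
    return t_curr

for n in range(3, 11):
    x = math.cos(2.0 * math.pi / n)   # cosine of the central angle (360d)/n
    print(n, chebyshev_T(n, x))       # should print 1.0 (up to rounding) for every n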

For a triangle, one must solve
4x^3 - 3x = 1

This has a trivial solution, x = 1, so we can factor it out, giving
(2x + 1)^2 * (x - 1) = 0
So we find that cos(120d) = -1/2, from the double root x = -1/2,
implying that cos(60d) = -cos(120d) = 1/2

A square is easy:
8x^4 - 8x^2 + 1 = 1

One can treat it as a series of two bisections:
2*(2*x^2 - 1)^2 - 1 = 1

One gets x = 1 (360d), x = -1 (180d), and a double root x = 0 (90d and 270d)

A pentagon looks hard, since one has to solve a quintic:
16x^5 - 20x^3 + 5x = 1

But it also reduces to a much easier equation:
(4x^2 + 2x - 1)^2 * (x - 1) = 0

We find from it
cos(72d) = (sqrt(5) - 1)/4
cos(144d) = (-sqrt(5) - 1)/4
Filling in (using cos(180d - a) = -cos(a)),
cos(36d) = (sqrt(5) + 1)/4
cos(108d) = (-sqrt(5) + 1)/4

Around 1800, Carl Friedrich Gauss, a prolific mathematician, discovered a solution for the 17-gon that involves a sequence of 3 quadratic equations, meaning that it can be constructed with Euclidean techniques. Likewise, one can find a solution for the 257-gon with 7 quadratic equations, and one for the 65537-gon with 15 quadratic equations.

Likewise, with a Pierpont prime (2^u*3^v+1), one can reduce the problem to solving a sequence of quadratic and cubic equations.

Thus, for a regular heptagon (7-gon), one must solve
(8x^3 + 4x^2 - 4x - 1)^2 * (x - 1) = 0
So one needs to solve a cubic equation.
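
As a numerical sanity check, here is a short Python sketch (assuming numpy is available; the variable names are mine) that finds the roots of the heptagon cubic and compares them with cos((360d)k/7), and also checks the pentagon value cos(72d) = (sqrt(5) - 1)/4:

# Sanity check (Python, numpy assumed available).
import math
import numpy as np

# Roots of the heptagon cubic 8x^3 + 4x^2 - 4x - 1 should be cos(2*pi*k/7), k = 1, 2, 3.
cubic_roots = sorted(r.real for r in np.roots([8, 4, -4, -1]))
cosines = sorted(math.cos(2 * math.pi * k / 7) for k in (1, 2, 3))
print(cubic_roots)
print(cosines)        # should match the previous line up to rounding error

# Pentagon: cos(72d) = (sqrt(5) - 1)/4
print(math.cos(2 * math.pi / 5), (math.sqrt(5) - 1) / 4)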
 
A big deficiency in Euclid's Elements is the lack of any conception of zero or of negative numbers. That deficiency was widespread, however; Western mathematicians did not develop good conceptions of them until recent centuries.

Mathematicians in India beat them by some centuries, but their results were not very well-known outside of India until recent centuries.

Isaac Asimov's Asimov on Numbers (available at the Internet Archive) has an essay about zero called "Nothing Counts". Much of it is about alternatives to Arabic numerals, more precisely Hindu-Arabic ones.

Roman numerals:
I - 1, V - 5, X - 10, L - 50, C - 100, D - 500, M - 1000
They are written from largest to smallest in most cases, though a smaller numeral placed before the next larger one is subtractive.

1 to 10: I II III IV V VI VII VIII IX X
10 to 100: X XX XXX XL L LX LXX LXXX XC C
100 to 1000: C CC CCC CD D DC DCC DCCC CM M
1000 to 3000: M MM MMM

These are still used, mostly as a sort of alternate font, and mostly for items in sequence.
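
To illustrate the largest-to-smallest rule with its subtractive pairs, here is a small Python sketch; the function name and table are my own illustration, not anything from Asimov's essay:

# Illustrative Roman-numeral converter (Python); names are mine.
# It applies the large-to-small rule, with subtractive pairs like IV, IX, XL, ...
ROMAN_VALUES = [
    (1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
    (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
    (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I"),
]

def to_roman(n: int) -> str:
    """Convert a positive integer (1..3999) to a Roman numeral string."""
    digits = []
    for value, symbol in ROMAN_VALUES:
        count, n = divmod(n, value)
        digits.append(symbol * count)
    return "".join(digits)

print(to_roman(1994))   # MCMXCIV
print(to_roman(2024))   # MMXXIV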

Letters for numbers:
This was done in Greece, and present-day Greeks still use such numerals, though mainly in contexts that other people would use Roman numerals.

A Roman-alphabet version:
1 to 9: A B C, D E F, G H I
10 to 90: J K L, M N O, P Q R
100 to 900: S T U, V W X, Y Z &

An obvious problem is how one does large numbers, and it is not surprising that Roman numerals and letter numerals now have very limited use.
 
More on how nothing counts.

How might one avoid repeated symbols and/or lots of different symbols?

By using a place system, and that requires noting the absence of a digit: thus, zero. It was first used in Babylon some 3000 years ago, and it was used by Greek astronomers like Ptolemy, but none of these users ever took the seemingly obvious step of treating it as a standalone number.

That step was taken in India over 1000 years ago, around 650 CE. Mathematicians like Brahmagupta (598 - 670), Mahavira (~800 - ~870), and Bhaskara (1114 - 1185) stated rules for working with zero and negative numbers. They got everything right with the exception of division by zero, something they stumbled over.

One of the first Europeans to use Hindu-Arabic numerals was Leonardo of Pisa, a.k.a. Fibonacci (~1170 - ~1245), in his 1202 book Liber Abaci ("Book of the Abacus"). But he considered 0 a "sign" rather than a number. It took some time before Western mathematicians would consider zero a legitimate number.
 
Here also, Indian mathematicians were well ahead of the Western world. They called positive numbers asset numbers (or fortune numbers or credit numbers) and negative numbers debt numbers.

Before them came some Chinese mathematicians, about 2000 years ago.

But Western mathematicians only gradually accepted their legitimacy, though by the 19th century, most of them did so.


Euclid had no idea of imaginary numbers, though with his lack of an idea of negative numbers, that is very understandable. One of the first to conceive of them was Girolamo Cardano (1501 - 1576), and Rafael Bombelli (1526 - 1572) worked out some of their properties.

It also took some time for mathematicians to accept their legitimacy, though also by the 19th century, most of them did so.
 
Though Euclid's axioms are flawed, the idea of axioms has been a very fruitful one.

With analytic geometry, one can derive geometry from arithmetic, and one can derive arithmetic from set theory. One can show that the cardinality, or number of members, of a finite set is something that obeys Peano's axioms.

Let's see how much we can derive from Peano's axioms. Here goes:
  • E1: Equality is reflexive: a = a
  • E2: Equality is symmetric: a = b implies b = a
  • E3: Equality is transitive: a = b and b = c imply a = c
  • E4: Equality is closed for numbers: if a is a number and a = b, then b is a number
  • N1: Zero is a number
  • N2: For every number a, its successor S(a) is a number
  • N3: For numbers a and b, a = b is equivalent to S(a) = S(b)
  • N4: There is no number a such that S(a) = 0
  • N5: If a predicate function p(a) of number a satisfies p(0) is true and p(a) implies p(S(a)) then p is true for every number
  • A1: a + 0 = a
  • A2: a + S(b) = S(a + b)
  • M1: a * 0 = 0
  • M2: a * S(b) = (a*b) + a
  • O1: a <= b means that there is some number x such that a + x = b
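
To make these axioms concrete, here is a minimal Python sketch of a Peano-style model, with numbers represented as chains of successor applications to zero, and with addition and multiplication defined exactly by A1/A2 and M1/M2. The class and function names are my own, and the printed spot checks anticipate the properties proved below.

# A minimal model of the Peano axioms (Python); class and function names are mine.
# A number is represented by how many times S has been applied to zero.

class Peano:
    def __init__(self, pred=None):
        self.pred = pred            # None means zero; otherwise the predecessor

ZERO = Peano()

def S(a):                           # N2: the successor of a number is a number
    return Peano(a)

def add(a, b):                      # A1: a + 0 = a;  A2: a + S(b) = S(a + b)
    if b.pred is None:
        return a
    return S(add(a, b.pred))

def mul(a, b):                      # M1: a * 0 = 0;  M2: a * S(b) = a*b + a
    if b.pred is None:
        return ZERO
    return add(mul(a, b.pred), a)

def to_int(a):                      # just for printing results
    return 0 if a.pred is None else 1 + to_int(a.pred)

two, three = S(S(ZERO)), S(S(S(ZERO)))
print(to_int(add(two, three)), to_int(add(three, two)))   # 5 5 (commutativity spot check)
print(to_int(mul(two, three)), to_int(mul(three, two)))   # 6 6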
 
Now for properties of addition and multiplication.

Addition is associative.
(a + b) + c = a + (b + c)

Start from c = 0. By A1, (a + b) + 0 = a + b, and a + (b + 0) = a + b
Now assume it holds for c and use A2.
(a + b) + S(c) = S((a + b) + c) (A2) = S(a + (b + c)) (hypothesis) = a + S(b + c) (A2) = a + (b + S(c)) (A2)

By N5 (induction on c), it is thus true for all a, b, c

The additive identity is 0 on both sides.
From A1, a + 0 = a
Now consider 0 + a = a
For a = 0, from A1, 0 + 0 = 0 is satisfied
From A2 and the hypothesis 0 + a = a, 0 + S(a) = S(0 + a) = S(a)
Thus, from N5, 0 + a = a for all a.

Addition is commutative.
Use 1 = S(0).
First, S(a) = S(a + 0) (A1) = a + S(0) (A2) = a + 1.

Then, a + 1 = 1 + a
The base case is 0 + 1 = 1 = 1 + 0, from the identity results above. For the induction step:
S(a) + 1 = S(a) + S(0) = S(S(a) + 0) (by A2) = S(S(a)) (by A1) = S(a + 1) (succession adds 1) = S(1 + a) (hypothesis) = 1 + S(a) (by A2)
By N5, a + 1 = 1 + a for all a

Finally, a + b = b + a
The base case b = 0 follows from the two-sided identity proved above. For the induction step:
a + S(b) = S(a + b) (from A2) = (a + b) + 1 (succession adds 1) = (b + a) + 1 (hypothesis) = b + (a + 1) (associativity) = b + (1 + a) (commutativity of 1) = (b + 1) + a (associativity) = S(b) + a (succession adds 1)
By N5, a + b = b + a for all numbers a and b.
 
I now turn to multiplication.

First, the zero element or annihilator, 0 itself.
a*0 = 0
Now consider 0*a = 0
0*0 = 0 (M1)
0*S(a) = 0*a + 0 (M2) = 0*a = 0 (hypothesis)
Thus for all a, 0*a = a*0 = 0

Next the identity, 1 = S(0)
a*1 = a*S(0) = a*0 + a (M2) = a
1*0 = 0 (M1)
1*S(a) = 1*a + 1 (M2) = a + 1 (hypothesis) = S(a)
Thus for all a, 1*a = a*1 = a

Multiplication is distributive over addition
(b+c)*0 = 0 (M1)
(b+c)*S(a) = (b+c)*a + (b+c) (M2) = (b*a + c*a) + (b + c) (hypothesis) = (b*a+b) + (c*a+c) = b*S(a) + c*S(a) (M2)
a*(b+0) = a*b = a*b + a*0 (M1)
a*(b+S(c)) = a*S(b+c) = a*(b+c) + a (M2) = a*b + a*c + a (hypothesis) = a*b + a*S(c) (M2)
Thus, for all a, b, c, (b+c)*a = b*a + c*a and a*(b+c) = a*b + a*c

Multiplication is associative
(a*b)*0 = 0 (M1)
a*(b*0) = a*0 = 0 (M1)
Try:
(a*b)*S(c) = (a*b)*c + (a*b) (M2)
a*(b*S(c)) = a*(b*c + b) (M2) = a*(b*c) + (a*b) (distributive) = (a*b)*c + (a*b) (hypothesis)
Thus, for all a, b, c, (a*b)*c = a*(b*c)

Multiplication is commutative
a*0 = 0*a = 0
Test hypothesis a*b = b*a
a*S(b) = (a*b) + a (M2)
S(b)*a = (b+1)*a = (b*a) + (1*a) (distributive) = (b*a) + a = (a*b) + a (hypothesis)
Thus, for all a, b, a*b = b*a

So from Peano's axioms, one gets all these familiar properties of addition and multiplication.

Also note that since every number is some number of applications of the successor function on 0, every number is thus the sum of some number of 1's.
 
I'll now take on ordering.

If a <= b and b <= a, then a = b.

From O1, the first proposition states that a + x = b for some number x and the second one that b + y = a for some number y.

Insert the first one into the second one, and we get a + (x + y) = a
If a = 0, then x + y = 0
But if a is nonzero, then a = S(a') for some number a' (every nonzero number is a successor, a consequence of N5). By commutativity and A2, S(a') + (x + y) = S(a' + (x + y)), and axiom N3 then gives a' + (x + y) = a'. Repeating this reduction gives x + y = 0

If at least one of x and y is nonzero, say x = S(x'), then x + y = S(x') + y = S(x' + y) (using commutativity and A2), and by axiom N4, that cannot equal 0. Likewise, if y = S(y'), then x + y = x + S(y') = S(x + y').

So both x and y must be 0, and a = b.

Let's try chaining some inequalities: a <= b and b <= c. These mean that for some numbers x and y, that a + x = b and b + y = c. Substituting the first equation into the second one gives a + (x + y) = c. Since x + y is a number, a <= c -- transitivity.

One can easily prove that if a <= b, then a + c <= b + c, and also a*c <= b*c

For the first one, a + c + x = b + c, and for the second one, a*c + x*c = b*c, where a + x = b
 
So we have the nonnegative integers, with addition, multiplication, and ordering.

I say "nonnegative integers" to resolve an ambiguity in the definitions of "natural number" and "whole number". Do these numbers start from 0 or 1? I started from 0 to get addition and multiplication. Even though using this name means that I was getting ahead of myself.

In these numbers, subtraction is only sometimes possible. a - b is the value of x that solves x + b = a. In these numbers, a solution exists if a >= b but not otherwise. So let us invent a kind of number that makes solutions always possible. It will contain the two inputs as a pair (a,b), representing the difference a - b, with suitable rules of manipulation.

(a,b) + (c,d) = (a+c, b+d)
Satisfies commutative, associative properties. Identity = (0,0)

Addition has an inverse: for (a,b), we get (b,a).
(a,b) + (b,a) = (a+b,a+b) = (0,0) = identity

(a,b) * (c,d) = (a*c+b*d, a*d+b*c)
Satisfies commutative, associative properties. Distributive over addition. Zero = (0,0), identity = (1,0)

(a,b) <= (c,d) if there is some number x such that (a+d) + x = (b+c)
Equality is for x = 0, and it satisfies equality axioms E1, E2, and E3.

Equality has the property that (a+x,b+x) = (a,b) for all numbers x.

This means that one can reduce these numbers to a simplified form: (a,0) or (0,b).

(a,0) <= (b,0) if a <= b
(0,a) <= (0,b) if b <= a
(a,0) <= (0,b) never
(0,a) <= (b,0) always
So (a,0) behaves like a, but (0,a) doesn't. We can extend equality, setting (a,0) = a, with no nonnegative integer being equal to (0,b) unless b = 0.

Let's now combine ordering and addition and multiplication. Addition is easy:
(a,b) + (e,f) <= (c,d) + (e,f) is true if (a,b) <= (c,d)

Multiplication is more difficult.
(a,b) * (e,f) <= (c,d) * (e,f)
means that there is some x such that
(a*e + b*f) + (c*f + d*e) + x = (a*f + b*e) + (c*e + d*f)

For (e,f) = (g,0), this becomes
(a+d)*g + x = (b+c)*g
which holds when (a,b) <= (c,d)

For (e,f) = (0,g), it becomes
(b+c)*g + x = (a+d)*g
which holds when (c,d) <= (a,b)
The inequality reverses direction.

How do numbers (0,a) relate to numbers (a,0)? Easy. Multiply by (0,1): (0,1) * (a,0) = (0,a).

We have just derived the integers, with nonnegative integers (a,0) being written a, and nonpositive integers (0,a) being written -a. One integer is both nonnegative and nonpositive: 0.
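
Here is a small Python sketch of this pair construction of the integers. The function names are mine, and ordinary Python nonnegative integers stand in for the numbers constructed from the Peano axioms above.

# Integers as pairs (a, b) of nonnegative integers, representing a - b.
# Sketch only; names are mine.

def add(p, q):
    (a, b), (c, d) = p, q
    return (a + c, b + d)

def mul(p, q):
    (a, b), (c, d) = p, q
    return (a * c + b * d, a * d + b * c)

def leq(p, q):                       # (a,b) <= (c,d) iff (a+d) + x = (b+c) for some x
    (a, b), (c, d) = p, q
    return a + d <= b + c

def normalize(p):                    # reduce to the simplified form (a,0) or (0,b)
    a, b = p
    m = min(a, b)
    return (a - m, b - m)

five, minus_three = (5, 0), (0, 3)
print(normalize(add(five, minus_three)))     # (2, 0), i.e. 5 + (-3) = 2
print(normalize(mul(five, minus_three)))     # (0, 15), i.e. 5 * (-3) = -15
print(leq(minus_three, five))                # True: -3 <= 5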
 
Though Euclid did not recognize zero or negative numbers, he did recognize rational numbers.

We can get them with a similar sort of construction: (a,b) where a and b are both integers, with b nonzero.

Addition: (a,b) + (c,d) = (a*d + b*c, b*d)
Commutative, associative properties, identity = (0,1), inverse of (a,b) = (-a,b)

Multiplication: (a,b) * (c,d) = (a*c,b*d)
Commutative, associative properties, distributive over addition, zero = (0,1), identity = (1,1), inverse of (a,b) = (b,a) (the reciprocal, for a nonzero)

Equality: (a,b) = (c,d) if a*d = b*c
Thus (a*x,b*x) = (a,b)

Ordering: (a,b) <= (c,d) if a*d <= b*c where b and d are assumed > 0. One can always do (a,b) = (-a,-b) to make that happen.
Ordering has the properties of integer ordering, and in addition, between any two distinct rational numbers one can always find another rational number.

Numbers (a,1) can be identified with the integer a, but for a general (a,b), that is not possible.

We have thus derived the rational numbers, but with (a,1) usually written a, and (a,b) usually written a/b.
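
And a matching Python sketch of this pair construction of the rationals, with the same caveats (names mine, Python integers standing in for the integers just constructed):

# Rational numbers as pairs (a, b) of integers with b != 0, representing a/b.
# Sketch only; names are mine.

def add(p, q):
    (a, b), (c, d) = p, q
    return (a * d + b * c, b * d)

def mul(p, q):
    (a, b), (c, d) = p, q
    return (a * c, b * d)

def eq(p, q):                        # (a,b) = (c,d) iff a*d = b*c
    (a, b), (c, d) = p, q
    return a * d == b * c

def leq(p, q):                       # compare with denominators made positive first
    (a, b), (c, d) = p, q
    if b < 0: a, b = -a, -b
    if d < 0: c, d = -c, -d
    return a * d <= b * c

half, third = (1, 2), (1, 3)
print(add(half, third))              # (5, 6), i.e. 1/2 + 1/3 = 5/6
print(mul(half, third))              # (1, 6)
print(eq((2, 4), half), leq(third, half))   # True True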
 
Euclid not only recognized rational numbers, he also recognized some irrational numbers.

In his Elements (Book X, Proposition 9), he has a proof that if a positive integer is not the square of some integer, then its square root is irrational, meaning that it is not a rational number, the ratio of two integers. Here is that proof, specialized to sqrt(2).

Let sqrt(2) = some rational number a/b where a and b are in lowest terms, that is, relatively prime or coprime.

Take the square: 2 = a^2/b^2, or a^2 = 2*b^2.

Since a^2 is even and 2 is prime, a must be divisible by 2: a = 2*c. This gives 4*c^2 = 2*b^2, or 2*c^2 = b^2. This means that b must also be divisible by 2, which contradicts a and b being in lowest terms. Thus, sqrt(2) is irrational.

This proof can be extended to nth roots of integers, and also to roots of integer-coefficient monic polynomials, the rational-root theorem. That theorem states that every such polynomial's roots are either integers or irrational numbers, though such a polynomial can have both types of roots in them. Such polynomials' roots are sometimes called "algebraic integers", and sqrt(2) is thus an algebraic integer.
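
Since the rational-root theorem says a monic integer-coefficient polynomial's rational roots must be integers dividing its constant term, one can test a given polynomial for rational roots by brute force. A small Python sketch (names mine):

# Rational-root check for monic integer polynomials (Python); names are mine.
# Any rational root of a monic integer polynomial must be an integer divisor
# of the constant term, so it is enough to test those divisors.

def integer_roots(coeffs):
    """coeffs = [a0, a1, ..., 1] for a0 + a1*x + ... + x^n; returns the integer roots."""
    constant = coeffs[0]
    if constant == 0:
        return [0]                   # keep the sketch simple: just report 0
    candidates = set()
    for d in range(1, abs(constant) + 1):
        if constant % d == 0:
            candidates.update({d, -d})
    return [r for r in candidates
            if sum(c * r**k for k, c in enumerate(coeffs)) == 0]

print(integer_roots([-2, 0, 1]))      # x^2 - 2: prints [], so sqrt(2) is irrational
print(integer_roots([-6, 11, -6, 1])) # x^3 - 6x^2 + 11x - 6: prints 1, 2, 3 in some order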

An interesting result about the algebraic numbers is that polynomials with algebraic-number coefficients always have algebraic-number solutions. Thus algebraic numbers are closed under taking polynomial roots.

A similar result holds about algebraic integers, but in that case, it is for monic polynomials.
 
Thus,

The nonnegative integers are closed under addition and multiplication, but not under subtraction or division. They are an abstract-algebra ordered commutative semiring.

Subtraction yields the rest of the integers.

The integers are closed under addition, subtraction, and multiplication, but not under division. They are an AA ordered commutative ring.

Division yields the rest of the rational numbers.

The rational numbers are closed under addition, subtraction, multiplication, and division, but not under taking polynomial roots. They are an AA ordered field.

Taking polynomial roots yields the rest of the algebraic numbers.

The algebraic numbers are closed under addition, subtraction, multiplication, division, and taking polynomial roots. They are an AA field.

The algebraic integers are closed under addition, subtraction, multiplication, and taking monic-polynomial roots, but not under division. They are an AA ring.


Closure under additive inversion (negation) is easy to prove for both algebraic numbers and algebraic integers, and so is closure under multiplicative inversion (taking reciprocals) for nonzero algebraic numbers: substitute -x (or 1/x, then clear denominators) into the number's defining polynomial.
 
Closure under addition and multiplication is more difficult to prove; it can be done with the Newton-Girard identities for symmetric polynomials and sums of powers.

Symmetric polynomials? Let us consider polynomial coefficients as functions of polynomial roots. I will specialize to monic polynomials for convenience.

(x - a1)*(x - a2) = x^2 - (a1 + a2)*x + (a1*a2)
(x - a1)*(x - a2)*(x - a3) = x^3 - (a1 + a2 + a3)*x^2 + (a1*a2 + a1*a3 + a2*a3)*x - (a1*a2*a3)
Etc.

Notice a pattern in the polynomials' coefficients? Those coefficients are, up to alternating signs, the elementary symmetric polynomials of the roots:
x^n - S(1,a1,a2,...,a(n))*x^(n-1) + S(2,a1,a2,...,a(n))*x^(n-2) - ...

These are related to sums of powers: P(p,a1,a2,...,a(n)) = a1^p + a2^p + ... + a(n)^p

P1 = S1
P2 = S1^2 - 2*S2
P3 = S1^3 - 3*S1*S2 + 3*S3
...
and
S1 = P1
S2 = (P1^2 - P2)/2
S3 = (P1^3 - 3*P1*P2 + 2*P3)/6
...

These are the Newton-Girard identities, and they can be calculated recursively:
P1 = S1
P2 = S1*P1 - 2*S2
P3 = S1*P2 - S2*P1 + 3*S3
...
and
S1 = P1
S2 = (S1*P1 - P2)/2
S3 = (S2*P1 - S1*P2 + P3)/3
...

In terms of the operand polynomials' roots a(i) and b(j), the sum polynomials' roots are c(i,j) = a(i) + b(j), and the product polynomials' roots are c(i,j) = a(i)*b(j). Sums of powers are thus

(sum) P(n,c) = sum over k = 0 to n of C(n,k) * P(k,a) * P(n-k,b), where C(n,k) is a binomial coefficient and P(0,a) is the number of roots of a
(product) P(n,c) = P(n,a)*P(n,b)

When one gets the sum and product polynomials' coefficients, one finds them to be integer-coefficient polynomials in the operand polynomials' coefficients. This is what makes algebraic numbers algebraically closed, and algebraic integers monic-polynomial algebraically closed.
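
Here is a hedged Python sketch of this procedure for the sum of two algebraic numbers (the product case is analogous, using the product formula): it computes power sums of each polynomial's roots with the Newton-Girard recurrences, combines them by the binomial-coefficient formula above, and converts back to coefficients. Function names are mine, and the standard-library fractions module keeps the arithmetic exact.

# Sketch (Python, standard library only): build the monic polynomial whose roots
# are all sums a(i) + b(j) of the roots of two monic polynomials, via the
# Newton-Girard identities.  Names are mine.
from fractions import Fraction
from math import comb

def power_sums(coeffs, count):
    """Power sums P1..Pcount of the roots of x^n + coeffs[0]*x^(n-1) + ... + coeffs[n-1]."""
    n = len(coeffs)
    e = [Fraction(0)] * (n + 1)                  # elementary symmetric polynomials e[1..n]
    for k in range(1, n + 1):
        e[k] = (-1) ** k * Fraction(coeffs[k - 1])
    p = [Fraction(0)] * (count + 1)
    for k in range(1, count + 1):
        total = (-1) ** (k - 1) * k * e[k] if k <= n else Fraction(0)
        for i in range(1, min(k, n + 1)):        # Newton's identity terms e[i]*p[k-i]
            total += (-1) ** (i - 1) * e[i] * p[k - i]
        p[k] = total
    return p                                      # p[0] is left unused here

def coeffs_from_power_sums(p, degree):
    """Recover the monic polynomial's coefficient list from power sums p[1..degree]."""
    e = [Fraction(0)] * (degree + 1)
    e[0] = Fraction(1)
    for k in range(1, degree + 1):               # k*e[k] = e[k-1]*p[1] - e[k-2]*p[2] + ...
        total = Fraction(0)
        for i in range(1, k + 1):
            total += (-1) ** (i - 1) * e[k - i] * p[i]
        e[k] = total / k
    return [(-1) ** k * e[k] for k in range(1, degree + 1)]

def sum_polynomial(a_coeffs, b_coeffs):
    """Coefficients (after the leading x^n term) of the monic polynomial with roots a(i)+b(j)."""
    na, nb = len(a_coeffs), len(b_coeffs)
    n = na * nb
    pa = power_sums(a_coeffs, n)
    pb = power_sums(b_coeffs, n)
    pa[0], pb[0] = Fraction(na), Fraction(nb)    # P(0) = number of roots
    pc = [sum(comb(k, j) * pa[j] * pb[k - j] for j in range(k + 1)) for k in range(n + 1)]
    return coeffs_from_power_sums(pc, n)

# Example: roots of x^2 - 2 (i.e. +-sqrt(2)) and x^2 - 3 (i.e. +-sqrt(3)); their sums
# are the roots of x^4 - 10x^2 + 1, so the output should be [0, -10, 0, 1] as Fractions.
print(sum_polynomial([0, -2], [0, -3]))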
 
I have not addressed ordering in the algebraic numbers and algebraic integers, and for that, I will need to discuss real numbers.

Here also, one extends a previous kind of number. For that, a convenient way of doing so is with infinite sequences of numbers.

Consider 1, 1/2, 1/3, 1/4, 1/5, ... and 1, 1/2, 1/4, 1/8, 1/16, ...: both sequences' members get smaller and smaller while staying positive. They both have limit zero, a number that is not a positive one. So it is possible for the limit of a sequence to be outside whatever set of numbers the sequence's members were defined in.

Limits have a precise definition. Consider a sequence {a(i)} that converges to a limit a: for every positive number e, there is a number N such that |a(i) - a| < e for all i > N.

But if the limit is not in whatever set the sequence members were defined in, one can define convergence in a way that avoids invoking that limit: for every positive number e, there is a number N such that |a(i) - a(j)| < e for all i, j > N. A sequence with this property is called a Cauchy sequence.

It's easy to show that the two sequences I'd mentioned are both Cauchy sequences.

Let us now consider sequences of rational numbers that have irrational limits. Use Newton's method to find sqrt(2). Start with x = some positive rational number, then repeatedly do

x becomes (1/2)*(x + 2/x)

One gets a sequence of rational numbers that gets closer and closer to sqrt(2), while never reaching it, because sqrt(2) is irrational.
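
Here is a tiny Python sketch of that iteration using exact rational arithmetic from the standard-library fractions module; each iterate is a rational number, and the sequence is Cauchy even though its limit is not rational:

# Newton's iteration x -> (x + 2/x)/2 with exact rational arithmetic (Python).
from fractions import Fraction

x = Fraction(1)                      # start with a positive rational number
for step in range(6):
    x = (x + 2 / x) / 2
    print(x, float(x))               # 3/2, 17/12, 577/408, ... -> 1.41421356...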

Another sequence that does so is 1, 1.4, 1.41, 1.414, 1.4142, ... and analogous sequences with other number-base systems.

So the rational numbers are not Cauchy-closed. The real numbers are, however; indeed, one can define the real numbers as (equivalence classes of) Cauchy sequences of rational numbers. One can define addition, subtraction, multiplication, division, and ordering on such sequences, so the real numbers are like the rational numbers in all these respects.
 
But real numbers are not algebraically closed. Try to solve for x in x^2 + 1 = 0. If x is a real number, then x^2 >= 0, and x^2 + 1 >= 1 > 0. So x cannot be a real number.

Here also, we can accept it as a new kind of number, i = sqrt(-1), the unit of the imaginary numbers.

Do we need any more kinds of numbers? As it turns out, we don't. From the Fundamental Theorem of Algebra, every nonconstant polynomial with complex coefficients has at least one complex root. Once one finds that root, one can divide it out of the polynomial and repeat. Thus, all of a complex-coefficient polynomial's roots are complex, and the complex numbers are algebraically closed.

This means that the real numbers are not algebraically closed, but they have a property that the complex numbers lack: an ordering compatible with their arithmetic. One can impose a partial ordering on the complex numbers, in which several numbers can share the same place, but not a full ordering that behaves like the one for the real numbers.


A Short History of the Complex Numbers
Carl Friedrich Gauss (1777-1855). There are indications that Gauss had been in possession of the geometric representation of complex numbers since 1796, but it went unpublished until 1831, when he submitted his ideas to the Royal Society of Göttingen. Gauss introduced the term "complex number".
Then quoting him.
If this subject has hitherto been considered from the wrong viewpoint and thus enveloped in mystery and surrounded by darkness, it is largely an unsuitable terminology which should be blamed. Had +1, -1 and sqrt(-1) instead of being called positive, negative and imaginary (or worse still, impossible) unity, been given the names say, of direct, inverse and lateral unity, there would hardly have been any scope for such obscurity.
Or forward, backward, and sideways.
 