
Abstract Algebra

Advancing in Wikipedia's list, I get to lattice-like entities.

Each lattice binary operator makes a semilattice, a semigroup whose operation is commutative and also "idempotent": a*a = a for all a in its set. For numbers and arithmetic operations, the only nontrivial case is multiplication over {0,1}. However, the minimum and maximum of two numbers are both semilattice operations, as are the union and intersection of two sets.

A lattice is a set with two semilattice operations, usually called meet and join, connected by the absorption laws: a meet (a join b) = a and a join (a meet b) = a. A complete lattice has meets and joins defined for arbitrary subsets of its elements, not just pairs. Examples:
Numbers: meet = minimum, join = maximum
Sets: meet = intersection, join = union

A finite set of numbers with min and max is a complete lattice, as is the set of all subsets of some set, including itself (its power set), with union and intersection.
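
As a quick sanity check, here is a minimal Python sketch (with assumed sample values) verifying the absorption laws for both examples:

# numbers: meet = min, join = max
for a, b in [(2, 5), (7, 3)]:
    assert min(a, max(a, b)) == a
    assert max(a, min(a, b)) == a

# sets: meet = intersection (&), join = union (|)
A, B = {1, 2}, {2, 3}
assert A & (A | B) == A
assert A | (A & B) == A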

A bounded lattice has an overall maximum and minimum. For all a in its set S:
(Smax) join a = (Smax)
(Smin) meet a = (Smin)

A complemented lattice has a complement operator with these properties:
For ac = complement(a),
a join ac = (Smax)
a meet ac = (Smin)


One can define a partial ordering from these operations.
If
a meet b = a
and
a join b = b
then
a <= b

For numbers this is the usual ordering; for sets, it is the subset relation.

Real numbers and their subsets are totally ordered: one can find the ordering of any two of them. However, for power sets, that is in general not the case, and the only nontrivial exception is the power set of a set with only one element.
 
For a power set of some set M, the complement operator does set complementation relative to M: cmpl(S) = M - S

For numbers, complementation is only possible if there are at most two of them. For two numbers, the complement operator reverses them.

Complementation can be extended as ortho-complementation. In this case, the complement operator has some additional properties:

Involution: cmpl(cmpl(a)) = a
Order reversing: a <= b is equivalent to cmpl(b) <= cmpl(a)

Set complementation is an example of this extended complementation.

These properties imply De Morgan's laws:
cmpl(a meet b) = cmpl(a) join cmpl(b)
cmpl(a join b) = cmpl(a) meet cmpl(b)
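
Here is a small Python check of these properties for set complementation, using an assumed four-element base set M:

M = {1, 2, 3, 4}
def cmpl(S): return M - S                    # complement relative to M

A, B = {1, 2}, {2, 3}
assert cmpl(cmpl(A)) == A                    # involution
assert cmpl(A & B) == cmpl(A) | cmpl(B)      # De Morgan, meet form
assert cmpl(A | B) == cmpl(A) & cmpl(B)      # De Morgan, join form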



Now for distributive lattices. In these, meet is distributive over join, and join over meet:
a meet (b join c) = (a meet b) join (a meet c)
a join (b meet c) = (a join b) meet (a join c)

This is the case for min and max, and also for union and intersection.
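
A brute-force Python check of both distributive laws for min and max over some sample numbers:

from itertools import product

for a, b, c in product(range(4), repeat=3):
    assert min(a, max(b, c)) == max(min(a, b), min(a, c))   # meet over join
    assert max(a, min(b, c)) == min(max(a, b), max(a, c))   # join over meet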
 
We now get to Boolean algebras.
As a lattice, a Boolean algebra is a distributive one with a complement.
As a ring, a Boolean algebra is a commutative one with idempotent multiplication.

Let's see what happens with the ring. Add multiplicative identity 1 to itself and impose idempotence:
1 + 1 = 2
2^2 = 4 = 2
2 = 0
Thus, a Boolean ring has characteristic 2 -- all operations must be done modulo 2.

So we get this mapping:
a meet b = a*b mod 2
a join b = a + b + a*b mod 2
cmpl(a) = 1 + a mod 2
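
A minimal Python sketch checking this mapping over {0,1}, where meet and join should agree with min and max:

def meet(a, b): return (a * b) % 2
def join(a, b): return (a + b + a * b) % 2
def cmpl(a): return (1 + a) % 2

for a in (0, 1):
    for b in (0, 1):
        assert meet(a, b) == min(a, b)
        assert join(a, b) == max(a, b)
    assert cmpl(a) == 1 - a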

The simplest Boolean algebra is the one that is associated with logical values and operations:
true = 1
false = 0
and = meet
or = join
not = cmpl

Ring operations:
multiplication = and
addition = exclusive or (xor)

One can form additional Boolean algebras by taking the product of n of this basic one:
a = (a1,a2,...,an)

This maps onto the power-set lattice by doing
a -> the subset of M containing element M1 exactly when a1 = true (1), and likewise for the other elements
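
A short Python sketch of that mapping, with an assumed three-element set M; componentwise meet and join on the tuples become intersection and union of the corresponding subsets:

M = ['M1', 'M2', 'M3']

def to_subset(a):
    # a[i] says whether element M[i] is in the subset
    return {m for bit, m in zip(a, M) if bit}

a, b = (1, 1, 0), (0, 1, 1)
meet = tuple(x & y for x, y in zip(a, b))
join = tuple(x | y for x, y in zip(a, b))
assert to_subset(meet) == to_subset(a) & to_subset(b)
assert to_subset(join) == to_subset(a) | to_subset(b)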


In his paper, Shannon proved that Boolean algebra and binary arithmetic could be used to simplify the arrangement of the electromechanical relays then used in telephone routing switches, then turned the concept upside down and also proved that it should be possible to use arrangements of relays to solve Boolean algebra problems. Exploiting this property of electrical switches to do logic is the basic concept that underlies all electronic digital computers.
From History of Computers and Computing, The birth of the modern computer, The thinkers, Claude Shannon

Combined with Turing universality and John von Neumann's stored-program idea, it showed that it was not necessary to have a lot of specialized hardware to do anything that is computable. All that is necessary is to implement a few basic instructions. In practice, one uses specialized hardware to speed up various operations, but that's not the same thing as absolutely needing it.
 
Boolean algebra has lots of operator interrelationships. For values true and false, and operators not, and, or:

Commutativity: a and b = b and a ... a or b = b or a
Associativity: (a and b) and c = a and (b and c) ... (a or b) or c = a or (b or c)
Idempotence: a and a = a ... a or a = a
Identity: a and true = a ... a or false = a
Zero: a and false = false ... a or true = true ... (absorber, annihilator)

Distributivity: a and (b or c) = (a and b) or (a and c) ... a or (b and c) = (a or b) and (a or c)
Absorption: a and (a or b) = a ... a or (a and b) = a

Complement: a and (not a) = false ... a or (not a) = true
Repeat: not (not a) = a
De Morgan: not (a and b) = (not a) or (not b) ... not (a or b) = (not a) and (not b)

One can easily derive operation tables:
not false = true ... not true = false
false and false = false and true = true and false = false ... true and true = true
false or false = false ... false or true = true or false = true or true = true

These relations are not all independent of each other -- one can assume some of them and derive the rest. For example, one can start with (not) and (and) and define (or) with one of De Morgan's laws:
a or b = not((not a) and (not b))
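
A brute-force Python check of that definition over all truth-value combinations (uppercase names to avoid Python's keywords):

def NOT(a): return not a
def AND(a, b): return a and b
def OR(a, b): return NOT(AND(NOT(a), NOT(b)))   # De Morgan definition

for a in (False, True):
    for b in (False, True):
        assert OR(a, b) == (a or b)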
 
Now to module-like entities. They involve two sets and at least two binary operations.

A module has an abelian group (M,+) and a ring R, along with a scalar-multiplication operator * that combines elements of R and M. In the module, the elements of R are sometimes called scalars.

A left module has scalar multiplication on the left, R*M -> M, with properties:
r*(x+y) = r*x + r*y
(r+s)*x = r*x + s*x
(r*s)*x = r*(s*x)
(1)*x = x (if the ring has an identity, (1))
for r, s in R and x,y in M

A right module has scalar multiplication on the right, M*R -> M.

If R is a division ring or a field, then a module with it is a vector space.

A division ring is a ring where every element but the additive identity has a multiplicative inverse. For numbers, every element but 0 has a reciprocal.

A division ring is sometimes called a skew field. A finite one has commutative multiplication (Wedderburn's little theorem), thus being a plain field.
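
For a concrete finite field, here is a quick Python check that every nonzero element of Z/7 has a multiplicative inverse (pow with a -1 exponent computes modular inverses in Python 3.8+):

p = 7
for a in range(1, p):
    inv = pow(a, -1, p)
    assert (a * inv) % p == 1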

-

I'll close by mentioning Lie algebras. They are vector spaces with an additional operator [], the "Lie bracket". It has these properties:

It is in general *not* associative.

It is bilinear: [a*x+b*y,z] = a*[x,z] + b*[y,z] ... [x,a*y+b*z] = a*[x,y] + b*[x,z]
for all scalars a,b and vectors (algebra members) x,y,z

It is alternating: [x,x] = 0

It satisfies the Jacobi identity: [[x,y],z] + [[y,z],x] + [[z,x],y] = 0

From the alternating property, [x+y,x+y] = 0, but from bilinearity, it is equal to [x,x]+[x,y]+[y,x]+[y,y] = [x,y]+[y,x]
Thus, this operator is anticommutative: [x,y] = - [y,x]
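
The cross product on R^3 is a standard example of a Lie bracket (the Lie algebra so(3)). A small Python sketch checking the alternating property and the Jacobi identity on sample vectors:

def cross(x, y):
    return (x[1]*y[2] - x[2]*y[1],
            x[2]*y[0] - x[0]*y[2],
            x[0]*y[1] - x[1]*y[0])

def vadd(*vs):
    return tuple(sum(c) for c in zip(*vs))

x, y, z = (1, 2, 3), (4, 5, 6), (7, 8, 10)
assert cross(x, x) == (0, 0, 0)     # alternating
assert vadd(cross(cross(x, y), z),
            cross(cross(y, z), x),
            cross(cross(z, x), y)) == (0, 0, 0)   # Jacobi identity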


Lie algebras are generators of Lie groups, groups whose elements are specified with continuous parameters. Near the group's identity:
(group element) = (identity) + (group parameters).(group's Lie algebra) + O((group parameters)^2)

taking the parameters' values at the identity to be zero.
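
A numerical illustration with the rotation group SO(2), whose Lie algebra is generated by J = [[0,-1],[1,0]]; near the identity, R(t) should equal I + t*J up to O(t^2) (a sketch with an assumed small parameter):

import math

def R(t):   # group element with parameter t
    return [[math.cos(t), -math.sin(t)],
            [math.sin(t), math.cos(t)]]

t = 1e-5
exact = R(t)
approx = [[1, -t], [t, 1]]          # identity + t * J
err = max(abs(exact[i][j] - approx[i][j]) for i in range(2) for j in range(2))
assert err <= t**2                  # deviation from the linear term is O(t^2)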
 
Back to Boolean algebras. I will try to find which ones exist.

Suppose that an algebra has an element a that is neither true nor false. Is it possible that a = (not a)? By complementation,
a or (not a) = a or a = a, yet it must equal true
a and (not a) = a and a = a, yet it must equal false
A contradiction: a would have to be both true and false. Thus, a and (not a) are always distinct.

For elements a and b neither true nor false, construct ab = a and b, a' = a and (not ab), and b' = b and (not ab). Any two of these have (and) equal to (false), so the non-(false) ones form a basis set p1, p2, and in some cases, p3. Then construct elements by applying (or) to all combinations of them.

If an element c is not among the elements derived from some basis set p1, p2, ..., pn, then construct

c1 = c and p1, c2 = c and p2, ..., cn = c and pn
c' = c and (not c1) and (not c2) ... and (not cn)

If c1 is neither (false) nor p1, then split p1 into c1 and (p1 and (not c1)), and likewise for the others.
If c' is not (false), add it as a basis-set member.

Continue until one runs out of elements.

Thus, every finite Boolean algebra can be decomposed into a product of instances of the simplest Boolean algebra: the well-known two-element one, with only (true) and (false).
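
A Python sketch illustrating the decomposition for the power set of an assumed three-element set: the basis elements (atoms) are the singletons, and every element is the join (union) of the atoms below it:

from itertools import combinations

M = {1, 2, 3}
elements = [frozenset(c) for r in range(len(M) + 1)
            for c in combinations(M, r)]
atoms = [e for e in elements if len(e) == 1]

for e in elements:
    below = [a for a in atoms if a <= e]
    assert frozenset().union(*below) == e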
 
Abstract-algebra rings were named after certain extensions of integers.

Consider Z(2^(1/3)), the integers extended with the cube root of 2, which I will call x.

Start with x, multiply by x to get x^2, and multiply again by x to get x^3 = 2, an integer. Thus it loops around, forming a ring shape.
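
A small Python sketch of arithmetic in this ring, representing a + b*x + c*x^2 as a triple (a, b, c) and reducing with x^3 = 2 (my own representation, for illustration):

def mul(u, v):
    a, b, c = u
    d, e, f = v
    # expand the product and reduce x^3 -> 2, x^4 -> 2x
    return (a*d + 2*(b*f + c*e),
            a*e + b*d + 2*c*f,
            a*f + b*e + c*d)

x = (0, 1, 0)
assert mul(mul(x, x), x) == (2, 0, 0)   # x^3 = 2: looping back to the integers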

-

2^(1/3) is an example of an "algebraic integer". Every algebraic integer satisfies an integer-coefficient monic polynomial, a polynomial with its highest-degree coefficient being 1. Like x^3 - 2.

Another algebraic integer is i, the unit of the imaginary numbers. More generally, all roots of unity are algebraic integers.

Every algebraic number can be expressed in the form (algebraic integer) / (nonzero integer).

The complex algebraic integers are closed under addition and multiplication, though not division, and they thus form a ring. The real algebraic integers are a subring of it, and the integers a subring of that one. The only rational numbers that are algebraic integers are ordinary integers.

Algebraic integers, like ordinary integers, are not just a ring, but an integral domain (a commutative ring with no zero divisors).
 
For testing the closure of algebraic numbers and algebraic integers, one can use a remarkable result: for the sums and products of polynomial roots, one can find polynomials for them whose coefficients are arithmetic functions of the original polynomials' coefficients.

I will prove that by construction.

I use the Newton-Girard formulas for relating polynomial coefficients to powers of roots. A degree-n monic polynomial can be expressed as x^n - e(1)*x^(n-1) + e(2)*x^(n-2) - e(3)*x^(n-3) + ...

where the coefficients e(k) are, in terms of the polynomial's roots x(k):
e(0) = 1
e(1) = x(1) + x(2) + ... + x(n)
e(2) = x(1)*x(2) + x(1)*x(3) + x(2)*x(3) + ... + x(n-1)*x(n)
e(3) = x(1)*x(2)*x(3) + x(1)*x(2)*x(4) + ... + x(n-2)*x(n-1)*x(n)
...
e(n) = x(1)*x(2)*x(3)*...*x(n)

For k > n, e(k) = 0

We want to find the sums of powers of the roots (the power sums) from these coefficients:
p(k) = x(1)^k + x(2)^k + ... + x(n)^k
with
p(0) = n

The Newton-Girard formulas are:
p(k) + sum from m = 1 to m = k-1 of (-1)^m * e(m) * p(k-m) + (-1)^k * k * e(k) = 0
For k > n, where e(k) = 0, this reduces to
sum from m = 0 to m = k of (-1)^m * e(m) * p(k-m) = 0

So if one has the polynomial coefficients e, one can get the sums of root powers p, and if one has the p's, one can get the e's.
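
Here is a Python sketch of both directions, using exact rational arithmetic; e is the list [e(1), ..., e(n)] for a monic polynomial written with the alternating signs above:

from fractions import Fraction

def power_sums(e, kmax):
    # p(0..kmax) from e(1..n), with e(k) = 0 for k > n
    n = len(e)
    ee = [Fraction(1)] + [Fraction(v) for v in e] + [Fraction(0)] * max(0, kmax - n)
    p = [Fraction(n)]                        # p(0) = n
    for k in range(1, kmax + 1):
        s = sum((-1)**m * ee[m] * p[k - m] for m in range(1, k))
        p.append(-s - (-1)**k * k * ee[k])
    return p

def elem_from_power_sums(p, n):
    # invert the recurrence: e(1..n) from p(0..n)
    p = [Fraction(v) for v in p]
    e = [Fraction(1)]
    for k in range(1, n + 1):
        s = sum((-1)**m * e[m] * p[k - m] for m in range(1, k))
        e.append((-1)**(k + 1) * (p[k] + s) / k)
    return e[1:]

# example: x^2 - 3x + 2 has roots 1 and 2, so e = [3, 2] and p = [2, 3, 5]
assert power_sums([3, 2], 2) == [2, 3, 5]
assert elem_from_power_sums([2, 3, 5], 2) == [3, 2]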


Taking polynomials P1 and P2, with degrees n1 and n2, they have n1 and n2 roots respectively, and thus n1*n2 pairwise sums of roots: x1(1) + x2(1), x1(1) + x2(2), x1(2) + x2(1), x1(2) + x2(2), ..., x1(n1) + x2(n2), and likewise n1*n2 pairwise products.

So one makes P1 and P2 both monic by dividing by their highest coefficients, and from the resulting coefficients, one finds the sums of root powers, p1(k) and p2(k) for each one, up to power k = n1*n2.

For multiplication, the combined sum of root powers p12(k) is easy: p12(k) = p1(k) * p2(k)

For addition, it is more difficult. Using the binomial theorem gives: p12(k) = sum over m from 0 to k of (k!/(m!*(k-m)!)) * p1(m) * p2(k-m)

One then uses the Newton-Girard formulas to get the combined-root polynomial coefficients from the p12(k) values, thus getting combined-root polynomial P12.
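
Continuing the sketch above (reusing power_sums and elem_from_power_sums), an end-to-end example combining x^2 - 2 and x^2 - 3; the sum-of-roots polynomial should be y^4 - 10y^2 + 1, the minimal polynomial of sqrt(2) + sqrt(3):

from math import comb

n1, n2 = 2, 2
kmax = n1 * n2
p1 = power_sums([0, -2], kmax)   # e-list for x^2 - 2 in the alternating-sign form
p2 = power_sums([0, -3], kmax)   # e-list for x^2 - 3

# products of roots: p12(k) = p1(k) * p2(k)
p12_prod = [p1[k] * p2[k] for k in range(kmax + 1)]

# sums of roots: binomial convolution
p12_sum = [sum(comb(k, m) * p1[m] * p2[k - m] for m in range(k + 1))
           for k in range(kmax + 1)]

assert elem_from_power_sums(p12_sum, kmax) == [0, -10, 0, 1]    # y^4 - 10y^2 + 1
assert elem_from_power_sums(p12_prod, kmax) == [0, -12, 0, 36]  # (y^2 - 6)^2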


So if the coefficients of P1 and P2 are in some field, then the coefficients of P12 are also in that field, at least if that field includes the rational numbers. Getting the coefficients of P12 requires some divisions (by k, in the Newton-Girard formulas), which is why I'm using fields and not something less restrictive, like rings.
 
I'll now consider polynomial coefficients in commutative rings -- commutative to avoid multiplication-order complications, which would otherwise appear when calculating determinants.

For two polynomials P1 and P2, one can calculate their "resultant", res(P1(x),P2(x),x), something equal to

(highest coefficient of P1)^(degree of P2) * (highest coefficient of P2)^(degree of P1) *
Product for i1 and i2 of (x1(i1) - x2(i2))

The resultant is the determinant of the "Sylvester matrix" of the two polynomials. For degrees of P1 and P2 being n1 and n2, this is an (n1+n2) x (n1+n2) matrix:

row 1: P1 coefficients from highest to lowest, (n2-1) 0's
row 2: one 0, P1 coefficients, (n2-2) 0's
...
row n2: (n2-1) 0's, P1 coefficients
row n2+1: P2 coefficients from highest to lowest, (n1-1) 0's
row n2+2: one 0, P2 coefficients, (n1-2) 0's
...
row n1+n2: (n1-1) 0's, P2 coefficients
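
A Python sketch that builds this matrix from coefficient lists and evaluates its determinant with sympy (assumed available):

from sympy import Matrix

def sylvester(p1, p2):
    # p1, p2: coefficient lists, highest degree first
    n1, n2 = len(p1) - 1, len(p2) - 1
    rows = [[0]*i + p1 + [0]*(n2 - 1 - i) for i in range(n2)]   # n2 shifted copies of P1
    rows += [[0]*i + p2 + [0]*(n1 - 1 - i) for i in range(n1)]  # n1 shifted copies of P2
    return Matrix(rows)

# res(x^2 - 2, x^2 - 3): the four root differences multiply to 1
assert sylvester([1, 0, -2], [1, 0, -3]).det() == 1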


To get the sum-of-roots polynomial, one finds P12(y) = res(P1(x), P2(y-x), x)

To get the product-of-roots polynomial, one finds P12(y) = res(P1(x), x^(n2)*P2(y/x), x)
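
Using sympy's built-in resultant (assumed available) to run the sqrt(2) and sqrt(3) example end to end:

from sympy import symbols, resultant, expand, factor

x, y = symbols('x y')
P1 = x**2 - 2           # roots +/- sqrt(2)
P2 = x**2 - 3           # roots +/- sqrt(3)

sum_poly = expand(resultant(P1, P2.subs(x, y - x), x))
prod_poly = factor(resultant(P1, expand(x**2 * P2.subs(x, y/x)), x))

assert sum_poly == y**4 - 10*y**2 + 1
assert prod_poly == (y**2 - 6)**2       # sqrt(2)*sqrt(3) = sqrt(6), each sign twice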

It is evident that both P12's have coefficients in the same commutative ring as the coefficients of P1 and P2.


If P1 and P2 are monic, then it can be shown that both P12's are also monic, to within multiplication by +/- 1.

This proves that the algebraic integers are closed under addition and multiplication, and that they thus form a ring.
 