
Abstract Algebra

lpetrich
Mathematicians love abstraction, and the field of abstract algebra is no exception. I'm bringing it out here to make it easier to find than in The Math Thread.

Consider some properties of addition:
  • It is commutative: a + b = b + a
  • It is associative: (a + b) + c = a + (b + c)
  • It has an identity: a + 0 = a
  • It has inverses: a + (-a) = 0
Take those properties and make some binary (two-argument) operator have them or else have only some of them. What possible operators can there be?

Try adding a unary operator or another binary operator, especially one that is intertwined with the original binary operator in some way. What further possible operators can there be?

But why start with a binary operator? Why not a unary (one-argument) or a ternary (three-argument) one? Unary ones are almost too simple, while ternary ones have not been researched very much.
 
Consider some properties of addition:
  • It is commutative: a + b = b + a
  • It is associative: (a + b) + c = a + (b + c)
  • It has an identity: a + 0 = a
  • It has inverses: a + (-a) = 0
Take those properties and make some binary (two-argument) operator have them or else have only some of them. What possible operators can there be?

Can't you make up an infinite number of bullshit (non-set) operators that satisfy those conditions?

Are you only looking for operators on certain well-defined sets?


Try adding a unary operator or another binary operator, especially one that is intertwined with the original binary operator in some way. What further possible operators can there be?

Anticommutator?

But why start with a binary operator? Why not a unary (one-argument) or a ternary (three-argument) one? Unary ones are almost too simple, while ternary ones have not been researched very much.

There are tons of unary, binary, ternary, polyary operators in modern computer languages... All of which can be broken down into binary operators designed by polyary operators (brains).


Off topic? I remember trying to come up with an easy (pen-and-paper) unary division algorithm!
 
There is no simple binary division operator. Division and multiplication by powers of two are done in hardware with what is called a barrel shifter: in binary, shifting one way is division, the other multiplication.

Fast floating point multiplication and division in binary assign weights to each bit. Algorithms have been established since the 1950s.

https://en.wikipedia.org/wiki/Barrel_shifter

1000 in binary = 8 in decimal.
Shifting right by 1 bit gives 0100, or 4 in decimal: a divide by 2.
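
For instance, a quick sketch in Python (any language with shift operators behaves the same way):

Code:
x = 0b1000      # 8 in decimal
print(x >> 1)   # shift right 1 bit: 0b0100 = 4, a divide by 2
print(x << 1)   # shift left 1 bit: 0b10000 = 16, a multiply by 2
print(9 >> 1)   # 4: the remainder is discarded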

Floating point numbers are represented by an exponent and a mantissa whose magnitude is at most 1; this is called fractional arithmetic. Each bit in the mantissa is binary weighted. The LSB, or resolution, is 1/2^n for n mantissa bits.
 
Anticommutator?
Do you mean a commutator? An anticommutator, {a,b} = a*b + b*a, is a symmetric binary operator, while a commutator, [a,b] = a*b - b*a, is an antisymmetric one. One can certainly do abstract algebra with antisymmetric operators. With a few other constraints, that's how one gets Lie algebras.

There are tons of unary, binary, ternary, polyary operators in modern computer languages... All of which can be broken down into binary operators designed by polyary operators (brains).
True, but I'm talking about their abstract mathematical properties. Especially properties that can be deduced from constraints like being commutative or associative or whatever.
 
Let's see about unary functions. Consider a function from a set S to itself. The function's range or set of possible output values I will call R. It will always be a subset of S.

The function will be invertible only if (1) R = S and (2) for all a, b in S with a != b, f(a) != f(b). In short, f is a bijection.

If S is finite and there are some a, b in S such that a != b but f(a) = f(b), then R must be a proper subset of S. That is not necessarily the case if S is infinite: for instance, f(n) = floor(n/2) on the integers is not injective, yet its range is all of the integers.

-

Let us consider repeated application of f: f(n)(a) = f (f(...f(a))) -- n f's. f(0)(a) = a, and if f is invertible, then n can be negative.

It is easy to prove that f(m)(f(n)(a)) = f(m+n)(a)

For S being finite, it can be shown that for any a, there is some m and n > 0 such that f(m+n)(a) = f(m)(a). It is easy to prove with a counting (pigeonhole) argument. That argument fails for S infinite, and it is easy to find counterexamples there, such as f(n) = n + 1 on the integers.

So for S finite, repeated applications of f on every element will converge on some "limit cycle", a set of elements where applying f goes through all of them until doing so repeats the element that one started with.

A limit cycle can have length 1: f(a) = a
Length 2: f(a) = b, f(b) = a
Length 3: f(a) = b, f(b) = c, f(c) = a
...

When S is infinite, limit cycles may exist, but they do not have to. Infinite strips may also exist.

If f is a bijection, then if S is finite, then all of its elements are in cycles, while if S is infinite, then all of its elements are in cycles and/or strips.
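
Here is a minimal Python sketch of that behavior: take an arbitrary function on the finite set {0,...,5} (the table below is just a made-up example) and follow each element until it enters its limit cycle.

Code:
f = {0: 1, 1: 2, 2: 1, 3: 3, 4: 0, 5: 2}   # an arbitrary function on {0,...,5}

def limit_cycle(a):
    seen = []
    while a not in seen:    # must terminate: the set is finite (counting argument)
        seen.append(a)
        a = f[a]
    return seen[seen.index(a):]   # the cycle begins at the first repeated element

for a in range(6):
    print(a, "->", limit_cycle(a))

Here 3 sits in a cycle of length 1, while 0, 4, and 5 are not in any cycle themselves; they feed into the length-2 cycle {1, 2}.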

-

I hope that this post is a good example of what one does in abstract algebra.
 
The Wikipedia article Algebraic structure has a big list of them. No mention of ternary operators, however.

Abstract algebra with ternary operators has been asked about a few times in the math Stack Exchange:

abstract algebra - Is anybody researching "ternary" groups? - Mathematics Stack Exchange -- its responders mentioned Heap (mathematics) and a now-dead link to work on ternary-operator algebras.

Heaps are defined with:
Para-associativity: f(f(a,b,c),d,e) = f(a,f(d,c,b),e) = f(a,b,f(c,d,e))
Identity: f(x,a,a) = f(a,a,x) = x

Group to heap: f(a,b,c) = a*b^(-1)*c
Heap to group: a*b = f(a,e,b), identity e, inverse a^(-1) = f(e,a,e)
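
As a concrete check, here is a Python sketch using the group of integers under addition, where the heap operation f(a,b,c) = a*b^(-1)*c becomes a - b + c:

Code:
def f(a, b, c):
    return a - b + c    # heap operation from the additive group of integers

a, b, c, d, e = 2, 3, 5, 7, 11
# para-associativity
assert f(f(a, b, c), d, e) == f(a, f(d, c, b), e) == f(a, b, f(c, d, e))
# identity laws: f(x,a,a) = f(a,a,x) = x
assert f(a, b, b) == f(b, b, a) == a
# recover a group by choosing any element e0 as the identity
e0 = 0
assert f(a, e0, b) == a + b     # a*b = f(a,e,b)
assert f(e0, a, e0) == -a       # inverse: a^(-1) = f(e,a,e)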

abstract algebra - algebraic ternary operators - Mathematics Stack Exchange -- no responses

abstract algebra - how many "pure ternary" operators are there? - Mathematics Stack Exchange

A pure ternary operator is one that cannot be turned into a composition of binary operators: for args a,b,c, there is no representation of the form f(g(a,b),c) or similar.
 
Just for the heck of it, I decided to find all the pure ternary operators over {0,1}.
  • A unary operator has 2^1 = 2 possible arg values, and thus 2^2 = 4 possibilities.
  • A binary operator has 2^2 = 4 possible arg values, and thus 2^4 = 16 possibilities.
  • A ternary operator has 2^3 = 8 possible arg values, and thus 2^8 = 256 possibilities.
I found that only 152 of the possible ternary operators can be reduced to compositions of binary ones, leaving 104 pure ternary ones.
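
Here is a brute-force sketch of that search in Python, under the assumption that "reducible" means expressible in the two-level form g(h(x,y),z) for binary operators g and h and some ordering of the three arguments. (Allowing arbitrarily deep compositions would be a more permissive notion; NAND alone can build every Boolean function, so everything would then be reducible.)

Code:
from itertools import product, permutations

# the 16 binary Boolean operators, each as a dict from (x, y) to a bit
BINOPS = [{(x, y): (code >> (2 * x + y)) & 1
           for x in (0, 1) for y in (0, 1)}
          for code in range(16)]

# collect the truth table of every function of the form g(h(p, q), r),
# where (p, q, r) runs over all orderings of the arguments (a, b, c)
reducible = set()
for g, h in product(BINOPS, repeat=2):
    for p, q, r in permutations(range(3)):
        reducible.add(tuple(g[h[args[p], args[q]], args[r]]
                            for args in product((0, 1), repeat=3)))

print(len(reducible), "reducible;", 256 - len(reducible), "pure ternary")

If this reading of "reducible" matches the one used above, the counts should come out 152 and 104.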
 
1,2,4,8,16,32...2^n
I do not understand what you mean by ternary.
 
Turning to binary operators, the simplest case is one set and one binary operator on it. That is, group-like structures. The set will be S and the operator * : S × S -> S


The most general sort is called a magma or a groupoid.

A magma with associativity is a semigroup. For all a, b, c in S, (a*b)*c = a*(b*c)

A semigroup with an identity is a monoid. There is some e in S such that for all a in S, e*a = a*e = a.

A monoid with a unary operator that gives inverses is a group. There is some operator inv : S -> S such that for all a in S, inv(a)*a = a*inv(a) = e.

A magma with division is a quasigroup. For all a and b in S, there is some x and y in S such that a*x = b and y*a = b.

A quasigroup with an identity is a loop.

A loop with associativity is a group.

The operators in all of these entities can be commutative, and a commutative group is usually called an abelian one.
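
These definitions are easy to test mechanically on a finite operation table. A Python sketch, using addition mod 3 (the group Z3) as the example table:

Code:
from itertools import product

def is_associative(S, op):
    return all(op[op[a, b], c] == op[a, op[b, c]]
               for a, b, c in product(S, repeat=3))

def identity_of(S, op):      # returns the identity element, or None
    for e in S:
        if all(op[e, a] == op[a, e] == a for a in S):
            return e
    return None

def has_inverses(S, op, e):  # does every element have a two-sided inverse?
    return all(any(op[a, b] == op[b, a] == e for b in S) for a in S)

S = range(3)
op = {(a, b): (a + b) % 3 for a, b in product(S, repeat=2)}
e = identity_of(S, op)
print(is_associative(S, op), e, has_inverses(S, op, e))   # True 0 True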
 
Let us see what happens if there is an identity element. In general, there may be an identity that is an identity on only one side of the operator, a left identity or a right identity:
el*a = a
a*er = a
for all a in S

If there is a left identity and a right identity, then they are equal. There might be more than one left identity, but only if there are no right identities, and vice versa. Proof:
Construct el*er
Left identity: el*er = er
Right identity: el*er = el
Thus, el = er.

The same arguments apply for zero or absorber elements:
zl*a = zl
a*zr = zr
There can be multiple left zeros only if there is no right zero, and vice versa. If both exist, then they are identical.

If an operator is commutative, then if it has a left identity, then that element is also its right identity. Likewise for zeros.

Finally, identities and zeros can coexist.
 
Now for quasigroups. One adds division. For all a and b in the quasigroup's set S, there is a unique x and y such that
a*x = b and y*a = b

x and y need not be distinct.

A consequence is that a quasigroup with more than one element has no zeros. For a left zero z, z*x = z yields only z, not all the members of S, so division by z would be impossible. Likewise for a right zero. This generalizes the meaninglessness of division by zero.


Another consequence is that its operation table is a Latin square, an n*n square where each row and each column contains one each of n distinct symbols. If one rearranges the rows and columns of a Latin square, one gets another Latin square.

I will now find the smallest Latin squares. I will make both rows and columns 1, 2, ..., n for definiteness.

Code:
1

1 2
2 1

1 2 3
2 3 1
3 1 2

1 2 3 4 ... 1 2 3 4 ... 1 2 3 4 ... 1 2 3 4
2 1 4 3 ... 2 1 4 3 ... 2 3 4 1 ... 2 4 1 3
3 4 1 2 ... 3 4 2 1 ... 3 4 1 2 ... 3 1 4 2
4 3 2 1 ... 4 3 1 2 ... 4 1 2 3 ... 4 3 2 1
However, all but the first of the 4*4 Latin squares are equivalent by rearrangement and relabeling.

These are the operation tables of the groups Z1, Z2, Z3, Z2*Z2, and Z4. Z1 is the identity group.
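
Checking the Latin-square property mechanically is easy. A Python sketch, applied to the third 4*4 square above (the table of Z4):

Code:
def is_latin_square(rows):
    symbols = set(rows[0])
    return (all(set(row) == symbols and len(row) == len(symbols) for row in rows)
            and all(set(col) == symbols for col in zip(*rows)))

z4 = [[1, 2, 3, 4],
      [2, 3, 4, 1],
      [3, 4, 1, 2],
      [4, 1, 2, 3]]
print(is_latin_square(z4))   # True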
 
A quasigroup with an identity is called a loop.

A loop has left and right inverses: linv(a)*a = a*rinv(a) = e. They can be distinct only if the loop's operator is neither commutative nor associative.


Backing up to groupoids, a groupoid with associativity is a semigroup.

Matrix multiplication is a well-known operation that is associative without being commutative.


Adding an identity to a semigroup gives a monoid.


Adding inverses to a monoid gives a group.

A group's inverse is a unary operation that makes a bijection over the group's set. Inversion is an involution, meaning that it is its own inverse operation: inv(inv(a)) = a.

Group theory is a large subject, so I'll save that for another thread.


I'll close this post with how to get commutative groups from commutative monoids. It is the Grothendieck construction, a generalization of subtraction. Define the group elements as ordered pairs of monoid elements:
a = (a1,a2), b = (b1,b2)

Group-element equality: a = b if a1+b2 = a2+b1
Group-element operator: a + b = (a1+b1, a2+b2) using + for the monoid operator
Group identity: e = (e,e) = (a,a) for all a in the monoid
Group inverse: inv(a) = (a2,a1)

Associativity of the group readily follows from the associativity of the monoid. Likewise for commutativity.
Check identity: a + e = (a1+e,a2+e) = (a1,a2) = a
Check inverse: inv(a) + a = (a2+a1, a1+a2) = e

If the monoid is ordered, then one can also define ordering for the group.
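
A Python sketch of this construction, with the natural numbers under addition as the monoid, so that the resulting group is the integers (the pair (a1,a2) standing for a1 - a2):

Code:
def eq(a, b):    # a = b if a1 + b2 = a2 + b1
    return a[0] + b[1] == a[1] + b[0]

def add(a, b):   # componentwise monoid addition
    return (a[0] + b[0], a[1] + b[1])

def inv(a):      # swap the pair
    return (a[1], a[0])

E = (0, 0)                # the identity; any pair (a, a) is equivalent to it
a, b = (5, 2), (1, 4)     # these stand for 3 and -3
assert eq(add(a, E), a)         # identity check
assert eq(add(a, inv(a)), E)    # inverse check
assert eq(add(a, b), E)         # 3 + (-3) = 0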
 
First, I must note that a commutative group has a special name: abelian group.


Let's see what happens if there are two binary operations. Something called a ring generalizes addition and multiplication and how they are related. So I will discuss ring-like entities here.

The simplest ringlike entity is a ringoid, where the multiplication analog is distributive over the addition analog:
a*(b+c) = (a*b) + (a*c)
(a+b)*c = (a*c) + (b*c)
One can restrict oneself to left or right distributivity if one wants.

One can add various properties of groups to a ringoid's addition and multiplication, and the terminology can be confusing.
  • Semiring (Wikipedia): both addition and multiplication are monoids (additive identity usually called 0 and multiplicative identity 1)
  • Semiring (Wolfram MathWorld): both addition and multiplication are semigroups
  • Ring (Wikipedia): addition is an abelian group and multiplication is a monoid
  • Ring (Wolfram MathWorld): addition is an abelian group and multiplication is a semigroup
  • Integral domain: ring where multiplication is commutative and without zero divisors
  • Algebraic field: integral domain where multiplication is an abelian group over all nonzero elements

The single-element case is trivial: the zero ring. It is much like the identity group.


Here is a simple property of rings. Every ring with a multiplicative identity (1) contains what may be called an integer ring. Either the integers themselves or else the integers modulo some number -- Z or Z(n). To construct this ring's elements, add 1's and also add -1's. If doing so wraps around to 0, then it's Z(n), and if not, Z. The number n in Z(n) is called the ring's "characteristic", and for Z, the characteristic is officially 0.

The ring Z(6) has zero divisors: 2*3 = 0. Likewise for any other Z(n) where n is composite. But if n is prime, then Z(n) is not just a ring, it is a field. In fact, all the finite fields are known, and they all have order p^m for prime p. Each prime power's field is unique, its additive group is Z(p)^m, and its multiplicative group Z(p^m-1) (omitting 0, of course).
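
A quick Python check of that contrast between composite and prime n:

Code:
def zero_divisors(n):    # nonzero pairs multiplying to 0 in Z(n)
    return [(a, b) for a in range(1, n) for b in range(1, n)
            if (a * b) % n == 0]

def is_field(n):         # is every nonzero element of Z(n) invertible?
    return all(any((a * b) % n == 1 for b in range(1, n))
               for a in range(1, n))

print(zero_divisors(6))          # [(2, 3), (3, 2), (3, 4), (4, 3)]
print(is_field(6), is_field(7))  # False True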
 
The Grothendieck construction for subtraction also works for getting rings from semirings. For a semiring R with addition an abelian monoid and multiplication a semigroup, construct R' with elements
a = (a1,a2), b = (b1, b2), ...

Then,
a = b if a1+b2 = a2+b1
a + b = (a1+b1, a2+b2)
a*b = (a1*b1+a2*b2, a1*b2 + a2*b1)

The additive identity
0 = (0,0)

The additive inverse
-a = (a2,a1)
thus making R' a full-scale ring, since this makes addition in R' a group.

If multiplication is commutative, then that carries over. If it has an identity 1, then we get a multiplicative identity for R':
1 = (1,0)

This is useful for getting the integers from the natural numbers. Those numbers may be constructed by the Peano axioms or as set-theory cardinality. They form a semiring where addition and multiplication are both commutative monoids.
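
As a worked example, represent the integers by pairs of natural numbers: (2,0) stands for 2 and (0,3) for -3. Then (2,0)*(0,3) = (2*0 + 0*3, 2*3 + 0*0) = (0,6), which stands for -6, as expected.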
 
An integral domain is a ring with commutative multiplication and no zero divisors. The integers under addition and multiplication are an ID, as well as Z(n) for n prime.

From an integral domain, one can construct a "field of fractions":
a = (a1,a2), b = (b1,b2), ...
Note: the second member of each pair must be nonzero.

Equality, addition, and multiplication:
a = b if a1*b2 = a2*b1
a + b = (a1*b2+a2*b1, a2*b2)
a * b = (a1*b1, a2*b2)

They are both commutative and associative and have identities
0 = (0, 1)
1 = (1, 1)
and inverses
-a = (-a1,a2)
1/a = (a2,a1)
Note: the additive identity has no multiplicative inverse -- division by zero is invalid.

This is how one gets the rational numbers from the integers.
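
A Python sketch of those rules, with each fraction stored as a pair (numerator, denominator) of integers:

Code:
def eq(a, b):    # a1*b2 = a2*b1
    return a[0] * b[1] == a[1] * b[0]

def add(a, b):   # (a1*b2 + a2*b1, a2*b2)
    return (a[0] * b[1] + a[1] * b[0], a[1] * b[1])

def mul(a, b):   # (a1*b1, a2*b2)
    return (a[0] * b[0], a[1] * b[1])

half, third = (1, 2), (1, 3)
assert eq(add(half, third), (5, 6))   # 1/2 + 1/3 = 5/6
assert eq(mul(half, third), (1, 6))   # 1/2 * 1/3 = 1/6
assert eq(mul(half, (2, 1)), (1, 1))  # the inverse of (a1,a2) is (a2,a1)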
 
Anticommutator?
Do you mean a commutator?
No, I was thinking of an ??operation?? that negated the commutative property. Bad nomenclature takes precedence among pedantic obfuscators, since it came first, but it shouldn't. So I'm applying antibadnomenclature to anticommutator.


An anticommutator would render a commutative operation like addition non-commutative under certain conditions. It could rotate the second term of an operation like addition or multiplication by an arbitrary amount (assuming complex numbers), etc.


Assuming a and b are complex numbers, how would you define an anticommutator for addition and multiplication? You could define it using the first term as the number of radians to rotate the second term, or whatever....

It has to generate the same output given the same inputs in the same order. a{+}b = a{+}b
It has to generate different outputs when the inputs are in differing order. a{+}b != b{+}a
If the inputs (a=b) are equal.... a{+}b = b{+}a

a{+}b != b{+}a unless b=a

a{*}b != b{*}a unless a=b
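
For what it is worth, ordinary subtraction already satisfies that specification for {+} (a - b = b - a only when a = b), and over the positive reals division does the same for {*} (a/b = b/a forces a^2 = b^2, hence a = b). A quick Python check of both, offered as a sketch rather than a claim that this is the intended anticommutator:

Code:
import random

for _ in range(1000):
    a, b = random.randint(1, 100), random.randint(1, 100)
    if a != b:
        assert a - b != b - a    # subtraction: non-commutative unless a = b
        assert a / b != b / a    # division of positive numbers: likewise
    else:
        assert a - b == b - a and a / b == b / a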
 
One can construct a ring of polynomials. Their coefficients must be in some ring, and the usual rules of polynomial addition and multiplication hold for them.

If the coefficients are not just in a ring, but in a field, then one can define an extension field by adding the roots of some polynomial. For instance, one can extend the rational numbers with the square root of 2, the first irrational number discovered.

a = a1 + a2*x
where x^2 - 2 = 0

Addition goes as usual, while multiplication goes
a*b = (a1*b1) + (a1*b2+a2*b1)*x + (a2*b2)*x^2 = ((a1*b1) + 2*(a2*b2)) + (a1*b2+a2*b1)*x
That last result can be obtained by taking the remainder after dividing by x^2 - 2.

Doing this for every rational-coefficient polynomial equation is how one gets the algebraic numbers from the rational numbers.
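
A Python sketch of arithmetic in this extension field, with each element stored as a coefficient pair (a1, a2) meaning a1 + a2*sqrt(2):

Code:
from fractions import Fraction as F

def add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def mul(a, b):   # reduce with x^2 = 2
    return (a[0] * b[0] + 2 * a[1] * b[1], a[0] * b[1] + a[1] * b[0])

def inv(a):      # 1/(a1 + a2*x) = (a1 - a2*x)/(a1^2 - 2*a2^2)
    d = a[0] * a[0] - 2 * a[1] * a[1]
    return (F(a[0], d), F(-a[1], d))

r2 = (F(0), F(1))              # sqrt(2)
assert mul(r2, r2) == (2, 0)   # its square is 2
a = (F(1), F(1))               # 1 + sqrt(2)
assert mul(a, inv(a)) == (1, 0)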


One can also construct a ring of matrices with some specified size. Their contents must be in some ring, and the usual rules of matrix addition and multiplication hold for them.

The matrix ring's additive identity 0 = (all 0's)

If the contents ring has a multiplicative identity, then the matrix ring also has one, the identity matrix:
a(i,j) = if (indexes i and j are equal) then 1 else 0, needing only the two identities.

For a matrix ring, one can define an additional operation: scalar multiplication.


It is hard to define a determinant unless multiplication is commutative. Otherwise, it may be hard to avoid such troubles as det(transpose(A)) not equaling det(A). That also makes it hard to define matrix inverses. In fact, for matrix inverses, the content ring must also be a field, so that division will be defined in it.
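
As a small illustration in Python: 2*2 matrices with entries in Z(6). The matrix ring has the identity matrix described above, and it inherits zero divisors from its contents ring:

Code:
N = 6   # entries come from Z(6)

def madd(A, B):
    return [[(A[i][j] + B[i][j]) % N for j in range(2)] for i in range(2)]

def mmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) % N
             for j in range(2)] for i in range(2)]

I = [[1, 0], [0, 1]]    # 1 where the indexes are equal, else 0
A = [[2, 0], [0, 2]]
assert madd(A, A) == [[4, 0], [0, 4]]
assert mmul(A, I) == A == mmul(I, A)
assert mmul(A, [[3, 0], [0, 3]]) == [[0, 0], [0, 0]]   # since 2*3 = 0 in Z(6)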
 
No, I was thinking of an ??operation?? that negated the commutative property.
So you want some non-commutative binary operator.

It's pretty easy to invent one. What may not be so easy is to invent one that nevertheless satisfies some other properties. What other properties should it satisfy?
 