
How many groups and semigroups and rings and the like - abstract algebra

Another one we may call the maximum semigroup. Consider a set S of totally ordered entities, like real numbers or subsets of them. Use operation

a*b = max(a,b)

It is a semigroup with identity min(S) and zero max(S) if they exist. It is idempotent: max(a,a) = a, making it a band. It is commutative, making it a semilattice.
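A small brute-force check of these claims (Python is my own choice here; the finite set S = {0,1,2,3} stands in for any totally ordered set):

```python
# Sketch: checking the "maximum semigroup" axioms by brute force on a small
# totally ordered set S. Any totally ordered set works the same way.
from itertools import product

S = [0, 1, 2, 3]
op = max  # a*b = max(a,b)

# associativity: (a*b)*c == a*(b*c)
assert all(op(op(a, b), c) == op(a, op(b, c)) for a, b, c in product(S, repeat=3))
# idempotence (band) and commutativity (semilattice)
assert all(op(a, a) == a for a in S)
assert all(op(a, b) == op(b, a) for a, b in product(S, repeat=2))
# min(S) is the identity, max(S) is the zero (absorbing element)
assert all(op(min(S), a) == a for a in S)
assert all(op(max(S), a) == max(S) for a in S)
print("max semigroup: band, semilattice, identity", min(S), ", zero", max(S))
```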


Now consider the bicyclic semigroup: pairs (A,A) for a totally ordered set A of numbers (real numbers and their subsets, some selections of complex numbers). The operation is

(a,b)*(c,d) = (a - b + t, d - c + t) where t = max(b,c)

Its identity is (min(A),min(A)) if min(A) exists and its zero is (max(A),max(A)) if max(A) exists.
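Here is a sketch of that operation in Python, checked on a window of pairs of nonnegative integers (taking A = {0,1,2,...}, so min(A) = 0):

```python
# Sketch: the bicyclic operation on pairs of nonnegative integers,
# checked by brute force on a finite window of elements.
from itertools import product

def bi(x, y):
    (a, b), (c, d) = x, y
    t = max(b, c)
    return (a - b + t, d - c + t)

P = [(a, b) for a in range(4) for b in range(4)]
# associativity
assert all(bi(bi(x, y), z) == bi(x, bi(y, z)) for x, y, z in product(P, repeat=3))
# (min(A), min(A)) = (0, 0) is the identity
assert all(bi((0, 0), x) == x and bi(x, (0, 0)) == x for x in P)
print("bicyclic operation: associative, with identity (0, 0)")
```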


Another property: the cancellative property. Left cancellativity: a*b = a*c implies b = c. Right cancellativity: b*a = c*a implies b = c. Two-sided cancellativity is plain cancellativity. All groups have it, but many semigroups don't -- the null semigroup, for instance. The left-zero semigroup is right cancellative but not left cancellative, having only partial cancellativity.
 
About idempotence, every semigroup element can be idempotent, but only one group element can be: the identity.

Let us see what happens to a ring when all its elements are idempotent. That makes it a "Boolean ring", related to how Boolean functions are idempotent: f(x,x) = x where f is "and" or "or" and x = "true" or "false".

Consider the idempotence of x + x:

x + x = (x + x)^2 = x^2 + x^2 + x^2 + x^2 = x + x + x + x
Thus, x + x = 0 and the ring has characteristic 2 -- 2*(everything) = 0

Now consider multiplication:
x + y = (x + y)^2 = x^2 + x*y + y*x + y^2 = x + y + x*y + y*x
Thus, 0 = x*y + y*x
Add x*y to both sides: x*y = x*y + x*y + y*x = y*x from having char 2.
Thus, an idempotent ring is commutative.
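A concrete Boolean ring to check this against: subsets of a 3-element set, with symmetric difference as addition and intersection as multiplication (my own stock example, not from the source):

```python
# Sketch: a concrete Boolean ring -- subsets of {0,1,2} with symmetric
# difference as + and intersection as *. Every element is multiplicatively
# idempotent, so the derivation above predicts char 2 and commutativity.
from itertools import combinations, product

U = [0, 1, 2]
R = [frozenset(c) for n in range(len(U) + 1) for c in combinations(U, n)]
add = lambda x, y: x ^ y   # symmetric difference
mul = lambda x, y: x & y   # intersection
zero = frozenset()

assert all(mul(x, x) == x for x in R)                                # idempotent
assert all(add(x, x) == zero for x in R)                             # char 2
assert all(mul(x, y) == mul(y, x) for x, y in product(R, repeat=2))  # commutative
print("Boolean ring on subsets of", set(U), ":", len(R), "elements, char 2, commutative")
```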

This means that some semigroups cannot be multiplication semigroups for rings: a noncommutative band, for instance, since a ring with all elements idempotent must be commutative.

Proofs from Idempotent Ring has Characteristic Two - ProofWiki and Idempotent Ring is Commutative - ProofWiki

ProofWiki is a big compendium of proofs of a lot of mathematical propositions.


An idempotent semigroup is not nilpotent, because an idempotent entity raised to an arbitrary power gives itself, and for nilpotence, some power of every element must be the semigroup's zero.

For a ring, something similar applies, though there the idempotence and nilpotence refer to multiplication, and the nilpotence limit point is 0, the additive identity.
 
Now to ideals of semigroups.
For semigroup S,
Left ideal: S*J <= J
Right ideal: J*S <= J
Two-sided (plain) ideal: union(S*J,J*S) <= J

S itself is an ideal, the trivial ideal, and S is "simple" if it only contains that ideal. Every ideal is a subsemigroup, and semigroup powers are also ideals: S*S, S*S*S, ...

S is "zero-simple" if it contains a zero z, if S^2 != {z}, and if its only ideals are S itself and {z}.

For instance, the ideals of (N0,+) are, for each a in N0, {a, a+1, a+2, a+3, ...}

Turning to the left-zero semigroup, its only left ideal is S itself, while its right ideals are S and all its nonempty subsets.

One can simplify the task of finding ideals by looking at "principal ideals", ideals generated by semigroup elements. One can then find other ideals by taking unions of those for different elements. For them, we use S1 = semigroup + added-on identity if not already present.

Left: L(a) = S1*a = union(S*a,{a})
Right: R(a) = a*S1 = union(a*S,{a})
Two-sided: J(a) = S1*a*S1 = union(S*a*S,S*a,a*S,{a})

For Green's relations, we need the intersection of left and right:
Intersection: H(a) = intersect(L(a),R(a))

Green's relations are: a L b = L(a) equal to L(b), a R b = R(a) equal to R(b), a J b = J(a) equal to J(b), a H b = H(a) equal to H(b), and an extra one, a D b if there is some c that makes a L c and c R b. For finite semigroups, a D b = a J b. These relations split up S into classes whose members satisfy these relations relative to each other.

Let's look at these semigroups.

The principal ideals of (N0,+) are J(a) = {a, a+1, a+2, ...} Since this semigroup is commutative, J(a) = L(a) = R(a) = H(a).

The principal ideals of (N1,*) are J(a) = {a, 2a, 3a, ...} Those of (N0,*) are J(a) = {0, a, 2a, 3a, ...} and for a = 0, {0} the zero ideal.

For the rectangular band, L((i,j)) = {(x,j) for all x in I}, R((i,j)) = {(i,y) for all y in J}, J((i,j)) = S, H((i,j)) = {(i,j)}
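A brute-force check of the rectangular-band formulas, on a 2 x 3 example (the code and the index choices are mine):

```python
# Sketch: principal ideals in a rectangular band I x J with (i,j)*(k,l) = (i,l),
# checked against the formulas above on a small 2 x 3 example.
I, J = range(2), range(3)
S = [(i, j) for i in I for j in J]
op = lambda x, y: (x[0], y[1])

def L(a):    # S1*a = union(S*a, {a})
    return {op(s, a) for s in S} | {a}
def R(a):    # a*S1 = union(a*S, {a})
    return {op(a, s) for s in S} | {a}
def Jid(a):  # S1*a*S1
    return {op(op(s, a), t) for s in S for t in S} | L(a) | R(a)

a = (0, 1)
assert L(a) == {(x, 1) for x in I}
assert R(a) == {(0, y) for y in J}
assert Jid(a) == set(S)
assert L(a) & R(a) == {a}   # H(a) = {a}
print("rectangular band: L, R, J, H match the formulas")
```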

For the maximum semigroup (band, semilattice), J(a) = {x in S for x >= a}

For the bicyclic semigroup
(a,b)*(c,d) = (a - b + t, d - c + t) where t = max(b,c)

L((a,b)) = {((min)+x,b+y) for x,y >= 0}
R((a,b)) = {(a+x,(min)+y) for x,y >= 0}
J((a,b)) = S
H((a,b)) = {(a+x,b+y) for x,y >= 0}
 
There's a result called Green's theorem, one that can find us the subgroups that a semigroup contains -- not just subsemigroups but full-scale groups.

If a H a^2, that is, H(a) = H(a^2), then the class of all b with a H b, that is, H(a) = H(b), is a group. Its identity is the unique idempotent in that class.

Let's see how this works for integers under multiplication: (Z,*).

Its principal ideals J(a) are {0} for a = 0 and |a|*Z for a != 0. Both positive and negative a produce the same ideals. For commutative semigroups, L(a) = R(a) = J(a) = H(a).

Let us now look for values of a where J(a^2) = J(a). This gives us a^2*Z = |a|*Z with solutions a = +-1. So we find an identity, 1, with ideal J(1) = Z. The other semigroup element with that ideal is -1: J(-1) = Z also. Thus, we have found a group, {1,-1}. Also, J(0) = {0} the only element to have that ideal, and 0^2 = 0, so there is a second group, {0} - the trivial group.
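The same hunt for groups can be done by brute force in a finite commutative monoid. Here is a sketch for (Z mod 10, *) -- my own example; since it is commutative, H = J, and the classes with J(a^2) = J(a) come out as groups:

```python
# Sketch: using Green's theorem to dig groups out of the finite monoid
# (Z mod 10, *). H = L = R = J here since the monoid is commutative.
n = 10
S = range(n)

def Jid(a):   # principal ideal a*Z mod n; {a} is included since 1 is present
    return frozenset(a * x % n for x in S)

# elements with J(a^2) = J(a) head H-classes that are groups
groups = {}
for a in S:
    if Jid(a * a) == Jid(a):
        groups[Jid(a)] = frozenset(b for b in S if Jid(b) == Jid(a))
for members in sorted(groups.values(), key=sorted):
    print(sorted(members))
```

Running this turns up the group of units, a group of even residues with 6 as its identity, and the trivial groups at 0 and 5.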

Gaussian integers are similar, with the principal ideal for (a+b*i) having the form {(a*u-b*v) + (a*v+b*u)*i for integer u, v}. The absolute square has value (a^2+b^2)*(u^2+v^2). Its minimum nonzero value is (a^2+b^2). Checking on whether H(a^2) = H(a), this means that the minimum nonzero values must be equal: (a^2+b^2)^2 = (a^2+b^2). That is only possible if a^2+b^2 = 0 or 1, giving solutions a = b = 0 (the semigroup's zero), a = +-1 and b = 0, and a = 0 and b = +-1. Thus, this semigroup contains group parts {1,i,-1,-i} and {0}.

Another case is the Eisenstein integers, a+b*w where w = (-1+sqrt(-3))/2 making a triangular grid; the Gaussian integers make a square grid. Here also, one finds group parts {1,1+w,w,-1,-1-w,-w} and {0}. Note that w^2+w+1 = 0
 
That document mentions the "Rees matrix semigroups" - all finite zero-simple semigroups and some infinite ones have that form. Their construction is a bit complicated. Consider a group G, sets I, U, and a matrix P with values P(u,i) (u in U, i in I), each either being z or a member of G. Every row and every column of P must contain at least one element other than z.

The semigroup's elements are the zero, z, and (i,a,u) with i in I, a in G, and u in U. The semigroup operation is

(i,a,u)*(j,b,v) = z if P(u,j) = z or else (i,a*P(u,j)*b,v) otherwise

All the elements are "regular": for element a, there is some x such that a = a*x*a. The zero z is regular, and for (i,a,u), the corresponding values are (j, inv(P(u,j))*inv(a)*inv(P(v,i)), v) for any j, v with P(u,j) != z and P(v,i) != z.

(i,a,u) is idempotent eqv P(u,i) != z and a = inv(P(u,i))
eqv = is equivalent to

Green's relations:
(i,a,u) R (j,b,v) eqv i = j
(i,a,u) L (j,b,v) eqv u = v
(i,a,u) H (j,b,v) eqv i = j and u = v
The only two-sided ideals are the semigroup itself and {z}.

The rectangular property:
a*b D a eqv a*b R a
a*b D b eqv a*b L b
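Here is a small Rees matrix semigroup coded up as a sanity check. The group G = {1,-1}, the index sets, and the sandwich matrix P are my own arbitrary choices; the code verifies associativity, regularity, and the idempotent condition:

```python
# Sketch: a tiny Rees matrix semigroup over G = {1, -1} (Z2 written
# multiplicatively), with I = U = {0, 1} and a sandwich matrix P of my own
# choosing: one z entry, every row and column still has a non-z entry.
from itertools import product

G = [1, -1]
I = [0, 1]
U = [0, 1]
Z = "z"
P = {(0, 0): 1, (0, 1): Z, (1, 0): -1, (1, 1): 1}   # P[(u, i)]

S = [Z] + [(i, a, u) for i in I for a in G for u in U]

def op(x, y):
    if x == Z or y == Z:
        return Z
    (i, a, u), (j, b, v) = x, y
    p = P[(u, j)]
    return Z if p == Z else (i, a * p * b, v)

# associativity and regularity (every x has some w with x = x*w*x)
assert all(op(op(x, y), w) == op(x, op(y, w)) for x, y, w in product(S, repeat=3))
assert all(any(op(op(x, w), x) == x for w in S) for x in S)
# idempotents: z itself, plus (i, inv(P(u,i)), u) whenever P(u,i) != z
idem = [x for x in S if op(x, x) == x]
print("idempotents:", idem)
```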

Rees matrix semigroups are "completely zero-simple", meaning that they don't have infinite chains of left or right ideals, each one being a sub-ideal of a neighbor. (N0,+) has an infinite chain of ideals, for instance. Finite semigroups don't have infinite chains, of course.

Likewise, for rings, an "Artinian ring" is one without infinite descending chains of ideals. Every finite ring is Artinian, of course.
 
Then this document gets into "regular semigroups". A semigroup element a is regular if there exists some x such that a = a*x*a. x need not be unique. If all a semigroup's elements are regular, then that semigroup is regular.

Bands are regular, but semigroups like (N0,+) and (Z,*) are not.

Elements a and b are inverses if a = a*b*a and b = b*a*b. Inverses need not be unique. Consider a rectangular band:

(i,j)*(k,l)*(i,j) = (i,j)
(k,l)*(i,j)*(k,l) = (k,l)

Every pair of elements is an inverse pair.

Rees matrix semigroups are also regular, as are groups.

If inverses are unique, then the semigroup is an "inverse semigroup".

The bicyclic semigroup is regular: (a,b)*(b,a)*(a,b) = (a,b) and it is also inverse. Its idempotents are (a,a) and their products (a,a)*(b,b) = (t,t) where t = max(a,b).
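A quick check of this on a window of the bicyclic monoid (pairs of nonnegative integers; the code is my own):

```python
# Sketch: regularity and inverses in the bicyclic monoid, checked on a
# finite window of pairs of nonnegative integers.
def bi(x, y):
    (a, b), (c, d) = x, y
    t = max(b, c)
    return (a - b + t, d - c + t)

P = [(a, b) for a in range(4) for b in range(4)]
for (a, b) in P:
    assert bi(bi((a, b), (b, a)), (a, b)) == (a, b)   # (a,b)*(b,a)*(a,b) = (a,b)
    assert bi(bi((b, a), (a, b)), (b, a)) == (b, a)   # (b,a)*(a,b)*(b,a) = (b,a)
# idempotents are the diagonal pairs; their products follow the max rule
assert all(bi((a, a), (c, c)) == (max(a, c), max(a, c))
           for a in range(4) for c in range(4))
print("bicyclic monoid: (b,a) inverts (a,b), idempotents follow the max rule")
```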

Inverse-semigroup theorem: being inverse <-> regular with the idempotents forming a semilattice (commutative subsemigroup).

The document ended about there.

-

Nil-3 semigroups are not regular, except for the trivial one, {z}. Their only regular element is their zero, since a = a*x*a is a product of three elements and S^3 = {z}. Thus, most semigroups are not regular.
 
We have counts of nil-2 semigroups -- 1 -- and of nil-3 semigroups, and they seem to be nearly all semigroups. But what of the remainder?

Groups are a subset of monoids, and most monoids are trivial-action ones: (group part) * (semigroup part) = (no-change permutation of semigroup part). Nontrivial action: permutation with changes, like (1,2) -> (2,1). Since semigroups increase roughly as (n/(2*log(n)+1))^(n^2), they increase much faster than groups, and the most common kind of monoid is (semigroup) + (identity).


Idempotent semigroups or bands are semigroups with all elements idempotent: x*x = x. They are obviously not nilpotent, since S*S = S.

In my researches, I found counts of semilattices by number of generators. A semilattice is a commutative idempotent semigroup. However, those numbers do not translate easily into counts of semilattices by number of elements.
Let us consider all the sets of subsets of some set, sets of subsets that are closed under union or intersection of those subsets. Of the subsets, I'll write {1,2,3} as 123 and {} as _. If nothing is in some set of subsets, I'll write a lone .

0: . , _
1: . , _, 1, _ 1
2: . , _, 1, 2, 12, _ 1, _ 2, _ 12, 1 2, 1 12, 2 12, _ 1 2, _ 1 12, _ 2 12, 1 2 12, _ 1 2 12
(semilattices: all)

Empty set present:
0: _
1: _, _ 1
2: _, _ 1, _ 2, _ 12, _ 1 2, _ 1 12, _ 2 12, _ 1 2 12
(semilattices: no identity)

Empty set absent:
0: .
1: . , 1
2: . , 1, 2, 12, 1 2, 1 12, 2 12, 1 2 12
(semilattices: no zero)

Complete set present:
0: _
1: 1, _ 1
2: 12, _ 12, 1 12, 2 12,_ 1 12, _ 2 12, 1 2 12, _ 1 2 12
(semilattices: no identity)

Complete set absent:
0: .
1: . , _
2: . , _, 1, 2, _ 1, _ 2, 1 2, _ 1 2
(semilattices: no zero)

Empty and complete sets both present:
0: _
1: _ 1
2: _ 12, _ 1 12, _ 2 12, _ 1 2 12
(semilattices: neither identity nor zero)

Empty set present, complete set absent:
0: (contradiction)
1: _
2: _, _ 1, _ 2, _ 1 2

Empty set absent, complete set present:
0: (contradiction)
1: 1
2: 12, 1 12, 2 12, 1 2 12

Empty set absent, complete set absent:
0: .
1: .
2: . , 1, 2, 1 2
(semilattices: neither identity nor zero)

For n generators, the limiting number of labeled ones is 2^(2^n/sqrt(n*pi/2)), and that number for unlabeled ones is the labeled-ones number divided by n!

The number of sets of subsets of n entities grows as 2^(2^n)
 
 Semilattice
Semi-lattice - Encyclopedia of Mathematics

That's related to:

 Lattice (order)
Lattice - Encyclopedia of Mathematics

A lattice is defined on a partially ordered set or poset. Its ordering goes into two operations:
  • Join - maximum - supremum - least upper bound
  • Meet - minimum - infimum - greatest lower bound
A semilattice has only one of these operations, and is either an upper one or a lower one depending on which one it has.

Considering an upper or join semilattice for definiteness: if it has a unique minimum element, then that element is the identity, and the semilattice is bounded. A finite one always has a unique maximum element, and that element is its zero.


Examples:

Nonnegative integers, rational numbers, real numbers between 0 and 1, other sets of numbers
  • Join: maximum
  • Meet: minimum
  • Top: the overall maximum if it exists
  • Bottom: the overall minimum if it exists
Positive integers
  • Join: least common multiple
  • Meet: greatest common divisor
  • Top: (none)
  • Bottom: 1
Boolean algebra: {false, true}
  • Join: Or
  • Meet: And
  • Top: true
  • Bottom: false
Subsets of some set:
  • Join: union
  • Meet: intersection
  • Top: that set
  • Bottom: the empty set
...
 
I will return again to the issue of subalgebras and homomorphisms / mappings of algebras onto other algebras.

The unary-function case is fairly simple.

Every finite algebra can be divided up into disjoint algebras that contain a limit cycle with optional approach trees that feed into it. An infinite algebra can also contain approach trees that have no limit cycle. An example of this is the integers with function (arg)+1. This produces a single unbranched approach tree that extends over the entire set. With function (arg)+2, one gets two unbranched approach trees: all even numbers and all odd numbers. These examples show that approach trees for infinite sets can have no beginning as well as no end.

Subalgebras are simple: limit cycles, together with, optionally, parts of the approach trees such that if a tree member is in the subalgebra, all downstream tree members are also in it.

Homomorphisms are interesting. One can map a subalgebra onto a single element with the others all remaining separate. Or with all the approach trees' members separate, one can do an alternating homomorphism in the limit cycle. For length 6, there are two nontrivial alternating ones, corresponding to divisors 2 and 3:

1 2 3 4 5 6 -> 1
1 3 5 -> 1, 2 4 6 -> 2
1 4 -> 1, 2 5 -> 2, 3 6 -> 3
1 -> 1, 2 -> 2, 3 -> 3, 4 -> 4, 5 -> 5, 6 -> 6
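These alternating homomorphisms are easy to verify mechanically. A sketch for the length-6 limit cycle, modeled as (Z mod 6, x -> x+1), with candidate maps f(x) = x mod d:

```python
# Sketch: homomorphisms of the unary algebra (Z mod 6, x -> x+1) onto
# smaller cycles. f is a homomorphism when f(x+1 mod 6) = f(x)+1 mod d,
# and each divisor d of 6 gives one via f(x) = x mod d.
def is_hom(d):
    f = lambda x: x % d
    return all(f((x + 1) % 6) == (f(x) + 1) % d for x in range(6))

for d in (1, 2, 3, 6):
    assert is_hom(d)
assert not is_hom(4)   # non-divisors fail
print("cycle of length 6 maps onto cycles of lengths 1, 2, 3, 6")
```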
 
Now to binary algebras or groupoids.

It is easy to show by construction that a groupoid can contain arbitrarily many subgroupoids with arbitrary sizes. However, decomposition into subgroupoids is *not* unique, and a groupoid may contain elements that are outside of some decomposition, though those elements may be inside some other one. Consider

1 2 3
2 1 1
3 1 1

This groupoid has three subgroupoids: (1), (1, 2) and (1,3). But (2), (3), and (2, 3) are not subgroupoids.
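A brute-force confirmation of that subgroupoid list (the table below is the one just given, with rows as left factors):

```python
# Sketch: brute-force proper-subgroupoid search for the 3-element
# operation table above, with elements 1, 2, 3.
from itertools import combinations

T = {(1, 1): 1, (1, 2): 2, (1, 3): 3,
     (2, 1): 2, (2, 2): 1, (2, 3): 1,
     (3, 1): 3, (3, 2): 1, (3, 3): 1}

def closed(sub):
    return all(T[(a, b)] in sub for a in sub for b in sub)

# proper subgroupoids: nonempty closed subsets smaller than the whole set
subs = [set(c) for n in (1, 2) for c in combinations((1, 2, 3), n) if closed(set(c))]
print(subs)   # -> [{1}, {1, 2}, {1, 3}]
```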

If a subgroupoid contains an identity, then that identity need not be the identity of the whole groupoid, if it has one at all. Likewise for zeros and ideals, and likewise for left and right ones.

But if a groupoid has an identity and a subgroupoid has that element in it, then that element is also an identity for the subgroupoid. Likewise for zeros and ideals, for left and right.

Turning to homomorphisms, it's hard to make general statements. But if the groupoid has an ideal J, one can construct the homomorphism f(J) = some new element j and f(G - J element) = itself. Then j becomes a zero for the result groupoid.

-

An interesting question is how many groupoids are there without subgroupoids or else without nontrivial homomorphisms onto other groupoids. Nontrivial meaning something other than the two trivial ones: f(G element) = same element with the same operation, and f(G) = one element for all as the order-1 groupoid.

For subgroupoids, those of order-2 groupoids have order 1, and every idempotent element (x*x = x) has its own subgroupoid: {x}. Out of the 16 total groupoids, only 4 have no proper subgroupoids, groupoids like
2 2
1 1

For homomorphisms, an order-2 groupoid may have a homomorphism that reverses the two elements. This is an automorphism, an isomorphism of an algebra onto itself; in turn, an isomorphism is a homomorphism where the result algebra has the same size as the source algebra -- every source element goes onto a distinct result element.

Of the order-2 groupoids, 12 do not have this homomorphism, the only possible nontrivial one. One of the 4 that do have it is the above one,
2 2
1 1
 
Turning to quasigroups and Latin squares, for order n, the maximum size of a subsquare is floor(n/2).

This is evident from a counting argument. Consider a subsquare of size m with elements 1 to m extending over indices 1 to m in both dimensions. The rest of the square is divided into two rectangles, (m+1 to n) * (1 to m) and (1 to m) * (m+1 to n), both with elements m+1 to n. The remaining part of the original square has extent m+1 to n in both dimensions, and it contains elements 1 to m in each row and column, plus others if it is large enough.

That means that this second square has a size at least m, and since its size is n - m, that means that n >= 2m.

If the second square's size is exactly n/2, then it also is a Latin square, and the remaining squares are also Latin squares. This is a Latin square with four Latin subsquares:
1 to n/2 _ n/2+1 to n
n/2+1 to n _ 1 to n/2

This pattern leads to a homomorphism: 1 to n/2 -> 1 and n/2+1 to n -> 2. That gives us an order-2 Latin square:
1 2
2 1
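A sketch of the block construction for n = 4, together with the homomorphism onto the order-2 square (the indexing scheme is my own, with elements renamed 0 to n-1):

```python
# Sketch: the 2 x 2 block construction for even n (here n = 4) and its
# homomorphism onto the order-2 Latin square.
n = 4
h = n // 2
# L[r][c] in 0..n-1; blocks of size h arranged as [A B; B A]
L = [[(r % h + c % h) % h + (0 if (r < h) == (c < h) else h) for c in range(n)]
     for r in range(n)]

# Latin square check: every row and column is a permutation of 0..n-1
assert all(sorted(row) == list(range(n)) for row in L)
assert all(sorted(L[r][c] for r in range(n)) == list(range(n)) for c in range(n))

# f maps 0..h-1 -> 0 and h..n-1 -> 1; it is a homomorphism onto M
f = lambda x: 0 if x < h else 1
M = [[0, 1], [1, 0]]   # the order-2 Latin square
assert all(f(L[r][c]) == M[f(r)][f(c)] for r in range(n) for c in range(n))
print("4 x 4 block Latin square maps onto the order-2 square")
```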

One can make similar constructions for other divisors. For n divisible by 3, one gets 1 to n/3 and n/3+1 to 2n/3 and 2n/3+1 to n.
 
Turning to semigroups, every finite semigroup has at least one idempotent element, meaning that every semigroup with order greater than 1 has at least 1 proper subsemigroup.

Looking at "Semigroup with two elements": of the 5 semigroups, 3 have 2 subsemigroups, and 2 have 1 subsemigroup.

Looking at "Semigroup with three elements": of the 24 semigroups, all but 1 have order-2 subsemigroups. That lone exception is the cyclic group Z3.

Turning to homomorphisms, only 2 of the order-2 ones admit the flip automorphism.

Looking at the order-3 ones, all of them but Z3 have homomorphisms onto order-2 semigroups, with 2 initial elements yielding 1 result element, and the remaining initial element yielding the remaining result element.

-

Turning to monoids, the group-semigroup decomposition of finite ones has this operation table:
G S
S S

This has a homomorphism to what may be called the boolean monoid, the operation table for conjunction (and) and disjunction (or).

It's obvious from this one that the group part and the semigroup part form separate subsets of the monoid.
 
We get to groups, and we find some big results. From Lagrange's theorem, the order of a subgroup evenly divides the order of its containing group.

Since every element generates a cyclic subgroup, every element's order evenly divides the order of the group.

There is a converse theorem, Cauchy's theorem. It states that for every prime number in the factors of a group's order, there is at least one element with that order.

Sylow's theorems go further.

The first of them states that for every prime p in the factorization of a group's order, with multiplicity m, the group contains at least one subgroup with order p^m, a "Sylow subgroup".

The second of them states that all Sylow subgroups of order p^m are conjugate, related by g.(subgroup).g^(-1) for some g in the group.

The third of them states constraints on how many of these conjugate subgroups there are:
  1. (this number) = 1 mod p
  2. (this number) evenly divides (order of the group) / (p^m), the "index" of the Sylow subgroups in the group.
  3. (this number) = (order of the group) / (order of the "normalizer" of any Sylow subgroup). The normalizer of a subgroup is the set of all group elements that commute with the subgroup as a whole: g*s = t*g where s and t are in the subgroup.

-

Going from subgroups to homomorphisms, there are two trivial ones, to the group itself, element by element, and to the identity group. In-between ones only exist if the group has a nontrivial "normal subgroup", one that is equal to all of its conjugates. This subgroup is the "kernel" of the homomorphism, all the elements that map onto the identity.

Groups without nontrivial normal subgroups are called "simple groups", and all of them are now known. The proof is some 10,000 pages long, however.

There are several infinite families of these groups, and 26 "sporadic" ones, with the largest one being the "Monster" one. Its order is about 8*10^53, and its existence was discovered using fancy group-theory techniques.
 
Back to rings and possible nonabelian additive groups.

(a + b) * (c + d) = a*c + a*d + b*c + b*d = a*c + b*c + a*d + b*d
depending on which distributive law one applies first. Comparing the two expansions: a*d + b*c = b*c + a*d

For ring R, R*R must be commutative under ring addition.

There are further constraints.

For constant a and variable x, the map x -> a*x is a group homomorphism of the additive group (R,+) onto an abelian subgroup of it. That means that the group must have a normal subgroup with an abelian quotient group, a quotient isomorphic to some subgroup of the group. Likewise for x -> x*b.

So a nonabelian simple group has only the zero ring.

In general, x(i)*y(j) = c(i,j) for quotient-group elements i, j -- the left and right sides may have different quotient groups.

-

Let us consider an additive group with quotient group Z2 -- the ring's elements are divided between Re, the additive subgroup, and Ro, its coset.

Select one element out of R: c, an element where c + c = 0, if such an element exists.

Then Re*Re = Re*Ro = Ro*Re = 0 and Ro*Ro = c

This satisfies associativity.

For Z(m), we get something analogous: R decomposes into subsets R(i) for i = 0 to m-1 where R(0) is a normal subgroup with Z(m) as its quotient group. Then,

R(i)*R(j) = {(i*j mod m)*c} where m*c = 0

Let c have quotient-group element u. Then we have (i*j*u mod m). Testing associativity, we find for R(i), R(j), and R(k) on both sides (i*j*k*u mod m) c.
 
I'll now consider another approach for finding semigroups. Every set of matrices with matrix multiplication that is closed under that operation forms a semigroup, because of associativity. Not just state-transition matrices, but *any* matrices.

But is the converse true? For a semigroup, can we find a set of matrices that implements it? It turns out that we can, by generalizing the "regular representation" of a group.

Use indices for the elements (1,2,3,...) and consider z = (a*b)*x = a*(b*x): setting y = b*x gives z = a*y. Make a vector from x that is 1 at index x and 0 elsewhere. Now make a matrix from b that is 1 at indices (y,x) when y = b*x and 0 otherwise; likewise for a with z = a*y. One gets matrices composed of 0's and 1's.

I'll consider the five 2-element semigroups:

1 1, 1 1 -- 1 1, 0 0 - 1 1, 0 0
1 1, 2 2 -- 1 1, 0 0 - 0 0, 1 1
1 2, 1 2 -- 1 0, 0 1 - 1 0, 0 1
1 1, 1 2 -- 1 1, 0 0 - 1 0, 0 1
1 2, 2 1 -- 1 0, 0 1 - 0 1, 1 0
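Here is how those matrices can be generated mechanically, shown for the left-zero semigroup (the second line above); the code also checks that the matrices multiply the same way the elements do:

```python
# Sketch: the 0/1 "regular representation" matrices M(a)[y][x] = 1 iff y = a*x,
# built for the left-zero semigroup of order 2 (table rows "1 1, 2 2").
from itertools import product

elems = [1, 2]
op = lambda a, x: a   # left-zero semigroup: a*x = a

def rep(a):
    return [[1 if y == op(a, x) else 0 for x in elems] for y in elems]

M = {a: rep(a) for a in elems}

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# the matrices multiply the same way the elements do: M(a).M(b) = M(a*b)
assert all(matmul(M[a], M[b]) == M[op(a, b)] for a, b in product(elems, repeat=2))
print(M)
```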

The matrices are not necessarily unique, and they may have zero determinant.

Representations can be sums of other representations: D(sum,a) = T . D(bdsum,a) . T^(-1) for an invertible transform matrix T.

D(bdsum,a) is a block-diagonal matrix where each block is matrix D(i,a) for representation i.

If one cannot proceed further with this decomposition, one has an irreducible representation or irrep. Every rep can be decomposed as a sum of irreps.

For groups, for an irrep, if a matrix X satisfies X.D(a) = D(a).X for rep matrix D(a) for every element a, then X is a multiple of the identity matrix. Every one-dimensional rep is obviously irreducible.
 
For groups, the number of irreps is equal to the number of "conjugacy classes". Take an element a, then go through all x in the group:

A(a) is the set of all x*a*inv(x) -- a conjugated by x.

For abelian groups, every element is in its own conjugacy class and every irrep has size one -- as many irreps as elements.

The "character" of a rep is the trace of each matrix D(a) for each element a. Trace = sum of diagonal elements. All matrices for the same conjugacy class have the same trace value, so we can interpret a character as a list of rep-matrix traces for each class.

The matrix of irrep characters is thus a square matrix that satisfies

sum over irreps i of char(i,A)*cjg(char(i,B)) = (N/N(A))*d(A,B)
sum over classes A of N(A)*char(i,A)*cjg(char(j,A)) = N*d(i,j)

where d(i,j) is the Kronecker delta function, 1 if i = j, 0 otherwise. N is the order of the group, N(A) the size of class A.
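As a check of both relations, here is the character table of S3 (a standard example, not from this document; its characters are real, so the conjugation drops out):

```python
# Sketch: checking both orthogonality relations on the character table of S3.
# Classes: identity (size 1), transpositions (size 3), 3-cycles (size 2).
N = 6
sizes = [1, 3, 2]
chars = [
    [1,  1,  1],   # trivial irrep
    [1, -1,  1],   # sign irrep
    [2,  0, -1],   # 2-dimensional irrep
]

# sum over classes A of N(A)*char(i,A)*char(j,A) = N*d(i,j)
for i in range(3):
    for j in range(3):
        s = sum(sizes[A] * chars[i][A] * chars[j][A] for A in range(3))
        assert s == (N if i == j else 0)

# sum over irreps i of char(i,A)*char(i,B) = (N/N(A))*d(A,B)
for A in range(3):
    for B in range(3):
        s = sum(chars[i][A] * chars[i][B] for i in range(3))
        assert s == (N // sizes[A] if A == B else 0)
print("S3 character table satisfies both orthogonality relations")
```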


The size of a rep matrix is char(identity), and for an irrep, this evenly divides N. Also, for irreps i and classes A, char(i,A) is an "algebraic integer", a root of a monic polynomial with integer coefficients, and so is N(A)*char(i,A)/char(i,identity), if I remember correctly.

char(i,A) = sum of eigenvalues of rep matrix D(i,a in A). For a finite group, these eigenvalues are powers of w, a primitive nth root of unity, for n = order of a.

All this discussion is good for groups, and not necessarily for non-group semigroups or monoids.
 
Returning to Latin squares and homomorphisms / mappings, it's hard to find out anything online. But I think I have a proof of a restriction on Latin-square homomorphisms.

A homomorphism partitions its source set into subsets, each one corresponding to an element of its destination set.

For a Latin square, let us suppose that the set of elements gets partitioned into subsets 1, 2, 3, ... with possibly varying sizes, n1, n2, n3, ..., with n1 being the maximum. The (1)*(1) subdivision has dimensions n1*n1 and gets mapped onto a subset with n1 elements. The same holds for (1)*(2), with dimensions n1*n2 where n2 <= n1. It must have distinct elements along its long direction, elements that must be part of some subset with size n1. These elements must also be in a subset distinct from the (1)*(1) subset.

Continuing for the other (1)*(k), we find a strip of distinct subsets, as many as there are subsets in the mapping partition, a number that I will call m. There are thus n1*m elements in the original square, and that equals the original number only if all the subsets have the same size.

Thus, a Latin-square homomorphism partitions the square into equal-sized subsquares with sizes that evenly divide the original square's size, like what a group homomorphism does.
 
For counting semigroups, I've found "Enumerating 0-simple semigroups", with its source code in a GitHub repository by ChristopherRussell (Chris Russell).

A "simple" semigroup is one with no nontrivial ideals. A "0-simple" one has a zero but no other nontrivial ideals. Rees matrix semigroups are 0-simple.

CR also defines semigroup congruences: an equivalence relation on S, viewed as a set of pairs of elements (a,b), such that component-by-component combinations are also in that set.

For (a,b) and (c,d) in C, (a*c,b*d) must also be in C. The trivial one: (a,a) for a in S. The universal one: (a,b) for a,b in S.

Homomorphism: all (a,b) with f(a) = f(b) is a congruence.

Congruence-free semigroups: analog of simple groups.

Finite semigroup S is congruence-free if and only if one of the following holds: S is a simple group, S has order <= 2, S is isomorphic to a Rees matrix semigroup(G,I,J,P) where G is the trivial group and P is regular (no rows or columns all 0) with all rows distinct and all columns distinct. Its elements are (g,i,j) with g in G, i in I, and j in J.

P is a matrix indexed by I and J, with elements in group G or else 0.

Rather complicated operation: (g1,i1,j1) * (g2,i2,j2) = (g1*P(i2,j1)*g2, i1, j2) if P(i2,j1) != 0, 0 otherwise. 0 * (g,i,j) = (g,i,j) * 0 = 0 * 0 = 0.

H-trivial: every element's generated left ideal and right ideal have only that element in their intersection. Left ideal: x union S*x; right ideal: x union x*S.

For a 0-simple semigroup, that is equivalent to its group being trivial. Its elements thus reduce to (i,j) or 0 with operation

(i1,j1) * (i2,j2) = (i1,j2) if P(i2,j1) != 0, 0 otherwise. 0 * (i,j) = (i,j) * 0 = 0
 
I've found [2107.13215] Enumerating finite rings

For rings of order p^n for some prime p, there are

p^( (4/27)*n^3 + O(n^(5/2)) )

of them. Nilpotent ones, non-nilpotent ones, and ones with unity all have the same form of expression for their counts.

For commutative rings,

p^( (2/27)*n^3 + O(n^(5/2)) )

with the same form for nilpotent ones, non-nilpotent ones, and ones with unity.

Nilpotent: some number of elements multiplied together always yields 0. The minimum number of them is the degree. Degree 1: the zero ring or null ring. Degree 2: a*b = 0 for all a, b.

Rings with unity are non-nilpotent, since for unity 1, 1^m = 1 for all m.

There is also a result that in the limit of high prime powers, most rings tend to have relatively large Jacobson radicals. A ring's Jacobson radical is the intersection of all its maximal one-sided ideals, maximal meaning that the only one-sided ideal properly containing it is the entire ring.


However, the paper does not give numbers for how many rings there are, and I can only find numbers for all rings for prime-power exponents 1, 2, and 3. Christof Noebauer has numbers for all rings with unity of order 3^4, all rings of order 2^4, and all rings with unity of order 2^5. He did not attempt to separately count nilpotent and non-nilpotent rings, or to give the sizes of rings' Jacobson radicals.
 
One can construct nil-3 rings like one constructs nil-3 semigroups: generators a(i) and b(i) with
a(i)*a(j) = sum over k of c(i,j,k) * b(k)
a(i)*b(j) = b(i)*a(j) = b(i)*b(j) = 0


For prime p, I've found complete constructions of rings with orders p, p^2 (B. Fine) and p^3 (Antipkin and Elizarov), so I'm able to find out which ones are nilpotent. But I don't have any for any higher powers.

So I compared this paper's asymptotic formulas with CN's results, and I found a very rough fit, to within some rather large error bounds.
 