
How many groups and semigroups and rings and the like - abstract algebra

I'll now consider asymptotic behavior.

Let us say that the number of irreducible algebras of order n increases in polynomial fashion:

Nirr(n) = n^a

Now find the total number.

Ntot(n) = sum over divisor sets of product over divisors d of Nirr(d)

That is,
Ntot(n) = (number of divisor sets of n) * n^a
since each divisor set {d1, d2, ...} with d1*d2*... = n contributes d1^a * d2^a * ... = n^a.

From  Multiplicative partition, the asymptotic behavior of the number of divisor sets is approximately n, so we find
Ntot(n) = n^(a+1)

So the total number tends to be dominated by the reducible algebras.
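As a sanity check on that bookkeeping, here is a small Python sketch (mine, not from the thread; all names are made up) that enumerates the multiplicative partitions of n and totals the products of Nirr over the parts under the toy model Nirr(n) = n^a. Since the parts of each divisor set multiply to n, every divisor set contributes exactly n^a, so the total is (number of divisor sets) * n^a.

def mult_partitions(n, min_factor=2):
    """Yield the multiplicative partitions of n as tuples of factors >= 2."""
    yield (n,)
    for d in range(min_factor, int(n ** 0.5) + 1):
        if n % d == 0:
            for rest in mult_partitions(n // d, d):
                yield (d,) + rest

def n_tot(n, a):
    """Total count under the toy model N_irr(d) = d^a."""
    total, count = 0, 0
    for parts in mult_partitions(n):
        count += 1
        prod = 1
        for d in parts:
            prod *= d ** a
        total += prod
    return total, count

a = 2
for n in (12, 24, 96):
    total, count = n_tot(n, a)
    print(n, count, total, total == count * n ** a)   # last column is always True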


But exponential or factorial increase is another story.

Consider Nirr(n) = N0*exp(c0*n^a) or N0*n^(c0*n^a)

It is evident that the irreducible term dominates the total value. So most algebras are irreducible, and that complicates the task of finding them.
 
I've found some formulas for some algebras, and they are usually very complicated, because of the combinatorics of isomorphic algebras. Most of them have exponential-of-power increase, making most of the algebras irreducible.

There are some partial exceptions. Abelian (commutative) groups, nilpotent groups, and rings all decompose into prime-power ones, but the number of prime-power ones increases rapidly with the power.

For prime powers, we only need to do partitions of integers:  Partition (number theory)

For Nirr(n) = n^a, Ntot(n) is roughly p(n)*n^a, where p(n) is the number of integer partitions of n.

While multiplicative partitions are ~ n, integer partitions increase more rapidly:

\( \frac{1}{4n\sqrt{3}} \exp \left( \pi \sqrt{\frac{2n}{3}} \right) \)

Approximately exp(n^(1/2))

But if Nirr is exponential-in-power, then it easily dominates that total number for powers greater than 1/2. So most prime-power groups and rings are irreducible.
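A quick numerical check of that growth rate, in Python (my own sketch; the partition counts come from the standard coin-style dynamic program, and the formula is the one quoted above):

from math import exp, pi, sqrt

def partition_counts(n_max):
    """p(0..n_max): number of integer partitions, by dynamic programming."""
    p = [1] + [0] * n_max
    for k in range(1, n_max + 1):          # allow parts of size k
        for m in range(k, n_max + 1):
            p[m] += p[m - k]
    return p

def hardy_ramanujan(n):
    """The asymptotic exp(pi*sqrt(2n/3)) / (4*n*sqrt(3))."""
    return exp(pi * sqrt(2 * n / 3)) / (4 * n * sqrt(3))

p = partition_counts(200)
for n in (10, 50, 100, 200):
    print(n, p[n], round(hardy_ramanujan(n)), round(hardy_ramanujan(n) / p[n], 3))
# The ratio drifts toward 1, and p(n) is indeed roughly exp(n^(1/2))-sized.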
 
A few details that I'll return to.

One finds a construction for finite unital rings (those with 1) that is much like the one for finite monoids: split into (group part) + (semigroup part). For a finite monoid with group part G and semigroup part S:
G*G = G
G*S = (permutation of S), S*G = (permutation of S)
S*S <= S

Likewise for unital rings under multiplication, with group part G and non-unit part D:
G*G = G
G*D = (permutation of D), D*G = (permutation of D)
D*D <= D

-

Also, the average number of abelian groups is asymptotically zeta(2)*zeta(3)*zeta(4)*zeta(5)* ... = 2.29486... using the Riemann zeta function.
A constant.
Source: A000688 - OEIS - Number of Abelian groups of order n; number of factorizations of n into prime powers.

I have a proof of this result. It uses a weighted average that one stretches farther and farther out, with weights n^(-c) for c > 1. As c -> 1, the sum of the weights, which is the Riemann zeta function zeta(c), goes to infinity as 1/(c-1) + (Euler-Mascheroni constant: 0.5772...) -  Riemann zeta function

\( {\bar N(c)} = \frac{\sum_{n=1}^\infty n^{-c} N(n)}{\sum_{n=1}^\infty n^{-c}} = \prod_p \frac{\sum_{m=0}^\infty p^{-mc} N(p^m)}{\sum_{m=0}^\infty p^{-mc}} = \prod_p \frac{\sum_{m=0}^\infty p^{-mc} P(m)}{\sum_{m=0}^\infty p^{-mc}} \)
decomposing n into products of powers of primes p and using the number of integer partitions P(m) for each m.
\( {\bar N(c)} = \prod_p \prod_{k=2}^\infty \frac{1}{1-p^{-kc}} = \prod_{k=2}^\infty \zeta(kc) \)
using the generating function for that number of partitions.

The limit for c at 1 is that aforementioned product of zeta-function values.
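Here is a quick numerical check of that constant in Python with mpmath (an assumption on my part; any arbitrary-precision zeta implementation would do):

from mpmath import mp, mpf, zeta

mp.dps = 25
prod = mpf(1)
for k in range(2, 80):      # zeta(k) - 1 ~ 2^(-k), so 80 factors is plenty
    prod *= zeta(k)
print(prod)                 # prints a value close to 2.29486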
 
I'll now resolve the question of what groups are self-converse.

Is the converse operation, with a*b replaced by b*a, isomorphic to the original?

Yes: interchange every element and its inverse. Since inv(a*b) = inv(b)*inv(a), the map x -> inv(x) is an isomorphism onto the converse operation. So every group is self-converse or self-dual.

-

An interesting curiosity of the counts of these algebras is that, in several cases, (labeled count for n) / (isomorphism-class count for n) -> n!

This means that doing a non-identity permutation of the elements will almost always produce a different-looking operation table. That means that most algebras don't have much internal symmetry. Meaning that their automorphism groups (relabelings that look the same) are usually the identity group.

That's evident for general groupoids, unital groupoids, quasigroups, loops, semigroups, and monoids, and for commutative groupoids and commutative unital groupoids. I don't have anything of the sort for commutative quasigroups or loops, but for commutative semigroups and monoids, I find a clear departure:

(isomorphism classes for n) * n! / (labeled ones for n) ~ 4/3 instead of 1

This means that some nontrivial symmetries are common there.

[1301.6023] The semigroups of order 9 and their automorphism groups -- it didn't consider commutative semigroups, monoids, or commutative monoids. But for the larger semigroups, no symmetry was the most common case by a large margin.
 
Turning to automorphism groups, they are subgroups of the groups of permutations of all an algebra's elements:

Sym(|G|) >= Aut(G)

For groups, the inversion property makes possible "inner automorphisms", those done by conjugation. The inner automorphism generated by an element x sends every a to x*a*inv(x).

But the group Inn(G) of inner automorphisms need not equal G. If one tries this with an element x that commutes with every other element, then x*a*inv(x) = a -- the identity automorphism. The set of such elements is called the center of the group, Z(G), and

Inn(G) = G / Z(G)

What about the other side, "outer automorphisms"? They also exist, and the group of them is Out(G) = Aut(G) / Inn(G).

As an example, consider Z3: {0, 1, 2} mod 3. It is abelian, and the only inner automorphism is the identity one. But this group still has some automorphisms: the identity one, and 0 -> 0, 1 -> 2, 2 -> 1 or, for short, x -> 2*x mod 3. The second one, the flip one, when repeated gives the identity one again, giving the group Z2.

In general, the automorphism group of Z(n) is Zx(n), the group of multiplication of all the members of Z(n) that are relatively prime to n. For products of cyclic groups, the automorphism group becomes even more complicated, since for Z(n1)*Z(n2), one can do (x1,x2) -> (a11*x1+a12*x2, a21*x1+a22*x2), for some constants a11, a12, a21, a22.
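A small Python sketch of that fact (mine; the names are made up): the maps x -> u*x mod n for units u are checked to be additive bijections of Z(n), and their count is Euler's phi(n).

from math import gcd

def aut_maps(n):
    """The automorphisms of Z(n), as dictionaries x -> u*x mod n for units u."""
    units = [u for u in range(1, n) if gcd(u, n) == 1]
    return [{x: (u * x) % n for x in range(n)} for u in units]

def is_automorphism(f, n):
    """Check that f is a bijection of Z(n) that respects addition mod n."""
    bijective = sorted(f.values()) == list(range(n))
    additive = all(f[(x + y) % n] == (f[x] + f[y]) % n
                   for x in range(n) for y in range(n))
    return bijective and additive

for n in (3, 5, 8, 12):
    maps = aut_maps(n)
    assert all(is_automorphism(f, n) for f in maps)
    print(n, "automorphisms:", len(maps))    # phi(n); for n = 3 this is the Z2 above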
 
I will now consider homomorphisms again, f(groupoid) = (another groupoid)

f(A) = a', f(B) = b', ... where the domain, the set of the source groupoid, is partitioned into subsets A, B, ... that map onto distinct a', b', ...

f(subset with identity) = identity
f(subset with zero) = zero
f(subset with both identity and zero) = (identity = zero), implying f(groupoid) = (1-element groupoid), a trivial homomorphism. The other trivial homomorphism is f(a) = a, the identity one.

f(subset with left identity) = left identity, f(subset with left zero) = left zero, and likewise for the right-hand side. An ideal may map onto a zero.

Groupoids, semigroups, and monoids have no size constraints on their domain partitioning.

A zero groupoid (a*b = z - the same for all a, b) maps onto another zero groupoid.

A degree-3 nilpotent semigroup maps onto the zero semigroup.
A*A <= B
A*B = B*A = B*B = {z} where z is in B
f(A) = a, f(B) = b
a*a = a*b = b*a = b*b = b

A monoid broken down into a group G and a nonempty semigroup S maps onto the Boolean monoid.
G*G = G
G*S = S*G = S (permutations of the original set)
S*S <= S
f(G) = g, f(S) = s
g*g = g, g*s = s*g = s*s = s
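A concrete check of that collapse, in Python (my example, not one from the thread): the multiplicative monoid of Z_12 splits into its units (the group part) and non-units (the semigroup part), and sending units to g and non-units to s is a homomorphism onto the Boolean monoid.

from math import gcd

n = 12
def f(x):
    return 'g' if gcd(x, n) == 1 else 's'    # g for the group part, s for the rest

bool_monoid = {('g', 'g'): 'g', ('g', 's'): 's', ('s', 'g'): 's', ('s', 's'): 's'}

ok = all(f((x * y) % n) == bool_monoid[(f(x), f(y))]
         for x in range(n) for y in range(n))
print("homomorphism onto the Boolean monoid:", ok)    # True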
 
I'll now consider homomorphisms of Latin squares.

Consider x*A = A' where A and A' are domain partitions.
From the Latin-square property, |A'| = |A|
From the homomorphism property, A' is either A or disjoint with A.

For all domain partitions A and B, there will be an x such that x*A = B.

This means that f(Latin square L) will also be a Latin square, L', what may be called the quotient square. Much like how a group homomorphism is related to a normal subgroup of that group and its quotient group.

Since the domain partitions all have the same size, |L'| evenly divides |L|, with the quotient being the size of each domain partition.

Consider all the order-4 reduced Latin squares:
1234 1234 1234 1234
2143 2143 2341 2413
3412 3421 3412 3142
4321 4312 4123 4321

The second, third, and fourth ones are isomorphic: 2nd to 3rd: 2->3, 3->2, 4->4 and 4th to 3rd: 2->2, 4->3, 3->4

The first one's automorphisms: the identity, the three flips 2->2, 3->4, 4->3 and 2->3, 3->2, 4->4 and 2->4, 3->3, 4->2, and the two 3-cycles of {2,3,4} -- the full S3.

The other ones' nontrivial automorphisms, respectively: 2->2, 3->4, 4->3 and 2->4, 3->3, 4->2 and 2->3, 3->2, 4->4

The domain partitions of each one: first: 12,34 and 13,24 and 14,23; second: 12,34; third: 13,24; fourth: 14,23

The quotient square for all of them:
12
21

One cannot do similar reductions for orders 3 or 5 and other prime orders, but one may be able to for order 6 and other composite orders.
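Here is a brute-force check of that quotient for the first order-4 square, in Python (my sketch): collapse the blocks {1,2} -> 1 and {3,4} -> 2 and confirm that the collapsed product depends only on the blocks.

rows = [[1,2,3,4], [2,1,4,3], [3,4,1,2], [4,3,2,1]]     # the first square above
square = {(i+1, j+1): rows[i][j] for i in range(4) for j in range(4)}
block = {1: 1, 2: 1, 3: 2, 4: 2}                        # the domain partition 12,34

quotient, well_defined = {}, True
for (x, y), z in square.items():
    key = (block[x], block[y])
    if key in quotient and quotient[key] != block[z]:
        well_defined = False
    quotient[key] = block[z]

print(well_defined)      # True
print(quotient)          # {(1, 1): 1, (1, 2): 2, (2, 1): 2, (2, 2): 1} -- the square 12 / 21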
 
Turning to subgroupoids, there is no further size constraint for general groupoids, semigroups, or monoids.

Subsquares of Latin squares are a different story, however.

For subsquare L' of Latin square L, consider each a such that a*S = A. All the result sets A have |A| = |L'| and are disjoint.

That means that the order of L' evenly divides the order of L.

This result is familiar from group theory as Lagrange's theorem: |subgroup of group G| evenly divides |G|.

The previous post's Latin squares have proper subsquares
First one: 12, 13, 14, second one: 12, third one: 13, fourth one: 14

Thus, one cannot have proper Latin subsquares for orders 3, 5, and other prime orders, but one may be able to for order 6 and other composite orders.
 
Seems like my previous post about subsquares is mistaken. The order of a subsquare can be any order not greater than half the order of the original square: |L'| <= (1/2)*|L|

Latin square - Encyclopedia of Mathematics

I recently saw an example, but I don't recall it, so I'll reconstruct it, or at least an approximation of it.

12345
21453
35124
43512
54231

It has a subsquare,

12
21

but its order, 2, does not divide the original square's order, 5.

I haven't been able to find much on Latin-square homomorphisms, so I can't say if I made a mistake there. Thinking over my proof, it may be flawed. For a Latin square, subsets A and B yield only |A*B| >= max(|A|, |B|).
 
That Encyclopedia of Mathematics article on Latin squares mentioned this lower bound for their number, a "super factorial":

n! * (n-1)! * ... 2! * 1!

The corresponding lower bound for the number of reduced Latin squares is (n-2)! * (n-3)! * ... 2! * 1!

The number of Latin squares gets very big very fast. The counts from the empty (order-0) square up to order 7:

Latin squares: 1, 1, 2, 12, 576, 161280, 812851200, 61479419904000, ...
Super factorials: 1, 1, 2, 12, 288, 34560, 24883200, 125411328000, ...

Sources: A002860 - OEIS (Latin squares) and A000178 - OEIS (super factorials)

The super factorial has asymptotic value
exp(zeta'(-1)-3/4-3/4*n^2-3/2*n)*(2*Pi)^(1/2+1/2*n)*(n+1)^(1/2*n^2+n+5/12)

where
zeta'(-1) = 1/12 - log(Glaisher's constant)

Also,
exp(1/12-n*(3*n+4)/4)*n^(n*(n+2)/2+5/12)*(2*Pi)^((n+1)/2)/A

A = Glaisher's constant ~ 1.28243
 Glaisher–Kinkelin constant
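A numerical check of that asymptotic, in Python with mpmath (my sketch; mpmath is just a convenient source of Glaisher's constant and arbitrary precision):

from mpmath import mp, mpf, pi, exp, log, factorial, glaisher

mp.dps = 30
zeta_prime_m1 = mpf(1)/12 - log(glaisher)      # zeta'(-1)

def superfactorial(n):
    result = mpf(1)
    for k in range(1, n + 1):
        result *= factorial(k)
    return result

def asymptotic(n):
    n = mpf(n)
    return (exp(zeta_prime_m1 - mpf(3)/4 - 3*n**2/4 - 3*n/2)
            * (2*pi)**(mpf(1)/2 + n/2)
            * (n + 1)**(n**2/2 + n + mpf(5)/12))

for n in (5, 10, 20):
    print(n, superfactorial(n) / asymptotic(n))    # ratios close to 1, approaching it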

Doing some numerical experimenting, I find an asymptotic estimate of the number of Latin squares of order n:

n^((1/2)*n^2 - (3/2)*n + 2)
 
An odd bit of terminology that I must note.

About semigroup homomorphisms, I find something called a semigroup congruence, an equivalence relation that is compatible with the semigroup operation: a ~ x and b ~ y imply a*b ~ x*y

A congruence partitions a semigroup into equivalence classes of elements that are congruent to each other. The products of equivalence classes form a "quotient semigroup".

This is another name for a semigroup homomorphism's partition of the semigroup's set into sets that share a homomorphism value.


I've found something called the Rees quotient semigroup. A semigroup S having an ideal J means J*S <= J and S*J <= J. The RQS consists of collapsing the ideal into a single element, z, and keeping all the others separate. The element z is the zero of the new semigroup.

As an example, consider the semigroup of multiplication of integers. An ideal of that semigroup is the set of even integers. The RQS is {e, 1, -1, 3, -3, 5, -5, ... every other odd integer} where the even integers were collapsed into e. Thus, e*(anything) = (anything)*e = e, and it's obvious that e is this semigroup's zero.

That looks rather unnatural, but it is a real mathematical object.

A more "natural" one might be to use a homomorphism f(every even integer) = e and f(every odd integer) = o.

Then we get e*e = e*o = o*e = e, o*o = o -- the Boolean semigroup, as I like to call it. Its two elements include its identity, o, and its zero, e.

Wikipedia has a big list of  Special classes of semigroups
 
It's harder for me to estimate the asymptotic behavior of the number of semigroups, but there are formulas for the nil-3 case, and one can use them to come up with estimates.

To lowest order, the number of labeled nil-3 semigroups, both in general and commutative, is, for order n, close to m^((n-m)^2) and m^((1/2)(n-m)^2) respectively, where m is what maximizes these expressions.

For doing that maximization, one must solve for m in m*(2*log(m)+1) = n.

This gives 1.6*10^17 out of actual 3.8*10^19 for order 9, and 2.0*10^8499 out of actual nil-3 1.1*10^8516 for order 100. Being too small by factors of 240 and 5.5*10^16. That's consistent with leaving out terms like n^(O(n)) and e^(O(n)).
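That estimate is easy to reproduce in a few lines of Python (my sketch; the bisection just solves m*(2*log(m)+1) = n):

from math import log, log10

def best_m(n):
    """Solve m*(2*log(m) + 1) = n by bisection."""
    lo, hi = 1.0, float(n)
    for _ in range(100):
        mid = (lo + hi) / 2
        if mid * (2 * log(mid) + 1) < n:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

for n in (9, 100):
    m = best_m(n)
    print(n, round(m, 2),
          "general ~ 10^%.1f" % ((n - m) ** 2 * log10(m)),
          "commutative ~ 10^%.1f" % (0.5 * (n - m) ** 2 * log10(m)))
# General case: about 10^17.2 (~1.6*10^17) for order 9 and about 10^8499.3 (~2*10^8499)
# for order 100, matching the figures quoted above.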

-

Turning to monoids, they seem to be dominated by semigroups with an identity adjoined. So to leading order, one gets the same behavior.

-

For groups, the number of them is very sensitive to the prime factorization of their order. I looked at power-of-2 groups, and I attempted to estimate the exponent c in possible expression 2^(n^c) for the count of them for orders 2^n. I found

0., 0.766784, 0.964395, 1.0784, 1.16478, 1.24084, 1.32654, 1.43337, 1.55055

That's with the raw count, not subtracting out reducible and abelian groups, but those subtractions have a negligible impact for larger sizes.

So even if the limiting exponent is (say) 2, that only means something like (number of groups of order 2^n) ~ 2^(n^2).
 
I'd mentioned direct products as a form of reducibility, A*B with elements (a,b) with a in A, b in B. It has (a1,b1)*(a2,b2) = (a1*a2,b1*b2) for a1,a2 in A and b1,b2 in B.

But what might an indirect product be? In group theory, there is the semidirect product, denoted with a symbol that looks like x|.

For groups A and B, it has the operation law (a1,b1)*(a2,b2) = (a1*a2,b1*P(a1,b2))

where P(a,b) is a permutation on B induced by a. This permutation is an automorphism of B, and permutations must combine: P(a1,P(a2,b)) = P(a1*a2,b). Also, P(e,b) = b and P(a,e) = e.


As an example, what may be called an "inversion-flip group" is the semidirect product of Z2 and B, where P(0,b) = b and P(1,b) = inv(b). For this to work, B must be abelian: b1*b2 = b2*b1.

If B = Z(n), then this group is the dihedral group Dih(n), the group of n-fold rotation and reflection symmetries for 2D objects. For rotations only, the group is Z(n).
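Here is a brute-force verification of that construction in Python (my sketch): build Z2 x| Z(n) with P(0,b) = b and P(1,b) = -b and confirm the group axioms; for n = 5 this is Dih(5), of order 10.

from itertools import product

n = 5
elements = list(product(range(2), range(n)))          # pairs (a, b), a in Z2, b in Z(n)

def P(a, b):
    return b % n if a == 0 else (-b) % n              # the twisting map: inversion when a = 1

def mul(x, y):
    (a1, b1), (a2, b2) = x, y
    return ((a1 + a2) % 2, (b1 + P(a1, b2)) % n)      # (a1*a2, b1*P(a1,b2)), written additively

e = (0, 0)
assoc = all(mul(mul(x, y), z) == mul(x, mul(y, z))
            for x in elements for y in elements for z in elements)
ident = all(mul(e, x) == x and mul(x, e) == x for x in elements)
inv = all(any(mul(x, y) == e for y in elements) for x in elements)
print(len(elements), assoc, ident, inv)               # 10 True True True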


In like fashion, one can write the group*semigroup operations for a monoid as

g*s = PL(g,s) and s*g = PR(g,s) where g*s and s*g may differ.
 
Another way of expressing that semidirect product:

(a,b) -> b*a
a*b = P(a,b)*a
(a1,b1)*(a2,b2) = b1*a1*b2*a2 = b1*P(a1,b2) * a1*a2

A305858 - OEIS - a(n) = number of near-rings with n elements.
Near-rings are defined like rings but addition need not be commutative and multiplication need only left-distribute over addition (of course, right-distribution leads to an equivalent theory). Also, there need not exist a multiplicative identity.

1, 3, 5, 35, 10, 99, 24
Rings proper (abelian addition, two-sided distributivity)
1, 2, 2, 11, 2, 4, 2, 52, 11, 4, 2, 22, 2, 4, 4, 390, 2, 22, 2, 22, 4, 4, 2, 104, 11, 4, 59, 22, 2, 8, 2


Why only left distributive? Let's consider the case of two-sided distributivity with a nonabelian additive group. The smallest one is order 6: D3 ~ S3, the symmetry group of the equilateral or regular triangle, and also of permutations of 3 symbols.

Its elements are identity, e, two rotations, r1 r2, and three reflections or flips, s1, s2, s3.

The identity: e*a = a*e = a for all a in the group
The rotations: r1*r1 = r2, r1*r2 = r2*r1 = e, r2*r2 = r1 -- group Z3
The reflections:
r1*{s1,s2,s3} = {s2,s3,s1}
r2*{s1,s2,s3} = {s3,s1,s2}
{s1,s2,s3}*r1 = {s3,s1,s2}
{s1,s2,s3}*r2 = {s2,s3,s1}
{s1,s2,s3}*{s1,s2,s3} = { {e,r2,r1}, {r1,e,r2}, {r2,r1,e} }

Every element generates a cyclic group:
R = {r1,r2}, with r^3 = e for each r in R -- Z3
S = {s1,s2,s3}, with s^2 = e for each s in S -- Z2

The other order-6 group is cyclic group Z6 ~ Z2 * Z3.

The distributive property carries over to repeated addition: a*(n of b) = n of (a*b) and (n of a)*b = n of (a*b)

The additive identity is a multiplicative zero: e*a = a*e = e, since e*a = (e+e)*a = e*a + e*a.
a*(r+r+r) = a*e = e for each r in R, and by distributivity this is 3 of (a*r), so a*r has additive order dividing 3, putting it in {e} or R. Likewise for r*a.
a*(s+s) = a*e = e for each s in S, and by distributivity this is 2 of (a*s), so a*s has additive order dividing 2, putting it in {e} or S. Likewise for s*a.

Thus, every product in R*R is e or in R, every product in S*S is e or in S, and R*S = S*R = {e} (a product of a rotation and a reflection must satisfy both constraints, so it is e).

I now consider a*(s1+s2) = a*r2 = (a*s1) + (a*s2) and the reverse order.

For a in R, both a*s1 and a*s2 are in R*S = {e}, so a*r2 = e; since r1 = r2 + r2, a*r1 = e as well. Thus R*R = {e}

For a in S, we find (a*s1) + (a*s2) = a*r2 = e (it lies in S*R = {e}), so a*s1 = a*s2 -- each such product is its own additive inverse. Combining with the right-hand version, all the products in S*S are equal: S*S = {e} or {s} for some s in S

So we either have the zero near-ring or
R*R = R*S = S*R = {e}
S*S = {s} for some s in S.
It is easy to show that the multiplication operation is associative.
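That conclusion can be brute-force checked in Python (my sketch): take S3 as the additive group, let the product of two reflections be a fixed reflection s0 and every other product be e, and test both distributive laws and associativity.

from itertools import permutations

elems = list(permutations(range(3)))                  # S3 ~ D3 as permutations of {0,1,2}
e = (0, 1, 2)

def add(p, q):                                        # the nonabelian "addition": composition
    return tuple(p[q[i]] for i in range(3))

S = [p for p in elems if p != e and add(p, p) == e]   # the three reflections
s0 = S[0]                                             # the fixed reflection s

def mul(x, y):                                        # S*S = {s0}, everything else e
    return s0 if (x in S and y in S) else e

left  = all(mul(a, add(b, c)) == add(mul(a, b), mul(a, c))
            for a in elems for b in elems for c in elems)
right = all(mul(add(b, c), a) == add(mul(b, a), mul(c, a))
            for a in elems for b in elems for c in elems)
assoc = all(mul(mul(a, b), c) == mul(a, mul(b, c))
            for a in elems for b in elems for c in elems)
print(left, right, assoc)                             # True True True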
 
 Ring (mathematics)

Considering nonabelian additive groups further, let's see what happens if a ring contains unity (multiplicative identity).

There is a simple proof. Consider (1 + 1)*(a + b) where 1 + 1 != 0 in the ring. Use the distributive property on both sides.

Left then right:
(1 + 1)*(a + b) = (1 + 1)*a + (1 + 1)*b = a + a + b + b
Right then left:
(1 + 1)*(a + b) = 1*(a + b) + 1*(a + b) = a + b + a + b

Add (additive inverse of a) on the left and (additive inverse of b) on the right. This gives us
a + b = b + a
So the additive group must be commutative in this case.

Let us now consider 1 + 1 = 0 in the additive group (characteristic 2, because 2(1) = 0).
For any a in the ring, (1 + 1)*a = 0*a = 0, while distributivity gives (1 + 1)*a = a + a, so a + a = 0.
Thus, every element in the additive group has order 2 except for the identity, 0.

Let us see if the additive group is commutative in that case also. Consider two members a and b. Their sum must have order 2:
a + b + a + b = 0
Add a on the left of both sides:
b + a + b = a
Add b on the right of both sides:
b + a = a + b
Thus, the group is commutative in that case also.

It is also trivially true of the one-element ring.

For no unity, commutativity is only guaranteed for elements that are products of other elements:
(a + b)*(c + d) = (a + b)*c + (a + b)*d = a*c + b*c + a*d + b*d
(a + b)*(c + d) = a*(c + d) + b*(c + d) = a*c + a*d + b*c + b*d
With cancellation, b*c + a*d = a*d + b*c
 
A ring without unity is sometimes called a rng ("rung") (ring - i (identity))

A semiring does not necessarily have additive inverses; its additive operation forms only a (commutative) monoid over the semiring's elements. It is sometimes called a rig (ring - n (negative))

Going even further is a ringoid, where both the additive and multiplicative operations are groupoids, but still having the distributive property.

-

From greater generality to greater specificity, an "integral domain" is a commutative ring (with 1 != 0) that has the cancellation property: for a nonzero, a*b = a*c implies b = c, and also b*a = c*a implies b = c. Cancellation means no zero divisors, and for a finite integral domain, cancellation means that the nonzero elements form a group under multiplication, thus making the algebra a field.

Infinite integral domains are not necessarily fields, like the integers under addition and multiplication (Z). Some more integral domains are the rational numbers (Q), the real algebraic numbers (A), the real numbers (R), the Gaussian integers (complex integers, m+n*i, C(Z)), complex rational numbers (C(Q)), complex algebraic numbers (C(A)), and full-scale complex numbers (C). There is another interesting kind of number, the Eisenstein integers: m + n*w with w = (-1+i*sqrt(3))/2. I will call this set CH. Eisenstein integers are like the Gaussian integers, but on a hexagonal grid instead of a square grid.

-

An integrally closed domain is an integral domain with the condition that if a member of its field of fractions satisfies a monic polynomial with coefficients in the domain, then that member must itself be in the domain. Field of fractions: rational numbers generalized to integral domains.

Field of fractions: (a,b) + (c,d) = (a*d + b*c, b*d) and (a,b)*(c,d) = (a*c, b*d). Equality: (a,b) = (c,d) if a*d = b*c.

There is a similar construction, the Grothendieck construction, a generalization of subtraction, that gets rings from semirings like the integers from the nonnegative integers. (a,b) + (c,d) = (a+c,b+d), (a,b)*(c,d) = (a*c+b*d, a*d+b*c). Equality: (a,b) = (c,d) if a + d + k = b + c + k for some k.
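Both pair constructions are easy to play with in Python (my sketch, specialized to the integers; all names are made up):

def frac_add(x, y):
    (a, b), (c, d) = x, y
    return (a * d + b * c, b * d)

def frac_mul(x, y):
    (a, b), (c, d) = x, y
    return (a * c, b * d)

def frac_eq(x, y):
    (a, b), (c, d) = x, y
    return a * d == b * c

print(frac_eq(frac_add((1, 2), (1, 3)), (5, 6)))      # 1/2 + 1/3 = 5/6: True

def groth_mul(x, y):                                  # (a, b) stands for a - b
    (a, b), (c, d) = x, y
    return (a * c + b * d, a * d + b * c)

def groth_eq(x, y):                                   # the "+ k" is not needed for N, which is cancellative
    (a, b), (c, d) = x, y
    return a + d == b + c

print(groth_eq(groth_mul((2, 5), (1, 4)), (9, 0)))    # (-3)*(-3) = 9: True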

For integers, the field of fractions is the rational numbers, and the rational numbers that satisfy integer-coefficient monic polynomials are integers.

FoF: Z -> Q, Q -> Q, A -> A, R -> R, C(Z) -> C(Q), CH -> Q(w) (the Eisenstein rationals), C(Q) -> C(Q), C(A) -> C(A), C -> C
ICD's: Z, Q, A, R, C(Z), CH, C(Q), C(A), C

An integral domain that is not integrally closed is Z(sqrt(-3)): a + b*sqrt(-3) for integers a, b. Its field of fractions contains (1+sqrt(-3))/2, which satisfies the monic polynomial x^2 - x + 1 = 0 with coefficients in the domain, yet is not itself in the domain.

-

GCD domain: an integrally closed domain where every pair of elements has a greatest common divisor. That is, there is a minimal principal ideal that contains the ideal generated by these two elements. Principal ideal = ideal with one generator -- here, the GCD of those two elements.

For integers, any two, a and b, will generate ideal m*a + n*b, where m and n are integers. This value is equal to (m*(a/g) + n*(b/g))*g where g is gcd(a,b). The coefficient of g is an integer.

GCDD's: Z, Q, A, R, C(Z), CH, C(Q), C(A), C
 
Unique factorization domain: a GCD domain where every nonzero, nonunit element can be uniquely written as a product of "prime elements". Units here are the elements in the multiplicative-group part of the domain, like {1, -1} for the integers, {1, i, -1, -i} for the Gaussian integers, {1, w, w^2, -1, -w, -w^2} for the Eisenstein integers (w^2 = -1-w), and all nonzero rational, algebraic, real, and complex numbers.

This generalizes factorization of integers greater than one into products of unique sets of prime numbers. So Z is obviously a UFD.

Gaussian and Eisenstein integers have their own kinds of prime numbers.

Gaussian prime: ordinary prime that is 3 mod 4, and a+b*i such that norm a^2+b^2 is a prime that is 2 or 1 mod 4

Eisenstein prime: ordinary prime that is 2 mod 3, and a+b*w such that norm a^2-a*b+b^2 is either 3 or a prime that is 1 mod 3

Q, A, R, C(Q), C(A), and C have no nonzero nonunit elements, so they are trivially UFD's.

Another kind of UFD is a polynomial ring over a UFD.

A non-UFD is the ring Z(sqrt(-3)), because factorization is not unique: 4 = 2*2 = (1+sqrt(-3))*(1-sqrt(-3)). It is also not a GCDD or an ICD.
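A quick check of that example in Python (my sketch), using the norm a^2 + 3*b^2 on Z(sqrt(-3)):

def mul(x, y):                        # (a, b) stands for a + b*sqrt(-3)
    (a, b), (c, d) = x, y
    return (a * c - 3 * b * d, a * d + b * c)

def norm(x):
    a, b = x
    return a * a + 3 * b * b

print(mul((1, 1), (1, -1)))           # (4, 0): so (1+sqrt(-3))*(1-sqrt(-3)) = 4
print(norm((2, 0)), norm((1, 1)))     # both 4; a proper factor would need norm 2
print([(a, b) for a in range(-2, 3) for b in range(-2, 3)
       if a * a + 3 * b * b == 2])    # []: no element has norm 2, so all three factors are irreducible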

-

Principal ideal domain: a UFD where every ideal is a principal one, an ideal generated by only one element. For ring R and generator g, a principal ideal is g*R (right PI) and R*g (left PI).

Z, Q, A, R, C(Z), CH, C(Q), C(A), C

Any single-variable polynomial ring whose coefficients are in some field is also a PID. Multiple variables or non-field coefficients make the polynomial ring a non-PID.

-

Euclidean domain: a PID that permits a version of Euclid's algorithm for finding greatest common divisors. It must have some function f, a Euclidean function:

For a, b with b nonzero, there exist q and r with a = q*b + r and either r = 0 or else f(r) < f(b)

For Z, f(n) = |n|
For C(Z), f(a+b*i) = |a+b*i|^2 = a^2 + b^2
For CH, f(a+b*w) = |a+b*w|^2 = a^2 - a*b + b^2

Q, A, R, C(Q), C(A), C all have f(a) = 1 -- as do fields in general.

Polynomial rings over some field with a single variable.
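As an illustration of the Gaussian-integer case, here is Euclidean division in Python (my sketch): round the exact complex quotient to the nearest Gaussian integer, and the remainder's norm comes out strictly below the divisor's.

def norm(z):
    return z.real ** 2 + z.imag ** 2

def divmod_gaussian(a, b):
    """Return (q, r) with a = q*b + r and norm(r) < norm(b)."""
    exact = a / b
    q = complex(round(exact.real), round(exact.imag))
    return q, a - q * b

a, b = complex(27, 23), complex(8, 1)
q, r = divmod_gaussian(a, b)
print(q, r, norm(r) < norm(b))        # (4+2j) (-3+3j) True -- norm 18 against norm 65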
 
I now get to abstract-algebra fields. They are Euclidean domains with all the nonzero elements forming a commutative multiplicative group.

If the nonzero elements form a multiplicative group that need not be commutative, one gets a division ring or skew field.

All the finite fields are known. These fields, the Galois fields, have prime-power order, and there is exactly one of them for each such order.

Of the algebras I'd mentioned, these are fields:

Q, A, R, C(Q), C(A), C

These are not fields: Z, C(Z), CH --

The integers
The Gaussian integers: C(Z) = Z(i)
The Eisenstein integers: CH = Z(w)

Their group parts: {1,-1}, {1,i,-1,-i}, {1,-w^2,w,-1,w^2,-w}

Likewise, C(Q) = Q(i) and C(A) = A(i)

-

Finally, algebraically closed fields. Every nonconstant polynomial equation over such a field has a solution in that field. Of my examples, only C(A) and C are algebraically closed. For Q, A, and R, the equation x^2+1 = 0 does not have a solution in that field, and in C(Q), the equation x^2-2 = 0 does not have a solution in that field.

From Wikipedia:

rngs ⊃ rings ⊃ commutative rings ⊃ integral domains ⊃ integrally closed domains ⊃ GCD domains ⊃ unique factorization domains ⊃ principal ideal domains ⊃ Euclidean domains ⊃ fields ⊃ algebraically closed fields
 
  • A field as a ring: its elements are a monoid under multiplication, and thus a semigroup.
  • A field as an integral domain: it has cancellation for all nonzero elements.
  • A field is an integrally closed domain: it is its own field of fractions.
  • A field is a GCD domain: it has no nontrivial ideals.
  • A field is a unique factorization domain: all its elements are either zero or a unit (member of its mult-group part).
  • A field is a principal ideal domain: it has no nontrivial ideals.
  • A field is a Euclidean domain: one can always do division in it without remainder.
I've been counting finite algebras, and every finite integral domain is a field, meaning everything finite and in between is also a field.

Returning to the subject of how many there are, finite fields have orders p^m for prime p, with only one field per order.

[math/0608491] The moduli space of commutative algebras of finite rank - contains a conjecture that the asymptotic number of commutative rings with unity is p^((2/27)*n^3 + O(n^(8/3))) for order p^n for prime p. This is the same number as the Higman-Sims limit for p-groups --  Higman–Sims asymptotic formula

The Higman-Sims formula grossly overestimates the number of p-groups, and also the number of commutative rings with unity. The number of p-groups increases super exponentially with the prime power, and the number of rings also increases exponentially or super exponentially.
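A rough comparison in Python (my own; the group counts are the standard published values for orders 2^n, quoted here from the small-groups tables, and only the leading term of the exponent is used, ignoring the O(n^(8/3)) correction):

known_counts = {6: 267, 7: 2328, 8: 56092, 9: 10494213, 10: 49487365422}  # groups of order 2^n

for n, actual in sorted(known_counts.items()):
    leading = 2.0 ** ((2.0 / 27.0) * n ** 3)
    print(n, actual, "%.3g" % leading, "ratio %.3g" % (leading / actual))
# At these sizes the leading term alone sits far above the actual counts,
# which is the discrepancy being described above.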

Isomorphism testing of groups of cube-free order - ScienceDirect has a formula from Higman, Sims, and Pyber for the number of groups for order n: n^(2/27*m^2 + O(log(n))) where m is the maximum power of a prime that appears in n. For a p-group, one gets the Higman-Sims formula.


I looked in math.stackexchange.com and mathoverflow.com for discussions of the Higman-Sims formula, and I couldn't find anything on that gross discrepancy.
 
About semigroups, I've found Semigroup Theory: A Lecture Course by Victoria Gould. It gets arcane rather quickly, but it starts out with some examples of semigroups in addition to what's in  Semigroup. It also has some more details about  Green's relations, useful for finding the structure of a semigroup.

I must note that that document ignores nilpotent semigroups, including the most abundant sort, the degree-3 ones: S^3 = {z}.

N0 - nonnegative integers
N1 - positive integers

(N1,+) - neither identity nor zero
(N1,*) - identity: 1
(N0,+) - identity: 0
(N0,*) - zero: 0, identity: 1

Left-zero semigroup: a*b = a -- every element a left zero and a right identity
Right-zero semigroup: a*b = b -- every element a right zero and a left identity

Null semigroup: a*b = z -- degree-2 nilpotent

Trivial semigroup: one-element: a*a = a

Rectangular band: its elements are (i,j) for i in set I and j in set J.
(i,j)*(k,l) = (i,l)

An alternate is (i,j)*(k,l) = (k,j)

Every element is idempotent: x*x = x -- a band is an idempotent semigroup. Left-zero and right-zero semigroups are also bands.
 