• Welcome to the new Internet Infidels Discussion Board, formerly Talk Freethought.

The Math Thread

So nobody's bitten. Here are the answers.

* What determinant values can an orthogonal matrix have?

Multiplying the definition inverse(M) = transpose(M) by M, we get M.transpose(M) = transpose(M).M = I

Now take the determinant. Since det(transpose(M)) = det(M), we get (det(M))^2 = 1, giving det(M) = +-1
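Here's a quick numerical sanity check (a sketch using NumPy, not part of the proof) -- build an orthogonal matrix by QR-factorizing a random matrix, then confirm M.transpose(M) = I and det(M) = +-1:

```python
import numpy as np

# QR decomposition of a random matrix yields an orthogonal factor M
rng = np.random.default_rng(0)
M, _ = np.linalg.qr(rng.standard_normal((4, 4)))
assert np.allclose(M @ M.T, np.eye(4))          # M.transpose(M) = I
assert np.isclose(abs(np.linalg.det(M)), 1.0)   # det(M) = +1 or -1
```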

-

* What are all the O(1) matrices? The SO(1) ones?

It is easy to show that a 1*1 matrix has only one element, and that that matrix's determinant is equal to that element. Thus,

O(1): { {{1}}, {{-1}} } -- Z2
SO(1): { {{1}} } -- the identity group, Z1

-

* The O(2) matrices might seem to have 2*2 = 4 parameters, one for each matrix element. Can you show that their matrix elements can be specified with a smaller number of parameters?

We start with M = {{a11, a12}, {a21, a22}}
We find
det(M) = a11*a22 - a12*a21
transpose(M) = {{a11, a21}, {a12, a22}}
inverse(M) = {{a22, -a12}, {-a21, a11}}/det(M)

Set det(M) = s = +-1. Then
a22 = a11*s
a21 = - a12*s
(a11)^2 + (a12)^2 = 1

This suggests that we can use trigonometric functions:
For det +1: {{cos(a), -sin(a)}, {sin(a), cos(a)}}
For det -1: {{cos(a), sin(a)}, {sin(a), -cos(a)}}
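A quick check (a sketch using NumPy; the angle value is arbitrary) that both parametrized forms really are orthogonal with the stated determinants:

```python
import numpy as np

a = 0.7  # any angle works
Mplus  = np.array([[np.cos(a), -np.sin(a)], [np.sin(a),  np.cos(a)]])  # det +1
Mminus = np.array([[np.cos(a),  np.sin(a)], [np.sin(a), -np.cos(a)]])  # det -1
for M, d in [(Mplus, 1.0), (Mminus, -1.0)]:
    assert np.allclose(M @ M.T, np.eye(2))      # orthogonal
    assert np.isclose(np.linalg.det(M), d)      # determinant as claimed
```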

-

* What are all the finite subgroups of O(2)? Of SO(2)?

First, we prove a theorem about groups of orthogonal matrices with both determinant values: that there are as many negative-determinant ones as there are positive-determinant ones.

Start with sets P = {M(+)} the positive-determinant ones and N = {M(-)} the negative-determinant ones.

Is P -> N an injection?

Take an element X of N, and multiply every element of P by it on one side. The result X.P is a subset of N, since matrices' determinants multiply, and distinct elements of P stay distinct because X is invertible. Thus M -> X.M is an injection from P into N.

Is P -> N a surjection?

Multiply every element of N by inverse(X) on the same side. The result is a subset of P, and multiplying it back by X recovers all of N. Thus M -> X.M is a surjection from P onto N.

Since both are true, M -> X.M is a bijection, a one-to-one correspondence: each element of P is paired with exactly one element of N.

So one can specify all the elements of a group of orthogonal matrices by specifying all its positive-determinant ones, and by combining them with a negative-determinant one, if any are present, to specify the negative-determinant ones.

-

Now for the main problem. Let us find the all-positive-determinant subgroups first, and then branch off to include negative-determinant ones. Thus, I find all finite subgroups of SO(2), and from them, all the finite subgroups of O(2) that are not in SO(2).

As I'd stated earlier, every element of SO(2) can be written in the form M(+,a) = {{cos(a), -sin(a)}, {sin(a), cos(a)}} -- it can be pictured as a 2D rotation. These elements have a nice composition rule: M(+,a).M(+,b) = M(+,a+b), as can easily be verified from trigonometric identities.

We can strip away the matrix to get a (x) b = (a + b) mod 2π. We can get rid of the 2π factor by setting a = (2π)a'. That makes a' (x) b' = (a' + b') mod 1.

Let's see what form the a's take for a finite group. A finite group's subgroups must all be finite, as can easily be proved. So every element must generate a finite cyclic subgroup. This makes a = (2π)*m/n, where m is a nonnegative integer and n a positive integer with m < n. Taking all the denominators in the a's, we find their least common multiple, and turn m/n into (m*(lcm/n))/lcm. All the a's now share a single denominator.

Let's see about the numerators. Bezout's identity says that for any nonzero integers a and b, there exist integers u and v such that u*a + v*b = gcd(a,b). Doing this operation on the numerators of the a's (composing group elements adds the a's mod 2π, which acts on the numerators mod the denominator), we eventually find the value 1, since the gcd of all the numerators and the denominator is 1 -- otherwise that lcm would not have been least. In effect, we have an element a = (2π)*(1/N) that generates all the others, making all the a's equal to (2π)*(k/N) where k is a nonnegative integer less than N. Thus, all finite subgroups of SO(2) are cyclic ones: Z(n).

Turning to the subgroups with positive and negative determinants, the negative-determinant matrix can be expressed in the form M(-,a) = {{cos(a), sin(a)}, {sin(a), -cos(a)}}

We get an interesting multiplication table:
M(+,a).M(+,b) = M(+,a+b)
M(+,a).M(-,b) = M(-,a+b)
M(-,a).M(+,b) = M(-,a-b)
M(-,a).M(-,b) = M(+,a-b)

We find that M(+,a).M(-,b) = M(-,b).M(+,-a) where M(+,-a) is the inverse of M(+,a), and also that M(-,a).M(-,a) = I -- that the negative-determinant elements are their own inverses. Thus, the positive-and-negative cases are dihedral groups.
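The whole multiplication table can be verified numerically -- a sketch using NumPy, with arbitrary sample angles:

```python
import numpy as np

def Mp(a):  # det +1: rotation
    return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])

def Mm(a):  # det -1: reflection
    return np.array([[np.cos(a), np.sin(a)], [np.sin(a), -np.cos(a)]])

a, b = 0.3, 1.1
assert np.allclose(Mp(a) @ Mp(b), Mp(a + b))
assert np.allclose(Mp(a) @ Mm(b), Mm(a + b))
assert np.allclose(Mm(a) @ Mp(b), Mm(a - b))
assert np.allclose(Mm(a) @ Mm(b), Mp(a - b))
assert np.allclose(Mm(a) @ Mm(a), np.eye(2))  # reflections are their own inverses
```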

Finite subgroups:
SO(2): Cyc(n) (~ Z(n))
O(2): Cyc(n), Dih(n)

Positive-determinant elements of O(n): pure rotations
Negative-determinant elements of O(n): rotation-reflections or rotoreflections
 
I'm not sure that's the best approach... And if you try to substitute it into your previous equations, it doesn't work.
The original equation I asked about (in post #166) was an approximation that didn't include the chord's epsilon term, which is why the integral you posted didn't work- I didn't explain the problem in nearly enough detail.

I'll try to re-explain, from the ground up. One section now (just the first part of the problem, with a single chord).

Base chord is yellow
Base chord central angle is 2 theta (aqua colored)
Base chord length = 2*sin(theta)
Green colored angle is pi/2- theta.

Epsilon is a ratio (explained in the next few statements). For the calculations I used an epsilon of 10^-30.

Blue chord central angle is: 2*theta - 2*epsilon*blue chord length
2*epsilon = ratio of red arc length to blue chord length: 2*epsilon*blue chord length = red arc length
Blue chord length = 2*sin(theta - epsilon*blue chord length)
White angle = pi/2 - (theta - epsilon*blue chord length) = pi/2 - theta + epsilon*blue chord length
Orange line = sin(white angle)*blue chord length

sum of sines would use sin (white angle) as one part of the sum.

Red arc and blue chord are not to scale (they are infinitesimally different from the base chords!!!)
[attachment: arclengthchordratio.jpg]

That's one part of it. I'll put up more drawings (and label this one if it would help???) after lunch.
And here one is. This is of BASE chords.

Green central angle = 2* red central angle

Red angle is 2pi/n =2 * (k=1) * pi/n
Green is 4pi/n =2 * (k=2) * pi/n

Aqua arrows show reflection....

pink/purple is 4pi/n + 2 *(n-floor(n)) pi /n = 2 * (k=2) * pi/n +... ????
blue is 6pi/n + 2 * (n-floor(n)) pi/n = 2 * (k=3) * pi/n +.... ???

[attachment: green+red+aqua+reflection.jpg]


but if you define the perturbed points via small central angle differences \(\phi_k\) then the adjusted chord has length \(2\sin(\theta_k - \phi_k)\) (Note that your angle= epsilon*adjusted chord length equation is pretty much a small angle approximation of this, the angle is phi/2). Then the sine you are looking at is \(\sin(\theta_k + \phi_k)\).
That would be one of the sines- each phi_k would be different, although the theta_k's are reflected over the 0--> pi line (what is the line from 0 to pi called on a circle?).

Also, the sum of the sines can't be zero - unless you're not measuring the angles as shown or counting negative angles somehow?
I meant (sorry about the quality of my descriptions) that the sum of sines of chords reflected over the 0-to-pi line = 0, unless you shift the chords a bit. In other words, sin(pi/2) + sin(3pi/2) = 0, and sin(pi/3) + sin(2pi/3) + sin(4pi/3) + sin(5pi/3) = 0.
 
Last edited:
The original equation I asked about (in post #166) was an approximation that didn't include the chord's epsilon term, which is why the integral you posted didn't work- I didn't explain the problem in nearly enough detail.

I'll try to re-explain, from the ground up. One section now (just the first part of the problem, with a single chord).

Base chord is yellow
Base chord central angle is 2 theta (aqua colored)
Base chord length = 2*sin(theta)
Green colored angle is pi/2- theta.

Epsilon is a ratio (explained in the next few statements). For the calculations I used an epsilon of 10^-30.

Blue chord central angle is: 2*theta - 2*epsilon*blue chord length
2*epsilon = ratio of red arc length to blue chord length: 2*epsilon*blue chord length = red arc length
Blue chord length = 2*sin(theta - epsilon*blue chord length)
White angle = pi/2 - (theta - epsilon*blue chord length) = pi/2 - theta + epsilon*blue chord length
Orange line = sin(white angle)*blue chord length

sum of sines would use sin (white angle) as one part of the sum.

Red arc and blue chord are not to scale (they are infinitesimally different from the base chords!!!)
View attachment 7441

That's one part of it. I'll put up more drawings (and label this one if it would help???) after lunch.

Here's what I think I've parsed, for one chord and translated into my own notation, just cuz.

Take a chord of a unit circle (the yellow one) that subtends an angle 2θ from the center of the circle - it has length 2sin(θ). Rotate this chord about one of its endpoints by an angle of φ, tracing out an arc (the red one) on the circle of length 2φ. The edge of the sweep is the blue chord, and has length 2sin(θ-φ). Then, your epsilon is φ/(2sin(θ-φ)) and I get the length of the orange line as sin(2(θ-φ)).
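The orange length here is just the double-angle identity: sin(white angle)*(blue length) = cos(θ-φ)*2sin(θ-φ) = sin(2(θ-φ)). A quick check (a sketch; the θ and φ values are arbitrary):

```python
import math

theta, phi = 0.8, 1e-3                  # arbitrary chord angle and perturbation
blue = 2 * math.sin(theta - phi)        # blue chord length
white = math.pi / 2 - (theta - phi)     # white angle
orange = math.sin(white) * blue         # orange line
assert math.isclose(orange, math.sin(2 * (theta - phi)))  # double-angle identity
eps = phi / blue                        # the epsilon ratio from the post
```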

Now, you want to do this for multiple chords defining the vertices of an n-gon, and then sum the orange lengths? What's the end goal here?

And here one is. This is of BASE chords.

Green central angle = 2* red central angle

Red angle is 2pi/n =2 * (k=1) * pi/n
Green is 4pi/n =2 * (k=2) * pi/n

Aqua arrows show reflection....

pink/purple is 4pi/n + 2 *(n-floor(n)) pi /n = 2 * (k=2) * pi/n +... ????
blue is 6pi/n + 2 * (n-floor(n)) pi/n = 2 * (k=3) * pi/n +.... ???

[attachment: green+red+aqua+reflection.jpg]

If they're reflected, the pink is at an angle of (2- 4/n)π and the blue at an angle of (2-2/n)π. In general, the reflection of the point an angle θ around the circle would be 2π - θ. No floor functions necessary.

but if you define the perturbed points via small central angle differences \(\phi_k\) then the adjusted chord has length \(2\sin(\theta_k - \phi_k)\) (Note that your angle= epsilon*adjusted chord length equation is pretty much a small angle approximation of this, the angle is phi/2). Then the sine you are looking at is \(\sin(\theta_k + \phi_k)\).
That would be one of the sines- each phi_k would be different, although the theta_k's are reflected over the 0--> pi line (what is the line from 0 to pi called on a circle?).

Sure, that's why I put in the k's. I'd call it the horizontal diameter.

Also, the sum of the sines can't be zero - unless you're not measuring the angles as shown or counting negative angles somehow?
I meant (sorry about the quality of my descriptions) that the sum of sines of chords reflected over the 0-to-pi line = 0, unless you shift the chords a bit. In other words, sin(pi/2) + sin(3pi/2) = 0, and sin(pi/3) + sin(2pi/3) + sin(4pi/3) + sin(5pi/3) = 0.

Yup, for each pair sin(2π - θ) + sin(θ) = 0, but that doesn't really correspond to lengths anymore. So you want to perturb these angles so that the sum is non-zero, or do you want to sum the actual lengths?
 
Here's what I think I've parsed, for one chord and translated into my own notation, just cuz.

Take a chord of a unit circle (the yellow one) that subtends an angle 2θ from the center of the circle - it has length 2sin(θ). Rotate this chord about one of its endpoints by an angle of φ, tracing out an arc (the red one) on the circle of length 2φ. The edge of the sweep is the blue chord, and has length 2sin(θ-φ). Then, your epsilon is φ/(2sin(θ-φ)) and I get the length of the orange line as sin(2(θ-φ)).
That's it. I was doing a summation of sines / ε in the one case, and summation of 2*sines/ (ε (2sin(θ-φ))^2) to extract the total number of chords (because I thought it was cool how the sum added up to exactly the number of chords).

Now, you want to do this for multiple chords defining the vertices of an n-gon, and then sum the orange lengths?
Basically, but not the orange lengths, only the sines/ε in the one case, and 2*sines/ (ε (2sin(θ-φ))^2) in the other.
What's the end goal here?
I was trying to find a closed form equation for the sum of sines at the limit ε→0. It might help me prove or disprove something later. At this point, I still have more research to do before I get back into the original idea, which this is an offshoot of.

If they're reflected, the pink is at an angle of (2- 4/n)π and the blue at an angle of (2-2/n)π. In general, the reflection of the point an angle θ around the circle would be 2π - θ. No floor functions necessary.
lol, yeah.
So you want to perturb these angles so that the sum is non-zero, or do you want to sum the actual lengths?
Perturb.
 
In addition to orthogonal matrices, there are also "unitary" ones, a superset of them. A unitary matrix M is a complex-valued matrix which has inverse(M) = HC(M)
where
HC(M) = Hermitian conjugate of M = complex-conjugate(transpose(M))

The group of all n*n unitary matrices is called U(n) and its subgroup of matrices with determinant 1 is called SU(n) (special unitary).

* Can one show that U(n) = {some matrix group} * SU(n)? If so, what is that group?

* Is there any similar relationship between O(n) and SO(n)?

* How many parameters does one need for SU(2) matrices?

* What is the relationship between those parameters and the quaternions?
 
My solutions:


* Can one show that U(n) = {some matrix group} * SU(n)? If so, what is that group?

Let's take an element M of U(n). Find D = det(M) and N = D^(-1/n)*M.

Giving M = D^(1/n) * N

The first part is a complex number with absolute value 1, and the group U(1) consists of the 1*1 matrices {{that sort of complex number}}

The second part is an element of SU(n), as is evident from taking its determinant.

So U(n) = U(1) * SU(n)
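This decomposition can be demonstrated numerically -- a sketch using NumPy, taking the principal n-th root of the determinant:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
U, _ = np.linalg.qr(A)                       # a random element of U(3)
D = np.linalg.det(U)                         # complex, with |D| = 1
N = U / D**(1/3)                             # N = D^(-1/n) * U
assert np.isclose(abs(D), 1.0)
assert np.allclose(N @ N.conj().T, np.eye(3))  # N is unitary
assert np.isclose(np.linalg.det(N), 1.0)       # det 1: N is in SU(3)
```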


* Is there any similar relationship between O(n) and SO(n)?

There is, only for odd n. In that case, O(n) = {I,-I} * SO(n)

That is not possible for even n, because in that case, det(-I) = 1. But for odd n, det(-I) = -1, and in general, det(-I) = (-1)^n.


* How many parameters does one need for SU(2) matrices?

As with SO(2) ones, consider the general form: M = {{a111+a112*i, a121+a122*i}, {a211+a212*i, a221+a222*i}}
Its inverse: {{a221+a222*i, -a121-a122*i}, {-a211-a212*i, a111+a112*i}}
Its HC: {{a111-a112*i, a211-a212*i}, {a121-a122*i, a221-a222*i}}

Thus, a211 = -a121, a212 = a122, a221 = a111, a222 = -a112

Introducing the Pauli matrices, s1 = {{0,1},{1,0}}, s2 = {{0,-i},{i,0}}, s3 = {{1,0},{0,-1}}, we get

M = a111*I + (i*a122)*s1 + (i*a121)*s2 + (i*a112)*s3
or
M = a0*I + i*a1*s1 + i*a2*s2 + i*a3*s3
with the constraint
a0^2 + a1^2 + a2^2 + a3^2 = 1

Thus, SU(2) matrices have 3 parameters, just like SO(3) ones.
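A numerical check of the parametrization (a sketch using NumPy; the unit 4-vector is an arbitrary choice):

```python
import numpy as np

I2 = np.eye(2)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli matrices
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

a = np.array([0.5, 0.5, 0.5, 0.5])               # any unit 4-vector works
M = a[0]*I2 + 1j*(a[1]*s1 + a[2]*s2 + a[3]*s3)
assert np.allclose(M @ M.conj().T, I2)           # unitary: M.HC(M) = I
assert np.isclose(np.linalg.det(M), 1.0)         # special: det(M) = 1
```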


* What is the relationship between those parameters and the quaternions?

One can show that (-i*s1), (-i*s2), and (-i*s3) behave just like the quaternion units I, J, and K: I^2 = J^2 = K^2 = I*J*K = -1. Try doing so.

Thus, the SU(2) matrices behave like unit-magnitude quaternions.
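Or let the computer do it -- a sketch using NumPy of the quaternion-unit relations:

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli matrices
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I, J, K = -1j*s1, -1j*s2, -1j*s3                 # candidate quaternion units
E = np.eye(2)
assert np.allclose(I @ I, -E)                    # I^2 = -1
assert np.allclose(J @ J, -E)                    # J^2 = -1
assert np.allclose(K @ K, -E)                    # K^2 = -1
assert np.allclose(I @ J @ K, -E)                # I*J*K = -1
```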
 
I'll now go from O(2) to O(3). One can get O(3) from SO(3) by taking each SO(3) element M and including both M and -M. So most of the work will be in finding the possible SO(3) elements.

For an SO(3) subgroup R, there are two ways to get from it to an O(3) subgroup with reflections. The first:
R' = {R, -R}

Also, if R has a subgroup R0 with half of R's elements, the remaining elements forming a single coset R1 with
R = {R0, R1}

we can also form:
R' = {R0, -R1}

-

Let's consider what form an SO(3) matrix can have. Let's break it up into symmetric and antisymmetric parts.
M = {{a11, a12 + b3, a13 - b2}, {a12 - b3, a22, a23 + b1}, {a13 + b2, a23 - b1, a33}}

The a's are the symmetric part, and the b's the antisymmetric part, a 3-vector.

From orthogonality, M.transpose(M) = transpose(M).M = I

For the b's nonzero, a12 = a0*b1*b2, a13 = a0*b1*b3, a23 = a0*b2*b3 for some a0. One can do a similar setup for the b's all zero, for some other vector n. Thus,

a11 = c01 + c1*n1^2, a12 = c1*n1*n2, a13 = c1*n1*n3, a22 = c02 + c1*n2^2, a23 = c1*n2*n3, a33 = c03 + c1*n3^2
b1 = c2*n1, b2 = c2*n2, b3 = c2*n3

It's now necessary to find how c01, c02, and c03 are related.
c2 nonzero:
- at least two n's nonzero: c01 = c02 = c03
- n1 nonzero: c02 = c03, c01 + c1 = 1, consistent with c01 = c02 = c03.
c2 zero:
- all three n's nonzero: c01 = c02 = c03
- two n's nonzero: can set c01 = c02 = c03
- one n nonzero: c01 = c02 = c03 with choice of c1

So we can set c01 = c02 = c03. This gives the elements of M the form

M = c0*I + c1*dyad(n,n) + c2*(ε.n)
or
Mij = c0*δij + c1*ni*nj + c2*εijk*nk
summing over repeated indices in each term

One can then find c0, c1, and c2, and one finds:
M = cos(a)*I + (1-cos(a))*dyad(n,n) + sin(a)*(ε.n)
for n a unit vector and a the rotation angle.

This is the axis-angle representation of the rotation matrices.
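A sketch of that formula in code (using NumPy; the axis and angle are arbitrary samples), confirming it lands in SO(3) and fixes the axis:

```python
import numpy as np

def rotation(n, a):
    # M = cos(a)*I + (1-cos(a))*dyad(n,n) + sin(a)*(eps.n), n a unit vector
    n = np.asarray(n, dtype=float)
    n /= np.linalg.norm(n)
    eps_n = np.array([[0.0,   n[2], -n[1]],
                      [-n[2], 0.0,   n[0]],
                      [n[1], -n[0],  0.0]])   # (eps.n)_ij = eps_ijk * n_k
    return np.cos(a)*np.eye(3) + (1 - np.cos(a))*np.outer(n, n) + np.sin(a)*eps_n

n = np.array([1.0, 2.0, 2.0]) / 3.0           # a unit axis
M = rotation(n, 0.9)
assert np.allclose(M @ M.T, np.eye(3))        # orthogonal
assert np.isclose(np.linalg.det(M), 1.0)      # det +1
assert np.allclose(M @ n, n)                  # the axis is left fixed
```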
 
Quaternion multiplication has the nice property that the quaternions' magnitudes also get multiplied. For quaternion
Q = q0 + q1*I + q2*J + q3*K
its magnitude
|Q|^2 = q0^2 + q1^2 + q2^2 + q3^2

Quaternions can be interpreted as having a scalar part, qs = q0, and a 3-vector part, qv = {q1,q2,q3}. In this notation, |Q|^2 = qs^2 + qv.qv

Multiplying two quaternions, Q12 = Q1*Q2, thus becomes
qs12 = qs1*qs2 - qv1.qv2
qv12 = qs1*qv2 + qs2*qv1 + (qv1)x(qv2)

Here is a quaternion representation of the rotation matrix:
Mij = (qs^2 - qv.qv)*δij + 2*qvi*qvj - 2*qs*εijk*qvk
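A sketch of that map in code (using NumPy, with an arbitrary unit quaternion), checking that it gives an SO(3) matrix and that Q and -Q give the same rotation:

```python
import numpy as np

def quat_to_matrix(q):
    # Mij = (qs^2 - qv.qv)*delta_ij + 2*qv_i*qv_j - 2*qs*eps_ijk*qv_k
    qs, qv = q[0], np.asarray(q[1:], dtype=float)
    eps_qv = np.array([[0.0,    qv[2], -qv[1]],
                       [-qv[2], 0.0,    qv[0]],
                       [qv[1], -qv[0],  0.0]])
    return (qs**2 - qv @ qv)*np.eye(3) + 2*np.outer(qv, qv) - 2*qs*eps_qv

q = np.array([0.5, 0.5, 0.5, 0.5])            # a unit quaternion
M = quat_to_matrix(q)
assert np.allclose(M @ M.T, np.eye(3))        # orthogonal
assert np.isclose(np.linalg.det(M), 1.0)      # det +1
assert np.allclose(quat_to_matrix(-q), M)     # Q and -Q: same rotation
```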

So SU(2) has a subgroup {I,-I} with quotient group or factor group SO(3). This can help us find subgroups of SU(2) from subgroups of SO(3).

From each element of the SO(3) subgroup, find the + and - quaternions (there is a sign ambiguity). These are elements of the SU(2) subgroup. In some cases one can select only one of the signs, but not in all cases.
 
Now for all finite subgroups of SO(3). That is obviously more difficult than for SO(2). But here goes anyway.

First consider a non-identity element. It has an axis n and an angle a: (n,a). But there is a sign ambiguity: (n,a) and (-n,-a) will both work. Each one will generate a cyclic subgroup of the group: (n,2*pi*k/m) for order m and k from 0 to m-1. This subgroup is the element's centralizer in the group, all elements that commute with it.

An element b of a group produces a conjugate of an element a as follows: b.a.b^(-1). The conjugate of a rotation (n,a) is (rotated n, a). The number of distinct conjugates is (order of group)/(order of centralizer). Using N as the group's order, there are thus N/m conjugate values of n. Adding up the conjugates of each non-identity element of the generated cyclic group, we get N*(m-1)/m elements.

Adding in 1/2 for the sign ambiguity, the total number of elements is thus
N = 1 + (1/2)*(sum over m of N*(m-1)/m)

or after some rearrangement,
1 - 1/N = (1/2)*(sum over m of 1 - 1/m)

Since m is at least 2, that sum is at least (sum over m of 1/2) or (number of m's)/2. Thus,
(1 - 1/N) >= (number of m's)/4

meaning that there are at most three m values.

Let us try one value.

Then,
(1 - 1/N) = (1/2)*(1 - 1/m)
1 - 2/N = - 1/m
2/N = 1 + 1/m
N = 2*m/(m+1)
For m >= 2, N will not be an integer, thus this case is out.

Let us now try two values, m1 and m2.

Then,
(1 - 1/N) = (1/2)*(2 - 1/m1 - 1/m2)
N = 2*m1*m2/(m1 + m2)
Since both m1 and m2 must divide N, N/m1 = 2*m2/(m1 + m2) must be a positive integer.
The right-hand side approaches 2 as m1 -> 0, equals 1 when m1 = m2, and approaches 0 as m1 -> infinity, so the only way for it to be a positive integer is m1 = m2.
Thus, m2 = m1 = m, and N = m

This gets us the cyclic group generated by some rotation.

Now three values.

First, let us find the minimum value of the m's.
1 - 1/N >= (3/2)*(1 - 1/m)
For m = 3, the rhs is 1, thus their minimum value is 2.

Set one of them to 2. What is the minimum value of the remainder?
1 - 1/N >= (1/2)*(1/2 + 2*(1 - 1/m))
For m = 4, the rhs is 1, thus the remainder's minimum value is at most 3.

Let us first try 2 and 2.
1 - 1/N = (1/2)*(1/2 + 1/2 + (1 - 1/m))
giving us
N = 2*m

This is the 3D-rotation dihedral group. Half of its elements are in a cyclic subgroup with order m and axis direction n, while the other half are 180-degree rotations around evenly-spaced axes perpendicular to n.

Now 2 and 3.
1 - 1/N = (1/2)*(1/2 + 2/3 + (1 - 1/m))
1/N = 1/(2*m) - (1/12)

For m = 3,
N = 12
The tetrahedral group, the group of rotational symmetries of the tetrahedron.

For m = 4,
N = 24
The octahedral group, the group of rotational symmetries of the cube and the octahedron.

For m = 5,
N = 60
The icosahedral group, the group of rotational symmetries of the dodecahedron and the icosahedron.
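As a quick arithmetic check of those three cases, one can evaluate 1/N = 1/(2*m) - 1/12 with exact fractions:

```python
from fractions import Fraction

# N from 1/N = 1/(2*m) - 1/12 for the three polyhedral cases
orders = {m: int(1 / (Fraction(1, 2*m) - Fraction(1, 12))) for m in (3, 4, 5)}
print(orders)  # {3: 12, 4: 24, 5: 60}
```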
 
We thus have two infinite families and three additional ones. I'll use Schoenflies (Schönflies) notation here.
  • C(n) -- cyclic
  • D(n) -- dihedral
  • T -- tetrahedral: the alternating group A4
  • O -- octahedral: the symmetric group S4
  • I -- icosahedral: the alternating group A5
2 infinite families, 3 special.

Now for adding reflections.
  • C(2n,h) -- {I,-I} * C(2n)
  • C(2n+1,h) -- {C(2n+1),-coset} of C(4n+2)
  • S(4n) -- {C(2n),-coset} of C(4n)
  • S(4n+2) -- {I,-I} * C(2n+1)
  • C(n,v) -- {C(n),-coset} of D(n)
  • D(2n,h) -- {I,-I} * D(2n)
  • D(2n+1,h) -- {D(2n+1),-coset} of D(4n+2)
  • D(2n,d) -- {D(2n),-coset} of D(4n)
  • D(2n+1,d) -- {I,-I} * D(2n+1)
  • Th -- {I,-I} * T
  • Td -- {T,-coset} of O
  • Oh -- {I,-I} * O
  • Ih -- {I,-I} * I
5 infinite families, 4 special.

Total: 7 infinite families, 7 special.

The special ones I like to call quasi-spherical, but Wikipedia's contributors call them polyhedral. The groups D2 and D2h may be interpreted as additional quasi-spherical ones.
Elements of D2: diagonal matrices for {1,1,1}, {-1,-1,1}, {1,-1,-1}, {-1,1,-1}
Elements of D2h: D2 ones with diagonal matrices for {-1,-1,-1}, {1,1,-1}, {-1,1,1}, {1,-1,1}

Also C1 and S2:
C1: {I} (identity group)
S2: {I, -I}


I'd prefer C(n,s) for S(2n) in this notation, but that's another story.
 
The rotation groups among them have some elegant SU(2) or quaternionic versions. I'll give them, with their SU(2) -> SO(3) mappings.

Cyclic: QC(m): {cos(a),0,0,sin(a)} where a = 2*pi*k/m and k = 0 to m-1
QC(2m) -> C(m)
QC(2m+1) -> C(2m+1) -- unlike the others, no opposite-sign pairs

Dihedral: QD(m) = QC(2m) + {0,cos(a),sin(a),0} where a = pi*k/m and k = 0 to 2m-1
QD(m) -> D(m)

QD(2) is {+-1, 0, 0, 0}, {0, +- 1, 0, 0}, {0, 0, +-1, 0}, {0, 0, 0, +-1}

Tetrahedral: QT = QD(2) + all sign combinations of {+-1, +-1, +-1, +-1}/2

Octahedral: QO = QT + all permutations and sign combinations of {+-1, +-1, 0, 0}/sqrt(2)

Icosahedral: QI = QT + all *even* permutations and all sign combinations of {+- (sqrt(5)+1)/4, +- 1/2, +- (sqrt(5)-1)/4, 0}


I'm not going to get into SO(4) or O(4) finite subgroups, but they can all be constructed from pairs of SU(2) finite groups. SO(5) and O(5) and beyond? I don't know of any simple or even halfway simple ways of doing those. I won't get into SU(3) and beyond, either.

There is a family of matrices like the orthogonal and unitary ones called the "symplectic matrices": real matrices M satisfying
M.J.transpose(M) = J

where J is an antisymmetric matrix usually taken to be {{0, I}, {-I, 0}}. The group of them is called the "symplectic group", Sp(2n). One can show that Sp(2) ~ SU(2).
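For 2*2 matrices the symplectic condition reduces to det(M) = 1, since M.J.transpose(M) = det(M)*J for any 2*2 matrix -- a quick check (a sketch using NumPy, with an arbitrary det-1 matrix):

```python
import numpy as np

J = np.array([[0.0, 1.0], [-1.0, 0.0]])   # the 2x2 antisymmetric form
M = np.array([[2.0, 3.0], [1.0, 2.0]])    # det(M) = 4 - 3 = 1
assert np.allclose(M @ J @ M.T, J)        # M is symplectic
```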

Here are more such isomorphisms:
SO(2) ~ U(1)
Spin(3) ~ SU(2) ~ Sp(2)
Spin(4) ~ SU(2) * SU(2)
Spin(5) ~ Sp(4)
Spin(6) ~ SU(4)

Spin(n) is the "spinor" version of SO(n), "spinor" being short for "spin vector". Spin(n) is the "double cover" of SO(n) -- SO(n) elements can be formed from pairs of Spin(n) elements.
 
I found out the hard way that recursive homing functions are quicker and easier to implement than some forms of iterative homing functions.

I wanted to find a φ satisfying φ = 2*sin(θ-φ)*ε, with 0<t<.1, so I set up an iterative homing function that added to or subtracted from ε based on whether φ - 2*sin(θ-φ)*ε was positive or negative, and adjusted the amount added or subtracted based on whether the last |φ| was greater or less than the current |φ|. Complicated...

What is easier is to set an initial value of φ=2*sin(θ)*ε, put it into φ = 2*sin(θ-φ)*ε, and repeat it a couple of times. So much more efficient and easy to code. Seriously... wow. 3 iterations = 100+ iterations of the other code. As my grandfather used to say "duhhhhhhhh".
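That approach is a fixed-point iteration; a sketch of it in Python (the θ and ε values are arbitrary samples, not the ones from the original calculation):

```python
import math

def phi_fixed_point(theta, eps, iters=5):
    """Solve phi = 2*sin(theta - phi)*eps by direct fixed-point iteration."""
    phi = 2 * math.sin(theta) * eps        # initial value, as described above
    for _ in range(iters):
        phi = 2 * math.sin(theta - phi) * eps
    return phi

theta, eps = 0.05, 1e-3
phi = phi_fixed_point(theta, eps)
# the residual shrinks by a factor of roughly 2*eps per iteration
assert abs(phi - 2 * math.sin(theta - phi) * eps) < 1e-12
```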

As to the other part of the problem I was working on (adding up sines), it was easier to represent with cosines.

Instead of sin(φ + π/2 - θ), use cos(θ -φ). When you look at the problem, for one set of chords I was essentially taking cos(θ - φ) - cos(θ + φ) and dividing the result by ε = φ / [2*sin(θ-φ)] ... so you might see where that is going.

For a single chord, you could just calculate [cos(θ - φ) - cos(θ)]/ε... if you multiply it by [2*sin(θ-φ)]^-2 *2 =2 *chordlength^-2....
 
In addition to the unitary, orthogonal, and symplectic matrix groups, there are the just plain linear matrix groups, supergroups of these.

The biggest one is the general linear group, GL(n). It is all nonsingular n*n matrices. If one specifies what algebraic field F that its elements belong to, then it is GL(n,F). GL(n,R) is all nonsingular real n*n matrices. For a finite field GF(p^n), one often uses only the p^n part. Thus, GL(3,2) is the group of all nonsingular 3*3 matrices with elements in {0,1} under addition and multiplication modulo 2.
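GL(3,2) is small enough to count by brute force -- a sketch enumerating all 512 binary 3*3 matrices and keeping those whose determinant is odd (i.e. nonzero mod 2):

```python
from itertools import product

def det3(m):
    # integer determinant of a flattened 3x3 matrix
    a, b, c, d, e, f, g, h, i = m
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

# nonsingular over GF(2) <=> determinant is odd
count = sum(1 for m in product((0, 1), repeat=9) if det3(m) % 2 == 1)
print(count)  # 168 = (2^3 - 1)*(2^3 - 2)*(2^3 - 2^2)
```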

If one takes an element M and treats the a*M for all nonzero a as equivalent elements, one gets the projective general linear group PGL(n). In group-theory terms, it's the quotient group for GL(n) and its {a*I} subgroup.

If one uses only determinant = 1, then one gets the special linear group SL(n).

If one imposes both conditions, one gets the projective special linear group PSL(n). It is the quotient group for SL(n) and its {a*I} subgroup, where now a^n = 1.

-

Interestingly, GL(n,R) is related to U(n) by analytic continuation, and likewise, SL(n,R) is related to SU(n). That may not be obvious from the groups themselves, but it is evident from their Lie algebras.

Another interesting case of analytic continuation is for groups formed much like the symplectic matrix groups, but with a symmetric real matrix instead.

M.g.transpose(M) = g

One can diagonalize this tensor and scale its eigenvalues to +- 1. This permits a convenient classification of this sort of group. It is O(n1,n2), for n1 +1's and n2 -1's in g. It is related by analytic continuation to O(n1+n2).

There is an important application of this sort of group. The Lorentz group, the group of relativistic rotations and boosts, is O(3,1). The 3 is for space dimensions and the 1 for the time dimension. It reduces to SO(3,1), or more properly SO+(3,1), with not one but two kinds of reflections, splitting O(3,1) into four parts: no reflection, space reflection (parity), time reflection, and both together.
 
I've found some tables of isomorphisms of low-order Lie algebras in various places, mostly with Google Books, and I've checked them by calculating appropriate transformation matrices for them.

I've grouped them by groups related by analytic continuation.

On the left is the SO algebra, or more precisely, its spinor algebra. On the right is the reality of the spinor irreducible representations ("irreps"): R = real, H (Hamilton) = quaternionic or pseudoreal, C = complex

SO(2) ~ U(1) -- CC
SO(1,1) ~ GL(1,R+) -- RR

SO(3) ~ SU(2) ~ Sp(2) ~ SL(1,H) -- H
SO(2,1) ~ SU(1,1) ~ Sp(2,R) ~ SL(2,R) -- R

SO(4) ~ SU(2) * SU(2) -- HH
SO(2,H) ~ SU(2) * SU(1,1) -- RH
SO(3,1) ~ SL(2,C) -- CC
SO(2,2) ~ SU(1,1) * SU(1,1) -- RR

SO(5) ~ Sp(4) -- H
SO(4,1) ~ Sp(2,2) -- H
SO(3,2) ~ Sp(4,R) -- R

SO(6) ~ SU(4) -- CC
SO(3,H) ~ SU(3,1) -- CC
SO(5,1) ~ SL(2,H) -- HH
SO(4,2) ~ SU(2,2) -- CC
SO(3,3) ~ SL(4,R) -- RR
 
Question for people in this thread:

What is it that keeps you interested in advanced math like this?

Because it's interesting. Because it's useful. Because it's fun. :D

Do you know the feeling you get when you figure something out, solve a problem, discover something new, or notice a pattern you hadn't seen before? That flash of 'whoa, that makes so much sense' with a side of contentment? Math is that, condensed and purified - it's the poetry of logical ideas.

Mathematicians are pattern junkies. Most people are satisfied with caffeine-level patterns and puzzles, but we need meth. ;)
 
Question for people in this thread:

What is it that keeps you interested in advanced math like this?

Because it's interesting. Because it's useful. Because it's fun. :D

What do you find interesting about it? In what ways have you found it useful?

Do you know the feeling you get when you figure something out, solve a problem, discover something new, or notice a pattern you hadn't seen before? That flash of 'whoa, that makes so much sense' with a side of contentment? Math is that, condensed and purified - it's the poetry of logical ideas.

I'm a programmer, so I definitely get that.

Mathematicians are pattern junkies. Most people are satisfied with caffeine-level patterns and puzzles, but we need meth. ;)

Advanced math is one of those things that I've looked into a small amount, have a bunch of books on my e-reader, but I guess I'd say my over-arching personal interest is in people and human systems, so the payback I got when reading about math seemed way too far removed from my usual interests.

So that's what prompted the original question: if I can't commit the time to learning math itself, I'm still curious about what draws mathematicians themselves into the field.

As an aside I was the local math whiz going through school, but my interest waned when I ran into introductory calculus professors with thick accents. The interest never sparked again. If I had unlimited money and time, now, I'd definitely dabble a bit in the field.
 
Because it's interesting. Because it's useful. Because it's fun. :D

What do you find interesting about it? In what ways have you found it useful?

What do you find interesting about it? In what ways have you found it useful?

I'm a geometer - I like figuring out the relationships and properties of shapes. Most of my research is in the standard low dimensions 2 and 3, so I get to work on problems that are easily visualizable and intuitively explainable. I get to ask basic questions about curves, surfaces, points, circles, etc, questions that I can easily explain to a 5 year old, yet they are questions where no one on the planet knows the answer. Sometimes I get to be the first to ever find that answer, which is pretty cool.

It's hard to find math that isn't useful in some way. Hardy was completely wrong, no math is solely pure and no part of the modern world is free of mathematics, even the highly abstract variety. Personally, my work is somewhat non-abstract and applied anyway, and is already being used. I haven't bothered coding them, but it's been pointed out that several of my results would make a good basis for mobile games as they can be used to generate fun geometric puzzles. More impressive examples are some of my PhD adviser's results, which have been coded and are part of standard graphics libraries in multiple programming languages.

Do you know the feeling you get when you figure something out, solve a problem, discover something new, or notice a pattern you hadn't seen before? That flash of 'whoa, that makes so much sense' with a side of contentment? Math is that, condensed and purified - it's the poetry of logical ideas.

I'm a programmer, so I definitely get that.

Mathematicians are pattern junkies. Most people are satisfied with caffeine-level patterns and puzzles, but we need meth. ;)

Advanced math is one of those things that I've looked into a small amount, have a bunch of books on my e-reader, but I guess I'd say my over-arching personal interest is in people and human systems, so the payback I got when reading about math seemed way too far removed from my usual interests.

So that's what prompted the original question: if I can't commit the time to learning math itself, I'm still curious about what draws mathematicians themselves into the field.

As an aside I was the local math whiz going through school, but my interest waned when I ran into introductory calculus professors with thick accents. The interest never sparked again. If I had unlimited money and time, now, I'd definitely dabble a bit in the field.

It doesn't seem like you got to any of the really interesting parts of math before losing interest. Most people don't, and it's a real shame. It's like spending years learning nothing but grammar and spelling, yet never getting to read a book. Of course you'd get a skewed perspective on how interesting the subject is.

At this point, you don't need to go in any particular order; you can use your outside interests to inform the aspects of math to study - e.g. if you're interested in social networks you can study social network analysis, which involves graph theory, statistics, etc.

I know I'd find it interesting, it's just low on the priority list at this time. Instead I've always gravitated to biology, physiology, and history - these days history more than any other discipline. I read a lot of interesting stuff, and the discoveries I make seem to change how I see the world on an almost month-by-month basis. Now that I think of it, math probably underlies a lot of those discoveries, but it's usually the discovery and not the process that led to it that I'm interested in.

The statistical analysis surrounding aspects of software (data science) piqued my interest in the last year, but pretty quickly I noticed that the field revolved around reducing people to commodifiable data points, and I threw up in my mouth a little and decided to work for a hospital instead. :p

I still like statistics a lot, though, and already dabbled a bit in it in the bio realm during my undergrad. Might be something I take another look at some time down the road.
 
What are real, pseudoreal, and complex representations? What's a representation, anyway?

A group {a} has a representation {D(a)}: a set of matrices D(a) that realizes the group's multiplication: D(a).D(b) = D(a.b).

There is a trivial one, the identity representation D(a) = I.

There is one that always exists, or at least is well-defined for finite groups, the "regular representation":
D(b)_ac = 1 if c = a.b, and 0 otherwise
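To make that concrete, here's a quick sketch of my own (not from the thread) building the regular representation of Z3, written additively, and checking the defining property D(a).D(b) = D(a.b):

```python
# Regular representation of Z3 (elements 0, 1, 2 under addition mod 3).
# D(b)_ac = 1 if c = a.b (here: c = (a + b) mod 3), else 0.
import numpy as np

n = 3

def D(b):
    """Regular-representation matrix of the Z3 element b."""
    M = np.zeros((n, n), dtype=int)
    for a in range(n):
        M[a, (a + b) % n] = 1  # c = a.b in additive notation
    return M

# Verify D(a).D(b) = D(a.b) for every pair of group elements:
for a in range(n):
    for b in range(n):
        assert np.array_equal(D(a) @ D(b), D((a + b) % n))
```

Each D(b) is just a permutation matrix shuffling the group elements by right multiplication, which is why the construction works for any finite group.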

A representation or "rep" is said to be irreducible if every matrix X that satisfies
X.D(a) = D(a).X

for all a is proportional to the identity matrix: X = x*I. Otherwise, it is reducible, and it can be decomposed into irreducible representations or "irreps".
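A small sketch of my own showing the regular rep of Z3 failing that test: since Z3 is abelian, X = D(1) itself commutes with every D(a), yet it isn't proportional to the identity, so the rep is reducible.

```python
# Exhibit an X commuting with every D(a) that is not x*I, proving
# the regular representation of Z3 reducible.
import numpy as np

def D(b, n=3):
    """Regular-representation matrix of the Z3 element b."""
    M = np.zeros((n, n), dtype=int)
    for a in range(n):
        M[a, (a + b) % n] = 1
    return M

X = D(1)  # commutes with all D(a) because Z3 is abelian
assert all(np.array_equal(X @ D(a), D(a) @ X) for a in range(3))
# X has zero diagonal but is nonzero, so X = x*I is impossible:
assert not np.array_equal(X, X[0, 0] * np.eye(3, dtype=int))
```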

-

Now for what makes reps D and D' equivalent: if they are, then there is some matrix Z such that D'(a) = Z.D(a).Z^(-1) for all a.

If an irrep is equivalent to its complex conjugate, then there is some nonsingular Z such that
D*(a) = Z.D(a).Z^(-1)
for all a.

Taking the complex conjugate of this relation gives
D(a) = Z*.D*(a).(Z*)^(-1)

and substituting the previous relation for D*(a),
D(a).(Z*.Z) = (Z*.Z).D(a)

From irreducibility,
Z*.Z = z*I = Z.Z*

Taking determinants,
|det(Z)|^2 = z^n

where n is the dimension of the rep.

If z > 0, then the rep is real, even if its matrices are complex-valued. If z < 0, then the rep is "pseudoreal" or "quaternionic". If no such Z exists, then the rep is complex.
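Here's a worked pseudoreal example of my own: the 2-dimensional irrep of the quaternion group Q8, generated by D(i) and D(j). Taking Z = D(j) satisfies D*(a) = Z.D(a).Z^(-1), and Z*.Z = -I, so z = -1 < 0.

```python
# The 2-dim irrep of the quaternion group Q8 is pseudoreal:
# its intertwiner Z with its complex conjugate squares to -I.
import numpy as np

Di = np.array([[1j, 0], [0, -1j]])               # D(i)
Dj = np.array([[0, 1], [-1, 0]], dtype=complex)  # D(j)

Z = Dj                     # candidate for D*(a) = Z.D(a).Z^(-1)
Zinv = np.linalg.inv(Z)

# Check the conjugation relation on the generators and their product:
for Da in (Di, Dj, Di @ Dj):
    assert np.allclose(np.conj(Da), Z @ Da @ Zinv)

# Z*.Z = z*I with z = -1 < 0, hence pseudoreal ("quaternionic"):
assert np.allclose(np.conj(Z) @ Z, -np.eye(2))
```

Checking the relation on the generators suffices, since conjugation and the Z-similarity are both multiplicative.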

-

I will now prove that every irrep of an abelian group has dimension 1.

Abelian = commutative. For every a and b, a.b = b.a

Translate that into a rep: D(a).D(b) = D(b).D(a)
Since each D(b) commutes with every D(a), irreducibility forces D(b) = d(b)*I for all b.
But then every matrix X satisfies D(a).X = X.D(a) for all a, and the only way every such X can be proportional to the identity matrix is if the rep has dimension 1.
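For a concrete illustration (mine, not from the thread): the 1-dimensional irreps of the abelian group Zn are the characters d_k(a) = exp(2*pi*i*k*a/n), each a rep in its own right.

```python
# The n characters of Zn: d_k(a) = exp(2*pi*i*k*a/n).
# Each satisfies d(a)*d(b) = d(a.b), i.e. it is a 1-dim rep.
import cmath

n = 5

def char(k):
    """The k-th character of Zn as a function on group elements."""
    return lambda a: cmath.exp(2j * cmath.pi * k * a / n)

for k in range(n):
    d = char(k)
    for a in range(n):
        for b in range(n):
            assert abs(d(a) * d(b) - d((a + b) % n)) < 1e-9
```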

-

Also, if a group has a quotient group, then its irreps include that quotient group's irreps: compose the quotient map with any irrep of the quotient group.
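A tiny pullback example of my own: Z4 has the quotient Z4/{0,2}, isomorphic to Z2. Composing the quotient map a -> a mod 2 with the sign irrep of Z2 gives an irrep of Z4.

```python
# Pull back the sign irrep of Z2 along the quotient map Z4 -> Z2.
def d(a):
    """Sign irrep of Z2 composed with the quotient map a -> a mod 2."""
    return (-1) ** (a % 2)

# Verify it is a representation of Z4: d(a)*d(b) = d(a.b)
for a in range(4):
    for b in range(4):
        assert d(a) * d(b) == d((a + b) % 4)
```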
 